# L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models

Ansong Ni, Pengcheng Yin, Yilun Zhao, Martin Riddell, Troy Feng, Rui Shen, Stephen Yin, Ye Liu, Semih Yavuz, Caiming Xiong, Shafiq Joty, Yingbo Zhou, Dragomir Radev, Arman Cohan

Published: 2023-09-29 | [arXiv:2309.17446v2](http://arxiv.org/abs/2309.17446v2)
###### Abstract
Recently, large language models (LLMs), especially those that are pretrained on code, have demonstrated strong capabilities in generating programs from natural language inputs in a few-shot or even zero-shot manner. Despite promising results, there is a notable lack of a comprehensive evaluation of these models' language-to-code generation capabilities. Existing studies often focus on specific tasks, model architectures, or learning paradigms, leading to a fragmented understanding of the overall landscape. In this work, we present **L2CEval**, a systematic evaluation of the language-to-code generation capabilities of LLMs on 7 tasks across the domain spectrum of semantic parsing, math reasoning and Python programming, analyzing the factors that potentially affect their performance, such as model size, pretraining data, instruction tuning, and different prompting methods. In addition to assessing model performance, we measure confidence calibration for the models and conduct human evaluations of the output programs. This enables us to identify and analyze the typical failure modes across various tasks and models. **L2CEval** offers a comprehensive understanding of the capabilities and limitations of LLMs in language-to-code generation. We also release the evaluation framework1 and all model outputs, hoping to lay the groundwork for future research in this domain.
Footnote 1: All future releases will be updated on the project website: [https://l2c-eval.github.io/](https://l2c-eval.github.io/)
## 1 Introduction
Language-to-code (**L2C**) is a type of task that aims to automatically map natural language descriptions to programs, which are later executed to satisfy the user's demand (Yin and Neubig, 2017; Austin et al., 2021). As illustrated in Fig. 1, language-to-code is the foundation of many applications in AI, such as _task-oriented dialogue systems_(Andreas et al., 2020), _coding assistants_(Agashe et al., 2019; Lai et al., 2022), _language interfaces to databases_(Pasupat and Liang, 2015; Yu et al., 2018), and _robotic control_(Zhou et al., 2021; Shridhar et al., 2020). It has also served as a great testbed for evaluating various language understanding capabilities of NLP systems, such as _logical and math reasoning_(Gao et al., 2022; Han et al., 2022), _grounded language understanding_(Xie et al., 2022; Huang et al., 2022), and _tool use_(Schick et al., 2023; Paranjape et al., 2023).
Recent progress on large language models (LLMs) (OpenAI, 2023; Chowdhery et al., 2022; Touvron et al., 2023), especially those that are specifically trained for coding (Fried et al., 2022; Nijkamp et al., 2022; Chen et al., 2021; Li et al., 2023), has shown that such LLMs that are trained on a mixture of text and code are able to perform language-to-code generation under few-shot or even zero-shot learning settings (Rajkumar et al., 2022; Ni et al., 2023). However, the modeling factors that affect the performance of LLMs for such **L2C** tasks, such as model size, training data mixture, prompting methods, and instruction tuning, are poorly understood. In addition, there is no consistent evaluation of different LLMs on the same spectrum of language-to-code tasks, making it difficult for users to decide which models to use for certain tasks or whether they should resort to finetuning their own model. Beyond model performance, model properties such as robustness to prompt and confidence calibration are also crucial for understanding the reliability of the LLMs, but such properties have not been systematically studied for **L2C** tasks in previous work.
In this work, we present **L2CEval**, providing a systematic evaluation of the language-to-code
generation capabilities of LLMs. **L2CEval** includes a wide range of state-of-the-art models, specifically 54 models from 13 different organizations, all evaluated on three core domains of language-to-code generation tasks. **L2CEval** includes extensive evaluations of models as small as 1 billion parameters, to significantly larger ones such as davinci and GPT-4 models from OpenAI, with an estimated size of 170B+ parameters. We also benchmark models that are trained on different mixtures of data of varying sizes (35B \(\sim\) 1T tokens), as well as models that are instruction-tuned, from both open-source and open-access proprietary categories. Our work is the first to conduct extensive and thorough comparisons of LLMs for language-to-code generation across multiple dimensions of variation. To summarize, we release **L2CEval** and its main contributions are as follows:
* We standardize the evaluation (_e.g._ prompts, metrics) of **7** **L2C** tasks across domains of semantic parsing, math reasoning, and Python programming to allow controlled comparisons among **54** models from **13** organizations;
* We study the model size and training data scaling laws and measure the effects of several recent modeling contributions (_e.g._ instruction-tuning, zero/few-shot prompting) for **L2C** tasks;
* We analyze the robustness and calibration measurements of the model outputs, and identify the common error cases for models of different capabilities;
* We release the outputs (_i.e._ texts and logits) of all models on all datasets to facilitate future studies.
Through our work, we hope to provide insight into applying LLMs to **L2C** applications, as well as building future LLMs.
## 2 Background
### Language-to-Code Generation
While language-to-code generation covers a wide range of tasks as shown in Fig. 1, here we attempt to give a unified problem formulation. Given the user's intent described in natural language \(x\) (_e.g._ description of a Python function) and optionally some programming context \(c\) (_e.g._ existing function definitions, open test cases), an **L2C** model aims to automatically map the input to a program \(y\) (_e.g._ a Python function). This generation process can be directly modeled as:
\[\hat{y}=\arg\max_{y}P(y|x,c)\]
Such a program \(\hat{y}\), sometimes accompanied by additional execution context \(e\) (_e.g._ a connection to a DB), is later executed by an executor \(\mathcal{E}(\cdot)\) (_e.g._ a Python interpreter). We can evaluate execution accuracy by checking whether its execution result matches the gold execution result \(z^{*}\):
\[\text{Acc.}=\mathds{1}(\hat{z},z^{*})\ \ \text{where}\ \ \hat{z}=\mathcal{E}(\hat{y},e)\]
We use execution accuracy as a proxy for whether the user's original intent is satisfied3.
Footnote 3: Execution-based evaluation typically results in false positives and thus an overestimate of the performance. See the limitation section § 6 for more details.
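To make this concrete, the snippet below is a minimal sketch of execution-based scoring; the sandboxing helper `run_program` and the convention that a program writes its result to an `answer` variable are illustrative assumptions of this sketch, not the released evaluation framework.

```python
# Minimal sketch of execution-based evaluation: Acc. = 1(z_hat, z*).
def run_program(program: str, context: dict) -> object:
    env = dict(context)        # execution context e (e.g. helper definitions, DB handles)
    exec(program, env)         # a real harness would sandbox and time-limit this call
    return env.get("answer")   # assumed convention: the program stores its result in `answer`

def execution_accuracy(predictions, gold_results, contexts):
    correct = 0
    for program, z_star, ctx in zip(predictions, gold_results, contexts):
        try:
            z_hat = run_program(program, ctx)
        except Exception:      # deformed (inexecutable) programs count as incorrect
            z_hat = None
        correct += int(z_hat == z_star)
    return correct / len(predictions)
```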
### Few-shot Prompting with LLMs
Recent works on **L2C** generation find that LLMs are capable of few-shot learning from a couple of exemplars presented in the prompt via in-context learning (Rajkumar et al., 2022; Xie et al., 2022;
Figure 1: Language-to-code (**L2C**) generation is the cornerstone for many applications in AI. It is also the key to enabling direct communication between the users and the computers with natural language.
Ni et al., 2023). For **L2C** tasks, such few-shot exemplars can be represented as \(\{(x_{i},y_{i},c_{i})\}_{i<m}\), where \(m\) is the number of exemplars.4 Moreover, recent progress on instruction tuning (Ouyang et al., 2022) shows that adding a natural language instruction for the task improves the performance of LLMs, especially for the instruction-tuned models under the zero-shot setting where no exemplars are presented in the prompt. We therefore add a task-specific instruction \(I\) to the beginning of the prompt. Specifically, a prompt to an LLM is the concatenation of a task instruction, \(m\) few-shot exemplars, as well as the intent \(x\) and its programming context \(c\) of a test problem:
Footnote 4: This also recovers zero-shot prompting when \(m=0\).
\[\textbf{prompt}=f(I,\{(x_{i},y_{i},c_{i})\}_{i<m},c,x)\]
where \(f(\cdot)\) is a "promptify" function that concatenates those inputs into a string. Examples of task-specific instructions and prompts are listed in § A.1. We can then prompt an LLM to draw predictions (programs) \(\hat{y}\sim P_{\textbf{LM}}(y|\textbf{prompt})\).
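A minimal sketch of such a "promptify" function is given below; the exact instruction strings and exemplar formatting used for each task are listed in § A.1, so the template here is only an illustrative assumption.

```python
def promptify(instruction, exemplars, context, intent):
    """Concatenate instruction I, m few-shot exemplars (x_i, y_i, c_i), and the test problem.

    Passing an empty exemplar list recovers zero-shot prompting (m = 0).
    """
    parts = [instruction.strip()]
    for x_i, y_i, c_i in exemplars:
        parts.append(f"{c_i}\n# Task: {x_i}\n{y_i}")
    parts.append(f"{context}\n# Task: {intent}\n")  # the LLM completes the program y_hat
    return "\n\n".join(parts)
```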
## 3 Tasks
We evaluate the language-to-code capabilities of LLMs in three representative application scenarios shown in Fig. 1: _semantic parsing_, _math reasoning_, and _Python programming_. Particularly, these tasks collectively assess the capabilities of models in language-to-code generation to understand natural language in different contexts, reason about the steps for solving the problem, and convert it into executable code (see Fig. 1). Semantic parsing focuses on the transformation of natural language queries into structured, domain-specific languages; math reasoning challenges the models' numerical and logical reasoning abilities by requiring them to solve problems that involve multiple steps of calculation and reasoning; and Python programming tests the models' proficiency in generating functional code that aligns with a user's intent, reflecting a real-world application of LLMs in software development. Below we discuss each of these tasks in detail.
Semantic parsing.Semantic parsing considers the task of translating a user's natural language utterance (_e.g. who averaged the most pots in the last season?_ in Fig. 1) into machine-executable programs (_e.g._ an SQL database query), and has been a long-standing problem in NLP (Zettlemoyer and Collins, 2005; Berant et al., 2013). A prompt to an LLM consists of an NL utterance and descriptions of relevant structured context, such as the schema information of a database (_e.g._ columns in each table). The target output is a program defined in some domain-specific languages, such as SQL. Intuitively, semantic parsing challenges LLMs on grounded language understanding (Xie et al., 2022; Cheng et al., 2022), where a model needs to associate NL concepts in utterances (_e.g._ "_last season_") with relevant structured knowledge (_e.g._ superlative operation on column season) in order to synthesize the program (Yin et al., 2020; Yu et al., 2018; Pasupat and Liang, 2015). In this work, we choose to use text-to-SQL as a representative task as it closely ties with applications such as natural language interface to databases (Affolter et al., 2019; Androutsopoulos et al., 1995). Recent work (Rajkumar et al., 2022; Ni et al., 2023) shows that LLMs are effective in performing text-to-SQL parsing. In this work, we use two widely-used text-to-SQL datasets, **Spider**(Yu et al., 2018) and **WikiTQ**(Pasupat and Liang, 2015), as our datasets for benchmarking semantic parsing capabilities of LLMs. We follow (Xie et al., 2022) and provide the database schema or the table headers as the extra input to an LLM in addition to the natural language utterance.
| **Domain** | **Dataset** | **Split** | **Size** | **Input** | **Output** |
|---|---|---|---|---|---|
| _Semantic Parsing_ | Spider (Yu et al., 2018) | Dev | 1,000 | DB schema + NL | SQL Query |
| _Semantic Parsing_ | WikiTQ (Pasupat and Liang, 2015) | Dev | 2,828 | Table headers\* + NL | SQL Query |
| _Math Reasoning_ | GSM8k (Cobbe et al., 2021) | All | 1,494 | Math problem in NL | Python solution |
| _Math Reasoning_ | SVAMP (Patel et al., 2021) | All | 996 | Math problem in NL | Python solution |
| _Python Programming_ | MBPP (Austin et al., 2021) | Test | 500 | NL spec. + 1 test | Python function |
| _Python Programming_ | HumanEval (Chen et al., 2021) | All | 164 | NL spec. + 1-3 tests | Python function |
| _Python Programming_ | DS-1000 (Lai et al., 2022) | All | 1,000 | NL spec. | Python lines |

Table 1: A summary of all the datasets being evaluated. \*: the BRIDGE format (Lin et al., 2020) is used.
Math reasoning.To solve a math word problem, a model needs to abstract the mathematical relations from the natural language description, and reason about the potential steps for solving it. Compared to semantic parsing where the target programs are table-lookup queries, programs for math reasoning tasks usually require multiple steps of calculation and numerical and logical reasoning. Because of this, math word problems are widely adopted as testbeds for evaluating the reasoning abilities of LLMs (Cobbe et al., 2021; Wei et al., 2022; Ni et al., 2022; Welleck et al., 2022). In this paper, we choose the **GSM8k** dataset (Cobbe et al., 2021) for this evaluation, which contains \(\sim\)8K grade-school level math problems and solutions described in natural language. In addition, we also evaluate the models on the **SVAMP** dataset (Patel et al., 2021) which contains 1k examples of math word problems. Following previous work (Ni et al., 2022; Welleck et al., 2022; Gao et al., 2022), we prompt the models to answer math word problems by generating Python programs as solutions, which are later executed by a Python interpreter to output the answer.
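For illustration, a GSM8k-style solution program might look like the sketch below, where the word problem is invented for the example and the final value is stored in an `answer` variable (the convention referred to in § 5.3):

```python
# Invented word problem: "A box holds 12 pencils. Anna buys 3 boxes and gives away
# 7 pencils. How many pencils does she have left?"
pencils_per_box = 12
boxes_bought = 3
pencils_given_away = 7
answer = pencils_per_box * boxes_bought - pencils_given_away  # interpreter outputs 29
```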
Python programming.One of the most important applications for LLMs trained on code is to assist programmers in developing software. Typically, a model is given a developer's natural language intent (_e.g. write a merge sort function_) with optional additional specifications such as input/output examples or unit tests (_e.g._ assert merge_sort([5,7,3])==[3,5,7])
(Austin et al., 2021), in order to generate the code that implements the user's intent (_e.g._ a Python function). To evaluate the basic programming skills of the LLMs, we use the **MBPP**(Austin et al., 2021), **HumanEval**(Chen et al., 2021) and **DS-1000**(Lai et al., 2022) datasets.
More task-specific settings are described in § A.2, and example inputs and outputs for different tasks are shown in § A.1.
## 4 Models
We evaluate 54 models that vary in size, training data mixture, architecture, context length, and training methods. Tab. 2 summarizes the open-source models we evaluated and several key properties.
| **Organization** | **Model Name** | **Release Time** | **Size** | **# All Tokens** | **# Code Tokens** | **Ctx. Leng.** |
|---|---|---|---|---|---|---|
| Salesforce | CodeGen-multi | 2022-3 | 6.1/16.1B | 505B | 119B | 2,048 |
| Salesforce | CodeGen-mono | 2022-3 | 6.1/16.1B | 577B | 191B | 2,048 |
| Salesforce | CodeGen-2.5-multi | 2023-7 | 7B | 1.4T | 1.4T | 2,048 |
| Salesforce | CodeGen-2.5-mono | 2023-7 | 7B | - | - | 2,048 |
| Salesforce | CodeGen-2.5-instruct | 2023-7 | 7B | - | - | 2,048 |
| Eleuther AI | GPT-J | 2021-5 | 6.1B | 402B | 46B | 2,048 |
| Eleuther AI | GPT-NeoX | 2022-4 | 20.6B | 472B | 54B | 2,048 |
| Eleuther AI | Pythia | 2023-4 | 1.4/6.9/12B | 300B | 35B | 2,048 |
| Databricks | Dolly-v2 | 2023-4 | 6.9/12B | - | - | 2,048 |
| BigCode | SantaCoder | 2023-1 | 1.1B | 236B | 236B | 2,048 |
| BigCode | StarCoder | 2023-5 | 15.5B | 1T | 1T | 8,192 |
| BigCode | StarCoderPlus | 2023-6 | 15.5B | 1.6T | 1T | 8,192 |
| Meta AI | InCoder | 2022-4 | 1.3/6.7B | 52B | 52B | 2,048 |
| Meta AI | LLaMA | | 6.7/13B | 1T | 45B | 2,048 |
| Meta AI | LLaMA-30B | | 32.5B | 1.4T | 63B | 2,048 |
| Meta AI | LLaMA-2 | 2023-7 | 7/13/70B | 2T | - | 4,096 |
| Meta AI | CodeLLaMA | 2023-7 | 7/13/34B | 2.5T | 435B | 16,384 |
| Stanford | Alpaca | 2023-3 | 6.7/13/32.5B | - | - | 2,048 |
| LMSYS | Vicuna | 2023-3 | 6.7/13/32.5B | - | - | 2,048 |
| Replit | Replit-code-v1-3b | 2023-5 | 2.7B | 525B | 525B | 2,048 |
| MosaicML | MPT-7B | 2023-5 | 7B | 1T | 135B | 2,048 |
| MosaicML | MPT-7B-instruct | 2023-5 | 7B | - | - | 2,048 |
| MosaicML | MPT-30B | | 30B | 1T | 135B | 8,192 |
| MosaicML | MPT-30B-instruct | | 30B | - | - | 8,192 |
| MistralAI | Mistral-7B-v0.1 | 2023-9 | 7B | - | - | 32,768 |
| MistralAI | Mistral-7B-instruct-v0.1 | 2023-9 | 7B | - | - | 32,768 |

Table 2: Information table for the open-source models evaluated in this work. -: no information on training data size is available, or the model is further tuned on top of other models.
### Model Selection
While it is not possible to evaluate every single LLM on these tasks, we strive to provide a comprehensive evaluation of the current LLMs in **L2C** generation, by covering a diversified selection of LLMs of varying sizes that are trained on different mixtures of data. For example, the size of the models we consider ranges from 1B (_e.g._ SantaCoder Allal et al. (2023)) to 170B+ (_e.g._ davinci models from OpenAI). Though we prioritize the evaluation of code-specific models, which means that the majority of the training tokens are from code (_e.g._ CodeLLaMA Roziere et al. (2023), StarCoder Li et al. (2023)), we also include the most competitive general LLMs such as LLaMA2-70B Touvron et al. (2023) and MPT-30B5 for comparison. To evaluate the effect of instruction-tuning and its data mixtures on **L2C** tasks, we also include several instruction-tuned versions of the LLMs, such as Alpaca (Stanford), Dolly (Databricks), etc.
Footnote 5: [https://www.mosaicml.com/blog/mpt-30b](https://www.mosaicml.com/blog/mpt-30b)
Footnote 6: [https://huggingface.co/models](https://huggingface.co/models)
### Model Access
For all the open-source models, we access them through the huggingface model hub6 and run them locally on our machines with RTX Ada A6000 48GiB GPUs, using Lightning7 as our underlying framework. For proprietary OpenAI models, we access them through the public API8. In this paper we primarily focus on evaluation and analysis of open-source models, as we are unclear about the technical details of proprietary models (_e.g._ model size, training data mixture).
Footnote 7: [https://lightning.ai/](https://lightning.ai/)
Footnote 8: [https://platform.openai.com/docs/api-reference](https://platform.openai.com/docs/api-reference)
### Evaluation Details
When generating programs, we use greedy decoding for all models10. To optimize for a fair comparison, we standardize the prompting methods by following previous work Ni et al. (2023); Ben Allal et al. (2022) and avoid prompts that are tailored for specific models. Using the formulation in § 2, we evaluate **execution accuracy** for all tasks with all models. This is also consistent with previous work on **L2C** Xie et al. (2022); Yin and Neubig (2017); Zhang et al. (2022).
Footnote 10: Previous work Austin et al. (2021) has found that greedy decoding leads to degenerate outputs, but we do not observe this upon human inspection of outputs. For other limitations of using greedy decoding, see § 6.
## 5 Results and Analysis
We organize the experiment results and analysis as follows. We first discuss the scaling effects of model size, training data and compute in § 5.1,
| **Group** | **Model (Size)** | **Spider** (2-shot) | **WikiTQ** (2-shot) | **GSM8k** (8-shot) | **MBPP** (3-shot) | **HumanEval** (0-shot) | **MWR** |
|---|---|---|---|---|---|---|---|
| Other | gpt-4 (unknown) | 77.2 | 56.2 | 92.4 | 74.0 | 76.8 | 100% |
| Other | text-davinci-003 (unknown) | 68.3 | 45.4 | 64.1 | 63.6 | 52.4 | 94% |
| Other | gpt-3.5-turbo (unknown) | 72.7 | 38.4 | 74.7 | 66.6 | 39.0 | 91% |
| 20B ~ 100B | CodeLLaMA-base (34B) | 61.7 | 32.3 | 43.6 | 45.6 | 44.5 | 88% |
| 20B ~ 100B | LLaMA-2 (70B) | 58.5 | 37.3 | 56.0 | 36.8 | 28.7 | 81% |
| 20B ~ 100B | Alpaca (30B) | 46.2 | 39.7 | 19.4 | 32.0 | 23.8 | 70% |
| 10B ~ 20B | WizardCoder (15.5B) | 58.6 | 29.4 | 25.8 | 47.4 | 51.2 | 86% |
| 10B ~ 20B | CodeLLaMA (13B) | 58.5 | 35.6 | 30.7 | 44.0 | 34.2 | 85% |
| 10B ~ 20B | StarCoder (15.5B) | 52.1 | 27.4 | 22.1 | 46.6 | 34.2 | 78% |
| 2B ~ 10B | Mistral-v0.1 (7B) | 53.3 | 31.4 | 38.4 | 37.8 | 25.0 | 79% |
| 2B ~ 10B | CodeLLaMA-base (7B) | 54.3 | 29.5 | 25.5 | 40.0 | 31.1 | 75% |
| 2B ~ 10B | CodeGen2.5-multi (7B) | 53.8 | 29.6 | 14.9 | 38.2 | 31.1 | 71% |
| <2B | SantaCoder (1.3B) | 19.0 | 11.4 | 2.8 | 26.2 | 17.7 | 33% |
| <2B | InCoder (1.1B) | 13.4 | 6.2 | 1.0 | 13.8 | 8.5 | 11% |
| <2B | Pythia (1.4B) | 5.7 | 4.4 | 1.5 | 5.8 | 3.7 | 5% |

Table 3: Top-3 models at different size ranges. Evaluated with head-to-head performance comparison on each task, then the mean win rate (**MWR**) is computed across tasks.
then in § 5.2 we analyze how the fraction of code data in the training mixture affects the performance of models for **L2C** tasks. In § 5.3, we compare the instruction-tuned models and their base models to study the effect of instruction-tuning, especially on zero-shot results. Lastly, we evaluate the sensitivity of the models to the prompts in § 5.4 and confidence calibration in § 5.6.
### Scaling
Here we study the correlation between model performance and the scales of the model parameter count as well as the size of training data. While most of the findings here are consistent with previous work on scaling laws, we focus on properties that are more related to **L2C** tasks.
Model size.We show the top-3 models at different size ranges based on mean win rate (MWR) in Tab. 3. MWR is defined as the fraction of other models that a model outperforms, averaged across the five tasks. From this table, we can observe a clear discrepancy between models of different size groups. However, such scaling effects also differ across tasks. For tasks that are more similar to the pretraining data (_e.g._ MBPP), the scaling curves are much smoother, while for tasks that require more reasoning skills (_e.g._ GSM8k), the scaling curve appears to be more "emergent" (Wei et al., 2022). This can be better observed from § B.4, where we plot the scaling curve independently for each task.
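As an illustration of this metric, a mean win rate could be computed from a per-task score table as in the sketch below; the helper and its input format are assumptions of this example rather than part of the released framework.

```python
def mean_win_rate(scores):
    """scores: {model: {task: accuracy}}.

    MWR = fraction of other models that a model beats, averaged over tasks.
    """
    models = list(scores)
    tasks = list(next(iter(scores.values())))
    mwr = {}
    for m in models:
        per_task = [
            sum(scores[m][t] > scores[o][t] for o in models if o != m) / (len(models) - 1)
            for t in tasks
        ]
        mwr[m] = sum(per_task) / len(tasks)
    return mwr
```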
Training data and compute.We plot the average model performance with the number of tokens seen as well as the FLOPS of compute used during training in Fig. 1(a) and Fig. 1(b), respectively. Comparing models of similar sizes (_e.g._ CodeGen-16B vs. StarCoder-15.5B, Pythia-6.9B vs. LLaMA-7B), those that are trained with more tokens generally have better performance for **L2C**, which is also consistent with previous findings (Kaplan et al., 2020). It is also suggested in Fig. 1(b) that some models are under-trained, such as InCoder-6B and CodeGen-16B models.
### Data Mixture
Though all of the models we evaluated have seen code tokens during pretraining, the distributions of their training data mixture are quite different as we can see from Tab. 2. From Tab. 3 we can see that code-specific LLMs are typically better at **L2C** tasks, as most of the top models in every size category are code-specific LLMs. While it is less surprising that code LLMs register better performance on programming tasks such as MBPP, they are also better on tasks that focus more on logical reasoning (_e.g._ GSM8K) and grounded language understanding (_e.g._ WikiTQ, Spider). Notably, StarCoder-15.5B, which is only trained on code-related tokens11, achieves far better performances than LLaMA-13B, which is a similar-sized model trained on a similar number of tokens but only
Figure 2: Pretraining data and compute scaling across selected models. Average execution accuracy is calculated across selected tasks (_i.e._ Spider, WikiTQ, GSM8k, and MBPP). More scaling curves are shown in § B.4.
4.5% of which is code.
From Fig. 1(b), we can also find that training on code tokens is more compute-efficient for **L2C** tasks, as the dashed line clearly separates the code-specific models (_e.g._ StarCoder, and CodeGen) and the general LLMs (_e.g._ Pythia and LLaMA). The only exceptions are CodeGen-multi models, as they are initialized from general LMs (_i.e._ CodeGen-nl) thus the majority of the compute is still spent on text tokens. This is expected as general LLMs are also optimized for many other natural language tasks that are not related to code. This shows that for **L2C** tasks, training on more code tokens instead of text tokens improves the compute efficiency during pretraining.
### Instruction-tuning
Instruction tuning (Ouyang et al., 2022) is a type of method that enhances the instruction following abilities of LLMs. Here we compare the few- and zero-shot performance of instruction-tuned models and their base models in Tab. 4. To better understand the model performance, we also include the execution rate in Tab. 4, defined as the percentage of programs that successfully produce an execution result, regardless of its correctness.12 From the results, we can see that instruction-tuned models achieve much higher execution rates, especially for zero-shot settings, which is likely to lead to better execution accuracy. This suggests that instruction-tuned models are better at following the instructions and generate fewer deformed (inexecutable) programs when few-shot exemplars are not present in the prompt.
Footnote 12: For semantic parsing and MBPP tasks, this is simply defined as executability. For GSM8k, the program also needs to produce an βanswerβ variable for it to be considered as well-formed.
Though it was mentioned in (Ouyang et al., 2022) that instruction-tuning generally decreases few-shot performance, as it shifts the attention of the model from the few-shot exemplars to the instructions, we do not observe similar effects consistently for **L2C** tasks in our experiments. From Tab. 4, we observe improvements over non-instruction-tuned models for both few- and zero-shot settings for most scenarios. We also note that the zero-shot performances for GSM8k are all zero for the selected models. By inspecting the model outputs, we find that the models fail to follow the instruction to provide the answer by ending the Python solution with answer = x.
### Sensitivity to Prompt
Here we perform several ablation studies on the few-shot prompting methods. By varying the number of exemplars or the exemplars themselves, we aim to test the sensitivity of different models to the few-shot prompts. In Fig. 3, we plot the performance of the models as a function of the number of exemplars in the prompt. From the results, we can see that while increasing the number of few-shot exemplars in the prompt generally improves execution accuracy, such improvement is not consistent across different models and tasks. For example, on the MBPP dataset, increasing from 3 to 8 exemplars in the prompt actually decreases the performance for most of the selected models, _e.g._ by 4.0% for codex-cushman. We hypothesize that this is because the programs in the prompt will bias the model into generating similar programs and ignoring the specification. This effect is also found in (Li et al., 2022). Moreover, we also show the sensitivity of the models to different exemplars and present the results in Tab. 5 by showing the variance of model performance across different runs using different exemplars in the prompt. While the variances differ for
| **Models** | Spider (few-shot) | GSM8k (few-shot) | MBPP (few-shot) | Spider (zero-shot) | GSM8k (zero-shot) | MBPP (zero-shot) |
|---|---|---|---|---|---|---|
| Pythia-6.9B | 12.5 / **33.9** | 2.6 / **74.5** | **13.2 / 97.6** | 2.8 / 8.0 | 0 / 0 | 1.2 / 15.0 |
| _Dolly-v2-7b_ | **13.1 / 31.7** | 2.6 / 52.3 | 12.0 / 97.2 | **5.2 / 15.0** | 0 / 0.1 | **9.4 / 62.6** |
| LLaMA-7B | 13.1 / 36.1 | **8.0 / 71.3** | **16.6** / 96.6 | 5.7 / 22.2 | 0 / 0 | 5.0 / 29.8 |
| _Alpaca-7B_ | **16.1 / 37.8** | 3.5 / 37.1 | 14.4 / **98.4** | **20.5 / 45.2** | 0 / 0 | **13.2 / 58.4** |
| LLaMA-13B | 15.2 / 41.5 | 15.7 / 72.7 | 22.8 / 97.6 | 15.2 / 41.6 | 0 / 0 | 2.2 / 7.0 |
| _Alpaca-13B_ | **24.3 / 51.9** | **18.5 / 80.3** | **23.4** / 97.6 | **26.1 / 55.5** | 0 / 0 | **6.8 / 20.6** |

Table 4: How instruction-tuning affects few- and zero-shot performances. Models in italics are instruction-tuned from the model above them. Performance shown as "exec. acc. / exec. rate".
different models and tasks, none of them are significant enough to alter the ranking of the models, nor threaten the conclusions presented in this work.
### Error Modes
In Fig. 4, we present an error analysis on the four best models, by manually examining a fixed set of 100 examples from the GSM8k and MBPP datasets across selected models that are the best in their size group. More specifically, we categorize the errors into the following cases:
1. _execution error_, where deformed programs are generated;
2. _missing/extra steps_, where some key steps are missing or extraneous lines are generated in predicted code;
3. _wrong steps_, where the model only makes subtle mistakes in certain steps in the code;
4. _unclear specification_, when the NL specification itself is ambiguous.

From the results shown in Fig. 4, we can see that for GSM8k, compared with stronger models (_e.g._ code-davinci and GPT-4), while a similar number of errors are made for missing and generating extra steps for solving the math problem, StarCoder and code-cushman make more mistakes in predicting intermediate steps, or generating deformed programs. On MBPP, however, weaker models are also prone to miss crucial steps in the implementation, which shows a lack of understanding of the problem as well as planning abilities. Though hallucination [11] is a common issue in natural language generation, we do not observe similar effects for code generation as shown in Fig. 4, as it is quite rare for the models to generate lines of code that are extraneous in solving the problem.
### Model Calibration
A good model should not only produce high-quality outputs, but also be well-calibrated, meaning that it should be uncertain about its predictions when such predictions are wrong. Following recent work [10], we evaluate model calibration using _expected calibration error_
Figure 4: Error analysis for the best models on GSM8k and MBPP. \(y\)-axis denotes the percentage of all examples.
Figure 3: Models performance with different numbers of exemplars in the prompt.
| **Models** | **Spider (2)** | **GSM8k (2)** | **MBPP (3)** |
|---|---|---|---|
| code-davinci-002 | 73.7 ± 0.3 | 66.4 ± 1.0 | 59.0 ± 1.9 |
| code-cushman-001 | 50.4 ± 0.7 | 24.2 ± 1.1 | 39.3 ± 3.3 |
| CodeGen-6B-mono | 32.4 ± 0.6 | 13.8 ± 0.2 | 35.5 ± 0.5 |
| StarCoder-15.5B | 54.9 ± 2.7 | 32.3 ± 0.8 | 44.1 ± 2.2 |
| Alpaca-7B | 20.1 ± 3.5 | 7.3 ± 1.2 | 13.6 ± 0.6 |

Table 5: Mean and std for few-shot performance of different models over 3 runs, where random exemplars are chosen at each run.
(Naeini et al., 2015; Guo et al., 2017) and _selective classification_(El-Yaniv et al., 2010), with the results shown in Fig. 5 and § B.3, respectively. From Fig. 5, we observe that while model calibration is generally correlated with model performance, the best-performing models are not the ones with the best calibration. Note that with a well-calibrated model, methods such as voting (Li et al., 2022; Wang et al., 2022) and confidence-based reranking (Ni et al., 2023) may be used to further improve their performance. Moreover, a better-calibrated model is safer to use in practice, especially for applications such as coding assistants, as its confidence can be used as an indicator of the generation quality.
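For reference, expected calibration error is typically computed by binning per-example confidences and comparing average confidence with accuracy in each bin; the sketch below assumes confidences have already been extracted (e.g. from normalized sequence likelihoods), which is one common choice rather than necessarily the exact recipe used here.

```python
import numpy as np

def expected_calibration_error(confidences, is_correct, n_bins=10):
    """ECE: bin-weighted average gap between mean confidence and accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    is_correct = np.asarray(is_correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(is_correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```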
## 6 Limitations
While we strive to provide a comprehensive and fair evaluation of the capabilities of LLMs on **L2C** tasks, here we also discuss several limitations of **L2CEval**.
Generation using greedy-decoding.In this work, we use greedy decoding to generate a single program for each example as the models' output. While this is the most efficient way of generation and ensures fair comparison for different models as it is not affected by factors like sampling temperature, it is also relatively noisy (Nijkamp et al., 2022; Chen et al., 2021). For tasks such as MBPP or Python programming in general, _pass@k_ or _n@k_ are better as they give the model \(k\) tries to generate the correct program. More specifically, _pass@k_ measures if _any_ of the \(k\) program samples is correct and _n@k_ measures the number of correct programs in the \(k\) samples. For Python programming tasks, such methods are closer to practical use cases as we typically have test cases that can filter out some incorrect programs in the samples. For other tasks, having a better _pass@k_ also provides opportunities for post-generation reranking methods such as (Shi et al., 2022; Zhang et al., 2022; Ni et al., 2023). However, the cost for evaluating _pass@k_ or _n@k_ is \(k\) times the compute of greedy decoding, thus we choose to only evaluate greedy decoding results in this work and leave sampling-based evaluation to future work.
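For reference, _pass@k_ is usually estimated with the unbiased estimator of Chen et al. (2021), which draws \(n\geq k\) samples, counts the \(c\) correct ones, and computes \(1-\binom{n-c}{k}/\binom{n}{k}\); a minimal sketch is given below.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k from n samples of which c are correct (Chen et al., 2021)."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)
```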
Execution-based evaluation.Moreover, we mainly rely on execution-based evaluation (_i.e._ execution accuracy) for this work. However, such evaluation may produce spurious programs, _i.e._ false-positive programs that achieve the correct execution result by chance (Zhong et al., 2020; Xie et al., 2022). In this work, we adopt human evaluation to measure the problem of spuriousness and find a non-trivial portion of "correct" programs being spurious for Spider but not for other datasets. More details on this can be found in § B.2. In addition, execution may not always be straightforward in practice, especially when complex dependencies and potentially harmful programs are considered (Chen et al., 2021). Thus for future work, we would like to add a surface-form-based evaluation for code, such as (Zhou et al., 2023).
Confounding factors during comparison.When comparing different models, especially models from different model series, there are typically multiple performance-impacting factors that are in effect at the same time, such as model
Figure 5: Average model performance across selected datasets (_i.e._ Spider, WikiTQ, GSM8k and MBPP) and their calibration score rankings.
size, pretraining data, model architecture, pre-training objective, etc. Such confounding factors may limit the validity of the conclusions that we draw from model comparisons. In this work, we try to mitigate this by fixing as many variables about the models as possible during a comparison, such as making observations within the same model series. While the general trend can still be observed across different model series, we should also note that when interpreting the results, readers should be mindful of such confounding factors when comparing different models.
**Lack of information for proprietary models.** For the open-access proprietary LLMs (_e.g._ OpenAI models), due to the lack of basic information and mismatches between the models described in the papers and the actual API engines, very few scientific conclusions can be drawn from their results. In this work, we evaluate such models with the open-access APIs and compare them with all other models, in the hope of helping practitioners in choosing models for their use cases. We also present human evaluations on codex-cushman, codex-davinci, and gpt-4, which are the three strongest models for code generation, to discuss differences in common error modes. However, when making our findings, we generally rely on open-source models instead, to avoid being misled by speculative model details of such closed-source models.
## 7 Related Work
**Code generation evaluation.** Several code generation benchmarks are collected from raw data from GitHub and StackOverflow, and involve professional annotators to enhance the quality of the data Iyer et al. (2018); Agashe et al. (2019); Yin et al. (2018). While such benchmarks focus more on lexical-based evaluation, ODEX Wang et al. (2022) introduces execution-based evaluation, which has also been widely applied in recent code generation evaluation benchmarks, such as DS-1000 Lai et al. (2022), HumanEval Chen et al. (2021), and MBPP Austin et al. (2021). More recently, there has been an increasing focus on assessing the generalization capabilities of code generation models across multiple programming languages Athiwaratkun et al. (2023), and benchmarks such as CodeGeeX Zheng et al. (2023) and MultiPL-E Cassano et al. (2023) are created.
**Other code-related tasks.** Large language models have also shown significant success in other code-related directions. One popular direction is code understanding. For example, CodeXGLUE Lu et al. (2021) comprises three widely-used code understanding tasks including defect detection, clone detection, and code search. BigCloneBench Krinke and Raghitwetsagul (2022) measures the similarity between code pairs to predict whether they have the same functionality. CodeSearchNet Husain et al. (2019) is a benchmark of semantic code search given natural language queries. Besides code understanding, there have been other tasks such as code translation Lachaux et al. (2020) and program repair Gupta et al. (2017). We leave systematic evaluation of LLMs on those tasks as important future work.
## 8 Conclusions
In this paper, we present **L2CEval**, a comprehensive evaluation of LLMs for natural language to code generation, along a variety of axes such as model scale, training data, sensitivity to few-shot exemplars as well as the impact of instruction tuning, _etc_. We also present an analysis on the model calibration and conduct a human evaluation of common error modes across different models. We hope our study will provide useful insights for the community into applying LLMs for downstream code applications and future model development efforts.
## Acknowledgements
We would like to thank Rui Zhang and Tao Yu for the initial discussions of this project. Ansong would like to thank Hailey Schoelkopf and Zhangir Azerbayev for their suggestions for this work. This work is supported in part by a gift from Salesforce Research.
---

# Phase shift rule with the optimal parameter selection

L. A. Markovich, S. Malikis, S. Polla, J. T. Brugués

Published: 2023-09-14 | [arXiv:2309.07655v1](http://arxiv.org/abs/2309.07655v1)
###### Abstract
The phase shift rules enable the estimation of the derivative of a quantum state with respect to phase parameters, providing valuable insights into the behavior and dynamics of quantum systems. This capability is essential in quantum simulation tasks where understanding the behavior of complex quantum systems is of interest, such as simulating chemical reactions or condensed matter systems. However, parameter shift rules are typically designed for Hamiltonian systems with equidistant eigenvalues. For systems with closely spaced eigenvalues, effective rules have not been established. We provide insights about the optimal design of a parameter shift rule, tailored to various sorts of spectral information that may be available. The proposed method lets derivatives be calculated for any system, regardless of how close the eigenvalues are to each other. It also optimizes the number of phase shifts, which reduces the amount of gate resources needed.
## 1 Introduction
Many near-term quantum computing methods are based on Variational Circuits [1, 2], sequences of quantum gates tuned recursively for addressing specific tasks based on classical parameters. For example, the Quantum Approximate Optimization Algorithm (QAOA) [3] and Variational Quantum Eigensolver (VQE) [1, 4] heavily rely on derivative estimation to guide the parameter optimization process, leading to improved efficiency and better outcomes. Beyond optimization, derivative estimation finds its importance in scientific and engineering fields, where solving differential equations and numerical integration are paramount. Accurate knowledge of derivatives enables precise modeling and simulation of complex systems. Moreover, quantum machine learning algorithms, such as quantum neural networks or quantum support vector machines, strongly rely on derivative estimation to enhance their learning capabilities.
Since the output of a variational quantum circuit provides a probabilistic result, the expectation value of an observable is considered an estimate of the variable. The mean values of simple variables can be determined by taking the average over measurement results, but finding the expectation value of multi-qubit observables is more complicated and can be done by different approaches involving the quantum phase estimation algorithm [5, 6, 7, 8, 9], the quantum energy (expectation) estimation method of decomposing the observable into a weighted sum of multi-qubit Pauli strings [10], some intermediary approaches between both [11, 12, 13], or using the recently introduced single qubit quantum memory approach [14].
Therefore, there is a desire to formally define the gradient as a derivative of these averages with respect to the variational parameters of the circuit. In the literature, one can find different mathematical
ways of calculating the underlying derivatives, like simple finite-difference methods or the more advanced robust polynomial interpolation technique [15, 16]. However, if we are talking about the exact calculation of the derivative, this definition is difficult to implement on hardware. The reason is that we cannot simply "take the derivative of the gates" that realize the circuit on quantum hardware, because such derivatives are generally not unitary and hence cannot be implemented as quantum gates. That brings us to the need to realize the derivative using combinations of quantum-implementable operations.
Parameter-shift rules (PSRs) are recipes for obtaining partial derivatives by evaluating parameter-shifted instances of a variational circuit. They were originally introduced to quantum machine learning in [17, 18]. The PSRs relate the gradient of the mean value \(f\) with respect to some parameter \(t\) to evaluations of the function itself at different points:
\[\frac{\partial f(t)}{\partial t}=\sum_{x=1}^{m}b_{x}(\vec{\phi})f(t+\phi_{x}), \tag{1}\]
where the \(m\)-vector of the shift parameters is \(\vec{\phi}=\{\phi_{x}\}_{x=1}^{m}\), and \(b(\vec{\phi})=\{b_{x}(\vec{\phi})\}_{x=1}^{m}\) is a vector of coefficients. The original two-term PSR is based on the gates with two distinct eigenvalues [19, 20, 21]. Different variations of the original PSRs can be found in literature [22, 23] preserving the restriction on the amount of the Hamiltonians eigenvalues. In [24] the stochastic parameter-shift rule is introduced that, in combination with the generalized shift rule [25] allows for the differentiation of any unitary with equidistant frequencies. The strong point of these rules is the unbiased estimate of the derivative without any additional hardware costs.
However, we point out that the latter rules are restricted to evenly spaced phase shifts, and no attention is paid to the Hamiltonian's eigenvalue structure. For example, in the case of non-equidistant eigenvalues of the Hamiltonian, the latter can be close to each other, and the phase-shift rules can provide poor-quality results. That is why it is important to introduce a phase shift rule suitable for different Hamiltonians, solving the problem even in the degenerate case.
As a resource measure, we consider the number of distinct circuits that need to be evaluated to obtain all terms of a shift rule. Hence, an open question is how to select the shifting parameters. Some attempts to study different shifts are made in [26] for standard parameter-shift rules considering symmetric and distinct shifts. An experimental demonstration of practical on-chip PQC training with PSR is provided in [27]. In [28] the parameter shift rule is derived for the case of integer equidistant eigenvalues. However, to our best knowledge, no study is provided on the optimal selection of the phase shifts, and no analysis is done for Hamiltonian systems with close eigenvalues.
### Contributions of this paper
In this manuscript, we introduce a parameter shift rule method with shift selection to derive derivatives of any order and their linear combinations. Writing the unitary evolution \(e^{iHt}\) as a sum of finite powers of the Hamiltonian \(H\) (see Appendix A), we reduce the problem to solving an operator equation of the type:
\[E_{m}(\vec{\phi})b(\vec{\phi})=\mu_{m}. \tag{2}\]
Here \(E_{m}(\vec{\phi})\) is a \(m\times m\) matrix and \(\mu_{m}\) is a \(m\) size vector, both dependent on the differences between every pair of eigenvalues \(\{\lambda_{i}\}_{i=1}^{n}\) of the Hamiltonian. To optimize the shift, one needs to solve (2) depending on the differences between the eigenvalue couples.
It is known that the problem of searching for the solution of the operator equation (2) is called correct by Hadamard (well-posed) if the solution exists, is unique, and is stable. If the solution
does not satisfy at least one of these three conditions, it is ill-posed [29]. In general, finding the optimal phase shifts can be an ill-posed problem by Hadamard due to the fact that some eigenvalues may be close to each other, which will give similar differences between the different couples of eigenvalues. In the case of the well-posed problem, we provide the set of \(b(\vec{\phi})\) giving the best estimate of \(f^{\prime}(t)\) and any of its higher derivatives \(f^{(p)}(t)\) and its linear combinations. In the complicated case of the ill-posed problem, we start from the ideal case of perfectly equidistant eigenvalues (see Fig. 1 a)). In this case, the matrix \(E_{m}(\vec{\phi})\) becomes singular since some of the differences between the eigenvalues of the Hamiltonian will coincide. We show that one can reduce the dimension of the system of linear equations to make it well-posed solvable problem. The exact solution to the problem and the best set of phase shifts are provided. It is interesting to mention that this solution was intuitively introduced in [25]. However, we prove that this is the only possible solution for such a problem, hence being the optimal one. If the eigenvalues are not perfectly equidistant but slightly perturbed (see Fig. 1 b) from the equidistant positions, one can still use the provided solution.
This case relates to a realistic situation, since the eigenvalues are not estimated perfectly and their values can be corrupted by estimation errors and measurement noise. The distance of the obtained solution from the optimal one is provided and is dictated by the rate of perturbation. Afterwards, we consider the case of equidistant sets of eigenvalues (see Fig. 1 c)). The physical scenario is related to the previous case, corresponding to the case of different sets of experiments to estimate the eigenvalues. Unfortunately, if the system is far from the equidistant case, we can't use the latter results. The last case we consider is the most general one, where no structure in the position of the eigenvalues is detected (see Fig. 1 d)). We show how to solve such an ill-posed problem using the regularization method [29]. By introducing the regularization parameter that provides the possibility to find an approximate solution of (2) tending to the true one, we give a recipe to
Figure 1: Different cases of the eigenvalue structures: a) equidistant eigenvalues; b) perturbed equidistant eigenvalues; c) sets of equidistant eigenvalues; d) eigenvalues with no structure.
find the best coefficients and phase shifts numerically. Hence, our work fully covers all the cases of Hamiltonians, providing the optimal solution for the equidistant eigenvalue case and giving a tool to find ones for non-equidistant eigenvalues.
### Organization of the paper
The paper is organized as follows: In Sec. 2 we briefly recall the notion of the known parameter-shift rules. In Sec. 3 we discuss the general parameter shift rule and deduce the optimal coefficient for a well-posed problem. In Sec. 4 we study the ill-posed problem. The cases of equidistant eigenvalues of the Hamiltonian, equidistant eigenvalues except for one, slightly perturbed equidistant eigenvalues, and highly non-equidistant eigenvalues forming equidistant sets are studied in detail. In Sec. 5 the phase-shift solution for an ill-posed (by Hadamard) problem with no-structure eigenvalues is proposed. We end the main text in Sec. 6 with a discussion. Finally, in the appendix, we summarize some technical derivations.
## 2 Overview of Known Parameter-Shift Rules
Let \(\left|\psi\right\rangle\) denote the quantum state in the Hilbert space. Consider the unitary operator \(U(t)=e^{iHt}\), defined by a Hamiltonian \(H\) and a parameter \(t\). The eigenvalues of \(U(t)\) are given by \(\{\exp(i\lambda_{j}t)\}_{j\in 1}^{n}\) with real-valued \(\{\lambda_{j}\}_{j\in 1}^{n}\) and have sorted the \(j\) to be non-decreasing. We are interested in the mean value of a measurable observable \(C\) defined as follows:
\[f(t)\equiv\left\langle\psi\right|U(t)^{\dagger}CU(t)\left|\psi\right\rangle. \tag{3}\]
The expectation value \(f(t)\) can be written as a finite-term Fourier series
\[f(t)=a_{0}+\sum_{l=1}^{m}a_{l}\cos\left(\Omega_{l}t\right)+b_{l}\sin\left( \Omega_{l}t\right), \tag{4}\]
where we have \(m\) unique positive differences \(\{\Omega_{l}\}_{l=1}^{m}=\{|\lambda_{j}-\lambda_{k}|,\;j,k\in[1,n],\;\lambda_{j}>\lambda_{k}\}\).
For functions \(f(t)\) with a single frequency \(\Omega_{1}=\Omega\) (i.e., \(U\) has two eigenvalues), the derivative can be computed via the parameter-shift rule [19, 20, 21]:
\[\frac{\partial f(t)}{\partial t}\Big{|}_{t=0}=\frac{\Omega}{2\sin\left( \Omega\phi_{1}\right)}(f(\phi_{1})-f(-\phi_{1})),\quad\phi_{1}\in(0,\pi). \tag{5}\]
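The two-term rule can be checked numerically on any single-frequency expectation value; in the sketch below the Fourier coefficients and the shift are arbitrary illustrative values.

```python
import numpy as np

# Single-frequency expectation value f(t) = a0 + a1*cos(W t) + b1*sin(W t)
a0, a1, b1, W = 0.3, -0.7, 0.5, 1.8            # arbitrary illustrative values
f = lambda t: a0 + a1 * np.cos(W * t) + b1 * np.sin(W * t)

phi1 = 0.9                                     # any shift in (0, pi) with sin(W*phi1) != 0
psr = W / (2 * np.sin(W * phi1)) * (f(phi1) - f(-phi1))   # two-term rule (5)
print(psr, b1 * W)                             # both equal the analytic derivative f'(0)
```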
In [23] the latter rule is generalized to gates with eigenvalues \(\{-1,0,1\}\), which leads to \(m=2\) frequencies:
\[\frac{\partial f(t)}{\partial t}\Big{|}_{t=0}=y_{1}(f(\phi_{1})-f(-\phi_{1})) -y_{2}(f(\phi_{2})-f(-\phi_{2})),\quad\phi_{1},\phi_{2}\in(0,\pi), \tag{6}\]
and \(y_{1,2}\) are the corresponding coefficients. In [23], \(\phi_{1,2}=\pi/2\mp\pi/4\) and \(y_{1,2}=(\sqrt{2}\pm 1)/2\sqrt{2}\) is studied. On the other hand, in [30] for the same eigenvalues, the following rule is introduced:
\[\frac{\partial f(t)}{\partial t}\Big{|}_{t=0}=\frac{1}{4}(f_{+}^{+}-f_{-}^{+} +f_{+}^{-}-f_{-}^{-}), \tag{7}\]
where \(f_{\pm}^{\alpha}\) is the measured energy when replacing the gate \(U(t)\) in question by \(U(t\pm\pi/2)\exp(\mp\alpha i\pi/4\Pi_{0})\), where \(\Pi_{0}\) is the projector onto the zero-eigenspace of the generator of \(U\).
For the perturbed quantum evolution \(U_{F}(t)=\exp\left(\mathrm{i}(tH+F)\right)\) the stochastic parameter-shift rule is introduced in [24]
\[\frac{\partial f(t)}{\partial t}\Big{|}_{t=\phi_{0}}=\frac{\Omega}{2\sin\left( \Omega\phi_{1}\right)}\int\limits_{0}^{1}(f_{+}(r)-f_{-}(r))dr, \tag{8}\]
where \(f_{\pm}(r)\) is the energy measured in the state prepared by a modified circuit that splits \(U_{F}(\phi_{0})\) into \(U_{F}(r\phi_{0})\) and \(U_{F}((1-r)\phi_{0})\), and interleaves these two gates with \(U_{F=0}(\pm\phi_{0})\). These results were further developed to introduce the Nyquist shift rule in [31]. A parameter-shift rule for higher-order derivatives, based on repeatedly applying the original rule, has been proposed in [22].
In [25] the so-called general parameter shift rules are defined for the case of evenly spaced phase shifts \(\phi_{j}=(2j-1)\pi/2n\) (\(\phi_{j}=j\pi/n\)), \(j\in\overline{1,n}\) to reconstruct odd (even) functions:
\[f^{\prime}(0) = \sum_{j=1}^{2n}f\left(\frac{(2j-1)\pi}{2n}\right)\frac{(-1)^{j-1 }}{4n\sin^{2}\left(\frac{(2j-1)\pi}{4n}\right)}, \tag{9}\] \[f^{\prime\prime}(0) = -f(0)\frac{2n^{2}+1}{6}+\sum_{j=1}^{2n-1}f\left(\frac{j\pi}{n} \right)\frac{(-1)^{j-1}}{2\sin^{2}\left(\frac{j\pi}{2n}\right)},\]
The latter result coincides with (6) and (8) in parameter-shift rules for \(n=1\) and \(n=2\), respectively.
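These equidistant-shift rules are easy to verify numerically on a trigonometric polynomial with integer frequencies \(1,\dots,n\); the dimension and Fourier coefficients below are arbitrary.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
a, b = rng.normal(size=n + 1), rng.normal(size=n + 1)   # arbitrary Fourier coefficients
f = lambda t: a[0] + sum(a[l] * np.cos(l * t) + b[l] * np.sin(l * t) for l in range(1, n + 1))

# First-derivative rule with shifts phi_j = (2j-1)*pi/(2n), as in Eq. (9)
deriv = sum(
    f((2 * j - 1) * np.pi / (2 * n)) * (-1) ** (j - 1)
    / (4 * n * np.sin((2 * j - 1) * np.pi / (4 * n)) ** 2)
    for j in range(1, 2 * n + 1)
)
print(deriv, sum(l * b[l] for l in range(1, n + 1)))    # matches the analytic f'(0)
```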
The selection of phase shifts depends on various factors, including the specific problem being solved, the available resources, and the desired accuracy. Phase shifts may be chosen based on mathematical considerations or analytical insights into the problem structure. In other cases, numerical methods or optimization techniques can be employed to find optimal phase shift values that minimize errors or maximize efficiency. Further, we introduce the optimal parameter shift selection method suitable for any structure of the Hamiltonian system.
## 3 Parameter Shift Rule for a Well-Posed Problem
Any function of \(H\) can be written as a finite sum of powers of \(H\) (see Appendix A), namely
\[e^{\mathrm{i}Ht}=\sum_{k=0}^{n-1}a_{k}(t)H^{k}, \tag{10}\]
holds. Here the coefficients are \(\left|a(t)\right\rangle=\Lambda^{-1}\left|e(t)\right\rangle\), where \(\Lambda\) is the \(n\times n\) Vandermonde matrix containing the \(\vec{\lambda}\), and \(\left\langle k|e(t)\right\rangle=e^{\mathrm{i}\lambda_{k}t}\) holds. Then, we can rewrite (3) as follows:
\[f(t)=\mathrm{Tr}[|e(t)\rangle\left\langle e(t)|\,(\Lambda^{-1})^{\dagger} \tilde{C}\Lambda^{-1}], \tag{11}\]
where \((\tilde{C})_{k,l}\equiv\left\langle\psi\right|H^{k}CH^{l}\left|\psi\right\rangle\).
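The expansion (10) behind this rewriting can be checked numerically for a small random Hermitian matrix; the dimension and time value in the sketch below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 4, 0.37
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                      # Hermitian H with (generically) distinct eigenvalues
lam, U = np.linalg.eigh(H)

V = np.vander(lam, increasing=True)           # Vandermonde matrix Lambda, rows (1, lam_j, ..., lam_j^{n-1})
a = np.linalg.solve(V, np.exp(1j * lam * t))  # coefficients |a(t)> = Lambda^{-1} |e(t)>
power_series = sum(a[k] * np.linalg.matrix_power(H, k) for k in range(n))
exact = U @ np.diag(np.exp(1j * lam * t)) @ U.conj().T   # e^{iHt} via the eigendecomposition
print(np.allclose(power_series, exact))       # True: Eq. (10) holds
```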
The parameter-shift rules relate derivatives of a quantum function to evaluations of the function itself at different points. Using (11), we can rewrite the PSR (1) as follows:
\[\left|e^{\prime}(t)\right\rangle\left\langle e(t)\right|-\left|e(t)\right\rangle \left\langle e^{\prime}(t)\right|=\sum_{x=1}^{m}b_{x}(\vec{\phi})\left|e(t+ \phi_{x})\right\rangle\left\langle e(t+\phi_{x})\right|. \tag{12}\]
The latter is equivalent to solving the following system of equations
\[\mathrm{i}(\lambda_{k}-\lambda_{l})e^{\mathrm{i}(\lambda_{k}-\lambda_{l})t}= \sum_{x=1}^{m}b_{x}(\vec{\phi})e^{\mathrm{i}(\lambda_{k}-\lambda_{l})t}e^{ \mathrm{i}(\lambda_{k}-\lambda_{l})\phi_{x}},\quad\forall k,l\in\overline{1,n}, \tag{13}\]
where we used the notation \(\langle(k,l)|e(t)\rangle=e^{\mathrm{i}(\lambda_{k}-\lambda_{l})t}\). Since the latter equation must be satisfied for every \(t\), the compatibility equation reads as
\[\sum_{x=1}^{m}b_{x}(\vec{\phi})e^{\mathrm{i}(\lambda_{k}-\lambda_{l})\phi_{x}}= \mathrm{i}(\lambda_{k}-\lambda_{l}),\quad\forall k,l\in\overline{1,n}. \tag{14}\]
This system is highly nonlinear in the \(\phi\) variables but it is nevertheless linear in the \(b\)'s.
Let \(U\) and \(V\) be Hermitian metric spaces with metrics \(\rho_{U}\) and \(\rho_{V}\). The continuous one-to-one operator \(E\) from \(U\) to \(V\) corresponds to the \(m\times m\) matrix \(E_{m}\) such that its elements are \(E_{(k,l),x}=e^{\mathrm{i}\mu_{(k,l)}\phi_{x}}\), \(k,l\in\overline{1,n}\). Here we introduce the distance between two eigenvalues as \(\mu_{(k,l)}\equiv\lambda_{k}-\lambda_{l}\), \(k,l=\overline{1,n}\). The function \(\mu\) corresponds to the \(m\times 1\) vector with elements \(\mathrm{i}(\lambda_{k}-\lambda_{l})\). Hence, (14) in the operator form is
\[E(\vec{\phi})b(\vec{\phi})=\mu,\quad b\in U,\quad\mu\in V. \tag{15}\]
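The following sketch (illustrative NumPy code; the toy spectrum, observable \(C\), state \(|\psi\rangle\) and shift values are stand-ins chosen for the example) assembles \(E(\vec{\phi})\) from the pairwise eigenvalue differences, solves (15), and checks the resulting rule \(\partial_{t}f(t)=\sum_{x}b_{x}f(t+\phi_{x})\) against a finite difference.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = np.array([0.0, 0.43, 1.17])              # distinct, non-equidistant eigenvalues
n = lam.size

# Unique differences mu_(k,l) = lam_k - lam_l, including 0; m = n(n-1) + 1 of them.
mu = np.unique(np.round((lam[:, None] - lam[None, :]).ravel(), 12))
m = mu.size

phi = rng.uniform(0.3, 2.5, m)                 # generic shifts, phi_i != ±phi_j mod 2*pi
E = np.exp(1j * np.outer(mu, phi))             # E_{(k,l),x} = exp(i mu_(k,l) phi_x)
b = np.linalg.solve(E, 1j * mu)                # shift-rule coefficients, Eqs. (15)-(16)

# Toy quantum function f(t) = <psi| e^{-iHt} C e^{iHt} |psi> with this spectrum.
C = rng.normal(size=(n, n)); C = C + C.T
psi = rng.normal(size=n); psi /= np.linalg.norm(psi)

def f(t):
    u = np.exp(1j * lam * t)                   # e^{iHt} in the eigenbasis of H
    return np.real(np.conj(psi * u) @ C @ (psi * u))

t0, h = 0.31, 1e-5
psr = np.real(np.sum(b * np.array([f(t0 + p) for p in phi])))
fd = (f(t0 + h) - f(t0 - h)) / (2 * h)
print(psr, fd)                                 # the two derivative estimates agree
```

Since the rule is exact for every Fourier component \(e^{\mathrm{i}\mu_{(k,l)}t}\) of \(f\), the agreement is limited only by the finite-difference step.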
In this section, we assume that the phases are selected in such a way that the problem is well-posed in the sense of Hadamard. The corresponding constraints on the phases are discussed below. Then we can determine the vector of solutions of (15) as follows:
\[b(\vec{\phi})=E^{-1}(\vec{\phi})\mu. \tag{16}\]
Using Cramer's rule, the closed-form expression for every element \(b_{x}(\vec{\phi})\) can be written as
\[b_{x}(\vec{\phi})=\det E(\vec{\phi}/\phi_{x})\cdot(\det E(\vec{\phi}))^{-1}, \tag{17}\]
where
\[E(\vec{\phi}/\phi_{x})=\begin{bmatrix}1&1&\dots&0&\dots&1\\ e^{i\mu_{(1,2)}\phi_{1}}&e^{i\mu_{(1,2)}\phi_{2}}&\dots&i\mu_{(1,2)}&\dots&e^{ i\mu_{(1,2)}\phi_{m}}\\ e^{-i\mu_{(1,2)}\phi_{1}}&e^{-i\mu_{(1,2)}\phi_{2}}&\dots&-i\mu_{(1,2)}&\dots&e^ {-i\mu_{(1,2)}\phi_{m}}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ e^{i\mu_{(n-1,n)}\phi_{1}}&e^{i\mu_{(n-1,n)}\phi_{2}}&\dots&i\mu_{(n-1,n)}& \dots&e^{i\mu_{(n-1,n)}\phi_{m}}\\ e^{-i\mu_{(n-1,n)}\phi_{1}}&e^{-i\mu_{(n-1,n)}\phi_{2}}&\dots&-i\mu_{(n-1,n) }&\dots&e^{-i\mu_{(n-1,n)}\phi_{m}}\end{bmatrix}\]
is the matrix \(E(\vec{\phi})\) with the \(x\)-th column substituted by the \(\mu\) vector, so it does not depend on \(\phi_{x}\). The determinants are nonzero if the matrix contains no equal columns or rows and no all-zero columns or rows. From this we conclude that \(\phi_{i}\neq\pm\phi_{j}+2\pi c\), \(c\in\mathbb{Z}\), and that all distances \(\mu_{(k,l)}\) must be different. This means that the problem with \(n\) equidistant eigenvalues is ill-posed in the sense of Hadamard. The solution to this problem will be provided in the next section.
Using Jacobi's formula, we get the alternative expression
\[b_{x}(\vec{\phi})=\frac{\partial\mathrm{det}\,E(\vec{\phi})}{\partial\phi_{x} }|_{\phi_{x}=0}\cdot(\det E(\vec{\phi}))^{-1}. \tag{18}\]
The solution is exact when \(m=n(n-1)+1\) holds. This number is obtained by counting every distance \(\mu_{(k,l)}\) with \(k\neq l\); the \(k=l\) cases all yield the same additional equation \(\sum_{x}b_{x}(\vec{\phi})=0\).
We can write \(b_{x}(\vec{\phi})\) as follows:
\[b_{x}(\vec{\phi})=\frac{\begin{vmatrix}|&&|&&|\\ \vec{v}(\phi_{0})&\dots&\frac{\partial\vec{v}(\phi_{x})}{\partial\phi_{x}}|_{ \phi_{x}=0}&\dots&\vec{v}(\phi_{m})\\ |&&|&&|\\ \hline|&&|&&|\\ \vec{v}(\phi_{0})&\dots&\vec{v}(\phi_{x})&\dots&\vec{v}(\phi_{m})\\ |&&|&&|\\ \end{vmatrix}},\quad x\in\overline{1,m}, \tag{19}\]
where the vectors forming the matrix \(E(\vec{\phi})\) are denoted as
\[\vec{v}(\phi_{i})=\begin{pmatrix}1&\exp(\mathrm{i}\mu_{(12)}\phi_{i})&\exp(- \mathrm{i}\mu_{(12)}\phi_{i})&\cdots&\exp(\mathrm{i}\mu_{(n-1,n)}\phi_{i})&\exp( -\mathrm{i}\mu_{(n-1,n)}\phi_{i})\end{pmatrix}, \tag{20}\]
This automatically yields a PSR for the derivatives of arbitrary order without increasing the number of function evaluations. The following Theorem holds:
**Theorem 3.1**.: _Let \(n\) be the number of distinct, not equidistant, eigenvalues of \(H\). Let \(\vec{\phi}\in\mathbb{R}^{m}\) with \(m=n(n-1)+1\) and \(\phi_{i}\neq\pm\phi_{j}+2\pi c\), \(c\in Z\), \(\forall i,j\in\overline{1,m}\). Then, the following parameter shift rule_
\[\frac{\partial^{p}f(t)}{\partial t^{p}}=\sum_{x=0}^{m-1}b_{x}^{(p)}(\vec{\phi} )f(t+\phi_{x}),\quad p\geq 1, \tag{21}\]
_holds, if, and only if, the vector \(b_{x}(\vec{\phi})\) satisfies_
\[b_{x}^{(p)}(\vec{\phi})=\begin{vmatrix}|&\big{|}&&|\\ \vec{v}(\phi_{0})&\cdots&\frac{\partial^{p}\vec{v}(\phi_{x})}{\partial\phi_{x }^{p}}|_{\phi_{x}=0}&\cdots&\vec{v}(\phi_{m})\\ |&&|&&|\\ \hline&\big{|}&&|\\ \vec{v}(\phi_{0})&\cdots&\vec{v}(\phi_{x})&\cdots&\vec{v}(\phi_{m})\\ |&&|&&|\end{vmatrix}. \tag{22}\]
_Proof:_ Left as an exercise.
The latter statement can be generalized. In particular, any linear combination of high-order derivatives can be expressed similarly as follows:
\[\sum_{i}a_{i}f^{(i)}(t)=\sum_{x=0}^{m-1}\tilde{b}_{x}(\vec{\phi})f (t+\phi_{x}),\quad\text{where} \tag{23}\] \[\tilde{b}_{x}(\vec{\phi})=\frac{\left|\begin{array}{ccccc}|&|&| &|\\ \vec{v}(\phi_{0})&\cdots&\sum_{i}a_{i}\frac{\partial^{i}\vec{v}(\phi_{x})|_{ \phi_{x}=0}}{\partial\phi_{x}^{i}}&\cdots&\vec{v}(\phi_{m})\\ |&&|&&|\\ \hline&|&|&|&|\\ &\vec{v}(\phi_{0})&\cdots&\vec{v}(\phi_{x})&\cdots&\vec{v}(\phi_{m})\\ |&&|&&|\end{array}\right|},\]
holds. Even though this does not generalize to an arbitrary algebraic expression \(F=F(f,f^{\prime},f^{\prime\prime},\dots)\) involving non-linear terms, we can always rewrite such an expression as products of functions we can compute.
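Via Cramer's rule, the determinant expressions (22)-(23) amount to solving the same linear system with the right-hand side \(\mathrm{i}\mu\) replaced by \((\mathrm{i}\mu)^{p}\), or by the corresponding linear combination. The self-contained sketch below (same toy spectrum as before; all names are illustrative) obtains second-derivative and mixed coefficients in this way.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = np.array([0.0, 0.43, 1.17])                     # toy non-equidistant spectrum
mu = np.unique(np.round((lam[:, None] - lam[None, :]).ravel(), 12))
m = mu.size
phi = rng.uniform(0.3, 2.5, m)
E = np.exp(1j * np.outer(mu, phi))

b1 = np.linalg.solve(E, (1j * mu))                    # first-derivative coefficients
b2 = np.linalg.solve(E, (1j * mu) ** 2)               # second-derivative coefficients
b_mix = np.linalg.solve(E, 2 * (1j * mu) - (1j * mu) ** 2)   # coefficients for 2f' - f''

C = rng.normal(size=(3, 3)); C = C + C.T
psi = rng.normal(size=3); psi /= np.linalg.norm(psi)

def f(t):
    u = np.exp(1j * lam * t)
    return np.real(np.conj(psi * u) @ C @ (psi * u))

t0, h = 0.2, 1e-4
vals = np.array([f(t0 + p) for p in phi])
d1 = (f(t0 + h) - f(t0 - h)) / (2 * h)
d2 = (f(t0 + h) - 2 * f(t0) + f(t0 - h)) / h**2
print(np.real(b1 @ vals), d1)                         # f'(t0)
print(np.real(b2 @ vals), d2)                         # f''(t0)
print(np.real(b_mix @ vals), 2 * d1 - d2)             # 2 f'(t0) - f''(t0)
```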
Theorem 3.1 works for any shift vector \(\vec{\phi}\) such that the problem (16) is well-posed. The variance of the estimate of the derivative \(\widehat{\frac{\partial f(t,\vec{\phi})}{\partial t}}\) is
\[\sigma^{2}\left(\widehat{\frac{\partial f(t,\vec{\phi})}{\partial t}}\right)= \sum_{x=1}^{m}b_{x}^{2}(\vec{\phi})\sigma^{2}(\hat{f}(t+\phi_{x})). \tag{24}\]
Chebyshev's inequality can be written as
\[\mathbb{P}\left(\left|\frac{\partial f(t)}{\partial t}-\widehat{\frac{ \partial f(t,\vec{\phi})}{\partial t}}\right|\geq\nu\right)\leqslant\frac{1} {\nu^{2}}\sum_{x=1}^{m}b_{x}^{2}(\vec{\phi})\sigma^{2}(\hat{f}(t+\phi_{x})), \tag{25}\]
where \(\nu>0\) is a real number. For the probability \(\eta\in(0,1)\), we get
\[\nu=\left(\frac{1}{\eta}\sum_{x=1}^{m}b_{x}^{2}(\vec{\phi})\sigma^{2}(\hat{f}(t+ \phi_{x}))\right)^{\frac{1}{2}}. \tag{26}\]
Then, with probability at least \(1-\eta\), the confidence interval for the estimate of the derivative is
\[\frac{\partial f(t)}{\partial t}-\nu\leq\frac{\widehat{\partial f (t,\vec{\phi})}}{\partial t}\leq\frac{\partial f(t)}{\partial t}+\nu. \tag{27}\]
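A small practical sketch of (24)-(27) (the coefficients, ideal energies and shot-noise variances are placeholder numbers assumed for the illustration): the Chebyshev radius \(\nu\) is computed for a chosen \(\eta\) and compared against the empirical spread of the noisy estimator.

```python
import numpy as np

rng = np.random.default_rng(4)
b = np.array([0.8, -0.5, 0.3, -0.6])       # shift-rule coefficients (placeholders)
f_true = np.array([0.1, 0.4, -0.2, 0.7])   # ideal energies f(t + phi_x) (placeholders)
sigma2 = np.full(4, 0.01)                  # per-point variances sigma^2(f_hat(t + phi_x))

var_est = np.sum(b**2 * sigma2)            # Eq. (24): variance of the derivative estimate
eta = 0.05
nu = np.sqrt(var_est / eta)                # Eq. (26): Chebyshev radius

# Empirical check: the fraction of runs outside the interval stays well below eta.
runs = 20000
noisy = f_true + rng.normal(scale=np.sqrt(sigma2), size=(runs, 4))
estimates = noisy @ b
outside = np.mean(np.abs(estimates - f_true @ b) > nu)
print(var_est, nu, outside)
```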
We take the derivative of the variance with respect to \(\phi_{y}\in\vec{\phi}\), \(y=\overline{1,m}\). Setting it to zero, we get the condition
\[2\sum_{x=1}^{m}b_{x}(\vec{\phi})\frac{\partial b_{x}(\vec{\phi}) }{\partial\phi_{y}}\sigma^{2}(\hat{f}(t+\phi_{x}))=-b_{y}^{2}(\vec{\phi}) \frac{\partial\sigma^{2}(\hat{f}(t+\phi_{y}))}{\partial\phi_{y}}. \tag{28}\]
We can assume that the variances \(\sigma^{2}(\hat{f}(t+\phi_{y}))\) are not dependent on the phase and are equal. Then we get
\[\sum_{x=1}^{m}b_{x}(\vec{\phi})\frac{\partial b_{x}(\vec{\phi}) }{\partial\phi_{y}}=\mathbf{0},\quad\forall y\in\overline{1,m}. \tag{29}\]
We can rewrite (29) as (see Appendix D)
\[\sum_{x=1}^{m}\det(E(\vec{\phi}/\phi_{x}))\det(E_{y}(\vec{\phi}/ \phi_{x}))=(\det E(\vec{\phi}))^{-1}\det(E_{y}(\vec{\phi}))\sum_{x=1}^{m} \left(\det(E(\vec{\phi}/\phi_{x}))\right)^{2}. \tag{30}\]
Solving the latter system of equations with respect to all \(\phi_{y}\), \(y\in\overline{1,m}\), one can find the optimal \(\vec{\phi}\) minimizing the variance (24).
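Under the equal-variance assumption, minimizing (24) reduces to minimizing \(\sum_{x}b_{x}^{2}(\vec{\phi})\). Instead of solving the stationarity system (29)-(30) in closed form, one can also search for the shifts numerically; the sketch below (a toy spectrum and SciPy's derivative-free Nelder-Mead optimizer, used purely for illustration) demonstrates this route.

```python
import numpy as np
from scipy.optimize import minimize

lam = np.array([0.0, 0.43, 1.17])                   # toy non-equidistant spectrum
mu = np.unique(np.round((lam[:, None] - lam[None, :]).ravel(), 12))
m = mu.size

def objective(phi):
    E = np.exp(1j * np.outer(mu, phi))
    b, *_ = np.linalg.lstsq(E, 1j * mu, rcond=None)
    residual = np.linalg.norm(E @ b - 1j * mu)      # nonzero only near a singular E
    return np.sum(np.abs(b) ** 2) + 1e6 * residual  # ||b||^2 is the variance proxy (24)

rng = np.random.default_rng(5)
phi0 = rng.uniform(0.3, 2.5, m)
res = minimize(objective, phi0, method="Nelder-Mead",
               options={"maxiter": 5000, "fatol": 1e-10})
print(objective(phi0), res.fun)                     # variance proxy before / after
print(res.x)                                        # locally optimal shifts
```

The residual penalty keeps the search away from singular shift configurations, so the optimizer only compares valid shift rules.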
## 4 Optimal Phase Shift Parameter Selection for an Ill-Posed Problem
In this section, we assume that the problem (15) is ill-posed. In this case, the solution (16) is unstable.
Let us first look for a set of phase shifts such that the vectors forming the matrix \(E_{m}\)
\[\vec{v}(\phi_{i})=\begin{pmatrix}1&\exp(\mathrm{i}\mu_{(12)}\phi_ {i})&\exp(-\mathrm{i}\mu_{(12)}\phi_{i})&\cdots&\exp(\mathrm{i}\mu_{(n-1,n)} \phi_{i})&\exp(-\mathrm{i}\mu_{(n-1,n)}\phi_{i})\end{pmatrix}, \tag{31}\]
would be orthogonal to each other. Here we use that \(\mu_{(k,l)}=-\mu_{(l,k)}\) holds, so we use the notation \(\mu_{(t,p)}\), \(t<p\), \(\forall t,p\in\overline{1,n}\). In this case, the inverse of \(E_{m}\) is equal to its Hermitian conjugate.
We impose the orthogonality condition on the columns of \(E_{m}\), namely \(\vec{v}^{*}(\phi_{j})\vec{v}(\phi_{i})=0\). Then we get
\[1+2\sum_{t,p=1,t<p}^{n}\cos\left(\mu_{(t,p)}(\phi_{i}-\phi_{j}) \right)=0,\quad\forall\phi_{i},\phi_{j}\in\vec{\phi}. \tag{32}\]
Let us denote the difference between phase shifts as \(\Phi_{ij}\equiv\phi_{i}-\phi_{j}\) and rewrite the latter condition as follows
\[1+2\sum_{p=2}^{n}\cos\left(\mu_{(1,p)}\Phi_{ij}\right)+2\sum_{p =3}^{n}\cos\left(\mu_{(2,p)}\Phi_{ij}\right)+\ldots \tag{33}\] \[\ldots +2\sum_{p=m-1}^{n}\cos\left(\mu_{(n-2,p)}\Phi_{ij}\right)+2\cos \left(\mu_{(n-1,n)}\Phi_{ij}\right)=0.\]
To solve the latter equation, we need to make some assumptions on the eigenvalue distances \(\mu_{t,p}\). Below, we first discuss the equidistant Hamiltonian eigenvalues case, moving on to the perturbed case in the following subsection.
#### 4.0.1 Equidistant Eigenvalues
Let us assume that all eigenvalues \(\{\lambda_{i}\}_{i=1}^{n}\) are equidistant (see Fig.2 a) and denote the distance between two neighboring eigenvalues as \(\Delta\). One can see that \(\mu_{(1,2)}=1\Delta\), \(\mu_{(1,3)}=2\Delta\) and \(\mu_{(1,n)}=(n-1)\Delta\). Similarly, \(\mu_{(2,4)}=2\Delta\) and \(\mu_{(2,n)}=(n-2)\Delta\), hold. So, we can conclude that \(\mu_{(t,p)}=(p-t)\Delta\), \(t<p\). One can see that in this case some rows of the matrix \(E_{m}\) will coincide and it will become singular. The problem (15) is ill-posed.
Since we are interested in the inversion of the matrix \(E_{m}\), we exclude all the coinciding rows, reducing the matrix \(E_{m}\) to a smaller matrix \(E_{2n-1}\), which is non-singular:
\[E_{2n-1}(\vec{\phi})=\begin{bmatrix}1&1&\cdots&1\\ e^{\mathrm{i}1\Delta\phi_{1}}&e^{\mathrm{i}1\Delta\phi_{2}}&\cdots&e^{\mathrm{ i}1\Delta\phi_{2n-1}}\\ e^{-\mathrm{i}1\Delta\phi_{1}}&e^{-\mathrm{i}1\Delta\phi_{2}}&\cdot&e^{- \mathrm{i}1\Delta\phi_{2n-1}}\\ \vdots&\vdots&\ddots&\vdots\\ e^{-\mathrm{i}(n-1)\Delta\phi_{1}}&e^{-\mathrm{i}(n-1)\Delta\phi_{2}}&\cdots &e^{-\mathrm{i}(n-1)\Delta\phi_{2n-1}}\end{bmatrix},\ \ \vec{\mu}_{2n-1}=\mathrm{i}\Delta \begin{bmatrix}0\\ 1\\ -1\\ \vdots\\ -(n-1)\end{bmatrix}. \tag{34}\]
Here \(\vec{\mu}_{2n-1}\) is a vector of all unique distances \(\mu_{(1,i)}\), \(i=\overline{1,n}\). In this case the condition (33) can be reduced to the following one
\[1+2\sum_{k=1}^{n-1}\cos\left(k\Delta\Phi_{ij}\right)=0. \tag{35}\]
The Dirichlet kernel is defined as follows
\[D_{n}(x)=1+2\sum_{k=1}^{n}\cos\left(kx\right)=\frac{\sin\left( \left(n+\frac{1}{2}\right)x\right)}{\sin\left(\frac{1}{2}x\right)}, \tag{36}\]
where its zeros are at the points \(x_{t}=\frac{2\pi t}{2n+1}\), \(t\in\mathbb{Z}\). Hence, the condition (35) can be rewritten as
\[D_{n-1}(\Delta\Phi_{ij})=0, \tag{37}\]
and the solution is given by
\[\Delta\Phi_{ij}=\frac{2\pi t_{ij}}{2n-1},\quad t_{ij}\in\mathbb{ Z}. \tag{38}\]
Finally, we have a system of equations
\[\phi_{i}-\phi_{j}=\frac{2\pi}{(2n-1)\Delta}t_{ij},\quad t_{ij}\in Z,\quad\forall i,j=[1,2n-1],\quad i<j. \tag{39}\]
To solve the latter system of equations we first consider the case of equidistant phase-shifts \(\phi_{j}\), \(\forall j\).
From (39) we conclude:
\[\phi_{j}-\phi_{j+1}=\frac{2\pi}{(2n-1)\Delta},\quad\forall j=[1,2 n-2],\quad t_{j,j+1}=1. \tag{40}\]
It is straightforward to verify that the solution of the latter system in the equidistant-phase case is given by
\[\phi_{j}=-\frac{2\pi j}{(2n-1)\Delta},\quad\forall j=[1,2n-1]. \tag{41}\]
One can see that if \(j=2n-1\) holds, then \(\phi_{2n-1}=-2\pi/\Delta\), and \(\exp(\mathrm{i}k\Delta\phi_{2n-1})=1\) for every integer \(k\). Then the matrix (34) reduces to
\[E_{2n-1}=\begin{bmatrix}1&1&1&\ldots&1\\ e^{-i\tau}&e^{-2i\tau}&e^{-3i\tau}&\ldots&1\\ e^{i\tau}&e^{2i\tau}&e^{3i\tau}&\ldots&1\\ e^{-2i\tau}&e^{-4i\tau}&e^{-6i\tau}&\ldots&1\\ e^{2i\tau}&e^{4i\tau}&e^{6i\tau}&\ldots&1\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ e^{-(n-1)i\tau}&e^{-2(n-1)i\tau}&e^{-3(n-1)i\tau}&\ldots&1\\ e^{(n-1)i\tau}&e^{2(n-1)i\tau}&e^{3(n-1)i\tau}&\ldots&1\\ \end{bmatrix}, \tag{42}\]
where we used the notation \(\tau=\frac{2\pi}{(2n-1)}\). Let us normalise the latter matrix, introducing
\[\tilde{E}_{2n-1}\equiv E_{2n-1}/\sqrt{2n-1}. \tag{43}\]
This matrix is unitary since its rows and columns are orthonormal. Then its inverse matrix is \(\tilde{E}_{2n-1}^{\dagger}\) and the solution of our problem (16) is the following
\[b_{2n-1}(\vec{\phi})=\tilde{E}_{2n-1}^{\dagger}\vec{\tilde{\mu}}_{2n-1},\quad\vec{\tilde{\mu}}_{2n-1}\equiv\frac{1}{\sqrt{2n-1}}\vec{\mu}_{2n-1}. \tag{44}\]
Since we know the form of the matrix (42) explicitly, we can write the solution:
\[b_{2n-1}(\tau,\Delta)=-\frac{2\Delta}{2n-1}\begin{bmatrix}\sum\limits_{j=0}^{n -1}2^{j}\sin\left((j+1)\tau\right)\\ \sum\limits_{j=0}^{n-1}2^{j}\sin\left((j+1)2\tau\right)\\ \sum\limits_{j=0}^{n-1}2^{j}\sin\left((j+1)3\tau\right)\\ \vdots\\ \sum\limits_{j=0}^{n-1}2^{j}\sin\left((j+1)(2n-2)\tau\right)\\ 0\end{bmatrix}. \tag{45}\]
Hence, we found the explicit solution to our problem in the case of equidistant eigenvalues and phase shifts. However, is it possible to solve the system (39) without imposing the latter constraint? The general solution is provided in Appendix B. However, due to the periodicity of the complex exponential, the matrix \(E_{2n-1}\) obtained from this general solution becomes singular in all cases except the equidistant-phase one.
We can conclude that in the case of equidistant eigenvalues, the number of unique distances between them reduces to \(2n-1\) and only the equidistant phase shifts given by (41) guarantee non-singularity of \(E_{2n-1}\). In this case, the system (15) has a unique solution (45). To find the function derivative, one needs \(2n-2\) phase shifts, where \(n\) is the number of eigenvalues.
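For concreteness, the sketch below (an illustrative NumPy check with arbitrary \(n\) and \(\Delta\)) builds \(E_{2n-1}\) with the equidistant phases (41), confirms that \(\tilde{E}_{2n-1}\) is unitary, computes \(b\) from (44), and verifies the resulting rule on a toy Hamiltonian with an equidistant spectrum.

```python
import numpy as np

rng = np.random.default_rng(6)
n, Delta = 4, 0.9
ks = np.concatenate([[0], np.ravel([(k, -k) for k in range(1, n)])])  # 0, 1, -1, ..., -(n-1)

phi = -2 * np.pi * np.arange(1, 2 * n) / ((2 * n - 1) * Delta)        # equidistant phases (41)
E = np.exp(1j * Delta * np.outer(ks, phi))
E_tilde = E / np.sqrt(2 * n - 1)
print(np.max(np.abs(E_tilde @ E_tilde.conj().T - np.eye(2 * n - 1))))  # ~ machine precision

mu = 1j * Delta * ks
b = E_tilde.conj().T @ (mu / np.sqrt(2 * n - 1))                       # solution (44)

# Verify on a toy Hamiltonian with an equidistant spectrum.
lam = 0.3 + Delta * np.arange(n)
C = rng.normal(size=(n, n)); C = C + C.T
psi = rng.normal(size=n); psi /= np.linalg.norm(psi)

def f(t):
    u = np.exp(1j * lam * t)
    return np.real(np.conj(psi * u) @ C @ (psi * u))

t0, h = 0.5, 1e-5
psr = np.real(np.sum(b * np.array([f(t0 + p) for p in phi])))
fd = (f(t0 + h) - f(t0 - h)) / (2 * h)
print(psr, fd)                                                         # agree
```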
#### 4.0.2 Equidistant Eigenvalues Except One
Let us assume that all eigenvalues \(\{\lambda_{i}\}_{i=2}^{n}\) are equidistant with the distance between every neighboring one denoted by \(\Delta\), namely \(\mu_{(2,3)}=1\Delta\), \(\mu_{(2,4)}=2\Delta\) and \(\mu_{(2,n)}=(n-2)\Delta\). The first
one is distant from all the others, where \(\mu_{(1,2)}=\Delta_{1}\), \(\mu_{(1,3)}=\Delta_{1}+\Delta\), \(\mu_{(1,4)}=\Delta_{1}+2\Delta\) and \(\mu_{(1,n)}=\Delta_{1}+(n-2)\Delta\). Then (33) reduces to
\[1+2\cos\left(\Delta_{1}\Phi_{ij}\right)+2\sum_{k=1}^{n-2}\left[\cos\left(k \Delta\Phi_{ij}\right)+\cos\left((\Delta_{1}+k\Delta)\Phi_{ij}\right)\right] =0. \tag{46}\]
It can be rewritten as
\[\frac{1}{2}+\cos\left(\Delta_{1}\Phi_{ij}\right)+\sum_{k=1}^{n-2}\left[\cos \left(k\Delta\Phi_{ij}\right)(1+\cos\left(\Delta_{1}\Phi_{ij}\right))-\sin \left(\Delta_{1}\Phi_{ij}\right)\sin\left(k\Delta\Phi_{ij}\right)\right]=0. \tag{47}\]
According to the definition of the Dirichlet and the conjugate Dirichlet kernels the latter expression can be rewritten as
\[\left(1+\cos\left(\Delta_{1}\Phi_{ij}\right)\right)D_{n-2}\left(\Delta\Phi_{ ij}\right)+\frac{1}{2}\cos\left(\Delta_{1}\Phi_{ij}\right)=\sin\left(\Delta_{1} \Phi_{ij}\right)\tilde{D}_{n-2}\left(\Delta\Phi_{ij}\right). \tag{48}\]
The general solution to the latter expression can be found and has the form \(\Delta_{1}\Phi_{ij}=f(n,\Delta\Phi_{ij})\), where \(f(\cdot)\) is a combination of trigonometric functions. One can see that in this case \(\Delta_{1}\) is different for every \(\Phi_{ij}\); however, the distance must be the same for all \(i,j\) pairs of phases.
For example, one of the solutions to the latter equation is
\[\Phi_{ij}=\frac{2\pi t_{ij}}{(2n-3)\Delta},\quad t_{ij}\in\mathbb{ Z}, \tag{49}\] \[\Delta_{1}=\frac{(2n-3)}{2t_{ij}}(\cot^{-1}\left(\frac{1}{2}\left( \cos\left(\frac{2\pi t_{ij}}{3-2n}\right)+(-1)^{t_{ij}+1}\right)\csc\left( \frac{\pi t_{ij}}{2n-3}\right)\right)+\pi c)\Delta,\quad c\in\mathbb{Z},\]
meaning equidistant phases, with the distance of the outlying eigenvalue scaling with \(n>2\). However, it is not possible to find all the phase shifts, since \(\Delta_{1}\) depends on \(t_{ij}\), which is different for every \(\Phi_{ij}\). This contradicts the fact that \(\Delta_{1}\) must be constant.
We can conclude that in the case of all equidistant eigenvalues except one, the orthogonality condition on the vectors forming the matrix \(E_{m}\) is not fulfilled. That means that the orthogonality property is a specific feature of equidistant eigenvalue systems.
#### 4.0.3 Slightly Perturbed Equidistant Eigenvalues
The equidistant-eigenvalue case is a theoretical idealization that does not occur exactly in realistic scenarios. However, the eigenvalues can be close to the ideal equidistant positions. This case can be treated using perturbation theory [32].
Let us perturb the equidistant system \(\tilde{E}_{2n-1}(\vec{\phi})b(\vec{\phi})=\vec{\tilde{\mu}}_{2n-1}\), namely
\[(\tilde{E}_{2n-1}(\vec{\phi})+\varepsilon\tilde{R}_{2n-1}(\vec{\phi}))b( \varepsilon,\vec{\phi})=\vec{\tilde{\mu}}_{2n-1}+\varepsilon\tilde{r}_{2n-1}. \tag{50}\]
This corresponds to the case when the eigenvalues of the Hamiltonian are not ideally equidistant but slightly shifted from equidistant positions. In Appendix C we deduce the perturbation matrices to be
\[\tilde{R}_{2n-1}\equiv\frac{R_{2n-1}}{\sqrt{2n-1}},\quad\tilde{r }_{2n-1}\equiv\frac{\mathrm{i}I_{2n-1}}{\sqrt{2n-1}}, \tag{51}\] \[R_{2n-1}=\frac{\mathrm{i}\tau}{\Delta}\begin{bmatrix}0&0&\ldots& 0\\ e^{-\mathrm{i}\tau}&2e^{-2\mathrm{i}\tau}&\ldots&(2n-1)\\ -e^{\mathrm{i}\tau}&-2e^{\mathrm{i}\tau}&\cdot&-(2n-1)\\ \vdots&\vdots&\ddots&\vdots\\ -e^{\mathrm{i}(n-1)\tau}&-2e^{\mathrm{i}(n-1)\tau}&\ldots&-(2n-1)\end{bmatrix}.\]
Here \(\varepsilon>0\) is a perturbation parameter. For a nonsingular matrix \(E_{2n-1}\) the perturbed matrix \(E_{2n-1}+\varepsilon R_{2n-1}\) is also nonsingular if the perturbation \(\varepsilon R\) is sufficiently small. Further in this subsection, we omit the \(\vec{\phi}\) in the brackets, \(2n-1\) subscripts and the \(\sim\) superscript.
Differentiating by \(\varepsilon\) (we suppose that this derivative exists), one can derive
\[E\dot{b}(\varepsilon)+Rb(\varepsilon)+\varepsilon R\dot{b}(\varepsilon)=r. \tag{52}\]
Then for \(\varepsilon=0\) we get the following expression
\[E\dot{b}(0)+Rb(0)=r\longrightarrow\dot{b}(0)=E^{-1}(r-Rb(0)). \tag{53}\]
Note that \(b(0)\) is the solution (45) of the unperturbed problem. Using the Taylor expansion
\[b(\varepsilon)=b(0)+\varepsilon\dot{b}(0)+o(\varepsilon), \tag{54}\]
we can write
\[\frac{\|b(\varepsilon)-b(0)\|}{\|b(0)\|}=\epsilon\frac{\|E^{-1}( r-Rb(0))\|}{\|b(0)\|}+o(\varepsilon)\leq\|E^{-1}\|\left(\frac{\|\varepsilon r \|}{\|b(0)\|}+\|\varepsilon R\|\right)+o(\varepsilon) \tag{55}\] \[= \|E^{-1}\|\|E\|\left(\frac{\|\varepsilon r\|}{\|Eb(0)\|}+\frac{ \|\varepsilon R\|}{\|E\|}\right)+o(\epsilon)\leq k(E)\left(\frac{\|\varepsilon r \|}{\|\vec{\mu}\|}+\frac{\|\varepsilon R\|}{\|E\|}\right)+o(\varepsilon),\]
where \(k(E)\equiv\|E^{-1}\|\|E\|\geq 1\) is the condition number. An ill-conditioned system is one with a large condition number. If the system is ill-conditioned, then a small perturbation to the RHS can lead to large changes in the solution. When \(k(E)\) is large, this implies that \(b(\varepsilon)\) can be very far from \(b(0)\).
The distance between the solutions is given by
\[\|b(\varepsilon)-b(0)\|\approx\|E^{-1}\|\left(\|\varepsilon r\|+\| \varepsilon R\|\right)\|b(0)\|. \tag{56}\]
In the case of equidistant eigenvalues and phases, which we discussed in the previous section, the matrix \(\tilde{E}_{2n-1}\) is unitary. The norm of the unitary matrix is equal to one, and we can write
\[\|b(\varepsilon)-b(0)\|\approx\varepsilon\left(\|\tilde{r}_{2n-1}\|+\|\tilde{ R}_{2n-1}\|\right)\|b(0)\|. \tag{57}\]
**Example 4.1**.: _Let us calculate the latter distance for the case of \(l_{2}\) norm. By definition_
\[\|R\|_{2}=\sup_{x\in\mathbb{R}^{2n-1}\ \{0\}}\frac{\|Rx\|_{2}}{\|x\|_{2}},\quad\|x\|_{2}=\sqrt{\sum_{i=1}^{2n-1}x_{i}^{2}}\geq\frac{1}{\sqrt{2n-1}}\sum _{i=1}^{2n-1}|x_{i}|, \tag{58}\]
_hold. We can write_
\[\|\tilde{r}\|_{2}=\sqrt{\sum_{i=1}^{2n-1}|\tilde{r}_{i}|^{2}}=1. \tag{59}\]
_Let us introduce a constant \(\gamma_{0}>0\) such that the columns \(R_{i}\in\mathbb{C}^{(2n-1)\times 1}\) of the matrix \(R_{2n-1}=[R_{1},R_{2},\ldots,R_{2n-1}]\in\mathbb{C}^{(2n-1)\times(2n-1)}\) are bounded in norm as \(\|R_{i}\|_{2}\leq\gamma_{0}\). The constant satisfies_
\[\gamma_{0}\leq\sqrt{2n-1}R_{max},\quad R_{max}=\sqrt{(\max_{j}(R_{ i}(x))_{j})^{2}},\quad\forall i\in\overline{1,2n-1}. \tag{60}\]
_Then we can write_
\[\|Rx\|_{2}=\Bigg{\|}\sum_{i=1}^{2n-1}x_{i}R_{i}\Bigg{\|}_{2}\leq\sum_{i=1}^{2n-1}|x _{i}|\|R_{i}\|_{2}\leq\gamma_{0}\sum_{i=1}^{2n-1}|x_{i}|. \tag{61}\]
_Substituting it in (58), we get_
\[\|R\|_{2}\leq\gamma_{0}\sqrt{2n-1}\leq(2n-1)R_{max}. \tag{62}\]
_Then_
\[\|\tilde{R}\|_{2}\leq\sqrt{2n-1}R_{max}. \tag{63}\]
_The \(l_{2}\) norm of (45) is_
\[\|b_{2n-1}(0,t,\Delta)\|_{2}=\frac{2\Delta}{\sqrt{2n-1}}\sqrt{\sum_{k=1}^{2n- 2}\left(\sum_{j=0}^{n-1}2j\sin\left((j+1)kt\right)\right)^{2}} \tag{64}\]
_We can upper bound it as_
\[\|b_{2n-1}(0,t,\Delta)\|_{2}\leq\frac{4(n-1)(2^{n}-1)^{2}\Delta}{\sqrt{2n-1}}. \tag{65}\]
_Then (57) can be bounded by_
\[\|b(\varepsilon,t,\Delta)-b(0,t,\Delta)\|_{2}\leq 4\varepsilon\Delta\left(1+ \sqrt{2n-1}R_{max}\right)\frac{(n-1)(2^{n}-1)^{2}}{\sqrt{2n-1}}. \tag{66}\]
_Then, for example, one can select_
\[\varepsilon\approx(4\Delta n(n-1)(2^{n}-1)^{2})^{-1} \tag{67}\]
_and (66) tends to zero while \(n\rightarrow\infty\)._
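Complementing the analytic bounds above, a small numerical check (with illustrative spectra only) shows how the condition number \(k(E)\) controls the sensitivity of the coefficients: for well-separated eigenvalue differences a small right-hand-side perturbation barely changes \(b\), whereas for almost-coinciding differences it is strongly amplified.

```python
import numpy as np

rng = np.random.default_rng(7)
for lam in (np.array([0.0, 0.43, 1.17]),      # well-separated differences
            np.array([0.0, 0.50, 1.0001])):   # two differences almost coincide
    mu = np.unique(np.round((lam[:, None] - lam[None, :]).ravel(), 12))
    phi = rng.uniform(0.3, 2.5, mu.size)
    E = np.exp(1j * np.outer(mu, phi))
    b = np.linalg.solve(E, 1j * mu)
    b_pert = np.linalg.solve(E, 1j * mu + 1e-6 * rng.normal(size=mu.size))
    print(np.linalg.cond(E),                  # condition number k(E)
          np.linalg.norm(b_pert - b) / np.linalg.norm(b))
```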
#### 4.0.4 Eigenvalues Forming Equidistant Sets
If we have \(k\) realizations of the Hamiltonian, the sets of eigenvalues \((\{\lambda_{i}\}_{i=1}^{n})_{k}\) can be considered perturbed from each other. Let us assume that we can sort all eigenvalues from \(k\) realizations into \(n\) equidistant sets (see Fig. 2) with median values denoted as \(\Lambda_{i}\), \(i=\overline{1,n}\). The distance between the median values of every two neighboring clusters is
\[\mu_{(i,i+1)}\equiv|\Lambda_{i}-\Lambda_{i+1}|\approx\Delta,\quad\forall i\in \overline{1,n}. \tag{68}\]
We demand the width of every set to be \(\epsilon_{i}\ll\Delta\), \(\forall i\in\overline{1,n}\).
Let us consider the median eigenvalues. For the set of equidistant \(\Lambda_{i}\), \(i=\overline{1,n}\) the solution of the problem (15) is \(b_{2n-1}(\tau,\Delta)\) and is given by (45).
We sort the eigenvalues into equidistant groups in such a way that from every set, only one eigenvalue is picked. The eigenvalues in every group are denoted as \(\tilde{\Lambda}_{l,i}\), \(\forall l\in\overline{1,k}\), \(i\in\overline{1,n}\), where the first index is the number of the group and the second is the number of eigenvalues in it. The distance between any eigenvalue in one set and its median is
\[|\tilde{\Lambda}_{l,i}-\Lambda_{i}|=\Delta_{l,i}<\epsilon_{i}. \tag{69}\]
Then the collection of eigenvalues \(\{\tilde{\Lambda}_{l,i}\}_{i=1}^{n}\) are slightly shifted from the centers of the sets, but not more than the width \(\epsilon_{i}\) according to (69).
First, we consider the case when the eigenvalues picked from the sets are shifted from the medians in such a way that the new collection \(\{\tilde{\Lambda}_{l,i}\}_{i=1}^{n}\) is equidistant too (see Fig. 2). That means that
\[|\tilde{\Lambda}_{l,i}-\tilde{\Lambda}_{l,i-1}|=\tilde{\Delta}_{l}. \tag{70}\]
For these \(n\) eigenvalues, the solution of the reduced problem (15) is \(b_{2n-1}(\tau,\tilde{\Delta}_{l})\) and is given by (45). If \(\Delta=\tilde{\Delta}_{l}\) holds, the solutions from the median set and from the shifted set coincide. In a realistic case these values can be slightly different. The shift is
\[\Delta+\Delta_{l,i}-\Delta_{l,i-1}=\tilde{\Delta}_{l}, \tag{71}\]
where we considered the case when \(\tilde{\Lambda}_{l,i}>\Lambda_{l}\). Hence, we can write
\[b_{2n-1}(\tau,\tilde{\Delta}_{l})=b_{2n-1}(\tau,\Delta+\Delta_{l,i}-\Delta_{l,i-1})=b_{2n-1}(\tau,\Delta)+b_{2n-1}(\tau,\Delta_{l,i})-b_{2n-1}(\tau,\Delta_{l,i-1}). \tag{72}\]
Using different collections of equidistant eigenvalues we can get a series of estimates of \(b_{2n-1}(\tau,\Delta)\):
\[b_{2n-1}(\tau,\Delta)=b_{2n-1}(\tau,\tilde{\Delta}_{l})-b_{2n-1}(\tau,\Delta_{l,i})+b_{2n-1}(\tau,\Delta_{l,i-1}). \tag{73}\]
However, if we sort the real data, we will see that the eigenvalues are slightly non-equidistant, corresponding to the case of perturbed equidistant eigenvalues considered in the previous subsection. Then, one has to do the same analysis, taking into account the amount of perturbation from the equidistant positions.
Figure 2: Equidistant shifted clusters of eigenvalues.
## 5 Phase Shift Rule for an Ill-Posed Problem
In this section, we solve the problem (15) being ill-posed by Hadamard. As we mentioned, it can happen for multiple reasons. First, the eigenvalues of the Hamiltonian can be close to each other, such that different \(\mu_{(k,p)}\) would be equal. This causes singularity in the matrix \(E_{m}\) (further, we omit the \(m\) index, assuming all matrices are of size \(m\times m\)). Secondly, we solve (15) for the case when the operators \(E\) and the functions \(\mu\) are not known precisely but one knows their approximations \(\hat{E}_{l}\) and \(\hat{\mu}_{l}\) instead. Here index \(l\in\overline{1,L}\), \(L>0\) denotes the realization number. The approximates \(\hat{E}_{l}\) and \(\hat{\mu}_{l}\) are defined on a probability space \((\Omega,\mathcal{A},P)\) and are close to \(E\) and \(\mu\) in some probabilistic sense. Here, \(\hat{\mu}_{l}\in V\) and the operator \(\hat{E}_{l}\) is continuous \(\forall\omega\in\Omega\).
Since \(\hat{b}(\vec{\phi})=\hat{E}_{l}^{-1}(\vec{\phi})\hat{\mu}_{l}\) is unstable with respect to fluctuations in the empirical data, it cannot be utilized as an approximation of \(b(\vec{\phi})\). To be more precise, slight variations in the values of \(\hat{\mu}_{l}\) from \(\mu\) have the potential to result in significant variations in \(\hat{b}\). This implies that the inverse operator \(\hat{E}_{l}^{-1}\) may not be continuous and the problem is ill-posed.
In our specific case, \(\mu\) is a \((m\times 1)\) vector. If \(E\) is an \(m\times m\) matrix and \(\det E\neq 0\) (or \(\text{rank}(E)=m\)) then \(E^{-1}\) exists. However, the problem can still be ill-posed. One can define an orthogonal transformation \(b=Vb^{\star}\) and \(\mu=V\mu^{\star}\) such that \(E\) will be represented in a diagonal form \((l_{1},\ldots,l_{m})\), where \(\{l_{i}\}_{i=1}^{m}\) are the eigenvalues of \(E\). When some differences \(\mu_{(k,p)}\) between the eigenvalues of the Hamiltonian are equal, \(\text{rank}(E)=r<m\) and then \(m-r\) eigenvalues \(l_{i}\) of the matrix \(E\) are zero. Then the matrix is not invertible. Let \(l_{i}\neq 0\) for \(i=\overline{1,r}\) and \(l_{i}=0\) for \(i\in\overline{r+1,m}\). For given approximations \(\hat{E}_{l}\) and \(\hat{\mu}_{l}\) such that
\[\|\hat{E}_{l}-E\|\leq\varepsilon,\quad\varepsilon>0, \tag{74}\] \[\|\hat{\mu}_{l}-\mu\|\leq\delta,\quad\delta>0,\]
the eigenvalues \(\tilde{l}_{i}\), \(i\in\overline{r+1,m}\) of \(\hat{E}_{l}\) may be close to zero for a sufficiently small \(\varepsilon\). Then \(\hat{b}_{i}^{\star}=\hat{\mu}_{i}^{\star}/\tilde{l}_{i}\) may be large for a small perturbation of \(\hat{E}_{l}\) and \(\hat{\mu}_{l}\). This implies that the solution of the system of linear equations (15) is unstable.
In this case, we use the regularization technique introduced by Tikhonov and Arsenin (1977) [29] that entails the stabilization of solutions by limiting the set of feasible solutions \(\mathcal{D}\subseteq U\) to a compact set \(\mathcal{D}^{\star}\), owing to the following lemma:
**Lemma 5.1**.: _The inverse operator \(E^{-1}\) is continuous on the set \(N^{\star}=E\mathcal{D}^{\star}\) if the continuous one-to-one operator \(E\) is defined on the compact set \(\mathcal{D}^{\star}\subseteq\mathcal{D}\subseteq U\)._
The reduction of solutions is provided by the stabilizing functional, which is defined on \(\mathcal{D}\). One can notice that the regularization method is similar to the Lagrange method in the sense that we are looking for a solution \(\hat{b}\) that minimizes a functional \(\Omega(\hat{b}):\|\hat{E}_{l}\hat{b}-\hat{\mu}_{l}\|\leq\varepsilon\), \(\varepsilon>0\).
In this paper, to find the solution of (15) we propose to use the extension of the regularization method from a deterministic operator equation to the case of stochastic ill-posed problems. The function that minimizes the functional
\[R_{\gamma}(\hat{\mu}_{l},b)=\|\hat{E}_{l}b-\hat{\mu}_{l}\|_{V}^{2}+\gamma \Omega(b), \tag{75}\]
in a set \(\mathcal{D}\) of functions \(b\in U\) is taken as an approximate solution of (15). The parameter \(\gamma>0\) is called the regularization parameter and \(\Omega(b)\) is a stabilizing functional that satisfies the following conditions:
* \(\Omega(b)\) is defined on the set \(\mathcal{D}\).
* \(\Omega(b)\) assumes real nonnegative values and is lower semi-continuous on \(\mathcal{D}\).
* All sets \(M_{c}=\{b:\Omega(b)\leq c\}\) are compact in \(U\).
Further Theorems 5.1 and 5.2[33, 34] provide the theoretical background of the statistical regularization method for the case of an accurately given operator \(E\), and Theorem 5.3[35] for the case of an inaccurately given operator \(E\).
**Theorem 5.1**.: _If, for each \(l\), a positive \(\gamma=\gamma(l)\) is chosen such that \(\gamma\to 0\) as \(l\to\infty\), then for any positive \(\alpha\) and \(\beta\) there will be a number \(N=N(\alpha,\beta)\) such that, for all \(l>N\), the elements \(\hat{b}^{\gamma}(x)\) that minimize the functional (75) satisfy the inequality_
\[P\{\rho_{U}(\hat{b}^{\gamma},b)>\alpha\}\leq P\{\rho_{V}^{2}(\hat{\mu}_{l},\mu )>\beta\gamma\}, \tag{76}\]
_where \(b\) is the precise solution of (15) with the right-hand side \(\mu\), and \(\rho(f,g)=\|f-g\|\)._
For our concrete case, all spaces are Hilbert ones. The following theorems state:
**Theorem 5.2**.: _Let \(U\) be a Hilbert space, \(E\) be a linear operator, and \(\Omega(b)=\|b\|_{U}^{2}\). Then, \(\forall\varepsilon\), there exists a number \(l(\varepsilon)\) such that \(\forall l>l(\varepsilon)\) the inequality_
\[P\{\|\hat{b}^{\gamma}-b\|_{U}^{2}>\varepsilon\}\leq 2P\{\rho_{V}^{2}(\hat{\mu}_ {l},\mu)>(\varepsilon/2)\gamma\}, \tag{77}\]
_holds._
**Theorem 5.3**.: _Let \(U\) and \(V\) be normed spaces. For any \(\varepsilon>0\) and any constants \(c_{1},c_{2}>0\), there exists a number \(\gamma_{0}>0\) such that \(\forall\gamma\geq\gamma_{0}\),_
\[P\{\omega:\|\hat{b}^{\gamma}-b\|_{U}>\varepsilon\}\leq P\{\omega:\frac{\|\hat{ \mu}_{l}-\mu\|_{V}}{\sqrt{\gamma}}>c_{1}\}+P\{\omega:\frac{\|\hat{E}_{l}-E\|}{ \sqrt{\gamma}}>c_{2}\}, \tag{78}\]
_where_
\[\|\hat{E}_{l}-E\|=\sup_{g\in\mathcal{D}}\frac{\|\hat{E}_{l}b-Eb\|_{V}}{\sqrt{ \Omega(b)}}. \tag{79}\]
These theorems imply that the minimization of (75) is a stable problem, i.e. close functions \(\hat{\mu}_{l}\) and \(\mu\) (and close operators \(\hat{E}_{l}\) and \(E\)) correspond to close (in probabilistic sense) regularized solutions \(\hat{b}^{\gamma}\) and \(b\) that minimize the functionals \(R_{\gamma}(\hat{\mu}_{l},b)\) and \(R_{\gamma}(\mu,b)\), respectively.
For the Hilbert spaces \(U\) and \(V\), the solution of (15) with \(\Omega(b)=\|b\|_{U}^{2}\) has a simple form
\[\hat{b}^{\gamma}=(\gamma I+\hat{E}_{l}^{\dagger}\hat{E}_{l})^{-1}\hat{E}_{l}^ {\dagger}\hat{\mu}_{l}, \tag{80}\]
where \(I\) is a unit operator.
The stability of the approximation \(\hat{b}^{\gamma}\) to \(b\) is ensured by an appropriate choice of \(\gamma\). For selecting the regularization parameter, see [36, 37, 38]. For example, the mismatch method [36] determines \(\gamma\) from the equality
\[\|\hat{E}_{l}\hat{b}^{\gamma}-\hat{\mu}_{l}\|_{V}=\varepsilon(l) +\eta(l,b), \tag{81}\] \[\|\hat{\mu}_{l}-\mu\|_{V}\leq\varepsilon(l),\quad\|\hat{E}_{l}b-Eb \|_{V}\leq\eta(l,b),\]
where \(\varepsilon(l)\) and \(\eta(l,b)\) are known estimates of the data error. The stochastic analog of the mismatch method is the discrepancy method [39, 40]. If the operator is defined precisely (\(\eta(l,b)=0\)), then the choice of \(\gamma\) from (81) provides a rate of convergence of the regularized estimate \(\hat{b}^{\gamma}\) to \(b\) that is no better than \(O(\varepsilon^{1/2})\) (see [37]).
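The following compact sketch illustrates (80) together with a discrepancy-style choice of \(\gamma\) in the spirit of (81) (the nearly degenerate spectrum, the noise level \(\varepsilon\) and the scan over \(\gamma\) are assumptions of the example, not the authors' implementation): for an ill-conditioned \(E\), the plain solve amplifies the noise in \(\hat{\mu}\), while the regularized solution stays bounded.

```python
import numpy as np

rng = np.random.default_rng(8)
lam = np.array([0.0, 0.5, 1.000001])                 # two differences nearly coincide
mu = np.unique(np.round((lam[:, None] - lam[None, :]).ravel(), 12))
m = mu.size
phi = rng.uniform(0.3, 2.5, m)
E = np.exp(1j * np.outer(mu, phi))                   # strongly ill-conditioned here

eps = 1e-4                                           # assumed data-error level
mu_hat = 1j * mu + eps * rng.normal(size=m)          # noisy right-hand side

def b_reg(gamma):
    # Tikhonov-regularized solution, Eq. (80)
    A = gamma * np.eye(m) + E.conj().T @ E
    return np.linalg.solve(A, E.conj().T @ mu_hat)

# Discrepancy-style choice of gamma: residual matched to the noise level, cf. Eq. (81).
target = eps * np.sqrt(m)
gammas = np.logspace(-12, 2, 200)
residuals = np.array([np.linalg.norm(E @ b_reg(g) - mu_hat) for g in gammas])
gamma_star = gammas[np.argmin(np.abs(residuals - target))]

b_naive = np.linalg.solve(E, mu_hat)                 # unregularized, noise-amplifying
print(np.linalg.cond(E))
print(np.linalg.norm(b_naive), np.linalg.norm(b_reg(gamma_star)), gamma_star)
```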
### Minimization of the Square Norm
Once we know the approximate solution of (15) defined by (80), we can solve the minimization problem (29) to minimize the variance (24). Using the form of the regularized solution (80) of (15), we can write
\[\hat{b}_{x}^{\gamma}(\vec{\phi})=\sum_{j,i=1}^{m}(\gamma I+\hat{E}^{\dagger}( \vec{\phi})\hat{E}(\vec{\phi}))^{-1}_{xj}\hat{E}^{\dagger}_{ji}(\vec{\phi}) \hat{\mu}_{i}, \tag{82}\]
where we omit the index \(l\) meaning we treat one experimental realization of \(E\) and \(\mu\). The derivative is
\[\left(\frac{\partial\hat{b}^{\gamma}(\phi)}{\partial\phi_{y}}\right)_{x}=\sum _{j,s=1}^{m}\left(\left(\frac{\partial(\gamma I+\hat{E}^{\dagger}\hat{E})^{-1 }}{\partial\phi_{y}}\right)_{xj}\hat{E}^{\dagger}_{js}+(\gamma I+\hat{E}^{ \dagger}\hat{E})^{-1}_{xj}\left(\frac{\partial\hat{E}^{\dagger}}{\partial \phi_{y}}\right)_{js}\right)\hat{\mu}_{s}, \tag{83}\]
where we use the short notation omitting \(\phi\) dependence. Using \(\frac{\partial Y^{-1}(x)}{\partial x}=-Y^{-1}\frac{\partial Y(x)}{\partial x} Y^{-1}\), we get
\[\left(\frac{\partial(\gamma I+\hat{E}^{\dagger}\hat{E})^{-1}}{\partial\phi_{y}}\right)_{xj}=-\sum_{l,p=1}^{m}(\gamma I+\hat{E}^{\dagger}\hat{E})^{-1}_{xl}\left(\frac{\partial(\gamma I+\hat{E}^{\dagger}\hat{E})}{\partial\phi_{y}}\right)_{lp}(\gamma I+\hat{E}^{\dagger}\hat{E})^{-1}_{pj}. \tag{84}\]
The derivative is
\[\left(\frac{\partial(\gamma I+\hat{E}^{\dagger}\hat{E})}{\partial\phi_{y}} \right)_{lp}=\sum_{v=1}^{m}\left(\frac{\partial E^{\dagger}}{\partial\phi_{y} }\right)_{lv}E_{vp}\delta_{l,y}+E^{\dagger}_{lv}\left(\frac{\partial E}{ \partial\phi_{y}}\right)_{vp}\delta_{p,y}, \tag{85}\]
where we used the fact that the derivatives on the right-hand side are nonzero only in one row or column.
Finally, the expression (29) for the regularised solution can be written as follows
\[\sum_{x=1}^{m}\sum_{j,i=1}^{m}(\gamma I+\hat{E}^{\dagger}\hat{E} )^{-1}_{xj}\hat{E}^{\dagger}_{ji}\lambda_{i}\sum_{l=1}^{m}(\gamma I+\hat{E}^{ \dagger}\hat{E})^{-1}_{xl}\sum_{s=1}^{m}\left(\left(\frac{\partial\hat{E}^{ \dagger}}{\partial\phi_{y}}\right)_{ls}\delta_{l,y}\right.\] \[- \sum_{p,v,t=1}^{m}\left(\left(\frac{\partial\hat{E}^{\dagger}}{ \partial\phi_{y}}\right)_{lv}\hat{E}_{vp}\delta_{l,y}+\hat{E}^{\dagger}_{lv} \left(\frac{\partial\hat{E}}{\partial\phi_{y}}\right)_{vp}\delta_{p,y}\right) (\gamma I+\hat{E}^{\dagger}\hat{E})^{-1}_{pt}\hat{E}^{\dagger}_{ts}\right) \lambda_{s}=\mathbf{0}.\]
Solving this system of equations with respect to all \(\phi_{y}\), \(y\in\overline{1,m}\), one can find the optimal \(\vec{\phi}\) minimizing the variance (24).
A possible solution arises when
\[\left(\frac{\partial E^{\dagger}}{\partial\phi_{y}}\right)_{ls}\delta_{l,y}= \sum_{p,v,t=1}^{m}\left(\left(\frac{\partial E^{\dagger}}{\partial\phi_{y}} \right)_{lv}E_{vp}\delta_{l,y}+E^{\dagger}_{lv}\left(\frac{\partial E}{ \partial\phi_{y}}\right)_{vp}\delta_{p,y}\right)(\gamma I+E^{\dagger}E)^{-1}_ {pt}E^{\dagger}_{ts},\quad\forall l,s=\overline{1,m},\]
holds. Then
\[\left(\frac{\partial E^{\dagger}}{\partial\phi_{y}}\right)_{ys}= \sum_{p,v=1}^{m}\left(\frac{\partial E^{\dagger}}{\partial\phi_{y}}\right)_{ vv}E_{vp}\sum_{t=1}^{m}(\gamma I+E^{\dagger}E)^{-1}_{pt}E^{\dagger}_{ts}\] \[+ \sum_{v=1}^{m}E^{\dagger}_{pv}\left(\frac{\partial E}{\partial\phi _{y}}\right)_{vy}\sum_{t=1}^{m}(\gamma I+E^{\dagger}E)^{-1}_{yt}E^{\dagger}_{ ts},\quad\forall s=\overline{1,m}\quad\mbox{and}\quad l=y;\] \[\sum_{v=1}^{m}E^{\dagger}_{lv}\left(\frac{\partial E}{\partial \phi_{y}}\right)_{vy}\sum_{t=1}^{m}(\gamma I+E^{\dagger}E)^{-1}_{yt}E^{ \dagger}_{ts}=\mathbf{0},\quad\forall s=\overline{1,m}\quad\mbox{and}\quad l \neq y.\]
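In practice, instead of solving the stationarity conditions above in closed form, one can minimize the regularized coefficient norm numerically over the phases; the sketch below (illustrative only, with a toy nearly degenerate spectrum and a fixed regularization parameter) combines the regularized solution (80) with the variance criterion (24).

```python
import numpy as np
from scipy.optimize import minimize

lam = np.array([0.0, 0.5, 1.000001])                  # toy nearly degenerate differences
mu = np.unique(np.round((lam[:, None] - lam[None, :]).ravel(), 12))
m = mu.size
gamma = 1e-6                                          # fixed regularization parameter

def b_reg(phi):
    E = np.exp(1j * np.outer(mu, phi))
    A = gamma * np.eye(m) + E.conj().T @ E
    return np.linalg.solve(A, E.conj().T @ (1j * mu))  # regularized coefficients, Eq. (80)

def objective(phi):
    return np.sum(np.abs(b_reg(phi)) ** 2)             # variance proxy, Eq. (24)

rng = np.random.default_rng(9)
phi0 = rng.uniform(0.3, 2.5, m)
res = minimize(objective, phi0, method="Nelder-Mead", options={"maxiter": 5000})
print(objective(phi0), res.fun)                        # before / after optimization
```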
## 6 Discussion and Conclusion
We propose a phase-shift rule with optimal parameter selection that depends on the eigenvalue structure of the Hamiltonian. Our method is suitable for large Hamiltonian systems with known eigenvalues. Depending on the distances between the eigenvalues, the problem can be well- or ill-posed in the sense of Hadamard, which makes the optimization non-trivial when some distances are close to each other.
In the case of a well-posed problem, an explicit solution is proposed, and a recipe for finding the optimal phases is provided. For the ill-posed problem, arising for example when the eigenvalues of the Hamiltonian are close to each other and the distances between them coincide, we find an explicit solution as well. We show that it is unique and that the phases must be chosen equidistantly. We also consider the realistic case of slightly perturbed equidistant eigenvalues and the case of equidistant clusters formed by different realizations of the Hamiltonian. Finally, we provide a regularized solution for the ill-posed problem without a particular eigenvalue structure, as well as a method for optimal phase-shift selection.
In addition to a full reconstruction of the derivative, the presented approach offers parameter-shift rules for derivatives of arbitrary order and any linear combination of them.
## 7 Acknowledgments
L.M. was supported by the Netherlands Organisation for Scientific Research (NWO/OCW), as part of the Quantum Software Consortium program (project number 024.003.037 / 3368). This work has received support from the European Union's Horizon Europe research and innovation programme through the ERC StG FINE-TEA-SQUAD (Grant No. 101040729). This work is supported by the Dutch National Growth Fund (NGF), as part of the Quantum Delta NL programme.
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.
|
2309.03433 | Improving Open Information Extraction with Large Language Models: A
Study on Demonstration Uncertainty | Open Information Extraction (OIE) task aims at extracting structured facts
from unstructured text, typically in the form of (subject, relation, object)
triples. Despite the potential of large language models (LLMs) like ChatGPT as
a general task solver, they lag behind state-of-the-art (supervised) methods in
OIE tasks due to two key issues. First, LLMs struggle to distinguish irrelevant
context from relevant relations and generate structured output due to the
restrictions on fine-tuning the model. Second, LLMs generate responses
autoregressively based on probability, which makes the predicted relations lack
confidence. In this paper, we assess the capabilities of LLMs in improving the
OIE task. Particularly, we propose various in-context learning strategies to
enhance LLM's instruction-following ability and a demonstration uncertainty
quantification module to enhance the confidence of the generated relations. Our
experiments on three OIE benchmark datasets show that our approach holds its
own against established supervised methods, both quantitatively and
qualitatively. | Chen Ling, Xujiang Zhao, Xuchao Zhang, Yanchi Liu, Wei Cheng, Haoyu Wang, Zhengzhang Chen, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao | 2023-09-07T01:35:24Z | http://arxiv.org/abs/2309.03433v1 | Improving Open Information Extraction with Large Language Models: A Study on Demonstration Uncertainty
###### Abstract
Open Information Extraction (OIE) task aims at extracting structured facts from unstructured text, typically in the form of (subject, relation, object) triples. Despite the potential of large language models (LLMs) like ChatGPT as a general task solver, they lag behind state-of-the-art (supervised) methods in OIE tasks due to two key issues. First, LLMs struggle to distinguish irrelevant context from relevant relations and generate structured output due to the restrictions on fine-tuning the model. Second, LLMs generate responses autoregressively based on probability, which makes the predicted relations lack confidence. In this paper, we assess the capabilities of LLMs in improving the OIE task. Particularly, we propose various in-context learning strategies to enhance LLM's instruction-following ability and a demonstration uncertainty quantification module to enhance the confidence of the generated relations. Our experiments on three OIE benchmark datasets show that our approach holds its own against established supervised methods, both quantitatively and qualitatively. The code and data can be found at: [https://github.com/lingchen0331/demonstration_uncertainty](https://github.com/lingchen0331/demonstration_uncertainty).
Chen Ling \({}^{1,2}\) Xujiang Zhao\({}^{2}\) Xuchao Zhang\({}^{3}\) Yanchi Liu\({}^{2}\) Wei Cheng\({}^{2}\) Haoyu Wang\({}^{2}\) Zhengzhang Chen\({}^{2}\) Takao Osaki\({}^{4}\) Katsushi Matsuda\({}^{4}\) Haifeng Chen\({}^{2}\) Liang Zhao\({}^{1}\)

\({}^{1}\)Emory University, \({}^{2}\)NEC Labs, \({}^{3}\)Microsoft, \({}^{4}\)NEC Corporation

Keywords: Open Information Extraction, Large Language Model
## 1 Introduction
Open Information Extraction (OIE) [1] involves the identification and extraction of novel relations and their components (e.g., subject, action, object, and adverbs) from unstructured text. It enables the creation of large-scale knowledge graphs from diverse sources [2], aiding in tasks like question answering [3], knowledge-augmented reasoning [4], and semantic search [5]. As a frontier technology, ChatGPT [6] and other large language models (LLMs) [7] excel at comprehending and producing a wide variety of intricate natural language constructs. Therefore, they naturally present a promising solution for solving the OIE task without the need for substantial training.
The dominant OIE methods [8, 9, 10] are trained on labeled data, where each entity and their relations are explicitly annotated. This allows them to learn precise patterns and directly map input to specific output tags, resulting in high accuracy. Despite the potential of LLMs like ChatGPT as a general task solver, they lag behind tagging-based methods in OIE tasks due to two key issues [7]. First, LLMs as a generative model are trained to generate human-like text and not specifically for information extraction. While they have a broad understanding of language and can generate coherent responses, they may not be as accurate or consistent in extracting specific pieces of information from the text as supervised models trained specifically for that task. Second, the responses generated by LLMs are based on the input prompt and are probabilistic in nature, which can result in outputs with lower confidence. This lack of confidence can engender inconsistencies, such as the same relation being extracted differently in varying contexts or not being extracted at all in certain instances. Furthermore, this diminished confidence can lead to the extraction of incorrect or irrelevant relations, thereby reducing confidence in interpreting the extracted relations.
While zero-shot LLMs cannot solve complex OIE problems solely with original task instructions [7], there are a few attempts [11, 12, 13] trying to solve OIE with LLMs in different ways. A recent method [13] is proposed to tackle the information extraction with ChatGPT in an interactive manner by decomposing the framework into several parts and then combining the results of each round into a final structured result, but they can only handle OIE tasks with fixed relations. Lu et al. also proposed to leverage instruction tuning to enhance LLMs on a hand-crafted dataset, however, their method has to be extensively fine-tuned on hand-crafted datasets [11]. Another recent approach [12] focuses on investigating the capability of ChatGPT in the OIE task from various aspects. However, none of the existing works have considered enhancing the robustness and confidence of the response.
We summarize the contributions of our work as follows: (1) We propose a novel framework that allows LLMs to solve OIE tasks with various few-shot demonstration strategies without extensive fine-tuning; (2) We include an uncertainty quantification module to increase the confidence of the predicted answers. (3) A series of experiments have shown the effectiveness of the proposed method as well as each component in the proposed framework. |
2310.01421 | Using Focus Group Interviews to Examine Biased Experiences in
Human-Robot-Interaction | When deploying interactive agents like (social) robots in public spaces they
need to be able to interact with a diverse audience, with members each having
individual diversity characteristics and prior experiences with interactive
systems. To cater for these various predispositions, it is important to examine
what experiences citizens have made with interactive systems and how these
experiences might create a bias towards such systems. To analyze these
bias-inducing experiences, focus group interviews have been conducted to learn
of citizens individual discrimination experiences, their attitudes towards and
arguments for and against the deployment of social robots in public spaces.
This extended abstract focuses especially on the method and measurement of
diversity. | Lukas Erle, Lara Timm, Carolin StraΓmann, Sabrina C. Eimler | 2023-09-27T07:06:23Z | http://arxiv.org/abs/2310.01421v1 | # Using Focus Group Interviews to Examine Biased Experiences in Human-Robot-Interaction
###### Abstract
When deploying interactive agents like (social) robots in public spaces, they need to be able to interact with a diverse audience, with members each having individual diversity characteristics and prior experiences with interactive systems. To cater for these various predispositions, it is important to examine what experiences citizens have made with interactive systems and how these experiences might create a bias towards such systems. To analyze these bias-inducing experiences, focus group interviews have been conducted to learn of citizens' individual discrimination experiences, their attitudes towards and arguments for and against the deployment of social robots in public spaces. This extended abstract focuses especially on the method and measurement of diversity.
## I Introduction
Social robots are a special category of interactive agents, which are usually designed with human-like appearances or abilities [1] and who interact with people in a social context, often assisting them in various ways [2, 3]. One possible deployment field are public spaces, such as libraries, city administration offices [4, 5]. They can be used to instruct people and communicate important information, increase perceived safety and creativity in social interactions [6] and more generally act as embodied intermediaries [7] between citizens and public institutions.
However, in these public spaces, social robots are often faced with a diverse audience, as citizens exhibit unique combinations of diversity characteristics [8]. Additionally, different citizens will have made different experiences with various technologies, some of them negative. These negative experiences can be multifaceted: Some citizens might have used certain devices or functions and simply encountered technological hurdles, user errors or incomplete or unreliable programs. At the same time, however, some citizens will already have encountered discriminations carried out by technological systems. These discriminations are often referred to as algorithmic bias, a term describing the existence of biases in algorithms and devices towards certain cultural, religious or other groups of people [9]. These biases, whilst often not intended by a system's developers [10], can have a significant negative impact on affected people's perception of and interaction with these systems. For example, people who have been subjected to algorithmic bias might evaluate the system as less fair and, as a result, do not trust its recommendations or even oppose using the system altogether [11]. The decision to not use a system, especially when it is a public system like a social robot in a public space, then also forces citizens into a digital divide. This should be avoided at all costs, as it might lead to a reduced participation in public life [12, 13], as well as negatively impact the social proximity of citizens [14].
The goal of this research is to examine what experiences citizens have made with different technologies and how instances of algorithmic bias might lead to biased interactions with social robots, as well as how they behave in biased interactions and what strategies might be used to cope with such bias. To this end we are conducting multiple focus group interviews with citizens of the Ruhr area in Germany, thereby gathering individual experiences, hurdles, motivators, and coping strategies for regular and biased interactions with social robots.
## II Diversity in HRI Research
Diversity has found different definitions across extant literature. Generally, diversity is split into two subcategories [15], the first of which is activity-based diversity. This subcategory differentiates people purely based on their occupation. The second subcategory, relational diversity, describes people's ethnic origin, religious affiliations and other often unchangeable aspects of their identity. In line with [15] and [16], we argue that personal identity as a whole consists of both these subcategories and therefore needs to be treated as a complex construct that requires further definition and examination [17].
While extant literature agrees that diversity is a relevant factor in the design and conceptualization of virtual agents [18, 19, 20], little research has been carried out on how to quantify and measure users' diversity in HRI research. Different approaches have been developed over the years: For example, modelling techniques such as the _repertory grid technique_ or _multi-dimensional scaling_ can be used to analyze users' perceptions about a system [21]. However, these models share a common shortcoming: They are largely quantitative ways of describing and defining diverse user groups, often in the context of a concrete system. As a result, the qualitative nature of diversity characteristics is ignored by those models.
A more fitting concept aimed at describing humans' diversity characteristics is a representation of those characteristics in the form of a diversity wheel [22]. Its four layers of diversity are _personality, internal dimensions, external dimensions,_ and _organizational dimensions_. In order of mention, these dimensions become more and more flexible and
changeable: Organizational dimensions like _work content_ or _department_ can more easily be changed by an individual than their external dimensions, such as their educational background or appearance. Vitally, the internal dimensions also describe diversity characteristics that are almost immutable, for example (biological) gender, sexual orientation, and age. The diversity wheel thereby summarizes both activity-based diversity characteristics (organizational dimensions) and relational diversity characteristics (external and internal dimensions), while also adding an individual's personality at the core. Initially developed for a business context, the wheel has been used in recent research on diversity in companies and organizations [23, 24]. Since the wheel has been used only occasionally in HRI research, we intend to focus on it more strongly. This research therefore follows a novel approach by attempting to use its dimensions to describe the diversity characteristics of users interacting with social robots.
## III Method
To carry out our research agenda, we decided to conduct focus group interviews with different citizens from the Ruhr area in Germany. Focus groups have been proven to contribute to the understanding of multiple opinions and experiences regarding a certain topic [25, 26] while also allowing unique new perspectives through discussions between participants [27]. They therefore are a suitable method for examining citizens' experiences with and attitudes towards modern technologies. Because the specific functions of social robots in a public space might be subject to change depending on the situation and space they are being deployed in, we decided to not only examine the participants' experiences with and attitudes towards (social) robots, but modern technologies and functions more generally. Specifically, we aggregated a list of popular devices and functions. For devices, we examined participants' experience with _laptops/computers, smartphones, tablets, smartwatches, VR/AR glasses, chatbots, phone bots, voice assistants, robots, touch terminals, digital cameras, TVs,_ and _e-readers_. For functions, we examined _facial recognition, voice recognition, voice and video calls, fingerprint recognition,_ and _headtracking_.
### _Focus Group Setup_
The focus group interviews were planned in groups of eight participants each (which was not possible in every focus group due to last minute cancellations). The first cohort of participants consisted of university students and was recruited through the university's e-learning platform and divided into different time slots. Each focus group interview was scheduled for two and a half hours, their audio was recorded, and they followed a semi-structured interview guide. This method was chosen to ensure a comparable proceeding of each focus group interview whilst still allowing us to dive deeper into some of the participants experiences if needed. For this purpose, semi-structured interview guides have proven to be a suitable method in various disciplines [28, 29, 30]. Participants were provided with a printed booklet, including blank sheets with the different questions asked as part of the interview guide and a demographic questionnaire. To be able to connect the participants written answers and demographic data to their verbal expressions without revealing their identity, we asked them to choose one of 12 different superhero identities. Participants then wrote their superhero name on each page of the booklet and were only addressed with their superhero name throughout the interview. Participants were first assigned their superhero identities and were then briefed about the contents and procedure of the interview and signed a declaration of consent, allowing us to record the interview and using these recordings for further analysis.
### _Measuring Prior Experiences and Personality_
To gauge participants individual attitudes towards and experiences with technologies and specifically social robots we decided to measure these both quantitatively and qualitatively. For the quantitative part, participants were asked to fill out various questionnaires: Participants attitudes towards robots were measured using the _General Attitudes towards Robots Scale (GAToRS)_[31]. For application with German participants, we translated the items form English to German and had multiple researchers translate those items back to English to ensure they were a suitable translation. For measuring participants general readiness to try out and interact with technology, we used the German short scale for _technology commitment_[32]. Finally, we measured participants personality characteristics using the _Big Five Inventory (BFI-10) scale_[33].
### _Application of the Diversity Wheel_
As a final aspect to the participants demographic data, we wanted to examine their diversity characteristics. As hinted at in the previous section, we chose the diversity wheel to aid us in this endeavor. To ensure that the wheel and its dimensions are a good fit for the examination of social robots in public spaces, we chose a German translation of the original diversity wheel [23, 24] and critically assessed which dimensions would be relevant for the interaction with social robots in public spaces. We decided to remove the aspects _Personal Habits, Recreational Habits, Work Experience, Appearance, Geographic Location, Functional Level, Division/Department, Seniority, Work Location, Union Affiliation,_ and _Management Status_. The reason behind the removal of those exact aspects from the wheel is the assumption that a social robot would not know these aspects about a citizen in a regular interaction, and some aspects might not be relevant to the interaction at all. For example, a citizens personal habits might be different or non-existent when being in a public space and interacting with a social robot there. Similarly, it would be very unlikely that a social robot would know of a citizens union affiliation, considering an employee is not even obligated to share their union affiliation with their employer. Each remaining aspect of the diversity wheel was then included as part of the demographic data questionnaire. We followed various standards and guidelines
for capturing these aspects [34, 35, 36] and to preserve anonymity whilst still creating a reliable and quantifiable picture of the participants' diversity. At the end of this, participants were provided with the diversity wheel and asked to mark the three aspects that they deemed most important to their own identity. These pages were collected, and the aspects were anonymously transferred to a larger print of that same wheel, which was then hung up for all participants to see. This was done to establish a common ground regarding the diversity characteristics represented in the group. Establishing a common ground has proven to be an important step to ensure questions can be understood correctly and answered precisely [37].
### _Interview Guide_
After filling out the aforementioned demographics questionnaire, participants were asked to state how often they used the devices and technologies under consideration (_never to more than once a day_) and how they would rate them (_positively, neutrally, negatively_). For this, large tables were prepared that allowed participants to state their usage frequency and evaluation by placing a dot in the corresponding field. This position in the interview guide represents the end of the quantitative part of the focus group interviews. To examine participants' evaluation of the devices and functions in more detail, they were then asked to write down whether they had had any negative key experiences that made them dislike a device. When participants finished writing down their thoughts, they were asked to voluntarily share some of their experiences with the group. This approach was chosen to ensure that any intimidating effects of the other group members, the interview situation, or the moderators would not lead to any apprehensions in replies, whilst still allowing a public discussion of some negative experiences. Furthermore, this question was specifically phrased to prompt participants to share any experiences with algorithmic bias or discrimination by devices or functions. Next, this procedure was repeated, only this time asking participants to write down and share any positive experiences that were essential for their evaluation of devices or functions. Afterwards, participants were divided into groups of two and in two rounds each received a scenario of algorithmic bias which had been printed out. There were eight scenarios in total, with five scenarios following real-world cases of algorithmic bias [38, 39]. The remaining three scenarios were constructed from deliberations made during an ethics workshop dealing with ethical implications of a deployment of social robots in a public space. The groups were given 15 minutes to read and discuss the scenario, with the goal to determine how the error transpired, whose fault it was, and which technical solutions or coping strategies could be adopted to rectify the issue. Again, participants were first required to write down their answers and thoughts, with a public discussion following once the 15 minutes had run out. Each group then summarized their scenario to the other groups and shared their thoughts on the attribution of guilt and reason behind the issue. Other groups also had the chance to offer their thoughts on the summarized scenario. The whole process was repeated with a second set of scenarios until each group had worked on two scenarios. In this part of the interview, we wanted to examine whether the participants would attribute the issue to user error, to a malfunction, or recognize that the issue happened because of the users' diversity characteristics. Furthermore, we were interested in how participants would behave in place of the users and what technical solutions they would envision to prevent these problems from happening in the future. Thereby, participants were also made aware of and primed for possible algorithmic biases. For the final part of the interview, the moderators presented a scenario to the entirety of the group:
_You want to borrow a book from the public library in your city. You no longer interact with humans in the library but are instead accompanied by a robot during your visit. This robot can read stories to you, navigate you through the library and serve as an information terminal. It could also perform the functions discussed earlier. It also understands human language and can respond both verbally and via a tablet._
With this scenario set, participants were urged to fantasize about the best-case (utopian) and worst-case (dystopian) interactions with this social robot and, again, first write down their thoughts and share them with the rest of the group afterwards. At the end of the focus group interview, the filled-out booklets were collected and, along with the tables, scanned and digitized. The recordings of the interviews were archived and transcribed for further analysis.
## IV Conclusion
This extended abstract has introduced a novel approach to measuring diversity characteristics in HRI research through the application of the diversity wheel developed by [22]. Furthermore, this approach has been applied to specific research on social robots in the form of focus group interviews. These interviews aimed at compiling citizens' experiences with modern technologies, specifically experiences with algorithmic bias. Further steps will include carrying out more focus group interviews with more diverse groups of citizens, as well as analysing the data gathered during these focus group interviews. At the end of these steps, we aim to reliably predict citizens' reactions to a deployment of social robots in public spaces and ensure that these social robots are suitable for interaction with diverse audiences.
|
2309.03355 | Dynamics of weighted backward shifts on certain analytic function spaces | We introduce the Banach spaces $\ell^p_{a,b}$ and $c_{0,a,b}$, of analytic
functions on the unit disc, having normalized Schauder bases consisting of
polynomials of the form $f_n(z)=(a_n+b_nz)z^n, ~~n\geq0$, where $\{f_n\}$ is
assumed to be equivalent to the standard basis in $\ell^p$ and $c_0$,
respectively. We study the weighted backward shift operator $B_w$ on these
spaces, and obtain necessary and sufficient conditions for $B_w$ to be bounded,
and prove that, under some mild assumptions on $\{a_n\}$ and $\{b_n\}$, the
operator $B_w$ is similar to a compact perturbation of a weighted backward
shift on the sequence spaces $\ell^p$ or $c_0$. Further, we study the
hypercyclicity, mixing, and chaos of $B_w$, and establish the existence of
hypercyclic subspaces for $B_w$ by computing its essential spectrum. Similar
results are obtained for a function of $B_w$ on $\ell^p_{a,b}$ and $c_{0,a,b}$. | Bibhash Kumar Das, Aneesh Mundayadan | 2023-09-06T20:41:21Z | http://arxiv.org/abs/2309.03355v3 | # Dynamics of scalar-times the backward shift on analytic tridiagonal spaces
###### Abstract.
We study the backward shift operator \(B\) acting on the Hilbert space \(\mathcal{H}_{a,b}\) of analytic functions on the unit disc, having an orthonormal basis consisting of polynomials of the form \(f_{n}(z)=(a_{n}+b_{n}z)z^{n},\ n\geq 0\). We obtain necessary and sufficient conditions for \(B\) to be bounded, and prove that, under some mild assumptions on \(\{a_{n}\}\) and \(\{b_{n}\}\), the operator \(B\) is unitarily equivalent to a compact perturbation of a weighted backward shift on \(\ell^{2}\). Further, we characterize the hypercyclicity, mixing, and chaos of \(\lambda B\) for a non-zero scalar \(\lambda\), and establish the existence of hypercyclic subspaces for \(\lambda B\) by computing its essential spectrum. We also provide vector valued versions of our results for \(B\) when it acts on a reproducing kernel space corresponding to matrix valued kernels.
Key words and phrases: shift operator, hypercyclic, chaos, mixing operator, reproducing kernel Hilbert space, matrix valued kernels. 2010 Mathematics Subject Classification: Primary 47A16, 46E22, 32K05, 47B32; Secondary 47B37, 37A99
###### Contents
* 1 Introduction
* 2 Boundedness of the shift operator on a tridiagonal space \(\mathcal{H}_{a,b}\)
* 3 The shift on \(\mathcal{H}_{a,b}\) as a compact perturbation of a weighted shift on \(\ell^{2}\)
* 4 Hypercyclicity, mixing, and chaos
* 5 The shift operator on tridiagonal spaces given by matrix valued kernels
* 6 Concluding remarks
## 1. Introduction
The aim of this paper is twofold, namely to realize the backward shift operator (sometimes known as a Taylor shift)
\[B\big{(}\sum_{n=0}^{\infty}\lambda_{n}z^{n}\big{)}=\sum_{n=0}^{\infty}\lambda_ {n+1}z^{n},\]
defined on a Hilbert space \(\mathcal{H}_{a,b}\) of analytic functions on the unit disc in the complex plane, having an orthonormal basis of the form
\[\big{\{}(a_{n}+b_{n}z)z^{n}:n\geq 0\big{\}},\]
as a compact perturbation of a weighted backward shift on the sequence space \(\ell^{2}\), and to study its linear dynamical properties. We prove that, although the dynamics of the operator \(B\) has similarities with that of a weighted unilateral shift on \(\ell^{2}\), the structure of the operator can be quite different; see the results in the sections 3 and 4. Weighted shifts have been extensively studied from the point of view of operator theory and function theory for several
decades, and we refer to Shields [32]. In linear dynamics, they received a major attention through Godefroy and Shapiro [20], Kitai [24] and Salas [31]. For a thorough account on the fundamentals of linear dynamics, see the monographs by Bayart and Matheron [6] and Grosse-Erdmann and Peris [23].
An operator \(T\) on a separable Banach space \(X\) is said to be _hypercyclic_ if there exists \(x\in X\), known as a _hypercyclic vector_ for \(T\), such that the orbit \(\{x,Tx,T^{2}x,\cdots\}\) is dense in \(X\). If a hypercyclic operator \(T\) on \(X\) has a dense set of periodic vectors, then \(T\) is called _chaotic_. Recall that a vector \(y\in X\) is periodic for \(T\) if its orbit under \(T\) is periodic, that is, \(T^{p}y=y\) for some \(p\). An operator \(T\) on \(X\) is said to be _topologically transitive_ if, for two non-empty open sets \(U_{1}\) and \(U_{2}\) of \(X\), there exists a natural number \(k\) such that \(T^{k}(U_{1})\cap U_{2}\neq\phi\). The transitivity notion is equivalent to that of hypercyclicity, assuming the separability of the underlying Banach space \(X\). A strong form of transitivity is the topological mixing: an operator \(T\) is _topologically mixing_ on \(X\) if, for any two non-empty open sets \(U_{1}\) and \(U_{2}\) of \(X\), there exists \(N\), a natural number, such that \(T^{n}(U_{1})\cap U_{2}\neq\phi\) for all \(n\geq N\). Mixing and chaos are stronger than the hypercyclicity; however, they are not comparable in general. Several familiar operators including weighted shifts on sequence spaces, and composition operators and differential operators on analytic function spaces exhibit the hypercyclic, mixing and chaotic properties. The study is intimately related to classical areas such as complex function theory, dynamical systems, and operator theory, cf. [6] and [23].
There has been enormous research on the dynamics of weighted or unweighted backward shifts. On \(F\)-sequence spaces having the unit vectors \(\{e_{n}\}\) as basis, it is well known that the hypercyclic properties of shifts depend on the asymptotic behaviour of \(\|e_{n}\|\), where \(\|.\|\) refers to the \(F\)-norm of the underlying space. Hypercyclicity and chaos of weighted shifts on \(F\)-sequence spaces were characterized by Grosse-Erdmann [22]. Prior to that, Salas [31] had characterized the hypercyclicity of the classical unilateral and bilateral shifts. Also, see Costakis and Sambarino [14] for mixing shifts, and Bonet, Kalmes and Peris [11] for dynamics of shifts on non-metrizable sequence spaces. In the context of the backward shift acting on \(F\)-spaces of analytic functions on the unit disc, the dynamics depends naturally on \(\|z^{n}\|\). For example, it is well known that the backward shift on the Bergman space of the unit disc is a mixing and non-chaotic operator, cf. Gethner and Shapiro [19] and Grosse-Erdmann [22], respectively. We also refer to Bonet [10], Beise and Muller [8], Beise, Meyrath and Muller [9], Bourdon and Shapiro [13], and Muller and Maike [28] for the dynamics related to the backward shift on analytic function spaces (Bergman spaces, mostly).
We will make use of the following standard criteria in linear dynamics for establishing the hypercyclic and chaotic properties of the backward shift. Different versions of these criteria are available in the literature, cf. [6] and [23].
**Theorem 1.1**.: _(Gethner-Shapiro Criterion [19]) Let \(T\) be a bounded operator on a separable Banach space \(X\), and let \(X_{0}\) be a dense subset of \(X\). If \(\{n_{k}\}\subseteq\mathbb{N}\) is a strictly increasing sequence and \(S:X_{0}\mapsto X_{0}\) is a map such that, for each \(x\in X_{0}\),_
\[\lim_{k\to\infty}T^{n_{k}}x=0=\lim_{k\to\infty}S^{n_{k}}x,\]
_and_
\[TSx=x,\]
_then \(T\) is hypercyclic. Moreover, if \(n_{k}=k\) for all \(k\geq 1\), then \(T\) is mixing on \(X\)._
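As a purely illustrative aside, the way the criterion is applied is easy to see on the classical Rolewicz operator \(T=2B\) on \(\ell^{2}\): take \(X_{0}\) to be the finitely supported sequences and \(S\) to be one half of the forward shift, so that \(TS=I\) on \(X_{0}\) while both \(T^{n}x\) and \(S^{n}x\) tend to \(0\). The short Python sketch below only checks these two facts numerically; the operator, the map \(S\) and the test vector are hypothetical choices made for the illustration and are not taken from the paper.

```python
# Toy numerical check of the Gethner-Shapiro criterion for T = 2*(backward shift) on l^2.
# X_0 = finitely supported sequences and S = (1/2)*(forward shift) are illustrative choices.
import numpy as np

def T(x):
    # 2 * backward shift: drop the first coordinate and pad with a zero at the end
    return 2.0 * np.append(x[1:], 0.0)

def S(x):
    # right inverse of T: shift forward and halve
    return 0.5 * np.insert(x, 0, 0.0)

x = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])
print(np.linalg.norm(T(S(x))[:len(x)] - x))          # ~0, so T S = I on X_0

y, z = x.copy(), x.copy()
for n in range(1, 11):
    y, z = T(y), S(z)
    if n in (1, 3, 10):
        # T^n x vanishes once the finite support is exhausted; ||S^n x|| = ||x|| / 2^n
        print(n, np.linalg.norm(y), np.linalg.norm(z))
```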
A similar criterion, known as the chaoticity criterion, has been used to obtain chaotic operators in Banach spaces, cf. [12]. This criterion is very strong, and it has other implications in linear dynamics; see [6] and [23].
**Theorem 1.2**.: _(Chaoticity Criterion [12]) Let \(X\) be a separable Banach space, \(X_{0}\) be a dense set in \(X\), and let \(T\) be a bounded operator on \(X\). If there exists a map \(S:X_{0}\to X_{0}\) such that_
\[\sum_{n\geq 0}T^{n}x\quad\text{and}\quad\sum_{n\geq 0}S^{n}x,\]
_are unconditionally convergent, and_
\[TSx=x\]
_for each \(x\in X_{0}\), then the operator \(T\) is chaotic and mixing on \(X\)._
The paper is organized as follows. In Section 2, we introduce the reproducing kernel space \(\mathcal{H}_{a,b}\), and obtain necessary and sufficient conditions for \(B\) to be bounded on \(\mathcal{H}_{a,b}\). In Section 3, under some mild conditions we show the shift \(B\) on \(\mathcal{H}_{a,b}\) is unitarily equivalent to a compact perturbation of a weighted shift on \(\ell^{2}\). Using this result, we compute the essential spectrum of \(B\) on \(\mathcal{H}_{a,b}\), which establishes the existence of hypercyclic subspaces for scalar multiples \(\lambda B\). In Section 4, we characterize the hypercyclicity, mixing, and chaos of the scalar multiple \(\lambda B\) in \(\mathcal{H}_{a,b}\). Section 5 contains similar dynamical properties of the shift on a vector valued tridiagonal space.
## 2. Boundedness of the shift operator on a tridiagonal space \(\mathcal{H}_{a,b}\)
We briefly recall the basics and essential properties of analytic (scalar and matrix valued) reproducing kernel Hilbert spaces. Theory of these spaces is available in Aronszajn [4] and Paulsen and Raghupati [30] in a more general set up of operator valued kernels. The main purpose of this section is to provide a sufficient condition for the backward shift to be a bounded operator on an analytic tridiagonal space, see Theorem 2.3.
Let \(M_{d}(\mathbb{C})\) denote the space of \(d\times d\) complex matrices. A function \(K:\mathbb{D}\times\mathbb{D}\to M_{d}(\mathbb{C})\) is called an _analytic kernel_ if \(z\mapsto K(z,w)\) is analytic for each fixed \(w\in\mathbb{D}\) and
\[\sum_{i,j=1}^{n}\langle K(w_{i},w_{j})u_{j},u_{i}\rangle_{\mathbb{C}^{d}}\geq 0,\]
for all choices of \(w_{1},\ldots,w_{n}\in\mathbb{D}\) and \(u_{1},\ldots,u_{n}\in\mathbb{C}^{d}\) and \(n\in\mathbb{N}\). For an analytic kernel \(K(z,w)\) over \(\mathbb{D}\), there exists a unique Hilbert space \(\mathcal{H}(K)\) of \(\mathbb{C}^{d}\)-valued analytic functions on \(\mathbb{D}\) such that
\[\text{span }\{K(\cdot,w)u:w\in\mathbb{D},u\in\mathbb{C}^{d}\}\]
is dense in \(\mathcal{H}(K)\) and
\[\langle f,K(\cdot,w)u\rangle_{\mathcal{H}(K)}=\langle f(w),u\rangle_{ \mathbb{C}^{d}}, \tag{2.1}\]
for all \(f\in\mathcal{H}(K)\), \(w\in\mathbb{D}\) and \(u\in\mathbb{C}^{d}\). Here, the symbol \(K(\cdot,w)u\) denotes the function \(z\mapsto K(z,w)u\) on \(\mathbb{D}\). Moreover, for \(u_{1},\ldots,u_{n}\in\mathbb{C}^{d}\) and \(w_{1},\ldots,w_{n}\in\mathbb{D}\),
\[\left\|\sum_{j=1}^{n}K(.,w_{j})u_{j}\right\|_{\mathcal{H}(K)}^{2}=\sum_{i,j=1}^{n}\langle K(w_{i},w_{j})u_{j},u_{i}\rangle_{\mathbb{C}^{d}}\]
which follows from (2.1). The Hilbert space \(\mathcal{H}(K)\) is called the _analytic reproducing kernel Hilbert space_ associated to the kernel \(K(z,w)\). From (2.1) it follows that the evaluation operator \(E_{w}:\mathcal{H}(K)\to\mathbb{C}^{d}\) is bounded for all \(w\in\mathbb{D}\), where
\[E_{w}(f)=f(w),\hskip 28.452756ptf\in\mathcal{H}(K).\]
Conversely, if \(\mathcal{H}\) is a Hilbert space of \(\mathbb{C}^{d}\)-valued analytic functions on \(\mathbb{D}\), and the evaluation operators \(E_{w}\) are bounded for all \(w\in\mathbb{D}\), then \(\mathcal{H}\) is an analytic reproducing kernel Hilbert space corresponding to the \(M_{d}(\mathbb{C})\)-valued analytic kernel \(K(z,w)=E_{z}\circ E_{w}^{*}\) for \(z,w\in\mathbb{D}\), where \(E_{w}^{*}\) is the Hilbert space adjoint of \(E_{w}\), cf. [30]. From (2.1), it also follows that \(K(z,w)\) is co-analytic in \(w\). (Analytic kernels play vital roles in operator theory; for instance, see Curto and Salinas [15].) For a scalar valued analytic kernel \(k(z,w)\), we recall that the corresponding reproducing kernel space \(\mathcal{H}(k)\) is uniquely determined by the following properties [30]: the span \(\left\{k(.,w):w\in\mathbb{D}\right\}\) is dense in \(\mathcal{H}(k)\) and
\[f(w)=\big{\langle}f,k(.,w)\big{\rangle}_{\mathcal{H}(k)}\]
for all \(f\in\mathcal{H}(k)\) and \(w\in\mathbb{D}\), where \(k(.,w)\) denotes the function \(z\mapsto k(z,\underline{w})\) for a fixed \(w\in\mathbb{D}\). The kernel function has a formula, namely \(k(z,w)=\sum_{n\geq 0}e_{n}(z)\overline{e_{n}(w)}\) for any orthonormal basis \(\left\{e_{n}\right\}_{n\geq 0}\) of the space for which \(k(z,w)\) is the kernel.
It is known that, if \(k(z,w)\) is an analytic scalar kernel, then the derivatives
\[\frac{\partial^{n}k(.,0)}{\partial\overline{w}^{n}}\]
can give information on the dynamics of the adjoint of the multiplication by the independent variable on \(\mathcal{H}(k)\), see [29]. We will use the following fact to derive the necessary parts in the characterization of hypercyclicity, mixing and chaos of scalar multiples of \(B\) on tridiagonal spaces, and refer to [29].
**Proposition 2.1**.: _If \(\mathcal{H}(k)\) is an analytic reproducing kernel space over \(\mathbb{D}\), then_
\[\frac{\partial^{n}k(.,0)}{\partial\overline{w}^{n}}\in\mathcal{H}(k)\quad\text {and}\quad f^{(n)}(0)=\big{\langle}f,\frac{\partial^{n}k(.,0)}{\partial \overline{w}^{n}}\big{\rangle}_{\mathcal{H}(k)},\]
_for all \(n\geq 0\) and \(f\in\mathcal{H}(k)\). Moreover,_
\[\left\|\frac{\partial^{n}k(.,0)}{\partial\overline{w}^{n}}\right\|_{\mathcal{ H}(k)}=\left(\frac{\partial^{2n}k}{\partial z^{n}\partial\overline{w}^{n}}(0,0) \right)^{1/2}.\]
A standard example for an analytic reproducing kernel space is the diagonal space \(\mathcal{H}^{2}(\beta)\): for a given sequence \(\beta=\left\{\beta_{n}\right\}_{n=0}^{\infty}\) of strictly positive reals, this space consists of analytic functions \(f(z)=\sum_{n\geq 0}\lambda_{n}z^{n}\) on \(\mathbb{D}\) such that \(\|f\|^{2}:=\sum_{n\geq 0}|\lambda_{n}|^{2}/\beta_{n}<\infty\). As \(\sqrt{\beta_{n}}z^{n}\), \(n\geq 0\), forms an orthonormal basis for \(\mathcal{H}^{2}(\beta)\), its kernel is given by \(\sum_{n\geq 0}\beta_{n}z^{n}\overline{w}^{n}\), cf. [30].
We now introduce analytic tridiagonal kernel spaces. For two sequences of non-zero complex numbers \(a=\left\{a_{n}\right\}_{n=0}^{\infty}\) and \(b=\left\{b_{n}\right\}_{n=0}^{\infty}\), let \(\mathcal{H}_{a,b}\) be the Hilbert space of functions on \(\mathbb{D}\), for which \(\left\{f_{n}\right\}_{n=0}^{\infty}\) forms an orthonormal basis, where
\[f_{n}(z)=(a_{n}+b_{n}z)z^{n},\ n\geq 0.\]
Since, \(k(z,w)=\sum_{n\geq 0}f_{n}(z)\overline{f_{n}(w)}\), we get the tri-diagonal kernel as,
\[k(z,w)=|a_{0}|^{2}+\sum_{n\geq 1}(|a_{n}|^{2}+|b_{n-1}|^{2})z^{n}\overline{w}^{n }+\sum_{n\geq 0}a_{n}\overline{b_{n}}z^{n}\overline{w}^{n+1}+\sum_{n\geq 0} \overline{a_{n}}b_{n}z^{n+1}\overline{w}^{n}, \tag{2.2}\]
for all \(z,w\in\mathbb{D}\). We call \(\mathcal{H}_{a,b}\) a tridiagonal reproducing kernel Hilbert space, or simply a tridiagonal space. We will always assume the following:
_For a fixed \(w\in\mathbb{D}\), the series in (2.2) has a radius of convergence \(1\)._
In that case, \(k(z,w)\) is analytic in \(z\in\mathbb{D}\), and consequently each \(f(z)\) in \(\mathcal{H}_{a,b}\) is analytic on \(\mathbb{D}\) by the continuity of evaluation functionals. For more on the terminology of tridiagonal kernels, we refer to Adams and McGuire [1].
To derive the boundedness and hypercyclicity properties of the backward shift operator on tridiagonal spaces, we first express a monomial \(z^{n}\) in the orthonormal basis \(\{f_{n}\}\); see (2.3) below. Such an expression will help us to find estimates of \(\|z^{n}\|_{\mathcal{H}_{a,b}}\) in terms of \(\{a_{n}\}\) and \(\{b_{n}\}\) (Proposition 4.1). Since we repeatedly use the orthonormal expansion of \(z^{n}\), we prefer to show its derivation, although the same is available in [1]. Indeed, fix \(n\geq 0\), and write \(z^{n}=\sum_{j=0}^{\infty}\alpha_{j}f_{j}\) for some \(\alpha_{j}\in\mathbb{C},\ j\geq 0.\) Then
\[z^{n}=\alpha_{0}a_{0}+\sum_{j=1}^{\infty}(\alpha_{j-1}b_{j-1}+\alpha_{j}a_{j} )z^{j}.\]
Thus, comparing coefficients, we have \(\alpha_{0}=\alpha_{1}=\cdots=\alpha_{n-1}=0\), and \(\alpha_{n}=\frac{1}{a_{n}}\), as the \(a_{j}\) are non-zero scalars. Since
\[\alpha_{n+k-1}b_{n+k-1}+\alpha_{n+k}a_{n+k}=0,\]
it follows that
\[\alpha_{n+k}=-\frac{\alpha_{n+k-1}b_{n+k-1}}{a_{n+k}},\]
and thus
\[\alpha_{n+k}=\frac{(-1)^{k}}{a_{n}}\frac{b_{n}b_{n+1}\cdots b_{n+k-1}}{a_{n+1 }a_{n+2}\cdots a_{n+k}},\ \ (k\geq 1).\]
This implies
\[z^{n}=\frac{1}{a_{n}}\sum_{j=0}^{\infty}(-1)^{j}(\frac{\prod_{k=0}^{j-1}b_{n +k}}{\prod_{k=0}^{j-1}a_{n+k+1}})f_{n+j},\ \ (n\geq 0), \tag{2.3}\]
where the term corresponding to \(j=0\) is \(1\). The above expansion will be used repeatedly.
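The recursion behind (2.3) is also easy to test numerically. The short Python sketch below is purely illustrative and uses the hypothetical choices \(a_{n}=1\) and \(b_{n}=2^{-(n+1)}\); it truncates all power series at order \(N\) and checks that \(\sum_{j}\alpha_{j}f_{j}\) reproduces the monomial \(z^{n}\) up to the truncation.

```python
# Numerical check of the expansion (2.3) for the illustrative data a_n = 1, b_n = 1/2^(n+1).
import numpy as np

N = 12                                   # truncation order for power-series coefficients
a = np.ones(N)
b = np.array([0.5 ** (k + 1) for k in range(N)])

def basis_coeffs(n):
    """Power-series coefficients of f_n(z) = a_n z^n + b_n z^(n+1), truncated to length N."""
    c = np.zeros(N)
    c[n] = a[n]
    if n + 1 < N:
        c[n + 1] = b[n]
    return c

def monomial_expansion(n):
    """Coefficients alpha_j in z^n = sum_j alpha_j f_j, following the recursion behind (2.3)."""
    alpha = np.zeros(N)
    alpha[n] = 1.0 / a[n]
    for j in range(n + 1, N):
        alpha[j] = -alpha[j - 1] * b[j - 1] / a[j]
    return alpha

n = 3
alpha = monomial_expansion(n)
reconstructed = sum(alpha[j] * basis_coeffs(j) for j in range(N))
target = np.zeros(N); target[n] = 1.0
print(np.max(np.abs(reconstructed - target)))        # ~0 up to the truncation
```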
The backward shift operator on \(\mathcal{H}_{a,b}\) is defined by
\[(Bf)(z)=\sum_{n=0}^{\infty}\lambda_{n+1}z^{n}, \tag{2.4}\]
for \(f(z)=\sum_{n=0}^{\infty}\lambda_{n}z^{n}\) in \(\mathcal{H}_{a,b}\). Note that \(B\) is the "coefficient backward shift". To obtain necessary and sufficient conditions for \(B\) to be bounded on \(\mathcal{H}_{a,b}\), we proceed by computing the matrix representation of the operator \(B\) acting on \(\mathcal{H}_{a,b}\) with respect to the orthonormal basis \(\{f_{n}\}_{n\geq 0}\) and then, study the matrix operator on \(\ell^{2}\). See [1] for a similar study on tridiagonal shifts.
Recall the orthonormal basis \(f_{n}(z)=(a_{n}+b_{n}z)z^{n},n\geq 0\), in \(\mathcal{H}_{a,b}\). Note that \(B(f_{n})(z)=a_{n}z^{n-1}+b_{n}z^{n},\ n\geq 1\). Also, \(f_{0}(z)=a_{0}+b_{0}z\) and
\[B(f_{0})(z)=b_{0}=\frac{b_{0}}{a_{0}}f_{0}-\frac{b_{0}^{2}}{a_{0}a_{1}}f_{1}+ \frac{b_{0}^{2}b_{1}}{a_{0}a_{1}a_{2}}f_{2}-\frac{b_{0}^{2}b_{1}b_{2}}{a_{0}a_ {1}a_{2}a_{3}}f_{3}+\cdots.\]
For \(n\geq 1\), we have
\[B(f_{n})(z)=a_{n}z^{n-1}+b_{n}z^{n}=\frac{a_{n}}{a_{n-1}}f_{n-1}+(\frac{b_{n} }{a_{n}}-\frac{a_{n}}{a_{n-1}}\frac{b_{n-1}}{a_{n}})a_{n}z^{n},\]
by putting the value of \(z^{n-1}\). Setting
\[c_{n}:=\frac{b_{n}}{a_{n}}-\frac{b_{n-1}}{a_{n-1}},\]
we immediately get
\[B(f_{n})(z)=\frac{a_{n}}{a_{n-1}}f_{n-1}+c_{n}a_{n}z^{n},\ n\geq 1.\]
Now, the matrix representation of \(B\) can be obtained from the following expressions:
\[B(f_{1})(z)=\frac{a_{1}}{a_{0}}f_{0}+c_{1}f_{1}-\frac{c_{1}b_{1}}{a_{2}}f_{2}+ \frac{c_{1}b_{1}b_{2}}{a_{2}a_{3}}f_{3}-\cdots,\]
\[B(f_{2})(z)=\frac{a_{2}}{a_{1}}f_{1}+c_{2}f_{2}-\frac{c_{2}b_{2}}{a_{3}}f_{3}+ \frac{c_{2}b_{2}b_{3}}{a_{3}a_{4}}f_{4}-\cdots,\]
and so on. Hence, the matrix of \(B\) with respect to the orthonormal basis \(\{f_{n}\}_{n\geq 0}\) is
\[[B]:=\left[\begin{array}{cccccc}\frac{b_{0}}{a_{0}}&\frac{a_{1}}{a_{0}}&0&0&0&\cdots\\ -\frac{b_{0}^{2}}{a_{0}a_{1}}&c_{1}&\frac{a_{2}}{a_{1}}&0&0&\ddots\\ \frac{b_{0}^{2}b_{1}}{a_{0}a_{1}a_{2}}&-\frac{c_{1}b_{1}}{a_{2}}&c_{2}&\frac{a_{3}}{a_{2}}&0&\ddots\\ -\frac{b_{0}^{2}b_{1}b_{2}}{a_{0}a_{1}a_{2}a_{3}}&\frac{c_{1}b_{1}b_{2}}{a_{2}a_{3}}&-\frac{c_{2}b_{2}}{a_{3}}&c_{3}&0&\ddots\\ \frac{b_{0}^{2}b_{1}b_{2}b_{3}}{a_{0}a_{1}a_{2}a_{3}a_{4}}&-\frac{c_{1}b_{1}b_{2}b_{3}}{a_{2}a_{3}a_{4}}&\frac{c_{2}b_{2}b_{3}}{a_{3}a_{4}}&-\frac{c_{3}b_{3}}{a_{4}}&\ddots&\ddots\\ \vdots&\vdots&\vdots&\ddots&\ddots&\ddots\\ \end{array}\right]. \tag{2.5}\]
Compare the above matrix with that of a left inverse of the multiplication operator \(\big{(}Sf\big{)}(z)=zf(z)\) defined on a tridiagonal space, cf. Das and Sarkar [17], Proposition 3.1.
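The columns of (2.5) can also be generated mechanically: apply the coefficient backward shift to the power-series coefficients of \(f_{n}\) and re-expand the result in the basis \(\{f_{m}\}\) by solving the bidiagonal system \(\mu_{m}a_{m}+\mu_{m-1}b_{m-1}=\lambda_{m}\). The Python sketch below is only an illustration; the data \(a_{n}=1\), \(b_{n}=3^{-(n+1)}\) are hypothetical, and a few entries are compared with the closed-form expressions above.

```python
# Sanity check of a few entries of the matrix (2.5) for illustrative data a_n = 1, b_n = 1/3^(n+1).
import numpy as np

N = 10
a = np.ones(N + 1)
b = np.array([3.0 ** (-(k + 1)) for k in range(N + 1)])
c = np.array([0.0] + [b[n] / a[n] - b[n - 1] / a[n - 1] for n in range(1, N + 1)])

def to_basis(lam):
    """Solve sum_m mu_m (a_m z^m + b_m z^(m+1)) = sum_m lam_m z^m for mu (truncated)."""
    mu = np.zeros_like(lam)
    mu[0] = lam[0] / a[0]
    for m in range(1, len(lam)):
        mu[m] = (lam[m] - mu[m - 1] * b[m - 1]) / a[m]
    return mu

M = np.zeros((N, N))
for n in range(N):
    lam = np.zeros(N + 1)
    lam[n] += a[n]; lam[n + 1] += b[n]     # power-series coefficients of f_n
    M[:, n] = to_basis(lam[1:])[:N]        # the backward shift drops the constant term

print(np.isclose(M[0, 0], b[0] / a[0]), np.isclose(M[1, 0], -b[0] ** 2 / (a[0] * a[1])))
print(np.isclose(M[0, 1], a[1] / a[0]), np.isclose(M[1, 1], c[1]), np.isclose(M[2, 1], -c[1] * b[1] / a[2]))
```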
We now determine necessary and sufficient conditions under which the above (formal) matrix defines a bounded operator on \(\ell^{2}\). Equivalently, this gives boundedness results for \(B\) acting on \(\mathcal{H}_{a,b}\).
Recall that
\[c_{n}:=\frac{b_{n}}{a_{n}}-\frac{b_{n-1}}{a_{n-1}},\ n\geq 1.\]
**Proposition 2.2**.: _If \(B\) is bounded on an analytic tridiagonal space \(\mathcal{H}_{a,b}\), then_
\[\big{\{}\frac{a_{n+1}}{a_{n}}\big{\}}_{n\geq 1}\quad\text{ and }\quad\quad\{c_{n}\}_{n\geq 1}\]
_are bounded sequences._
Proof.: Let \(B\) be bounded on \(\mathcal{H}_{a,b}\). Then the matrix \([B]\) induces a bounded operator on \(\ell^{2}\). Let \(v_{n}\) be the \(n\)-th column of \([B]\). Operating \([B]\) on the subset \(\{e_{n}\}_{n\geq 1}\) of the standard orthonormal basis in \(\ell^{2}\), since \([B](e_{n})=v_{n}\), we get that
\[\sup_{n}\|v_{n}\|<\infty.\]
On the other hand,
\[\|v_{n}\|_{\ell^{2}}^{2}\geq\big{|}\frac{a_{n}}{a_{n-1}}\big{|}^{2}+|c_{n}|^{2 },\quad n\geq 1.\]
This implies the necessary conditions, as in the proposition.
The following theorem gives a (general) sufficient condition for \(B\) to be bounded.
**Theorem 2.3**.: _Let \(\mathcal{H}_{a,b}\) be the reproducing kernel Hilbert space having an orthonormal basis of the form \(\{(a_{n}+b_{n}z)z^{n}:\ n\geq 0\}\). If_
\[\sup_{n\geq 1}\ \left\{\left|\frac{a_{n+1}}{a_{n}}\right|,\ |c_{n}|\right\}<\infty,\ \text{and}\ \sum_{n=1}^{\infty}\max\left\{\left|\frac{b_{0}^{2}b_{1}\cdots b_{n-1}}{a_{0}a_{1}\cdots a_{n}}\right|,\ \sup_{j\geq 1}\ \left|\frac{c_{j}b_{j}b_{j+1}\cdots b_{j+n-1}}{a_{j+1}a_{j+2}\cdots a_{j+n}}\right|\right\}<\infty,\]
_then \(B\) is bounded on \(\mathcal{H}_{a,b}\)._
Proof.: We split the matrix of \(B\) as a formal series of infinite matrices as follows:
\[[B]=\begin{bmatrix}\frac{b_{0}}{a_{0}}&\frac{a_{1}}{a_{0}}&0&0&0&\cdots\\ -\frac{b_{0}^{2}}{a_{0}a_{1}}&c_{1}&\frac{a_{2}}{a_{1}}&0&0&\ddots\\ \frac{b_{0}^{2}b_{1}}{a_{0}a_{1}a_{2}}&-\frac{c_{1}b_{1}}{a_{2}}&c_{2}&\frac{ a_{3}}{a_{2}}&0&\ddots\\ -\frac{b_{0}^{2}b_{1}b_{2}}{a_{0}a_{1}a_{2}a_{3}}&\frac{c_{1}b_{1}b_{2}}{a_{2} a_{3}}&-\frac{c_{2}b_{2}}{a_{3}}&c_{3}&0&\ddots\\ \frac{b_{0}^{2}b_{1}b_{2}b_{3}}{a_{0}a_{1}a_{2}a_{3}a_{4}}&-\frac{c_{1}b_{1}b_ {2}b_{3}}{a_{2}a_{3}a_{4}}&\frac{c_{2}b_{2}b_{3}}{a_{3}a_{4}}&-\frac{c_{3}b_{ 3}}{a_{4}}&\ddots&\ddots\\ \vdots&\vdots&\vdots&\ddots&\ddots&\ddots\end{bmatrix}\]
\[=\begin{bmatrix}0&\frac{a_{1}}{a_{0}}&0&0&\cdots\\ 0&0&\frac{a_{2}}{a_{1}}&0&\ddots\\ 0&0&0&\frac{a_{3}}{a_{2}}&\ddots\\ 0&0&0&0&\ddots\\ \vdots&\vdots&\vdots&\ddots&\ddots\end{bmatrix}+\begin{bmatrix}\frac{b_{0}}{a_ {0}}&0&0&0&\cdots\\ 0&c_{1}&0&0&\ddots\\ 0&0&c_{2}&0&\ddots\\ 0&0&0&c_{3}&\ddots\\ \vdots&\vdots&\vdots&\ddots&\ddots\end{bmatrix}+\begin{bmatrix}0&0&0&\cdots\\ -\frac{b_{0}^{2}}{a_{0}a_{1}}&0&0&\ddots\\ 0&-\frac{c_{1}b_{1}}{a_{2}}&0&\ddots\\ 0&0&-\frac{c_{2}b_{2}}{a_{3}}&\ddots\\ \vdots&\vdots&\ddots&\ddots\end{bmatrix}+\ldots,\]
which is a formal series of matrices, \([B_{w}]+[D]+\sum_{n=1}^{\infty}[F_{n}]\). Here, \([B_{w}]\) is the matrix of the standard weighted backward shift \(B_{w}(e_{i})\mapsto w_{i}e_{i-1}\) on \(\ell^{2}\), \(i\geq 1\), having weights
\[w_{i}=\frac{a_{i}}{a_{i-1}},\ \ \ \ \ \ \ (i\geq 1),\]
and \([D]\) is the matrix of the diagonal operator
\[\text{diag}\ (\frac{b_{0}}{a_{0}},c_{1},c_{2},\cdots)\]
on \(\ell^{2}\). The matrix \([F_{n}]\) is obtained by deleting all the entries of \([B]\), except those at the \(n\)-th subdiagonal, where \(n\geq 1\). Observe that \([F_{n}]\) is the matrix of suitable powers of a weighted forward shift \(F_{n}\) for \(n\geq 1\).
It follows, respectively by the first two assumptions in the theorem, that the weighted shift \(B_{w}\) and the diagonal operator \(D\) are bounded on \(\ell^{2}\). Since
\[\|F_{n}\|=\max\left\{\left|\frac{b_{0}^{2}b_{1}\cdots b_{n-1}}{a_{0}a_{1} \cdots a_{n}}\right|,\ \sup_{j\geq 1}\ \left|\frac{c_{j}b_{j}b_{j+1}\cdots b_{j+n-1}}{a_{j+1}a_{j+2} \cdots a_{j+n}}\right|\right\},\]
the third condition in the theorem gives that \(F_{n}\) is bounded, and
\[\sum_{n\geq 1}\|F_{n}\|<\infty\]
with respect to the operator norm. Hence, the shift \(B\) is bounded on \(\mathcal{H}_{a,b}\). This completes the proof of the theorem.
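To make the hypotheses of Theorem 2.3 concrete, the following sketch (an illustration only, not part of the argument) evaluates truncated versions of the three quantities for the hypothetical example \(a_{n}=n+1\), \(b_{n}=(n+1)2^{-(n+1)}\), for which all of them are easily seen to be finite.

```python
# Truncated evaluation of the quantities in Theorem 2.3 for a_n = n+1, b_n = (n+1)/2^(n+1).
import numpy as np

K = 60
a = np.array([float(n + 1) for n in range(K)])
b = np.array([(n + 1) / 2.0 ** (n + 1) for n in range(K)])
c = np.array([0.0] + [b[n] / a[n] - b[n - 1] / a[n - 1] for n in range(1, K)])

ratio_sup = max(abs(a[n + 1] / a[n]) for n in range(K - 1))
c_sup = max(abs(c[n]) for n in range(1, K))

def series_term(n):
    first = abs(b[0] ** 2 * np.prod(b[1:n]) / np.prod(a[:n + 1]))
    second = max(abs(c[j] * np.prod(b[j:j + n]) / np.prod(a[j + 1:j + n + 1]))
                 for j in range(1, K - n))
    return max(first, second)

partial_sum = sum(series_term(n) for n in range(1, 25))
print(ratio_sup, c_sup, partial_sum)   # all finite; the partial sums stabilize quickly
```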
_Remark 2.4_.: We note that \(B\) is a left-inverse of the multiplication operator \((Sf)(z)=zf(z)\) on \(\mathcal{H}_{a,b}\), assuming that both \(B\) and \(S\) are bounded. A closely related left inverse \(B_{1}\) of \(S\) was studied in [17], wherein the authors obtained conditions for the boundedness of \(B_{1}\). The matrices of \(B\) and \(B_{1}\) are almost the same, except for the difference in the first columns. Their assumptions for boundedness, given below, are stronger than those in the above theorem. Indeed, the conditions
\[\sup_{n\geq 1}\,|\frac{a_{n+1}}{a_{n}}|<\infty\quad\text{and}\quad\limsup_{n} \left|\frac{b_{n}}{a_{n+1}}\right|<1 \tag{2.6}\]
imply those in Theorem 2.3. To see this, writing
\[c_{n}=\frac{b_{n}}{a_{n+1}}\frac{a_{n+1}}{a_{n}}-\frac{a_{n}}{a_{n-1}}\frac{b_ {n-1}}{a_{n}},\qquad\ (n\geq 1),\]
we can see that \(\{c_{n}\}\) is bounded. Moreover, since \(\limsup_{n}\left|\frac{b_{n}}{a_{n+1}}\right|<1\), there exist \(r<1\) and \(N\in\mathbb{N}\) such that \(\left|\frac{b_{n}}{a_{n+1}}\right|<r\), for \(n\geq N\). From this, the remaining conditions in Theorem 2.3 follow.
## 3. The shift on \(\mathcal{H}_{a,b}\) as a compact perturbation of a weighted shift on \(\ell^{2}\)
Under some mild assumptions on \(\{a_{n}\}\) and \(\{b_{n}\}\), we prove that the shift \(B\) acting on \(\mathcal{H}_{a,b}\) is unitarily equivalent to the sum \(B_{w}+K\) on \(\ell^{2}\) for a suitable weighted backward shift on \(\ell^{2}\) and a compact operator \(K\). Using this perturbation result, we compute the essential spectrum of the shift \(B\) acting on \(\mathcal{H}_{a,b}\). These results are of independent interest as well.
The essential spectrum \(\sigma_{e}(T)\), of an operator \(T\) on a complex Hilbert space \(\mathcal{H}\) is the set of all \(\lambda\in\mathbb{C}\) such that \(T-\lambda I\) is not Fredholm, that is,
\[\sigma_{e}(T)=\{\lambda\in\mathbb{C}:\ \text{dim}\ \text{Ker}(T-\lambda I)= \infty\ \text{or}\ \text{dim}\ \text{Ker}(T^{*}-\overline{\lambda}I)=\infty\},\]
where \(T^{*}\) is the adjoint of \(T\), cf. Bayart and Matheron [6] and Douglas [18]. The essential spectrum plays a key role in the investigation of hypercyclic subspaces; see the section 4.
In the proof of the following theorem, we use a well known fact: \(\sigma_{e}(T)\) is invariant under a compact perturbation, that is,
\[\sigma_{e}(T+K)=\sigma_{e}(T)\]
for every compact operator \(K\).
**Theorem 3.1**.: _Let \(\mathcal{H}_{a,b}\) be a tridiagonal reproducing kernel space over the unit disc \(\mathbb{D}\). Assume that_
\[\sup_{n}|\frac{a_{n+1}}{a_{n}}|<\infty,\ \limsup_{n}|\frac{b_{n}}{a_{n+1}}|<1, \ and\ \lim_{n}\left|\frac{b_{n}}{a_{n}}-\frac{b_{n-1}}{a_{n-1}}\right|=0.\]
_Then the following hold._
* _The operator_ \(B\) _on_ \(\mathcal{H}_{a,b}\) _is unitarily equivalent to_ \(B_{w}+K\) _for some compact operator_ \(K\) _and the weighted backward shift_ \(B_{w}\) _on the sequence space_ \(\ell^{2}\)_, where the weight sequence_ \(w=(w_{n})\) _is given by_ \[w_{n}=\frac{a_{n}}{a_{n-1}},\ \ n\geq 1.\]
2. _The essential spectrum_ \(\sigma_{e}(B)\) _is the annulus_ \[\sup_{n\geq 1}\left(\inf_{k\geq 1}\left|\frac{a_{k+n}}{a_{k}}\right|\right)^{1/n} \ \leq|z|\leq\inf_{n\geq 1}\left(\sup_{k\geq 1}\left|\frac{a_{k+n}}{a_{k}}\right| \right)^{1/n}.\]
Proof.: The proof relies on the matrix representation of \(B\) with respect to the orthonormal basis \(f_{n}(z)=(a_{n}+b_{n}z)z^{n}\) of \(\mathcal{H}_{a,b}\). Consider the unitary operator \(U:\mathcal{H}_{a,b}\to\ell^{2}\) given by
\[U(\sum_{n=0}^{\infty}\lambda_{n}f_{n})=\sum_{n=0}^{\infty}\lambda_{n}e_{n},\]
that is, \(U(f_{n})=e_{n}\) for all \(n\), where \(\{e_{n}\}_{n\geq 0}\) is the standard basis in \(\ell^{2}\). Now, from the proof of Theorem 2.3 we recall that \(B\) on \(\mathcal{H}_{a,b}\) is unitarily equivalent via \(U\) to the sum (in the operator norm)
\[B_{w}+D+\sum_{m=1}^{\infty}F_{m}.\]
Here, \(B_{w}\) is the weighted backward shift on \(\ell^{2}\) with weights
\[w_{n}=\frac{a_{n}}{a_{n-1}},\hskip 28.452756ptn\geq 1. \tag{3.1}\]
Further, by the assumptions, the operators \(D\) and \(F_{m}\) are compact on \(\ell^{2}\), as the entries in the matrices of \(D\) and \(F_{m}\) converge to \(0\) for all \(m\geq 1\). Hence
\[K:=D+\sum_{m=1}^{\infty}F_{m}\]
is a compact operator on \(\ell^{2}\), and consequently, \(B\) acting on \(\mathcal{H}_{a,b}\) is unitarily equivalent to \(B_{w}+K\). This proves (i).
The invariance of the essential spectrum under compact perturbations along with (i) yields that
\[\sigma_{e}(B)=\sigma_{e}(B_{w}+K)=\sigma_{e}(B_{w}).\]
Thus, it is enough to compute \(\sigma_{e}(B_{w})\). We now recall the essential spectrum of a weighted backward shift on \(\ell^{2}\) and refer to [6] and [32]: In general, for an injective weighted shift \(B_{w}\) corresponding to \(w=\{w_{n}\}_{n=1}^{\infty}\), the essential spectrum is the annulus
\[\sup_{n\geq 1}\left(\inf_{k\geq 1}\prod_{i=1}^{n}|w_{k+i}|\right)^{1/n}\ \leq|z|\leq\inf_{n\geq 1}\left(\sup_{k\geq 1} \prod_{i=1}^{n}|w_{k+i}|\right)^{1/n}.\]
In our setting, \(B_{w}\) is the weighted shift with weights as in (3.1). Since
\[\prod_{i=1}^{n}w_{k+i}=\frac{a_{k+n}}{a_{k}}\]
for all \(k,n\geq 1\), the result in (ii) follows. The proof is complete.
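Part (ii) is straightforward to evaluate numerically once \(\{a_{n}\}\) is specified. For the hypothetical geometric choice \(a_{n}=2^{n}\), both radii equal \(2\), so \(\sigma_{e}(B)\) degenerates to the circle \(|z|=2\); the sketch below is only an illustration and approximates the two radii by finite truncations of the suprema and infima.

```python
# Truncated computation of the annulus radii in Theorem 3.1(ii) for the illustrative a_n = 2^n.
import numpy as np

K = 200
a = np.array([2.0 ** n for n in range(K)])

def inner_radius(n_max=40, k_max=100):
    return max(min(a[k + n] / a[k] for k in range(1, k_max)) ** (1.0 / n)
               for n in range(1, n_max))

def outer_radius(n_max=40, k_max=100):
    return min(max(a[k + n] / a[k] for k in range(1, k_max)) ** (1.0 / n)
               for n in range(1, n_max))

print(inner_radius(), outer_radius())   # both approximately 2 for this geometric example
```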
## 4. Hypercyclicity, mixing, and chaos
In this section, we characterize the hypercyclicity, mixing and chaos of the backward shift \(\lambda B\) on \(\mathcal{H}_{a,b}\). These results resemble those of weighted backward shifts on \(\ell^{2}\). The following estimates on the norms of monomials will be used in the characterizations of the hypercyclicity properties of \(\lambda B\).
**Proposition 4.1**.: _Assume that the conditions in Theorem 2.3 hold, with \(c_{n}\neq 0\) for all \(n\geq 0\). Then there exists a constant \(M_{1}>0\) such that_
\[\|z^{n}\|_{\mathcal{H}_{a,b}}\leq\frac{M_{1}}{|c_{n}a_{n}|},\ \ n\geq 0. \tag{4.1}\]
_In addition, if \(\limsup_{n}|\frac{b_{n}}{a_{n+1}}|<1\), then there is a constant \(M_{2}>0\) such that_
\[\|z^{n}\|_{\mathcal{H}_{a,b}}\leq\frac{M_{2}}{|a_{n}|},\ \ n\geq 0. \tag{4.2}\]
Proof.: By the orthonormal expansion in \(\mathcal{H}_{a,b}\) and the continuity of evaluation functionals, we can find some \(\{\lambda_{j}\}_{j=0}^{\infty}\in\ell^{2}\) such that
\[z^{n}=\sum_{j\geq 0}\lambda_{j}(a_{j}z^{j}+b_{j}z^{j+1}),\]
for all \(z\in\mathbb{D}\). Equating the coefficients of like-powers, we have that \(\lambda_{j}=0\) for \(j=0,\ldots,n-1\), and
\[\lambda_{n}=\frac{1}{a_{n}},\ \lambda_{n+1}=-\frac{b_{n}}{a_{n+1}}\lambda_{n}=- \frac{1}{a_{n}}\frac{b_{n}}{a_{n+1}},\ \lambda_{n+2}=\frac{1}{a_{n}}\frac{b_{n}}{a_{n+1}}\frac{b_{n+1}}{a_{n+2}}, \tag{4.3}\]
and so on. Since \(\|z^{n}\|_{\mathcal{H}_{a,b}}^{2}=\sum_{j\geq 0}|\lambda_{j}|^{2}\), we have
\[\|z^{n}\|_{\mathcal{H}_{a,b}}^{2}=\frac{1}{|a_{n}|^{2}}+\left|\frac{1}{a_{n}} \frac{b_{n}}{a_{n+1}}\right|^{2}+\left|\frac{1}{a_{n}}\frac{b_{n}}{a_{n+1}} \frac{b_{n+1}}{a_{n+2}}\right|^{2}+\ldots. \tag{4.4}\]
By multiplying the numerators and denominators of each of the above terms by \(c_{n}\), we get the first part in the proposition.
On the other hand, the strong assumption
\[\limsup_{n}|\frac{b_{n}}{a_{n+1}}|<1\]
implies that there exist \(r<1\) and \(N\in\mathbb{N}\) such that \(|b_{n}/a_{n+1}|<r\) for all \(n\geq N\). Thus, by the equation (4.4) we have
\[\|z^{n}\|_{\mathcal{H}_{a,b}}^{2}\leq\frac{1}{|a_{n}|^{2}}\left(\sum_{k\geq 0 }r^{2k}\right),\]
for every \(n\geq N\). The required result in the second part of the proposition follows.
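Formula (4.4) also makes the bounds easy to test. For the hypothetical constant data \(a_{n}=1\), \(b_{n}=1/2\) (so \(|b_{n}/a_{n+1}|=r=1/2\)), the series in (4.4) is geometric and \(\|z^{n}\|_{\mathcal{H}_{a,b}}=(1-r^{2})^{-1/2}\) for every \(n\), which matches the bound (4.2) with \(M_{2}=(1-r^{2})^{-1/2}\). A short illustrative check:

```python
# Truncated evaluation of (4.4) for the illustrative data a_n = 1, b_n = 1/2.
import numpy as np

K = 400
a = np.ones(K)
b = np.full(K, 0.5)

def norm_zn(n, tail=200):
    """Square root of the truncated series (4.4)."""
    total, prod = 1.0, 1.0
    for j in range(tail):
        prod *= abs(b[n + j] / a[n + j + 1])
        total += prod ** 2
    return np.sqrt(total) / abs(a[n])

for n in (0, 5, 20):
    print(n, norm_zn(n))        # each value is close to (1 - 0.25)**(-0.5) ~ 1.1547
```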
The next theorems contain the main results of this section.
**Theorem 4.2**.: _Let \(\mathcal{H}_{a,b}\) be the analytic tridiagonal space corresponding to \(a=\{a_{n}\}_{n=0}^{\infty}\) and \(b=\{b_{n}\}_{n=0}^{\infty}\) satisfying the conditions of Theorem 2.3, and such that \(c_{n}\neq 0\) for all \(n\). Then the following hold for a scalar \(\lambda\):_
1. \(\lambda B\) _is hypercyclic on_ \(\mathcal{H}_{a,b}\) _if_ \[\sup_{n}\ |\lambda^{n}c_{n}a_{n}|=\infty.\]
2. _If_ \(\lambda B\) _is hypercyclic, then_ \[\sup_{n}|\lambda|^{n}(|a_{n}|+|b_{n-1}|)=\infty.\]
3. _Assuming the stronger condition_ \(\limsup_{n}|b_{n}/a_{n+1}|<1\)_, the operator_ \(\lambda B\) _is hypercyclic if and only if_ \[\sup_{n}|\lambda^{n}a_{n}|=\infty.\]
Proof.: To get (i), we apply the Gethner-Shapiro criterion. Let \(X_{0}\) be the space of all polynomials. Then, \(X_{0}\) is dense in \(\mathcal{H}_{a,b}\) as it contains the orthonormal basis \(\{(a_{n}+b_{n}z)z^{n}:n\geq 0\}\). Consider the forward shift \(S:X_{0}\to X_{0}\) given by
\[S(z^{n})=z^{n+1},\ \ \ \ n\geq 0.\]
Trivially,
\[BSf=f\ \ \ \text{and}\ \ \ (\lambda B)^{n}f\to 0,\ \ \ \ \ \ \text{as}\ n\to\infty,\]
for all \(f\in X_{0}\). It suffices to show that, there exists a strictly increasing sequence \(\{m_{k}\}\) of natural numbers such that
\[\frac{1}{\lambda^{m_{k}}}S^{m_{k}}(z^{n})\to 0,\]
as \(k\to\infty\), for every monomial \(z^{n}\). Combining the assumption in (i) with the first estimate in Proposition 4.1, we get an increasing sequence \(\{d_{k}\}\) such that
\[\frac{1}{\lambda^{d_{k}}}S^{d_{k}}(z^{n})\to 0,\]
as \(k\to\infty\). Now, Lemma 4.2 of [23] completes the proof of (i).
To obtain (ii), let
\[f(z)=\sum_{n=0}^{\infty}\lambda_{n}f_{n}(z),\ z\in\mathbb{D},\]
be a hypercyclic vector for \(\lambda B\), where \(f_{n}(z)=a_{n}z^{n}+b_{n}z^{n+1}\), \(n\geq 0\), forms an orthonormal basis of \(\mathcal{H}_{a,b}\). Rearranging the above sum as a power series, we get
\[B^{n}f(z)=\lambda_{n-1}b_{n-1}+\lambda_{n}a_{n}+(\lambda_{n}b_{n}+\lambda_{n+1}a_{n+1})z+\cdots.\]
As \(\{(\lambda B)^{n}f:n\geq 0\}\) is dense in \(\mathcal{H}_{a,b}\), it follows that
\[\sup_{n}\ |\lambda^{n}(\lambda_{n-1}b_{n-1}+\lambda_{n}a_{n})| = \infty.\]
On the other hand,
\[\lambda_{n-1}b_{n-1}+\lambda_{n}a_{n}=\frac{f^{(n)}(0)}{n!},\ \ \ \ n\geq 1,\]
which gives that
\[\sup_{n}|\lambda|^{n}|\frac{f^{(n)}(0)}{n!}|=\infty.\]
Recalling from Proposition 2.1 that \(|f^{(n)}(0)|\) can be dominated by derivatives of the kernel function \(k(z,w)\) of \(\mathcal{H}_{a,b}\), we obtain that
\[\sup_{n}\frac{|\lambda|^{n}}{n!}\left(\frac{\partial^{2n}k}{\partial z^{n} \partial\overline{w}^{n}}(0,0)\right)^{\frac{1}{2}}=\infty.\]
Now, the kernel for \(\mathcal{H}_{a,b}\) is
\[k(z,w)=|a_{0}|^{2}+\sum_{n\geq 1}(|a_{n}|^{2}+|b_{n-1}|^{2})z^{n}\bar{w}^{n}+ \sum_{n\geq 0}a_{n}\bar{b_{n}}z^{n}\bar{w}^{n+1}+\sum_{n\geq 0}\bar{a_{n}}b_{n}z^ {n+1}\bar{w}^{n},\]
for all \(z,w\in\mathbb{D}\), from which it follows that
\[\sup_{n}|\lambda|^{n}(|a_{n}|+|b_{n-1}|)=\infty.\]
This completes the proof of (ii).
To see the part (iii), we proceed as in (i) and (ii) for the sufficiency and necessity, respectively, and along the way, use the second part in Proposition 4.1 and the condition \(\limsup_{n}|b_{n}/a_{n+1}|<1\).
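The condition in part (iii) is a statement about the growth of \(|\lambda^{n}a_{n}|\) alone, and is simple to probe numerically. The sketch below is only an illustration: it looks at truncated suprema for two hypothetical coefficient sequences, tacitly assuming that \(\{b_{n}\}\) is chosen so that \(\limsup_{n}|b_{n}/a_{n+1}|<1\) holds.

```python
# Truncated suprema sup_{n<N} |lambda^n a_n| for two illustrative coefficient sequences.
def truncated_sup(lam, a_fn, N=200):
    return max(abs(lam) ** n * abs(a_fn(n)) for n in range(N))

# a_n = 1: the supremum stays 1 for |lambda| = 1 but grows without bound for |lambda| > 1,
# consistent with lambda*B being hypercyclic exactly when |lambda| > 1 in this setting.
print(truncated_sup(1.0, lambda n: 1.0), truncated_sup(1.1, lambda n: 1.0))

# a_n = n + 1: already for lambda = 1 the supremum is unbounded, so B itself is hypercyclic here.
print(truncated_sup(1.0, lambda n: n + 1.0))
```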
Our next result is on the mixing property of \(\lambda B\) in \(\mathcal{H}_{a,b}\).
**Theorem 4.3**.: _Consider the space \(\mathcal{H}_{a,b}\) corresponding to \(a=\{a_{n}\}\) and \(b=\{b_{n}\}\) satisfying the assumptions as in the previous theorem. Then the following hold for the shift \(B\) on \(\mathcal{H}_{a,b}\) and a scalar \(\lambda\)._
* \(\lambda B\) _is topologically mixing on_ \(\mathcal{H}_{a,b}\) _if_ \[\lim_{n\to\infty}\ |\lambda^{n}c_{n}a_{n}|=\infty\]
* _If_ \(\lambda B\) _is mixing, then_ \[\lim_{n\to\infty}|\lambda|^{n}(|a_{n}|+|b_{n-1}|)=\infty.\]
* _Assuming the stronger condition_ \(\limsup_{n}|b_{n}/a_{n+1}|<1\)_, the operator_ \(\lambda B\) _is mixing if and only if_ \[\lim_{n\to\infty}|\lambda^{n}a_{n}|=\infty.\]
Proof.: For the sufficiency parts in (i) and (iii), proceed exactly as in the proof of the previous theorem by applying the Gethner-Shapiro criterion with \(n_{k}=k\) for \(k\geq 1\).
In the part (iii) of the above theorems, we obtained characterizations of hypercyclicity and mixing of \(\lambda B\), under the strong assumption that \(\limsup_{n}|b_{n}/a_{n+1}|<1\). Along the same lines, we obtain a characterization for \(\lambda B\) to be chaotic using the chaoticity criterion.
**Theorem 4.4**.: _Let \(\mathcal{H}_{a,b}\) be a reproducing kernel Hilbert space of analytic functions on \(\mathbb{D}\) having an orthonormal basis \(\{f_{n}(z)=(a_{n}+b_{n}z)z^{n},n\geq 0\},\) where \(a_{n},b_{n}\) are non-zero complex numbers, satisfying \(\sup_{n}\left|a_{n+1}/a_{n}\right|<\infty\) and \(\limsup_{n}\left|b_{n}/a_{n+1}\right|<1\). Then the following are equivalent for the backward shift \(B\) and a scalar \(\lambda\)._
* \(\lambda B\) _is chaotic on_ \(\mathcal{H}_{a,b}\)_._
* \(\lambda B\) _has a non-trivial periodic vector._
* \(\sum_{n=0}^{\infty}\lvert\lambda^{n}a_{n}\rvert^{-2}<\infty.\)
Proof.: Suppose that the condition in (iii) holds. We apply the chaoticity criterion to show that \(\lambda B\) is chaotic on \(\mathcal{H}_{a,b}.\)
Let \(X_{0}\) be the space of all polynomials. Define \(S:X_{0}\to X_{0}\) given by \(S(z^{n})=\frac{1}{\lambda}z^{n+1},\)\(n\geq 0.\) Clearly \((\lambda B)S=I\) on \(X_{0}\). Moreover, the series \(\sum_{n=0}^{\infty}(\lambda B)^{n}(f)\) converges unconditionally for each \(f\in X_{0}.\) It remains to show that the series \(\sum_{n=0}^{\infty}S^{n}(f)\) converges unconditionally, for each \(f\in X_{0}.\) We prove that
\[\sum_{n=0}^{\infty}\frac{1}{\lambda^{n}}z^{n}\]
is unconditionally convergent in \(\mathcal{H}_{a,b}\). Recalling the orthonormal expansion from (2.3), for a fixed \(n\geq 0\), we have
\[z^{n}=\frac{1}{a_{n}}\sum_{j=0}^{\infty}\lambda_{n,j}f_{n+j}, \tag{4.5}\]
where
\[\lambda_{n,0}=1,\ \ \ \ \text{and}\ \ \ \ \lambda_{n,j}=(-1)^{j}\frac{b_{n}b _{n+1}\cdots b_{n+j-1}}{a_{n+1}a_{n+2}\cdots a_{n+j}},\ \ (j\geq 1).\]
Also,
\[\sum_{n=0}^{\infty}\frac{1}{\lambda^{n}}z^{n}= \sum_{n=0}^{\infty}\frac{1}{\lambda^{n}}\left(\frac{1}{a_{n}} \sum_{j=0}^{\infty}\lambda_{n,j}f_{n+j}\right)\] \[= \sum_{n=0}^{\infty}\left(\frac{\lambda_{0,n}}{a_{0}}+\frac{ \lambda_{1,n-1}}{\lambda a_{1}}+\cdots+\frac{\lambda_{n,0}}{\lambda^{n}a_{n}} \right)f_{n}.\]
As \(\limsup_{n}|b_{n}/a_{n+1}|<1\), one gets \(N\in\mathbb{N}\) and \(r<1\) such that \(|b_{n}/a_{n+1}|<r\) for all \(n\geq N\). Hence,
\[\left|\frac{\lambda_{0,n}}{a_{0}}+\frac{\lambda_{1,n-1}}{\lambda a_{1}}+ \cdots+\frac{\lambda_{n,0}}{\lambda^{n}a_{n}}\right|\leq\frac{r^{n}}{|a_{0}|} +\frac{r^{n-1}}{|\lambda a_{1}|}+\cdots+\frac{1}{|\lambda|^{n}|a_{n}|},\]
for all \(n\geq N\). The right hand side of the above inequality is the \(n\)-th term of the convolution of the \(\ell^{1}\) sequence \(\{r^{n}\}\) with the \(\ell^{2}\) sequence \(\{\frac{1}{\lambda^{n}a_{n}}\}\), and hence, by Young's inequality, it is square summable. Consequently, the series \(\sum_{n}\lambda^{-n}z^{n}\) is convergent. The unconditional convergence occurs because \(\{f_{n}\}\) is orthonormal. Hence, \(\lambda B\) satisfies the chaoticity criterion, and (i) follows.
To see that (ii) implies (iii), let
\[f(z)=\sum_{n=0}^{\infty}\lambda_{n}f_{n}(z),\]
be a non-zero periodic vector for \(\lambda B\) on \(\mathcal{H}_{a,b}\), where \(f_{n}(z)=a_{n}z^{n}+b_{n}z^{n+1},\)\(n\geq 0,\) forms an orthonormal basis for \(\mathcal{H}_{a,b}\). Now,
\[f(z) = \sum_{n=0}^{\infty}\lambda_{n}f_{n}(z)\] \[= \lambda_{0}a_{0}+\sum_{n=1}^{\infty}(\lambda_{n-1}b_{n-1}+\lambda _{n}a_{n})z^{n}\] \[:= \sum_{n=0}^{\infty}A_{n}z^{n},\]
where \(A_{0}=\lambda_{0}a_{0}\) and \(A_{n}=\lambda_{n-1}b_{n-1}+\lambda_{n}a_{n},\)\(n\geq 1.\) Let \(p\in\mathbb{N}\) be such that \((\lambda B)^{p}f(z)=f(z)\) for all \(z\in\mathbb{D}\). Then \((\lambda B)^{kp}f(z)=f(z)\) for all \(k\geq 1\). It follows that
\[\lambda^{kp}(A_{kp}+A_{kp+1}z+\ldots+A_{kp+n}z^{n}+\ldots)=A_{0}+A_{1}z+\ldots+ A_{n}z^{n}+\ldots,\]
for all \(z\in\mathbb{D}\). We can compare the respective coefficients and get the required result. Indeed, equating the coefficients of \(z^{j}\) for \(0\leq j\leq p-1,\) we obtain
\[A_{j}=\lambda^{kp}A_{kp+j}\ \forall\ k\geq 1.\]
The case \(j=0\) gives
\[\lambda_{0}a_{0}=\lambda^{kp}\lambda_{kp-1}b_{kp-1}+\lambda^{kp}\lambda_{kp}a_ {kp},\]
for all \(k\geq 1\). We get
\[|\lambda_{0}a_{0}|^{2}\sum_{k=1}^{\infty}\left|\frac{1}{\lambda^{kp}a_{kp}} \right|^{2}\leq C\left(r^{2}\sum_{k=1}^{\infty}\left|\lambda_{kp-1}\right|^{2} +\sum_{k=1}^{\infty}\left|\lambda_{kp}\right|^{2}\right)\]
for some \(C>0\), where \(r:=\sup_{n}\left|\frac{b_{n}}{a_{n+1}}\right|<1.\) Since \(\{\lambda_{n}\}\in\ell^{2},\)
\[\sum_{k=1}^{\infty}\left|\frac{1}{\lambda^{kp}a_{kp}}\right|^{2}<\infty.\]
For \(j=1,\ldots,p-1\), we similarly have \(\lambda_{j-1}b_{j-1}+\lambda_{j}a_{j}=\lambda^{kp}(\lambda_{kp+j-1}b_{kp+j-1} +\lambda_{kp+j}a_{kp+j}).\) Once again, using \(r<1\), we obtain
\[\sum_{k=1}^{\infty}\left|\frac{1}{\lambda^{kp}a_{kp+j}}\right|^{2}<\infty.\]
Consequently, the series in (iii) is convergent.
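Condition (iii) is likewise easy to test once \(\lambda\) and \(\{a_{n}\}\) are given. The sketch below is only an illustration: it computes partial sums of \(\sum_{n}|\lambda^{n}a_{n}|^{-2}\) for two hypothetical examples, again assuming \(\{b_{n}\}\) is compatible with the standing hypotheses.

```python
# Partial sums of sum_n |lambda^n a_n|^{-2} for two illustrative examples.
import numpy as np

def partial_sum(lam, a_fn, N=200):
    return float(np.sum([1.0 / abs(lam ** n * a_fn(n)) ** 2 for n in range(N)]))

# a_n = 1, lambda = 2: a geometric series with sum 4/3, so 2B is chaotic in this setting.
print(partial_sum(2.0, lambda n: 1.0))
# a_n = n + 1, lambda = 1: sum_n 1/(n+1)^2 converges (to pi^2/6), so B itself is chaotic here.
print(partial_sum(1.0, lambda n: n + 1.0))
```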
_Remark 4.5_.: The dynamics of the shift \(B\) in \(\mathcal{H}_{a,b}\) can be observed from the previous theorems by taking \(\lambda=1\).
We conclude this section with a remark on the existence of hypercyclic subspaces for \(\lambda B\) in \(\mathcal{H}_{a,b}\). Recall that if the set \(HC(T)\) of all hypercyclic vectors of an operator \(T\) on a Banach space \(X\) contains a closed infinite dimensional subspace (excluding the zero vector), then we say that \(T\) has a hypercyclic subspace. It is well known that the essential spectrum of an operator \(T\) on a complex Banach space completely characterizes the existence of hypercyclic subspaces, thanks to an important result of Gonzalez, Leon-Saavedra and Montes-Rodriguez [21]: if \(T\) is a bounded operator satisfying the hypercyclicity criterion in a complex Banach space, then \(T\) has a hypercyclic subspace if and only if
\[\sigma_{e}(T)\cap\overline{\mathbb{D}}\neq\phi. \tag{4.6}\]
For details on the study of hypercyclic subspaces and related topics for various classes of operators including the weighted backward shifts, we refer to [6], [25], [26], and [27].
In view of (4.6) and Theorem 3.1 we can now establish the existence of hypercyclic subspaces in \(\mathcal{H}_{a,b}\).
**Corollary 4.6**.: _Let \(\mathcal{H}_{a,b}\) be an analytic tridiagonal space over the unit disc such that \(\sup_{n}|a_{n+1}/a_{n}|<\infty\) and \(\limsup_{n}|b_{n}/a_{n+1}|<1\). Then the multiple \(\lambda B\) has hypercyclic subspaces in \(\mathcal{H}_{a,b}\) if and only if_
\[\sup_{n}|\lambda^{n}a_{n}|=\infty\quad\quad\text{and}\quad\quad\sup_{n\geq 1} \left(\inf_{k\geq 1}\left|\frac{a_{k+n}}{a_{k}}\right|\right)^{1/n}\leq\frac{1}{| \lambda|}.\]
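For a concrete, purely hypothetical illustration of the corollary, take \(a_{n}=2^{n}\) and \(\lambda=1\), with \(\{b_{n}\}\) chosen so that the standing hypotheses hold. Then \(\sup_{n}|\lambda^{n}a_{n}|=\infty\), so \(B\) is hypercyclic, but the inner radius equals \(2>1/|\lambda|\), so by the corollary \(B\) admits no hypercyclic subspace. The sketch below evaluates truncated versions of the two quantities.

```python
# Truncated check of the two conditions of Corollary 4.6 for a_n = 2^n and lambda = 1.
import numpy as np

a = np.array([2.0 ** n for n in range(160)])
lam = 1.0

sup_term = max(abs(lam) ** n * a[n] for n in range(120))          # grows without bound
inner = max(min(a[k + n] / a[k] for k in range(1, 100)) ** (1.0 / n)
            for n in range(1, 40))                                 # approximately 2

print(sup_term, inner, inner <= 1.0 / abs(lam))                    # second condition fails
```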
## 5. The shift operator on tridiagonal spaces given by matrix valued kernels
We now consider tridiagonal reproducing kernel Hilbert spaces which are induced by matrix valued analytic kernels, and study the dynamics of the shift \(B\) acting on these spaces. The results obtained in this section can be regarded, in particular, as the vector valued versions of our previous results in the scalar tridiagonal kernel spaces.
For sequences \(\mathcal{A}:=\{A_{n}\}_{n=0}^{\infty}\) and \(\mathcal{B}:=\{B_{n}\}_{n=0}^{\infty}\) of complex matrices of order \(d\), we consider the function
\[K:\mathbb{D}\times\mathbb{D}\to M_{d}(\mathbb{C}),\]
defined by
\[K(z,w)=\sum_{n=0}^{\infty}(A_{n}+B_{n}z)(A_{n}^{*}+B_{n}^{*} \overline{w})z^{n}\overline{w}^{n}, \tag{5.1}\]
where the symbol \(A^{*}\) denotes the transpose conjugate of \(A\). We assume that, for each fixed \(w\in\mathbb{D}\), the above series (considered as a power series in \(z\)) has a radius of convergence \(1\). Note that \(K(z,w)\) reduces to a scalar tridiagonal form when \(d=1\).
It follows that \(K(z,w)\) is an \(M_{d}(\mathbb{C})\)-valued kernel over \(\mathbb{D}\). To verify this, from the definition of kernels it can be seen that
\[(A_{n}+B_{n}z)(A_{n}^{*}+B_{n}^{*}\overline{w})z^{n}\overline{w}^{n},\quad\ (n\geq 0),\]
is a kernel function and so is their sum. We refer to Paulsen and Raghupati [30] wherein it is proved that a sum of kernels is also a kernel. Denote the reproducing kernel space corresponding to \(K(z,w)\) by \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\).
The following gives a sufficient condition for the backward shift on \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\) to be a bounded operator.
**Theorem 5.1**.: _Let \(K(z,w)\) be an \(M_{d}(\mathbb{C})\)-valued kernel over \(\mathbb{D}\), of the form (5.1), and let \(\{A_{n},B_{n}:n\geq 0\}\) be simultaneously and unitarily diagonalizable invertible matrices. Let \(a_{n}^{(1)},\ldots,a_{n}^{(d)}\) and \(b_{n}^{(1)},\ldots,b_{n}^{(d)}\) be the eigenvalues of \(A_{n}\) and \(B_{n}\), respectively. If, for each \(1\leq q\leq d\),_
\[\sup_{n}\left|a_{n+1}^{(q)}/a_{n}^{(q)}\right|<\infty\quad\quad \text{and}\quad\quad\limsup_{n}\left|b_{n}^{(q)}/a_{n+1}^{(q)}\right|<1, \tag{5.2}\]
_then the backward shift \(B\) is bounded on \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\). In fact, the shift on \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\) is unitarily similar to the direct sum of the backward shifts on \(\oplus_{q=1}^{d}\mathcal{H}_{q}\), where \(\mathcal{H}_{q}\) is the (scalar) tridiagonal space having an orthonormal basis of the form_
\[a_{n}^{(q)}z^{n}+b_{n}^{(q)}z^{n+1},\ n\geq 0.\]
Proof.: By the assumptions on \(\{A_{n}\}\) and \(\{B_{n}\}\), there exists a unitary matrix \(Q\) such that
\[A_{n}=Q^{*}D_{n,1}Q\quad\text{ and }\quad B_{n}=Q^{*}D_{n,2}Q,\quad\ n\geq 0.\]
Here, \(D_{n,1}\) and \(D_{n,2}\) are respectively, the diagonal matrices consisting of the eigenvalues of \(A_{n}\) and \(B_{n}\). Thus, we have
\[K(z,w)=Q^{*}\Big{(}\sum_{n=0}^{\infty}(D_{n,1}+D_{n,2}z)(D_{n,1}^{*}+D_{n,2}^{*} \overline{w})z^{n}\overline{w}^{n}\Big{)}Q,\]
for all \(z,w\in\mathbb{D}\). Since \(Q\) preserves the inner product in \(\mathbb{C}^{d}\), the above factorization along with the reproducing property implies that the shift \(B\) on \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\) is unitarily similar to the backward shift on the reproducing kernel space \(\mathcal{H}\) corresponding to the kernel
\[K_{1}(z,w):=\sum_{n=0}^{\infty}(D_{n,1}+D_{n,2}z)(D_{n,1}^{*}+D_{n,2}^{*} \overline{w})z^{n}\overline{w}^{n}.\]
To see this, we claim that the spaces \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\) and \(\mathcal{H}\) are equal as sets with equal norms: for if \(w_{1},\ldots,w_{n}\in\mathbb{D}\) and \(u_{1},\ldots,u_{n}\in\mathbb{C}^{d}\), we have
\[\|\sum_{i=1}^{n}K(.,w_{i})u_{i}\|_{\mathcal{H}_{\mathcal{A},\mathcal{B}}}^{2} =\sum_{i,j=1}^{n}\langle K(w_{i},w_{j})u_{j},u_{i}\rangle_{\mathbb{C}^{d}}\] \[=\sum_{i,j=1}^{n}\langle K_{1}(w_{i},w_{j})u_{j},u_{i}\rangle_{\mathbb{C}^{d}}\] \[=\|\sum_{i=1}^{n}K_{1}(.,w_{i})u_{i}\|_{\mathcal{H}}^{2}.\]
Since the sets
\[\{K(.,w)u:\ w\in\mathbb{D},u\in\mathbb{C}^{d}\}\ \text{and}\ \{K_{1}(.,w)u:\ w\in \mathbb{D},u\in\mathbb{C}^{d}\}\]
span dense subspaces of \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\) and \(\mathcal{H}\) respectively, the above claim is proved.
It remains to show that the backward shift on \(\mathcal{H}\) is unitarily equivalent to the direct sum of shifts on the tridiagonal spaces mentioned in the theorem. The kernel function \(K_{1}(z,w)\) can be written as a \(d\times d\) diagonal matrix:
\[K_{1}(z,w)=\text{diag}\ \big{[}k^{(1)}(z,w),k^{(2)}(z,w),\ldots,k^{(d)}(z,w)\big{]},\quad\text{where}\quad k^{(q)}(z,w)=\sum_{n=0}^{\infty}(a_{n}^{(q)}+b_{n}^{(q)}z)(\overline{a_{n}^{(q)}}+\overline{b_{n}^{(q)}}\,\overline{w})z^{n}\overline{w}^{n}.\]
So, if \(\mathcal{H}_{q}\), \(q=1,2,\ldots,d\), denotes the reproducing kernel space having an orthonormal basis of the form
\[a_{n}^{(q)}z^{n}+b_{n}^{(q)}z^{n+1},\ n\geq 0,\]
we see that \(\mathcal{H}\) can be identified with \(\oplus_{q=1}^{d}\mathcal{H}_{q}\) under the unitary map defined by
\[U:\mathcal{H}\to\oplus_{q=1}^{d}\mathcal{H}_{q},\]
\[U\big{(}g_{1},\ldots,g_{d}\big{)}=g_{1}\oplus\ldots\oplus g_{d},\]
where \((g_{1},\ldots,g_{d})\) is an arbitrary function in \(\mathcal{H}\). The same unitary operator intertwines the backward shift on \(\mathcal{H}\) and the direct sum of the backward shifts on \(\oplus_{q=1}^{d}\mathcal{H}_{q}\). On the other hand, it follows from the assumption (5.2) that the backward shift on \(\mathcal{H}_{q}\) is bounded for all \(q=1,\ldots,d\). This shows that the shift \(B\) is bounded on \(\mathcal{H}\). The proof is complete.
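The key step above, the factorization \(K(z,w)=Q^{*}K_{1}(z,w)Q\), can be verified numerically for any concrete choice of data. The sketch below is only an illustration: it uses a random \(2\times 2\) unitary \(Q\) and the hypothetical diagonal data \(D_{n,1}=\mathrm{diag}(1,2)\), \(D_{n,2}=\mathrm{diag}(2^{-(n+1)},3^{-(n+1)})\), and compares truncations of the two sides at a fixed pair \((z,w)\).

```python
# Numerical check of K(z,w) = Q* K_1(z,w) Q for illustrative 2x2 data.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))   # a 2x2 unitary

def D1(n): return np.diag([1.0, 2.0]).astype(complex)
def D2(n): return np.diag([0.5 ** (n + 1), (1.0 / 3.0) ** (n + 1)]).astype(complex)

def K(z, w, terms=60):
    total = np.zeros((2, 2), dtype=complex)
    for n in range(terms):
        An = Q.conj().T @ D1(n) @ Q
        Bn = Q.conj().T @ D2(n) @ Q
        total += (An + Bn * z) @ (An.conj().T + Bn.conj().T * np.conj(w)) * (z * np.conj(w)) ** n
    return total

def K1(z, w, terms=60):
    total = np.zeros((2, 2), dtype=complex)
    for n in range(terms):
        total += (D1(n) + D2(n) * z) @ (D1(n).conj().T + D2(n).conj().T * np.conj(w)) * (z * np.conj(w)) ** n
    return total

z, w = 0.3 + 0.2j, -0.1 + 0.4j
print(np.max(np.abs(K(z, w) - Q.conj().T @ K1(z, w) @ Q)))    # ~1e-15
```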
In view of Theorem 5.1, we immediately obtain tridiagonal vector valued versions of the results obtained in Section 4. The hypercyclicity, mixing, and chaos of \(B\) on \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\) are, respectively, equivalent to those of \(B_{1}\oplus\cdots\oplus B_{d}\). Here \(B_{q}\) refers to the backward shift on the tridiagonal space having an orthonormal basis \(a_{n}^{(q)}z^{n}+b_{n}^{(q)}z^{n+1},\ n\geq 0\). Also, along
the same lines of the proofs in Theorems 4.2, 4.3 and 4.4, we can deduce the dynamical properties of \(B_{1}\oplus\cdots\oplus B_{d}\). We have the following:
**Theorem 5.2**.: _Let \(\{A_{n}\}\) and \(\{B_{n}\}\) be matrices satisfying the conditions in (5.2). Then the following hold for the shift \(B\) acting on \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\)._
1. \(B\) _is hypercyclic if and only if_ \(\sup_{n}\Big{(}\min_{1\leq q\leq d}|a_{n}^{(q)}|\Big{)}=\infty\)_._
2. \(B\) _is mixing if and only if_ \(\lim_{n\to\infty}|a_{n}^{(q)}|=\infty\) _for all_ \(q=1,\ldots,d\)_._
3. \(B\) _is chaotic if and only if_ \(\sum_{n=0}^{\infty}|a_{n}^{(q)}|^{-2}<\infty\) _for all_ \(q=1,\ldots,d\)_._
It is important to note that, unlike the scalar case, the existence of a single non-trivial periodic vector does not imply that \(B\) is chaotic on \(\mathcal{H}_{\mathcal{A},\mathcal{B}}\). This can be seen by choosing \(\{A_{n}\}\) and \(\{B_{n}\}\) to be diagonal matrices appropriately in Theorem 5.1 and recalling that if the direct sum of operators is chaotic, so is each of those operators.
## 6. Concluding remarks
The chaoticity criterion (cf. the section 1) yields strong dynamical properties of operators. In particular, if an operator \(T\) satisfies the chaoticity criterion, then it is frequently hypercyclic, see [12]. (The notion of frequent hypercyclicity was introduced by Bayart and Grivaux [5]). Hence, any of the statements in Theorem 4.4 implies that \(\lambda B\) is frequently hypercyclic on \(\mathcal{H}_{a,b}\). It would be interesting to know if the condition \(\sum_{n}|\lambda^{n}a_{n}|^{-2}<\infty\) is also necessary for \(\lambda B\) to be frequently hypercyclic. For weighted shifts on \(\ell^{p}\), it is well known that the chaos and frequent hypercyclicity are equivalent, cf. [7].
The main result in Section 3 shows that the operator \(B\), under mild conditions, is a compact perturbation of a weighted shift. This does not reveal the structure of \(B\) completely, although its essential spectrum can be understood. For example, the spectrum of \(B\) on \(\mathcal{H}_{a,b}\) is unknown. Similar work for the multiplication operator \(f(z)\mapsto zf(z)\) is available in [1], [2] and [16]. In particular, the authors in [2] show that the aforementioned multiplication operator on a very specific tridiagonal space is a rank-one perturbation of the unilateral unweighted shift. This motivates us to raise a similar question for \(B\) on \(\mathcal{H}_{a,b}\).
We hope to investigate these issues in an upcoming work.
**Acknowledgments.** The first author acknowledges a research fellowship of CSIR, File No.: 09/1059(0037)/2020-EMR-I, and the second named author acknowledges SERB, DST, File. No.: SRG/2021/002418, for a start-up grant.
|
2309.13146 | Light confinement in stratum corneum | The epidermis is the outermost layer of the skin, and it plays a crucial role
in protecting the body from external insults such as UV radiation and physical
trauma. The stratum corneum is the topmost layer of the epidermis, composed of
dead skin cells and characterized by low water content. This low water content
creates a gradient in the refractive index. The current work aims to elucidate
the impact of a significant gradient of water content and, consequently, the
variations of the refractive index of the skin on light propagation in tissues.
Using analytical models of light propagation in single-layer and two-layer
tissues, we predict light confinement in the stratum corneum layer. For
example, the light intensity in the stratum corneum layer is noticeably
(11-17%) higher than in the underlying tissue layer. This effect can be
attributed to the high refractive index of the stratum corneum caused by low
water content, compared with underlying tissues, and scattering in the stratum
corneum layer. The effect is the most prominent for smaller diffuse reflectance
of the underlying tissue. Furthermore, the effect is expected to be maximal if
the thickness of the stratum corneum layer is more than the reduced scattering
length. Therefore, in the visible range of the spectrum, the light confinement
phenomenon should be more noticeable in stratum corneum layers with a thickness
of at least 150um, which can be found in the glabrous skin of palms and soles
and thickened epidermis-like calluses and corns | Gennadi Saiko | 2023-09-22T19:02:59Z | http://arxiv.org/abs/2309.13146v1 | # Light confinement in stratum corneum
###### Abstract
The epidermis is the outermost layer of the skin, and it plays a crucial role in protecting the body from external insults such as UV radiation and physical trauma. The stratum corneum is the topmost layer of the epidermis, composed of dead skin cells and characterized by low water content. This low water content creates a gradient in the refractive index. The current work aims to elucidate the impact of a significant gradient of water content and, consequently, the variations of the refractive index of the skin on light propagation in tissues. Using analytical models of light propagation in single-layer and two-layer tissues, we predict light confinement in the stratum corneum layer. For example, the light intensity in the stratum corneum layer is noticeably (11-17%) higher than in the underlying tissue layer. This effect can be attributed to the high refractive index of the stratum corneum caused by low water content, compared with underlying tissues, and scattering in the stratum corneum layer. The effect is the most prominent for smaller diffuse reflectance of the underlying tissue. Furthermore, the effect is expected to be maximal if the thickness of the stratum corneum layer is more than the reduced scattering length. Therefore, in the visible range of the spectrum, the light confinement phenomenon should be more noticeable in stratum corneum layers with a thickness of at least 150\(\upmu\)m, which can be found in the glabrous skin of palms and soles and thickened epidermis-like calluses and corns.
## 1 Introduction
The epidermis is the outermost layer of the skin, and it plays a crucial role in protecting the body from external insults such as UV radiation and physical trauma. The epidermis can be subdivided into two sublayers: non-living and living epidermis. The predominant cells are the keratinocytes, arranged in five strata: the stratum corneum, stratum lucidum, stratum granulosum, stratum spinosum, and stratum basale, each with a specific function.
The topmost layer of the epidermis is called the stratum corneum, composed of dead skin cells that have migrated to the skin's surface and become flattened. This layer acts as a barrier to water loss and helps to protect the body from external insults.
Below the stratum corneum is the stratum lucidum, found only in glabrous skin. It is also composed of dead cells and packed with lipid-rich eleidin, which helps to keep water out.
The stratum corneum and stratum lucidum form the non-living epidermis. The non-living epidermis (\(\sim\)20 \(\upmu\)m thick) consists of only dead squamous cells, which are highly keratinized with a high lipid (\(\sim\)20%) and protein (60%) content, and a relatively low (\(\sim\)20%) water content [1].
In terms of water content, the stratum corneum differs radically from other skin layers, which have much higher water content--typically around 70%.
2301.13764 | Unsupervised Neighborhood Propagation Kernel Layers for Semi-supervised
Node Classification | We present a deep Graph Convolutional Kernel Machine (GCKM) for
semi-supervised node classification in graphs. The method is built of two main
types of blocks: (i) We introduce unsupervised kernel machine layers
propagating the node features in a one-hop neighborhood, using implicit node
feature mappings. (ii) We specify a semi-supervised classification kernel
machine through the lens of the Fenchel-Young inequality. We derive an
effective initialization scheme and efficient end-to-end training algorithm in
the dual variables for the full architecture. The main idea underlying GCKM is
that, because of the unsupervised core, the final model can achieve higher
performance in semi-supervised node classification when few labels are
available for training. Experimental results demonstrate the effectiveness of
the proposed framework. | Sonny Achten, Francesco Tonin, Panagiotis Patrinos, Johan A. K. Suykens | 2023-01-31T16:55:42Z | http://arxiv.org/abs/2301.13764v3 | # Semi-Supervised Classification with Graph Convolutional Kernel Machines
###### Abstract
We present a deep Graph Convolutional Kernel Machine (GCKM) for semi-supervised node classification in graphs. First, we introduce an unsupervised kernel machine propagating the node features in a one-hop neighbourhood. Then, we specify a semi-supervised classification kernel machine through the lens of the Fenchel-Young inequality. The deep graph convolutional kernel machine is obtained by stacking multiple shallow kernel machines. After showing that the unsupervised and semi-supervised layers correspond to an eigenvalue problem and a linear system on the aggregated node features, respectively, we derive an efficient end-to-end training algorithm in the dual variables. Numerical experiments demonstrate that our approach is competitive with state-of-the-art graph neural networks for homophilious and heterophilious benchmark datasets. Notably, GCKM achieves superior performance when very few labels are available.
_Keywords:_ Graph Neural Networks; Restricted Kernel Machines; GCN; Kernel Methods; Least-Squares Support Vector Machines; Deep Kernel Learning; Node Classification; Semi-Supervised Learning
## 1 Introduction
Semi-supervised node classification has been an important research area for several years. In many real-life applications, one has structured data for which the entire graph can be observed (e.g., a social network where users are represented as nodes and the relationships between users as edges). However, the node labels can only be observed for a small subset of nodes. The learning task is then to predict the label of unsupervised nodes, given the node attributes of all nodes and the network structure of the graph. In many cases, exploiting the information in a local neighbourhood can boost performance (e.g., friends in the social network are likely to share the same preferences). The challenge has been to exploit this information effectively.
In recent years, graph neural networks (GNNs) have rapidly transformed the field of learning on graphs. Their performance follows from: 1) their ability to effectively propagate the node information through the network iteratively, allowing the learned node representations to capture information from both the node attributes and the network structure; and 2) the end-to-end training, allowing the learning of node representations based on the objective of the final learning task Hamilton (2020); Bacciu et al. (2020); Wu et al. (2021).
More traditionally, kernel-based methods such as support vector machines were the standard in graph learning tasks because of the possibility to use a kernel function that represents pairwise similarities between two graphs as the dot product of their embeddings, without the need to explicitly know these potentially high-dimensional embeddings (i.e., the "kernel-trick") Ghosh et al. (2018); Krieg et al. (2020). An additional advantage of kernel machines is that they have strong foundations in learning theory and have clear and interpretable optimization Vapnik (1998); Scholkopf and Smola (2002); Suykens et al. (2002). A drawback of kernel methods, however, is that they do not benefit from hierarchical representation learning as deep learning methods do.
On the one hand, some works have explored synergies between graph deep learning methods and graph kernels Lei et al. (2017); Nikolentzos et al. (2018); Du et al. (2019); Feng et al. (2022). However, these methods are essentially either graph neural networks or shallow kernel machines. Other works have proposed deep versions of kernel machines to combine the advantages of both deep learning and kernel methods Cho and Saul (2009); Mairal et al. (2014); Suykens (2017), but no previous work has investigated a deep kernel machine for graph learning.
In this work, we introduce a deep Graph Convolutional Kernel Machine (GCKM) for node classification made of multiple shallow layers with non-parametric aggregation functions. The proposed architecture consists of multiple unsupervised kernel machines for one-hop node aggregation and a final semi-supervised kernel machine. The main idea underlying our method is to combine multiple unsupervised kernel machines such that the final model can achieve higher performance in semi-supervised node classification, where most nodes in the graph are unlabeled. To do so, we show how to train the proposed deep kernel machine in its dual form by directly learning the hidden node representations. Because of the appropriate regularization mechanisms, the neighbourhood aggregation of each layer is implicitly embedded in the final representation, which is a key difference with GNNs. We propose a two-step optimization algorithm with an initialization and fine-tuning phase. Because the model is built on unsupervised core models, augmented with a supervised loss term, we illustrate the possibility to use an unsupervised validation metric. Finally, we show that our model has competitive performance compared to their state-of-the-art GNN counterparts in a transductive node classification setting and outperforms them when very few labels are available, which is of particular interest in many real-world applications where labels are difficult or expensive to collect. In short, we propose the first deep kernel machine for node classification. The reported results can be reproduced using our code in supplementary materials.
## 2 Preliminaries
This section introduces the notations and definitions which will be used in the remainder of the paper and provides the reader with some important background knowledge.
An undirected and unweighted graph \(\mathcal{G}(\mathcal{V},\mathcal{E})\) is defined by a set of nodes or vertices \(\mathcal{V}\) and a set of edges \(\mathcal{E}\) between these nodes. The node degree is simply the number of adjacent nodes: \(d_{v}=|\mathcal{N}_{v}|\), where \(\mathcal{N}_{v}\) is the one-hop neighbourhood of node \(v\). As the task of the proposed method will be node classification, we will consider attributed graphs \(\mathcal{G}(\mathcal{V},\mathcal{E},\mathbf{X})\) where each node \(v\) has a \(d\)-dimensional node features vector \(\mathbf{x}_{v}\) and a class label \(y_{v}\). By concatenating the feature vectors, we obtain the node feature matrix \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d}\).
We will use lowercase symbols (e.g., \(x\)) for scalars, lowercase bold (e.g., \(\mathbf{x}\)) for vectors and uppercase bold (e.g., \(\mathbf{X}\)) for matrices. A single entry of a matrix is represented by \(X_{ij}\) where \(i\) and \(j\) indicate the row and column respectively.
Superscripts in brackets indicate the layer in deep architectures whereas subscripts indicate datapoints (e.g., \(\mathbf{h}_{v}^{(l)}\)). Subscript \(c\) indicates a centering, as will be explained. We represent sets with curly brackets \(\{\cdot\}\) and use double curly brackets \(\{\{\cdot\}\}\) for multisets (i.e., sets that allow multiple instances of a same element). At any point, the reader can consult the list of symbols in appendix A for clarification.
### Graph Neural Networks
GNNs (e.g., Gilmer et al. (2017); Kipf and Welling (2017); Hamilton et al. (2017); Velickovic et al. (2018); Xu et al. (2019)) are powerful models for learning with graphs. All GNNs have a permutation equivariant node representation learning part. While for edge and graph level tasks this is followed by an appropriate readout part, no further readout is needed for node level tasks. In a GNN layer, information from a one-hop neighbourhood gets propagated to each node in the graph. By iterating this in successive layers, one increases the receptive field. We refer the interested reader to Bacciu et al. (2020); Wu et al. (2021); Bronstein et al. (2021) for an extensive introduction into the topic.
The majority of GNN layers can be categorized in one of three flavours that differ in the way that they propagate this information through the network Bronstein et al. (2021): 1) the convolutional flavour, where node features are aggregated, possibly after rescaling them with a given edge weight; 2) the attentional flavour, where this rescaling can be learned through an attention mechanism (e.g., Velickovic et al. (2018)); and 3) the message-passing flavour (e.g., Gilmer et al. (2017)), where messages (i.e., vectors) are learned before aggregating them. These flavours are ordered in increasing complexity and generality. We will further focus on the convolutional flavour.
Many convolutional GNN layers can be decomposed into a nonparametric aggregation step \(\psi(\cdot,\cdot)\), followed by a nonlinear transformation \(\phi(\cdot)\). In this case, the hidden representation of node \(v\) in layer \(l\) is of the form:
\[\mathbf{h}_{v}^{(l)}=\phi\left(\psi\left(\mathbf{h}_{v}^{(l-1)},\left\{\left\{\mathbf{h}_{ u}^{(l-1)}|u\in\mathcal{N}_{v}\right\}\right\}\right)\right). \tag{1}\]
Well-known examples are GCN Kipf and Welling (2017):
\[\mathbf{h}_{v}^{(l)}=\sigma\left(\mathbf{W}^{T}\sum_{u\in\mathcal{N}_{v}\cup\{v\}}\frac{ \mathbf{h}_{u}^{(l-1)}}{\sqrt{\tilde{d}_{u}\tilde{d}_{v}}}+\mathbf{b}\right),\]
where \(\tilde{d}_{v}\) is the node degree of node \(v\) after self-loops were added to the graph, and GIN Xu et al. (2019):
\[\mathbf{h}_{v}^{(l)}=\text{MLP}^{(l)}\left((1+\epsilon^{(l)})\cdot\mathbf{h}_{v}^{(l- 1)}+\sum_{u\in\mathcal{N}_{v}}\mathbf{h}_{u}^{(l-1)}\right),\]
which is maximally powerful in the class of message passing neural networks and as expressive as the one-dimensional Weisfeiler-Lehman graph isomorphism test Weisfeiler and Lehman (1968). Here, \(\epsilon^{(l)}\) can be a fixed or a learnable parameter. Xu et al. (2019) have demonstrated that the expressiveness follows from the sum aggregator and the injectiveness of the transformation function, for which they proposed a multilayer perceptron with at least one hidden layer, motivated by the universal approximator theorem Hornik et al. (1989), Hornik (1991).
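To make the convolutional flavour concrete, the short sketch below is our own NumPy illustration (not from the paper; variable names and shapes are assumptions) of one GCN-style step of the form (1): symmetric degree-normalised aggregation with self-loops, followed by an affine transformation and a ReLU nonlinearity.

```python
import numpy as np

def gcn_layer(X, A, W, b):
    """One GCN-flavoured propagation step (illustrative sketch).

    X : (n, d)  node features
    A : (n, n)  binary adjacency matrix of an undirected graph
    W : (d, d') weight matrix, b : (d',) bias
    """
    n = A.shape[0]
    A_tilde = A + np.eye(n)                     # add self-loops
    d_tilde = A_tilde.sum(axis=1)               # degrees after self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d_tilde))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalisation
    H = A_hat @ X @ W + b                       # aggregate, then transform
    return np.maximum(H, 0.0)                   # ReLU nonlinearity
```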
### Restricted Kernel Machines
Restricted kernel machines (RKMs) Suykens (2017) connect least squares support vector machines (LS-SVMs) and kernel principal component analysis (kernel PCA) with restricted Boltzmann machines Suykens and Vandewalle (1999a), Suykens et al. (2003), Salakhutdinov (2015). They possess primal and dual model representations based on the concept of conjugate feature duality, which introduces dual variables as hidden features based on an inequality of quadratic forms. The feature map can be defined explicitly (e.g., with a deep neural network) or implicitly by means of a kernel function when using the dual representation.
Analogous to the LS-SVM setting, Suykens (2017) shows that the RKM interpretation of kernel PCA leads to an eigendecomposition of the kernel matrix. Besides kernel PCA, Suykens (2017) also formulated different types of kernel machines in the RKM framework. Deep RKMs are then obtained by stacking multiple RKM layers, where the dual variables are the input for the next layer.
## 3 Other Related Work
There have been many studies relating kernel machines to deep learning. From a training perspective, Suykens and Vandewalle (1999b) have shown how to train a multilayer perceptron in a LS-SVM setting and more recent works have formulated theoretical connections between training overparametrized (or even infinitely wide) deep neural networks and kernel methods Belkin et al. (2018), Jacot et al. (2018). With their Convolutional Kernel Networks, Mairal et al. (2014) proposed a multilayer kernel similar to convolutional neural networks, and approximate the kernel function in an unsupervised manner before using it in a shallow kernel machine. Related to graph learning, Du et al. (2019) introduced the graph neural tangent kernel, based on a GNN architecture to train in a shallow kernel machine, whereas Lei et al. (2017) derived a deep learning model based on graph kernels such as the Weisfeiler-Lehman kernel. Both these methods were designed for graph level tasks. On the other hand, Nikolentzos et al. (2018) and Feng et al. (2022) focused on extending GNNs with graph kernels. Although these methods use kernels, they use them as convolutional filters and are in essence not kernel machines.
Generally, kernel machines are closely related to graph learning in the sense that the kernel matrix can be viewed as a similarity graph and vice versa. Belkin et al. (2006) proposed a framework in which both transductive node classification in graphs and semi-supervised classification with support vector machines (for non-structured data) can be derived. Their framework is based on a supervised learning model augmented with a manifold regularization technique, whereas Mehrkanoon et al. (2015) started from an unsupervised objective inspired by spectral clustering of the kernel matrix, and augmented it with a supervised term for the labeled datapoints to tackle the same problem.
In deep kernel learning, the recently proposed RKM framework Suykens (2017) gives a representation in terms of visible and hidden units similar to the Restricted Boltzmann Machine Salakhutdinov (2015). While RKMs have been successfully applied to unsupervised problems, including generative modelling Pandey et al. (2022), disentangled feature learning Tonin et al. (2021), and multi-view clustering Tao et al. (2022), deep kernel learning for graph learning tasks has not yet been investigated.
## 4 Method
This section introduces the deep Graph Convolutional Kernel Machine (GCKM): a semi-supervised kernel machine propagating information through the network for node classification in graphs. First, the GCKM layer (GCKM\(\ell\)) for single-hop propagation is proposed. Next, the semi-supervised kernel machine (Semi-SupRKM) is described. Finally, we explain how to combine these shallow kernel machines in a deep model to increase the receptive field of the model to multiple hops and to perform semi-supervised node classification. All proofs of the lemmas and propositions, as well as full derivations for both the GCKM\(\ell\) and the Semi-SupRKM, can be found in Appendices B and C, respectively.
### The Graph Convolutional and Semi-Supervised Kernel Machine as Building Blocks
**Graph Convolutional Kernel Machine.** GCKM\(\ell\) is based on the RKM interpretation of kernel PCA as introduced by Suykens (2017). However, the node features are aggregated before they are mapped into the feature space. We start from the primal minimization problem:
\[\min_{\mathbf{W},\mathbf{e}_{v}} J=\frac{\eta}{2}\text{Tr}(\mathbf{W}^{T}\mathbf{W})-\frac{1}{2}\sum_{v=1}^ {n}\mathbf{e}_{v}^{T}\mathbf{\Lambda}^{-1}\mathbf{e}_{v}\] \[\text{subject to}\left\{\begin{array}{l}\mathbf{e}_{v}=\mathbf{W}^{T} \phi_{c}(\mathbf{a}_{v}),\quad v=1,\dots,n\\ \mathbf{a}_{v}=\psi(\mathbf{x}_{v},\{\mathbf{x}_{u}|u\in\mathcal{N}_{v}\})\end{array} \right., \tag{2}\]
where \(\mathbf{W}\in\mathbb{R}^{d_{f}\times s}\) is an unknown interconnection matrix, \(\mathbf{e}_{v}\in\mathbb{R}^{s}\) the error variables, \(n=|\mathcal{V}_{\mathbf{v}}|\) the number of training nodes, and symmetric hyperparameter matrix \(\mathbf{\Lambda}\succ 0\). Given a feature map \(\phi(\cdot)\), the centered feature map is defined as \(\phi_{c}(\cdot)\triangleq\phi(\cdot)-\Sigma_{i}\phi(\mathbf{x}_{i})/n\). Note that the formulation of the error variables has the same form as (1) with \(\mathbf{W}^{T}\phi_{c}(\cdot)\) as the transformation step. Because of the minus sign in the objective function, one can interpret this minimization problem conceptually as maximizing the variance of the error variables \(\mathbf{e}_{i}\) around zero target, while keeping the weights \(\mathbf{W}\) small Suykens et al. (2003).
Dual variables \(\mathbf{h}_{i}\) are introduced using a simple case of Fenchel-Young inequality Rockafellar (1974):
\[\frac{1}{2}\mathbf{e}^{T}\mathbf{\Lambda}^{-1}\mathbf{e}+\frac{1}{2}\mathbf{h}^{T}\mathbf{ \Lambda}\mathbf{h}\geq\mathbf{e}^{T}\mathbf{h},\quad\forall\mathbf{e},\mathbf{h}\in\mathbb{R}^{s },\forall\mathbf{\Lambda}\in\mathbb{R}^{s\times s}_{\succ 0}.\]
When substituting the above in (2) and eliminating the error variables, one obtains a primal-dual minimization problem as an upper bound on the primal objective function:
\[\min_{\mathbf{W},\mathbf{h}_{v}}\bar{J}\triangleq -\sum_{v=1}^{n}\phi_{c}(\mathbf{a}_{v})^{T}\mathbf{W}\mathbf{h}_{v}\] \[+\frac{1}{2}\sum_{v=1}^{n}\mathbf{h}_{v}^{T}\mathbf{\Lambda}\mathbf{h}_{v}+ \frac{\eta}{2}\text{Tr}(\mathbf{W}^{T}\mathbf{W}). \tag{3}\]
Figure 1: A deep GCKM architecture for semi-supervised node classification, consisting of two GCKM layers (GCKM\(\ell_{1}\), GCKM\(\ell_{2}\)) and a Semi-SupRKM layer. In each GCKM\(\ell\), the node features are aggregated and then (implicitly) transformed to obtain the error variables. The dual variables are coupled with these error variables by means of conjugate feature duality and serve as input for the next layer. In the final Semi-SupRKM layer, the dual variables are directly used to represent the class labels of the unsupervised nodes.
Note that problem (3) is generally nonconvex. Whether or not it has a solution depends on hyperparameters \(\mathbf{\Lambda}\). In the next lemma we show how to determine \(\mathbf{\Lambda}\) automatically by the optimization. We next define the gram matrix \(\mathbf{K}\) with \(K_{uv}=\phi(\mathbf{a}_{u})^{T}\phi(\mathbf{a}_{v})\), which depends on the aggregated node features; \(\mathbf{K}_{c}=\mathbf{M}_{c}\mathbf{K}\mathbf{M}_{c}\) with \(\mathbf{M}_{c}=(\mathbf{I}-\frac{1}{n}\mathbf{1}_{n}\mathbf{1}_{n}^{T})\) the centering matrix; and \(\mathbf{H}=[\mathbf{h}_{1},\ldots,\mathbf{h}_{n}]^{T}\).
**Lemma 4.1**.: _The solution to the dual minimization problem:_
\[\min_{\mathbf{H}}-\frac{1}{2\eta}\text{Tr}(\mathbf{H}^{T}\mathbf{K}_{c}(\mathbf{ X},\mathcal{E})\mathbf{H})\] \[\text{subject to }\mathbf{H}^{T}\mathbf{H}=\mathbf{I}_{s}, \tag{4}\]
_satisfies the same first order conditions for optimality w.r.t. \(\mathbf{H}\) as (3) when the hyperparameters \(\mathbf{\Lambda}\) in (3) are chosen to equal the symmetric part of the Lagrange multipliers \(\mathbf{Z}\) of the equality constraints in (4); i.e., \(\mathbf{\Lambda}=(\mathbf{Z}+\mathbf{Z}^{T})/2\,.\)_
Now, (4) is bounded and is guaranteed to have a minimizer. Indeed, it is a minimization of a concave objective over a compact set. Note that (4) can be solved by a gradient-based algorithm. The solution satisfies the following property:
_Remark 4.2_.: Given a symmetric matrix \(\mathbf{K}_{c}\) with eigenvalues \(\lambda_{1}\geq\cdots\geq\lambda_{s}>\lambda_{s+1}\geq\cdots\geq\lambda_{n}\geq 0\), and \(\eta>0\) a hyperparameter; and let \(\mathbf{g}_{1},\ldots,\mathbf{g}_{s}\) be the columns of \(\mathbf{H}\); then \(\mathbf{H}\) is a minimizer of (4) if and only if \(\mathbf{H}^{T}\mathbf{H}=\mathbf{I}_{s}\wedge\text{span}(\mathbf{g}_{1},\ldots,\mathbf{g}_{s})= \text{span}(\mathbf{v}_{1},\ldots,\mathbf{v}_{s})\), where \(\mathbf{v}_{1},\ldots,\mathbf{v}_{s}\) are the eigenvectors of \(\mathbf{K}_{c}\) corresponding to the \(s\) largest eigenvalues.
_Remark 4.3_.: One can obtain a solution of (4) by solving the eigendecomposition problem:
\[\frac{1}{\eta}\mathbf{K}_{c}(\mathbf{X},\mathcal{E})\mathbf{H}=\mathbf{H}\mathbf{\Lambda}, \tag{5}\]
and selecting the eigenvectors corresponding to the \(s\) largest eigenvalues as the columns of \(\mathbf{H}\).
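As an illustrative sketch (ours, not the authors' released code), the dual variables of a GCKM\(\ell\) can be computed exactly as Remark 4.3 suggests: build a kernel matrix on the aggregated node features, centre it, and keep the eigenvectors belonging to the \(s\) largest eigenvalues. Function names, the RBF kernel choice and the bandwidth are assumptions.

```python
import numpy as np

def rbf_kernel(A, sigma=1.0):
    """RBF kernel matrix on aggregated node features A of shape (n, d)."""
    sq = np.sum(A**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * A @ A.T
    return np.exp(-d2 / (2.0 * sigma**2))

def gckm_layer(A_agg, s, eta=1.0, sigma=1.0):
    """Solve (1/eta) K_c H = H Lambda as in (5) and return the top-s eigenvectors H (n, s)."""
    n = A_agg.shape[0]
    K = rbf_kernel(A_agg, sigma)
    Mc = np.eye(n) - np.ones((n, n)) / n          # centering matrix M_c
    Kc = Mc @ K @ Mc                              # centred kernel matrix K_c
    evals, evecs = np.linalg.eigh(Kc / eta)       # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:s]             # indices of the s largest eigenvalues
    return evecs[:, idx]                          # dual variables H (orthonormal columns)
```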
Notice that (5) is the kernel PCA formulation, a nonlinear generalization of PCA Scholkopf and Smola (2002); Suykens et al. (2003), with the aggregated node features as the input, and where the first \(s\) components represent the data. The solution of (4) generally yields any orthonormal basis for the same subspace as spanned by the first \(s\) components, and therefore embeds the same information in the dual representations. Further, instead of explicitly defining a feature map, one can apply the kernel trick using Mercer's theorem, stating that for any positive definite kernel \(k(\cdot,\cdot)\) there exists a, possibly infinite-dimensional, feature map \(\phi(\cdot)\) such that \(\phi(\mathbf{a}_{u})^{T}\phi(\mathbf{a}_{v})=k(\mathbf{a}_{u},\mathbf{a}_{v})\) Mercer (1909). In this case, the transformation function \(\mathbf{W}^{T}\phi(\cdot)\) is only implicitly defined. As the kernel function, one could choose for example a linear kernel2, a polynomial kernel3, or a radial basis function (RBF)4, among others. We can now define the GCKM layer:
Footnote 2: \(k(\mathbf{x}_{i},\mathbf{x}_{j})=\mathbf{x}_{i}^{T}\mathbf{x}_{j}\)
Footnote 3: \(k(\mathbf{x}_{i},\mathbf{x}_{j})=(\mathbf{x}_{i}^{T}\mathbf{x}_{j}+t)^{p}\)
Footnote 4: \(k(\mathbf{x}_{i},\mathbf{x}_{j})=\exp\left(-||\mathbf{x}_{i}-\mathbf{x}_{j}||^{2}/(2\sigma^{2 })\right)\)
**Definition 4.4** (Graph Convolutional Kernel Machine layer).: GCKM\(\ell\) is a kernel machine for unsupervised node representation learning that propagates information through the network in a one-hop neighbourhood in a convolutional flavour. More formally, it can be interpreted as a principal component analysis on the aggregated node features in a kernel induced feature space, where the latent representations are obtained by solving either (4) or (5), and are used as the input for the subsequent layer in a deep GCKM.
For the aggregation step, we can choose any function that can handle multisets of different sizes and that is invariant to permutations on this multiset. In our experiments, we use GCN aggregation:
\[\psi(\mathbf{x}_{v},\{\{\mathbf{x}_{u}|u\in\mathcal{N}_{v}\}\})=\sum_{u\in\mathcal{N}_ {v}\cup\{v\}}\frac{\mathbf{x}_{u}}{\sqrt{d_{u}d_{v}}}, \tag{6}\]
as well as sum aggregation:
\[\psi(\mathbf{x}_{v},\{\{\mathbf{x}_{u}|u\in\mathcal{N}_{v}\}\})=\mathbf{x}_{v}+\sum_{u\in \mathcal{N}_{v}}\mathbf{x}_{u}. \tag{7}\]
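In matrix form, the two aggregation choices (6) and (7) amount to one sparse multiplication each; the sketch below (our own, with assumed variable names) produces the aggregated features \(\mathbf{a}_{v}\) that are fed into the kernel in the previous sketch. For (6), the degrees are assumed to be computed after adding self-loops.

```python
import numpy as np

def gcn_aggregate(X, A):
    """Eq. (6): aggregation over the neighbourhood plus the node itself,
    rescaled by 1/sqrt(d_u d_v) (degrees taken after adding self-loops)."""
    n = A.shape[0]
    A_sl = A + np.eye(n)
    d = A_sl.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_sl @ D_inv_sqrt @ X

def sum_aggregate(X, A):
    """Eq. (7): the node's own features plus the unweighted sum of its neighbours'."""
    return X + A @ X
```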
We proceed with some properties of GCKM\(\ell\). The model-based approach of kernel PCA gives us the possibility to use one set of nodes for training \(\mathcal{V}_{\text{tr}}\) and use an out-of-sample extension for another (super)set of nodes \(\mathcal{V}\), possibly from another graph. The following result follows from the stationarity conditions:
**Lemma 4.5**.: _Let \(n=|\mathcal{V}_{\nu}|\), \(m=|\mathcal{V}|\), and \(\mathbf{K}^{\mathcal{V}_{1},\mathcal{V}_{2}}\in\mathbb{R}^{|\mathcal{V}_{1}|\times| \mathcal{V}_{2}|}\) a kernel matrix containing kernel evaluations of all elements of set \(\mathcal{V}_{1}\) w.r.t. all elements of set \(\mathcal{V}_{2}\) (i.e., \(K^{\mathcal{V}_{1},\mathcal{V}_{2}}_{uv}=k(\mathbf{a}_{u},\mathbf{a}_{v})\ \forall u\in\mathcal{V}_{1},\forall v\in \mathcal{V}_{2}\)). The error variables can then be obtained using:_
\[\hat{\mathbf{E}}_{\mathcal{V}}=\frac{1}{\eta}\mathbf{K}^{\mathcal{V}, \mathcal{V}_{\nu}}\mathbf{H}_{\mathcal{V}_{\nu}}-\frac{\mathbf{1}_{m}\mathbf{1}_{n}^{T} \mathbf{K}^{\mathcal{V}_{\nu},\mathcal{V}_{\nu}}\mathbf{H}_{\mathcal{V}_{\nu}}}{n\eta}, \tag{8}\]
_where \(\hat{\mathbf{E}}_{\mathcal{V}}=[\hat{\mathbf{e}}_{1},\ldots,\hat{\mathbf{e}}_{m}]^{T}\). The hidden representations are given by \(\hat{\mathbf{H}}_{\mathcal{V}}=\hat{\mathbf{E}}_{\mathcal{V}}\mathbf{\Lambda}^{-1}\)._
Equation (8) is useful for large-scale problems, when subsets are used for training, and for inductive tasks. It also satisfies the permutation equivariance condition.
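A direct translation of (8) into code could look as follows (our own sketch; the kernel matrices are assumed to be precomputed on the aggregated node features, and names are illustrative).

```python
import numpy as np

def out_of_sample_E(K_VVtr, K_trtr, H_tr, eta=1.0):
    """Eq. (8): error variables for nodes V given training-node dual variables.

    K_VVtr : (m, n) kernel evaluations between the nodes V and the training nodes
    K_trtr : (n, n) kernel matrix among the training nodes
    H_tr   : (n, s) dual variables learned on the training nodes
    """
    m, n = K_VVtr.shape
    correction = np.ones((m, 1)) @ (np.ones((1, n)) @ K_trtr @ H_tr) / (n * eta)
    return K_VVtr @ H_tr / eta - correction

# The hidden representations then follow as H_V = E_V @ inv(Lambda), cf. Lemma 4.5.
```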
**Proposition 4.6**.: _Given an attributed graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\), the aggregated node features \(\{a_{v}:v\in\mathcal{V}_{\nu}\}\) and latent representations \(\mathbf{H}_{\mathcal{V}_{\nu}}\)of the training nodes \(\mathcal{V}_{\nu}\), and a local aggregation function \(\psi(\mathbf{x}_{v},\{\{\mathbf{x}_{u}|u\in\mathcal{N}_{v}\}\})\) that is permutation invariant; the mapping \(f\) from \(\mathcal{G}\) to \(\mathcal{G}^{\prime}=(\mathcal{V},\mathcal{E},\hat{\mathbf{E}}_{\mathcal{V}})\) using (8) is equivariant w.r.t. any permutation \(\pi(\mathcal{G})\), i.e., \(\mathcal{G}^{\prime}=f(\mathcal{G})\iff\pi(\mathcal{G}^{\prime})=f(\pi( \mathcal{G}))\)._
A theoretical analysis of the expressiveness of GCKM\(\ell\) can be put in context of the theoretical results by Xu et al. (2019). Any associated feature map of the RBF-kernel is injective Christmann and Steinwart (2008). Furthermore, it has been established that SVMs using the RBF-kernel are universal approximators Burges (1998); Hammer and Gersmann (2003).
**Lemma 4.7**.: _A GCKM\(\ell\) that uses sum aggregation and a RBF-kernel is as expressive as an iteration of the Weisfeiler-Lehman graph isomorphism test Weisfeiler and Lehman (1968)._
**Semi-Supervised Restricted Kernel Machine.** Next, we introduce a multi-class semi-supervised kernel machine for classification (Semi-SupRKM). Like Mehrkanoon et al. (2015), we start from kernel spectral clustering as an unsupervised core model, and augment it with a supervised loss term. Here, however, to be able to use it in a deep kernel machine, we introduce duality with conjugated features, rather than by means of Lagrange multipliers.
The primal minimization problem is given by:
\[\min_{\mathbf{W},\mathbf{e}_{i},\mathbf{b}} J =\frac{\eta}{2}\text{Tr}(\mathbf{W}^{T}\mathbf{W})-\frac{1}{2\lambda_{1}} \sum_{i=1}^{n}v_{i}\mathbf{e}_{i}^{T}\mathbf{e}_{i} \tag{9}\] \[+\frac{1}{2\lambda_{2}}\sum_{i=1}^{n}l_{i}(\mathbf{e}_{i}-\mathbf{c}_{i} )^{T}(\mathbf{e}_{i}-\mathbf{c}_{i})\] \[\text{subject to}\quad\mathbf{e}_{i}=\mathbf{W}^{T}\phi(\mathbf{x}_{i})+\mathbf{b },\quad i=1,\ldots,n, \tag{10}\]
where \(l_{i}\in\{0,1\}\) indicates whether the label of datapoint \(i\) is used in training, \(\mathbf{c}_{i}\in\{-1,1\}^{p}\) codes its class label (e.g., in a one-vs-all encoding5), and \(v_{i}\) is a weighting scalar obtained as the inverse degree of the datapoint in the similarity graph defined by \(\mathbf{K}=\phi(\mathbf{x}_{i})^{T}\phi(\mathbf{x}_{j})=k(\mathbf{x}_{i},\mathbf{x}_{j})\), i.e., \(v_{i}=1/\Sigma_{j}K_{ij}\).
Footnote 5: A one-vs-all encoding is similar to a one-hot encoding but it uses \(-1\) instead of \(0\).
By introducing Fenchel-Young inequalities:
\[-\frac{1}{2}\mathbf{e}_{i}^{T}\mathbf{e}_{i}\leq\frac{1}{2}\mathbf{h}_{i}^{T} \mathbf{h}_{i}-\mathbf{e}_{i}^{T}\mathbf{h}_{i}\] \[-\frac{1}{2}(\mathbf{e}_{i}-\mathbf{c}_{i})^{T}(\mathbf{e}_{i}-\mathbf{c}_{i})\leq \frac{1}{2}\mathbf{h}_{i}^{T}\mathbf{h}_{i}-(\mathbf{e}_{i}-\mathbf{c}_{i})^{T}\mathbf{h}_{i},\]
and defining \(r_{i}=\frac{v_{i}}{\lambda_{1}}-\frac{l_{i}}{\lambda_{2}}\), one obtains the primal-dual minimization problem as an upper bound on the primal objective function:
\[\min_{\mathbf{W},\mathbf{h}_{i},\mathbf{b}}\bar{J} \triangleq\frac{\eta}{2}\text{Tr}(\mathbf{W}^{T}\mathbf{W})+\frac{1}{2} \sum_{i=1}^{n}r_{i}\mathbf{h}_{i}^{T}\mathbf{h}_{i}\] \[-\sum_{i=1}^{n}r_{i}(\mathbf{W}^{T}\phi(\mathbf{x}_{i})+\mathbf{b})^{T}\mathbf{h}_ {i}-\sum_{i=1}^{n}\frac{l_{i}}{\lambda_{2}}\mathbf{c}_{i}^{T}\mathbf{h}_{i}. \tag{11}\]
We next define matrices \(\mathbf{R}=\text{diag}(r_{1},\ldots,r_{n})\); \(\mathbf{L}=\text{diag}(l_{1},\ldots,l_{n})\); \(\mathbf{S}=\mathbf{I}_{n}-\frac{\mathbf{1}_{n}\mathbf{1}_{n}^{T}\mathbf{R}}{\mathbf{1}_{n}^{T}\mathbf{R}\mathbf{1}_{n}}\); \(\mathbf{H}=[\mathbf{h}_{1},\ldots,\mathbf{h}_{n}]^{T}\); and \(\mathbf{C}=[\mathbf{c}_{1},\ldots,\mathbf{c}_{n}]^{T}\).
**Lemma 4.8**.: _The solution to the dual minimization problem:_
\[\min_{\mathbf{H}} -\frac{1}{2\eta}\text{Tr}(\mathbf{H}^{T}\mathbf{R}\mathbf{K}(\mathbf{X})\mathbf{R}\mathbf{H})\] \[+\frac{1}{2}\text{Tr}(\mathbf{H}^{T}\mathbf{R}\mathbf{H})-\frac{1}{\lambda_{2} }\text{Tr}(\mathbf{H}^{T}\mathbf{L}\mathbf{C})\] \[\text{subject to }\mathbf{H}^{T}\mathbf{R}\mathbf{1}_{n}=\mathbf{0}_{p} \tag{12}\]
_satisfies the same first order conditions for optimality w.r.t. \(\mathbf{H}\) as (11) where the Lagrange multipliers equal the bias \(\mathbf{b}\)._
_Remark 4.9_.: Alternatively, one can find the dual variables by solving a linear system in the dual variables:
\[(\mathbf{I}_{n}-\frac{1}{\eta}\mathbf{R}\mathbf{S}\mathbf{K}(\mathbf{X}))\mathbf{R}\mathbf{H}=\frac{1}{ \lambda_{2}}\mathbf{S}^{T}\mathbf{L}\mathbf{C}. \tag{13}\]
From the stationarity conditions, one obtains \(\mathbf{e}_{i}=\mathbf{h}_{i}-\frac{l_{i}\mathbf{e}_{i}}{r_{i}\lambda_{2}}\), which simplifies to \(\mathbf{e}_{i}=\mathbf{h}_{i}\) for the unsupervised training points. One can thus directly infer the class label \(\hat{y}_{i}\) from the learned representation by comparing the class codes and select the one with closest Hamming distance to the error variable \(\mathbf{e}_{i}\). For one-vs-all encoding, this is simply the index with highest value: \(\hat{y}_{i}=\text{argmax}_{j}(\mathbf{h}_{i})_{j}\). When using a subsample for training, one can use the out-of-sample extension (26) described in Appendix C.
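To make the Semi-SupRKM concrete, the following sketch (ours; the names and the one-vs-all coding layout are assumptions, and \(r_{i}\neq 0\) is assumed so that \(\mathbf{R}\) is invertible) builds the matrices of (13), solves the linear system for the dual variables, and reads off the predicted classes.

```python
import numpy as np

def semisup_rkm(K, labels, lam1=1.0, lam2=1.0, eta=1.0):
    """Solve (I - (1/eta) R S K) R H = (1/lam2) S^T L C for H and predict labels.

    K      : (n, n) kernel matrix on the input node representations
    labels : length-n integer array, class in {0,...,p-1} or -1 if unlabelled
    """
    labels = np.asarray(labels)
    n = K.shape[0]
    p = labels.max() + 1
    l = (labels >= 0).astype(float)                      # supervision indicators l_i
    v = 1.0 / K.sum(axis=1)                              # inverse degrees of the similarity graph
    r = v / lam1 - l / lam2                              # r_i = v_i/lam1 - l_i/lam2
    L = np.diag(l)
    R = np.diag(r)
    S = np.eye(n) - np.outer(np.ones(n), r) / r.sum()    # S = I - 1 1^T R / (1^T R 1)
    C = -np.ones((n, p))                                 # one-vs-all codes in {-1, +1}
    sup = np.where(labels >= 0)[0]
    C[sup, labels[sup]] = 1.0
    RH = np.linalg.solve(np.eye(n) - (R @ S @ K) / eta, (S.T @ L @ C) / lam2)
    H = RH / r[:, None]                                  # recover H from the product R H
    return H.argmax(axis=1)                              # predicted class per node
```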
### Deep Graph Convolutional Kernel Machine
Next, we construct a deep graph convolutional kernel machine for semi-supervised node classification by stacking multiple GCKM\(\ell\)'s and a Semi-SupRKM (Figure 1). Similar to GNNs, the dual variables of the GCKM\(\ell\)'s (\(\mathbf{H}^{(1)}\) and \(\mathbf{H}^{(2)}\)) serve as input for the subsequent layer and can thus be viewed as hidden representations. The dual variables of the Semi-SupRKM layer (\(\mathbf{H}^{(3)}\)), can directly be used to infer the class label of the unlabeled nodes. The optimization problem for end-to-end learning is given by combining the dual minimization problems of the different layers (i.e., (4) and (12)). For two GCKM layers and a Semi-SupRKM layer, this yields:
\[\min_{\mathbf{H}^{(1)},\mathbf{H}^{(2)},\mathbf{H}^{(3)}}J_{\text{GCKM}}\triangleq -\frac{1}{2\eta^{(1)}}\text{Tr}(\mathbf{H}^{(1)^{T}}\mathbf{K}_{c}^{(1)}\mathbf{H}^{(1)})\] \[-\frac{1}{2\eta^{(2)}}\text{Tr}(\mathbf{H}^{(2)^{T}}\mathbf{K}_{c}^{(2)}\mathbf{H}^{(2)})\] \[-\frac{1}{2\eta^{(3)}}\text{Tr}(\mathbf{H}^{(3)^{T}}\mathbf{R}\mathbf{K}^{(3)}\mathbf{R}\mathbf{H}^{(3)})\] \[+\frac{1}{2}\text{Tr}(\mathbf{H}^{(3)^{T}}\mathbf{R}\mathbf{H}^{(3)})\] \[-\frac{1}{\lambda_{2}^{(3)}}\text{Tr}(\mathbf{H}^{(3)^{T}}\mathbf{L}\mathbf{C})\] \[\text{subject to }\left\{\begin{array}{l}\mathbf{H}^{(1)^{T}}\mathbf{H}^{(1)}=\mathbf{I}_{s_{1}}\\ \mathbf{H}^{(2)^{T}}\mathbf{H}^{(2)}=\mathbf{I}_{s_{2}}\\ \mathbf{H}^{(3)^{T}}\mathbf{R}\mathbf{1}_{n}=\mathbf{0}_{p}.\end{array}\right. \tag{14}\]
Like in GNNs, the number of GCKM\(\ell\)'s used in the deep GCKM determines the receptive field of the model (i.e., the number of hops that the information propagates through the network). However, the key difference is that in GCKM, this message passing is implicitly embedded in the final representation.
### Training Deep Graph Convolutional Kernel Machines
In the algorithmic aspect, the constrained optimization problem (14) is addressed with an alternating minimization scheme, as shown in Algorithm 1. First, note that the constraint set for the two GCKM layers is the Stiefel manifold \(\text{St}(s_{j},n)=\{\mathbf{H}^{(j)}\in\mathbb{R}^{n\times s_{j}}\mid\mathbf{H}^{(j) ^{T}}\mathbf{H}^{(j)}=\mathbf{I}_{s_{j}}\},\ j=1,2\). We therefore employ the Cayley Adam optimizer Li et al. (2019) to update \(\mathbf{H}^{(1)},\mathbf{H}^{(2)}\) with \(\mathbf{H}^{(3)}\) fixed. Then, \(\mathbf{H}^{(3)}\) is updated by solving the linear system (13) associated with the semi-supervised layer.
(Algorithm 1, the pseudocode of the alternating updates of \(\mathbf{H}^{(1)}\), \(\mathbf{H}^{(2)}\) and \(\mathbf{H}^{(3)}\), is not reproduced here.)

For the layerwise initialization, \(\mathbf{H}_{0}^{(1)}\) is obtained by solving (5) with the initial node features as the input. Then \(\mathbf{H}_{0}^{(2)}\) is obtained by solving (5) with \(\mathbf{H}_{0}^{(1)}\) as the input node features. Finally, \(\mathbf{H}_{0}^{(3)}\) is obtained by solving (13) with \(\mathbf{H}_{0}^{(2)}\) as the input node features.
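Algorithm 1 itself is not reproduced above, so the following is our own condensed sketch of the alternating scheme it describes. Instead of the Cayley Adam optimizer used by the authors, the Stiefel constraints are maintained here by a simple QR retraction after each gradient step; this substitution, together with the function names and the gradient callables, is an assumption made only to keep the sketch self-contained.

```python
import numpy as np

def retract_to_stiefel(H):
    """Project H back onto the Stiefel manifold (H^T H = I) via a QR decomposition."""
    Q, _ = np.linalg.qr(H)
    return Q

def train_gckm(H1, H2, H3_solver, grad_H1, grad_H2, lr=1e-2, iters=100):
    """Alternating minimisation sketch for (14).

    H1, H2     : initial dual variables of the two GCKM layers (layerwise init)
    H3_solver  : callable H2 -> H3, e.g. solving the linear system (13)
    grad_H1/2  : callables returning the gradient of J_GCKM w.r.t. H1 / H2
    """
    H3 = H3_solver(H2)
    for _ in range(iters):
        # gradient step on the Stiefel-constrained variables, then retraction
        H1 = retract_to_stiefel(H1 - lr * grad_H1(H1, H2, H3))
        H2 = retract_to_stiefel(H2 - lr * grad_H2(H1, H2, H3))
        # exact update of the semi-supervised layer given the current H2
        H3 = H3_solver(H2)
    return H1, H2, H3
```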
As a validation metric, one can use the accuracy of the validation set or a different supervised metric. Alternatively, because the core model of the Semi-SupRKM is based on kernel spectral clustering, one can use an unsupervised metric that quantifies the quality of obtained clustering Langone et al. (2013). For node \(v\), the centered cosine distance w.r.t. class \(s\) is:
\[d_{v,s}^{\text{cos}}=1-\frac{(\mathbf{c}_{s}-\mathbf{\mu})^{T}(\mathbf{e}_{v}-\mathbf{\mu})}{ ||\mathbf{c}_{s}-\mathbf{\mu}||\ ||\mathbf{e}_{v}-\mathbf{\mu}||}, \tag{15}\]
where \(\mathbf{c}_{s}\) is the coding of class \(s\) and \(\mathbf{\mu}\) is the center of all codings.6 The unsupervised performance metric for nodes \(\mathcal{V}_{\text{unsup}}\) is then obtained by assigning each node to its closest class encoding and averaging the cosine distances:
Footnote 6: For one-vs-all encoding, this becomes \(\mathbf{\mu}=\mathbf{1}_{p}/p\).
\[\mathcal{L}_{\text{unsup}}=\frac{1}{|\mathcal{V}_{\text{unsup}}|}\sum_{v=1}^{ |\mathcal{V}_{\text{unsup}}|}\min_{s}d_{v,s}^{\text{cos}}. \tag{16}\]
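In code, the unsupervised validation metric of (15)-(16) can be computed as in the sketch below (ours; the centre \(\mathbf{\mu}\) is taken here as the arithmetic mean of the class codings, and the names are assumptions).

```python
import numpy as np

def unsup_metric(E, codings):
    """Eqs. (15)-(16): mean centred cosine distance to the closest class coding.

    E       : (n, p) error/latent variables of the unsupervised nodes
    codings : (p, p) class codings, e.g. one-vs-all rows in {-1, +1}
    """
    mu = codings.mean(axis=0)                       # centre of all codings (assumed)
    Ec = E - mu
    Cc = codings - mu
    cos = (Ec @ Cc.T) / (np.linalg.norm(Ec, axis=1, keepdims=True)
                         * np.linalg.norm(Cc, axis=1)[None, :])
    d = 1.0 - cos                                   # centred cosine distances d_{v,s}
    return d.min(axis=1).mean()                     # assign to closest class, then average
```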
## 5 Experiments
In this section, we assess our model by conducting experiments in a semi-supervised node classification setting on a variety of open graph benchmark datasets and compare its performance with that of several graph neural network baselines. We also assess the performance of the model in different settings in which the unsupervised validation metric is used.
### Datasets and main setting
As datasets, we use three homophilious graphs: Cora, CiteSeer, and PubMed Sen et al. (2008); Yang et al. (2016), which are citation graphs, as well as five heterophilious graphs: Chameleon and Squirrel (Wikipedia graphs Rozemberczki et al. (2021)), webpage graphs Cornell and Texas (WebKB7; Pei et al. (2020)), and the Actor co-occurrence graph Tang et al. (2009); Pei et al. (2020). Table 1 summarizes the dataset statistics, in which \(\mathcal{H}(\mathcal{G})=\frac{1}{n}\sum_{v\in\mathcal{V}}\frac{|\{u\in \mathcal{N}_{v}:y_{u}=y_{v}\}|}{|\mathcal{N}_{v}|}\) is a measure for the level of homophily of nodes in the graph Pei et al. (2020).
Footnote 7: cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/wwkb
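The homophily measure \(\mathcal{H}(\mathcal{G})\) reported in Table 1 can be computed directly from the edge list, as in this small sketch (ours; variable names are assumptions, and isolated nodes are skipped to avoid division by zero).

```python
import numpy as np

def node_homophily(edges, y):
    """H(G): average fraction of a node's neighbours sharing its label.

    edges : iterable of undirected edges (u, v) with integer node ids
    y     : length-n array of class labels
    """
    n = len(y)
    same = np.zeros(n)
    deg = np.zeros(n)
    for u, v in edges:
        for a, b in ((u, v), (v, u)):       # count the edge in both directions
            deg[a] += 1
            same[a] += float(y[a] == y[b])
    mask = deg > 0                          # ignore isolated nodes (assumption)
    return float(np.mean(same[mask] / deg[mask]))
```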
All graphs are undirected and unweighted and we use the same experimental setup as He et al. (2022), including the random seeds for the datasplits. We used the standard fixed splits. For Cora, CiteSeer, and PubMed, there are 20 training labels per class, 500 labels for validation, and 1000 labels for testing. For the other datasets a 2.5%/2.5%/95% train/validation/test-split is used.
We trained a deep GCKM with two GCKM layers and one Semi-SupRKM layer, as depicted in Figure 1, which we will simply refer to as GCKM. We used GCN aggregation (6) for the citation networks and sum aggregation (7) for the heterophilious graphs. We compare our method to a multilayer perceptron (MLP) and to GCN Kipf and Welling (2017) as it is the most comparable GNN counterpart to our method. Furthermore, we add APPNP Klicpera et al. (2019),
BernNet He et al. (2021), GPR-GNN Chien et al. (2021), and ChebNetII He et al. (2022) to the comparison because all these methods achieve state-of-the-art performance on at least one of the datasets. The reader can consult Appendix D for more details about the hyperparameter search.
### Semi-Supervised Node Classification with Fixed Splits and Few Labels
In the first experiment, we assess the models in case fewer labels are available. We decrease the total number of labels that are available for training and validation in the standard fixed splits by a factor five. This means that for Cora, CiteSeer and PubMed a total of 4 labels per class with an additional 100 labels are used. For Chameleon, Squirrel, and Actor, 1% of the node labels are used. Texas and Cornell are not part of this experiment, as these datasets are too small. For GCKM, all labels are used for training and the model is selected based on the cosine similarity metric (16) after a random search. For the baseline models, we use a 5-fold crossvalidation8 scheme and take the best model based on the average validation accuracy over the 5 folds from a grid search. The results are reported in table 2. We observe that GCKM achieves overall superior performance, with highest accuracies on 4 out of 6 datasets: Chameleon, Squirrel, Cora, and CiteSeer.
Footnote 8: We use 5-fold crossvalidation because the validation sets might be too small in a 10-fold.
### Semi-Supervised Node Classification with Standard Fixed Splits
The next experiment uses the standard fixed splits. For each dataset, we performed a random search to determine the hyperparameters and selected the model with highest validation accuracy. For the baseline models, we use the mean test accuracies and 95% confidence intervals as reported in He et al. (2022). Table 3 summarizes the results.
We observe that for Cornell and Actor, a simple MLP outperforms all models except BernNet and ChebNetII. Comparing to GCN, we observe that for Squirrel and CiteSeer the performance of GCKM is similar, whereas for all other datasets, GCKM again outperforms its GNN counterpart. Comparing our model to the models with more advanced aggregation techniques, we see that GCKM achieves second best performance on Chameleon and Squirrel and reaches new state-of-the-art performance on Texas and Cora.
| Dataset | Nodes | Edges | Features | Classes | \(\mathcal{H}(\mathcal{G})\) |
| --- | --- | --- | --- | --- | --- |
| Cham. | 2,277 | 31,371 | 2,325 | 5 | 0.25 |
| Squi. | 5,201 | 198,353 | 2,089 | 5 | 0.22 |
| Texas | 183 | 279 | 1,703 | 5 | 0.06 |
| Corn. | 183 | 277 | 1,703 | 5 | 0.30 |
| Actor | 7,600 | 26,659 | 932 | 5 | 0.22 |
| Cora | 2,708 | 5,278 | 1,433 | 7 | 0.83 |
| Cite. | 3,327 | 4,552 | 3,703 | 6 | 0.72 |
| PubMed | 19,717 | 44,324 | 500 | 3 | 0.79 |

Table 1: Dataset statistics.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & Cham. & Squi. & Actor & Cora & Cite. & PubMed. \\ \hline MLP & 22.20\(\pm\)1.22 & 23.50\(\pm\)2.33 & 23.80\(\pm\)1.62 & 40.95\(\pm\)1.47 & 50.05\(\pm\)1.38 & 68.65\(\pm\)0.59 \\ GCN & 25.09\(\pm\)3.05 & 23.22\(\pm\)2.37 & 23.67\(\pm\)0.27 & 76.70\(\pm\)3.41 & 64.29\(\pm\)0.96 & 76.68\(\pm\)0.36 \\ APPNP & 25.07\(\pm\)2.50 & 22.59\(\pm\)2.29 & 23.45\(\pm\)0.30 & 80.06\(\pm\)0.45 & 66.46\(\pm\)0.92 & 77.18\(\pm\)0.46 \\ GPR-GNN & 29.58\(\pm\)3.06 & 24.93\(\pm\)3.57 & 24.43\(\pm\)0.57 & 80.16\(\pm\)0.66 & 64.58\(\pm\)1.27 & 76.95\(\pm\)2.96 \\ BernNet & 29.17\(\pm\)3.07 & 24.59\(\pm\)2.59 & **24.93\(\pm\)1.68** & 80.68\(\pm\)6.62 & 64.88\(\pm\)1.03 & **77.51\(\pm\)**4.93** \\ ChebNetII & 30.07\(\pm\)0.83 & 24.58\(\pm\)2.50 & 23.68\(\pm\)0.58 & 78.86\(\pm\)0.55 & 67.26\(\pm\)0.68 & 74.84\(\pm\)0.76 \\ \hline GCKM & **30.17** & **25.38** & 23.83 & **80.74** & **67.53** & 75.10 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean test accuracy (%) and 95% confidence interval (%) for semi-supervised node classification with fixed splits where fewer labels are given. The best model is highlighted in bold and the next best model is underlined for each dataset. Since GCKM has a deterministic training procedure, no confidence intervals are reported.
### Semi-Supervised Node Classification with Fixed Splits and Few Validation Labels
This experiment repeats the previous one for Cora, CiteSeer and PubMed, again with the same 20 training labels per class and 1000 test labels, but with validation sets of increasing size. We performed a random search for the hyperparameters and reported in each run the unsupervised metric \(\mathcal{L}_{\text{unsup}}\) (16) on the test set, as well as the validation accuracy \(\mathcal{L}_{\text{val}}\) for each validation set. For each validation set, we selected the model with the highest combined score: \(\mathcal{L}_{\text{comb}}=(|\mathcal{V}_{\text{val}}|\mathcal{L}_{\text{val}}+|\mathcal{V}_{\text{test}}|\mathcal{L}_{\text{unsup}})/(|\mathcal{V}_{\text{val}}|+|\mathcal{V}_{\text{test}}|)\). We trained the baseline models using the code provided by He et al. (2021) and He et al. (2022), performed a hyperparameter grid search and selected the model with the highest validation accuracy. Table 4 shows the resulting performances of the experiment.
When fewer validation labels are given than in the previous experiment, we see that GCKM achieves the best performance in all but two cases. Even with no validation labels, when only the unsupervised metric is used, GCKM is still able to obtain a decently performing model, whereas early stopping or even model selection is not possible with the other methods. We conclude that GCKM is less sensitive to a decrease in validation set size than the baseline methods.
### Ablation Studies
We refer the interested reader to Appendix E for an empirical analysis of the effects of the message passing, the training progress, and the significance of layerwise initialization. Also the computational complexity of the training procedure is discussed.
## 6 Conclusion
We introduce GCKM, a new approach for message passing in graphs based on a deep kernel machine with convolutional aggregation functions. Through the use of duality, we derive optimization problems for the unsupervised and semi-supervised building blocks and we show optimization algorithms for end-to-end training for the semi-supervised node classification task. Numerical experiments on several benchmark datasets verify the effectiveness of the proposed method. Thanks to the unsupervised core, our model outperforms current state-of-the-art GNNs for semi-supervised
\begin{table}
\begin{tabular}{l l|c c c c c} \hline \hline \multicolumn{2}{c|}{Method} & \multicolumn{5}{c}{validation Labels per class} \\ & & 0 & 1 & 5 & 10 \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & GCN & - & 81.18\(\pm\)0.66 & 79.50\(\pm\)0.56 & 80.51\(\pm\)0.25 \\ & ChebNetII & - & 60.15\(\pm\)2.13 & 77.67\(\pm\)1.03 & 79.79\(\pm\)1.07 \\ & GCKM & **82.40** & **82.40** & **82.40** & **81.90** \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & GCN & - & 63.91\(\pm\)0.56 & 77.71\(\pm\)1.30 & **69.22\(\pm\)1.07** \\ & ChebNetII & - & 51.52\(\pm\)0.40 & 67.72\(\pm\)1.06 & 68.13\(\pm\)1.14 \\ & GCKM & **68.10** & **68.10** & **68.10** & 68.10 \\ \hline \multirow{4}{*}{
\begin{tabular}{} \end{tabular} } & GCN & - & 75.43\(\pm\)0.41 & 76.42\(\pm\)0.79 & 74.52\(\pm\)0.53 \\ & ChebNetII & - & 63.31\(\pm\)2.30 & **76.79\(\pm\)0.95** & 71.65\(\pm\)0.70 \\ \cline{1-1} & GCKM & **76.80** & **76.60** & 76.60 & **76.80** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Test accuracy (%) and 95% confidence interval (%) for semi-supervised node classification on Cora, CiteSeer and PubMed with fixed split for smaller validation set sizes. The best performing model is highlighted in bold.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Method & Cham. & Squil. & Texas & Corn. & Actor & Cora & Cite. & PubMed. \\ \hline MLP & 21.91\(\pm\)2.11 & 23.42\(\pm\)0.94 & 45.03\(\pm\)2.45 & 46.18\(\pm\)5.10 & 29.16\(\pm\)0.52 & 58.88\(\pm\)0.62 & 56.97\(\pm\)0.54 & 73.15\(\pm\)0.28 \\ GCN & 39.14\(\pm\)0.60 & 30.06\(\pm\)0.75 & 32.42\(\pm\)2.23 & 35.57\(\pm\)3.55 & 21.96\(\pm\)0.54 & 81.32\(\pm\)0.18 & 71.77\(\pm\)0.21 & 79.15\(\pm\)0.18 \\ APPNP & 30.06\(\pm\)0.96 & 25.18\(\pm\)0.35 & 46.31\(\pm\)3.01 & 45.73\(\pm\)4.85 & 28.19\(\pm\)0.31 & 83.52\(\pm\)0.24 & 72.09\(\pm\)0.25 & 80.23\(\pm\)0.15 \\ GPR-GNN & 30.56\(\pm\)0.94 & 25.11\(\pm\)0.51 & 45.76\(\pm\)3.78 & 43.42\(\pm\)4.95 & 27.32\(\pm\)0.83 & 83.95\(\pm\)0.22 & 70.92\(\pm\)0.57 & 78.97\(\pm\)0.27 \\ BernNet & 26.35\(\pm\)1.04 & 24.57\(\pm\)0.72 & 48.21\(\pm\)3.17 & 46.64\(\pm\)5.62 & 29.27\(\pm\)0.23 & 83.15\(\pm\)0.32 & 72.24\(\pm\)0.25 & 79.65\(\pm\)0.25 \\ ChebNetIII & **46.45\(\pm\)0.53** & **36.18\(\pm\)0.46** & 54.68\(\pm\)3.87 & **50.92\(\pm\)5.49** & **29.54\(\pm\)0.46** & 83.67\(\pm\)0.33 & **72.75\(\pm\)0.16** & **80.48\(\pm\)0.23** \\ \hline GCKM & 41.16 & 30.10 & **56.07** & 38.73 & 26.01 & **84.20** & 71.80 & 80.10 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean test accuracy (%) and 95% confidence interval (%) for semi-supervised node classification with fixed splits.
node classification when few labels are available for training, which can be a considerable advantage in real-world applications. Many directions for future work exist, such as extending the method to inductive tasks or attentional message passing and investigating methods for scaling up the kernel machines to very large graphs.
## Acknowledgements
The research leading to these results has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation program / ERC Advanced Grant E-DUALITY (787960). This paper reflects only the authors' views and the Union is not liable for any use that may be made of the contained information. This work was supported in part by the KU Leuven Research Council (Optimization frameworks for deep kernel machines C14/18/068); the Flemish Government FWO projects GOA4917N (Deep Restricted Kernel Machines: Methods and Foundations), PhD/Postdoc grant; Flemish Government (AI Research Program). Johan Suykens, Sonny Achten, Francesco Tonin and Panagiotis Patrinos are also affiliated with Leuven.AI - KU Leuven institute for AI, B-3000, Leuven, Belgium.
|
2309.11989 | A Vision-Based Navigation System for Arable Fields | Vision-based navigation systems in arable fields are an underexplored area in
agricultural robot navigation. Vision systems deployed in arable fields face
challenges such as fluctuating weed density, varying illumination levels,
growth stages and crop row irregularities. Current solutions are often
crop-specific and aimed to address limited individual conditions such as
illumination or weed density. Moreover, the scarcity of comprehensive datasets
hinders the development of generalised machine learning systems for navigating
these fields. This paper proposes a suite of deep learning-based perception
algorithms using affordable vision sensors for vision-based navigation in
arable fields. Initially, a comprehensive dataset that captures the intricacies
of multiple crop seasons, various crop types, and a range of field variations
was compiled. Next, this study delves into the creation of robust infield
perception algorithms capable of accurately detecting crop rows under diverse
conditions such as different growth stages, weed density, and varying
illumination. Further, it investigates the integration of crop row following
with vision-based crop row switching for efficient field-scale navigation. The
proposed infield navigation system was tested in commercial arable fields
traversing a total distance of 4.5 km with average heading and cross-track
errors of 1.24{\deg} and 3.32 cm respectively. | Rajitha de Silva, Grzegorz Cielniak, Junfeng Gao | 2023-09-21T12:01:59Z | http://arxiv.org/abs/2309.11989v2 | Crop Row Switching for Vision-Based Navigation: A Comprehensive Approach for Efficient Crop Field Navigation
###### Abstract
Vision-based mobile robot navigation systems in arable fields are mostly limited to in-row navigation. The process of switching from one crop row to the next in such systems is often aided by GNSS sensors or multiple camera setups. This paper presents a novel vision-based crop row-switching algorithm that enables a mobile robot to navigate an entire field of arable crops using a single front-mounted camera. The proposed row-switching manoeuvre uses deep learning-based RGB image segmentation and depth data to detect the end of the crop row and the re-entry point to the next crop row, which are then used in a multi-state row-switching pipeline. Each state of this pipeline uses visual feedback or wheel odometry of the robot to successfully navigate towards the next crop row. The proposed crop row navigation pipeline was tested in a real sugar beet field containing crop rows with discontinuities, varying light levels, shadows and irregular headland surfaces. The robot could successfully exit from one crop row and re-enter the next crop row using the proposed pipeline, with absolute median errors averaging 19.25 cm and 6.77\({}^{\circ}\) for the linear and rotational steps of the proposed manoeuvre.
## I Introduction
The integration of robotics and autonomous systems in agriculture has enabled precision farming operations leading to effective use of time and resources. Achieving autonomous navigation in agricultural robots is an enabling technology that must be optimised for the deployment of autonomous robots for precision agriculture. The existing autonomous navigation solutions for in-field navigation, albeit being efficient, often rely on expensive sensors such as Real-Time Kinematic Global Navigation Satellite System (RTK-GNSS) sensors. Camera-based agricultural robot navigation systems are a popular alternative to these expensive sensors [1]. Implementation of such vision systems is often limited to crop row following behaviour [2]. In most such systems, row switching is achieved with the aid of GNSS sensors or multiple cameras to identify the row end and the re-entry point to the next row [1, 3, 4].
The premise of vision-based navigation in agricultural robots is to reduce the cost of the overall robotic system with the aim of increased adoption of such technologies. To this end, a vision-based navigation system that uses a single front-mounted camera to perform in-row navigation and row switching would be in line with this objective. In our previous work, we have developed a vision-based crop row detection algorithm [5] and an in-row navigation framework [6] that uses the crop row detection to guide the robot through a single crop row based only on RGB images. The crop row switching algorithm presented in this paper relates to our previous work on vision-based crop row detection [5] and navigation [6] in arable fields. The newly proposed algorithm serves as a vital bridge, seamlessly connecting with the established in-row navigation algorithm to form a comprehensive, fully autonomous field-scale navigation behaviour. This row-switching algorithm could also be integrated seamlessly with any other existing in-row navigation method to eliminate the need for GNSS sensors in row switching. The existing system can follow a crop row based on RGB image input, and it can also identify the location of the end of the crop row as the robot approaches the headland area. The crop row switching algorithm presented here is triggered upon detection of the end of row (EOR) and navigates the robot towards the entry point of the next crop row to be traversed.
The main contributions of this work are as follows:
* A vision-based re-entry point detection algorithm that identifies the relative 3D position of the entry point of the adjacent crop row.
* A crop row switching pipeline based on vision and robot wheel odometry to navigate the robot towards the entry point of the next crop row to be traversed.
* A comprehensive evaluation and validation of an autonomous navigation system in a real arable field with a mobile robot deploying the proposed system.

Fig. 1: Crop row switching state machine. \(d_{r}\): Distance between the current and the next crop row. A: Initial detection of the EOR, B: Robot is at the EOR, C: Robot has traversed a distance equal to its length into the headland, D: Robot turns 90\({}^{\circ}\) towards the next row direction from state C, E: Robot traverses a distance \(d_{r}\) forward from state D, F: Robot turns 90\({}^{\circ}\) towards the next row direction from state E, G: Robot re-enters the next crop row.
## II Related Work
Vision-based navigation systems for crop row following are an extensively explored research area [7, 8]. Existing robot navigation technologies for agri-robotics research include GNSS, inertial navigation systems (INS) and light detection and ranging (LiDAR) [9]. Each of these existing systems presents a cost-benefit trade-off, which results in a lack of reliable navigation options for an affordable solution in crop row navigation. The vision-based infield navigation technologies available for arable field navigation explore the use of RGB and depth images to identify the crop rows [1, 10]. The existing systems for such infield navigation mainly use image segmentation, object detection or image matching methods to identify the crop rows for robot navigation [7]. Despite the considerable interest in vision-based crop row following systems, reliable vision-based crop row switching algorithms remain limited [2]. Most of the existing vision-based navigation systems depend completely or partially on GNSS, INS or LiDAR-based solutions for crop row switching rather than a fully vision-based solution [7]. The crop row switching algorithms proposed in [2, 11, 12] rely completely on RTK-GNSS sensors to perform the crop row switching manoeuvre. However, GNSS-based systems are not considered a simple and straightforward solution for agricultural robot navigation since they also need multiple redundancies in place for effective operation due to multi-path reflections and signal blockage [13]. Some systems also use manual control to perform the headland turn [14]. Fully vision-based solutions in agricultural robot navigation depend on multiple cameras on the robot to maintain localisation during the row switching process [15, 16]. The methods that use a single camera also demand special requirements such as a variable field of view [17].
End of row (EOR) detection is an important step for any crop row switching algorithm since it serves as the starting point for any crop row switching manoeuvre [6]. The EOR detection scheme proposed in [18] employs image binarisation using classic computer vision methods to calculate the pixel count in binary masks to determine the EOR. However, the pixel count thresholds may vary depending on the stage of plant growth, and these thresholds must be matched to each growth stage when used in the long term. The pixel counts of a binary image representing the crop do not yield the spatial localisation of the EOR within a given image; they only generate an EOR trigger for that image. In contrast, the EOR detection method proposed by the authors of [19] uses the \(Cr\) channel of the \(YCbCr\) colour space to calculate the position of the EOR within a given image. Such methods offer the advantage of earlier detection and avoid the potential failures due to noisy images seen in the method presented in [18]. However, such colour-based EOR detection methods are highly susceptible to distortions caused by external field variations. Deep learning-based methods for EOR detection outperform such colour-based methods [5, 6]. A 3D point cloud-based row-end detection was introduced in [14], where the EOR is identified by detecting the height drop between the plants and the ground within the point cloud. This approach is mostly limited to crops with noticeable height differences relative to the ground level or crops at later growth stages. The novel EOR detection algorithm presented in this paper is based on the deep learning-based crop row detection model from our earlier work [5], considering the limitations of the existing EOR detection methods.

Identification of the relative distance between two crop rows is also vital for accurate crop row switching. The distance between two crop rows, often referred to as the "inter-row distance", is considered a fixed distance in existing crop row switching methods [3, 20]. The relative position between the current robot pose and the next crop row could vary due to imperfections in planting or a slight offset during robot navigation. Therefore, active perception of the relative position of the re-entry point of the next crop row to be traversed is a useful attribute. The re-entry point detection algorithm proposed in this work estimates the relative distance to the next crop row based on the crop row segmentation mask from the Triangle Scan Method (TSM) [5] and depth data. Such active perception of the inter-row space helps to eliminate potential row-switching failures caused by varying inter-row spacing or inaccurate positioning of the robotic platform within the crop row currently being traversed.

The four most common headland turning patterns are the semi-circular turn, the U-turn, the light bulb turn (a.k.a. \(\Omega\) turn) and the switch-back turn [11]. The semi-circular turn describes a half circle with a constant radius, while the U-turn consists of two quarter-circle turns (90\({}^{\circ}\)) separated by a linear traversal stage. These methods are typically used in smaller robots with tighter turning radii relative to the inter-row distance [15, 18, 20]. The \(\Omega\) turn and the switch-back turn are mostly used in robots with constrained manoeuvrability such as Ackermann-steering robots or tractors [11, 21, 22]. Considering the skid-steering configuration of the robotic platform used in this work, the U-turn pattern was chosen to execute the row switching manoeuvre.
## III Methodology
Figure 1 portrays a state machine representing the process of crop row switching, spanning a total of seven states. Table I describes each state of the process given in Figure 1 encountered during the row switching manoeuvre. The robot was driven within the crop rows up to state \(A\) using the Triangle Scan Method (TSM) based crop row navigation framework [6] developed in our previous work. The switching manoeuvre is composed of three steps: row exit, U-turn and re-entry. The transitions \(A\to B\to C\) belong to the row exit step. The U-turn step comprises the transitions \(C\to D\to E\to F\), while the transition \(F\to G\) is considered the re-entry step. The methods and techniques used in each of these three steps are explained in Sections III-B, III-C and III-D respectively.
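The three-step structure of the manoeuvre can be summarised in a few lines of code. The sketch below is illustrative only: the callbacks `exit_row`, `u_turn` and `re_enter` are hypothetical placeholders standing in for the procedures of Sections III-B to III-D, not the implementation used on the robot.

```python
from enum import Enum

class State(Enum):
    A = "initial EOR detection"
    B = "robot at EOR"
    C = "robot fully in headland"
    D = "turned 90 deg towards next row"
    E = "traversed d_r along headland"
    F = "turned 90 deg, facing next row"
    G = "re-entered next crop row"

def switch_row(exit_row, u_turn, re_enter):
    # exit_row:  A -> B -> C (vision + odometry), returns inter-row distance d_r
    # u_turn:    C -> D -> E -> F (odometry-only turns and traversal)
    # re_enter:  F -> G (hands control back to the in-row TSM navigator)
    d_r = exit_row()
    u_turn(d_r)
    re_enter()
    return State.G
```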
### _Robotic Platform_
A Mark-1 robot from Hexman Robotics was used to test the proposed crop row switching algorithm. Figure 2 shows the robotic setup with a front-mounted Intel RealSense D435i camera and a Reach RS+ RTK GNSS receiver. An NVIDIA Jetson AGX Orin developer kit was used as the onboard computer for the robot. Mark-1 is a skid steer robot with a ground clearance of 128 mm. The dimensions of the robot are \(526\times 507\times 244\) mm.
### _Row Exit Step_
Row exit is the process in which the robot drives itself completely out of the currently traversed crop row after detecting the EOR. The EOR is initially detected at state \(A\), where the robot estimates the relative 3D coordinate of the starting point of the next crop row the robot will enter. The Y value of this relative 3D coordinate represents the inter-row distance \(d_{r}\) between the current and the next crop row. The \(d_{r}\) value is used when the robot is traversing during the \(D\to E\) transition. After the re-entry point detection, the robot transits through the states \(A\to B\to C\) using a combination of visual feature matching and odometry, as explained in the following subsections.
#### III-B1 Re-entry Point Detection
The re-entry point detection module extends the TSM crop row detection pipeline from our previous work [5] on crop row detection. Similar to TSM, a deep learning based skeleton segmentation of crop rows was used as the input to the re-entry point detector. As illustrated in Figure 3, the ROI \(AL_{2}L_{3}B\) was defined if the next intended turn is to the left (\(AR_{2}R_{3}C\) for a right turn). Points \(A,B\) and \(C\) were determined using the "Anchor scans" step of TSM [5]. The horizontal green line in Figure 3 was obtained using the EOR detector [6]. Equation 1 yields a point \(P_{t}\) by scanning the pixel sum along the \(AP\) line, where point \(P\) is an arbitrary point on the \(L_{2}L_{3}B\) path. Similarly, Equation 2 yields a point \(A_{t}\) by scanning the pixel sum along the \(\overline{A}P_{t}\) line, where point \(\overline{A}\) is an arbitrary point on the \(AL_{1}\) line. The intersection between the EOR line and \(A_{t}P_{t}\) is identified as the re-entry point \(R\) for the next crop row. The depth information from the corresponding depth image was used to determine the 3D coordinate of point \(R\), which was then used to determine \(d_{r}\).
\[P_{t}=\arg\max\left[\sum_{L_{xy}=A}^{P}I(x,y)\right]_{P=L_{1}\to L_{2}}^{P=L_{2}\to B} \tag{1}\]

\[A_{t}=\arg\max\left[\sum_{L_{xy}=P_{t}}^{\overline{A}}I(x,y)\right]_{\overline{A}=A}^{\overline{A}=L_{1}} \tag{2}\]
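A minimal sketch of the scanning operation behind Equations 1 and 2 is given below, assuming the skeleton segmentation mask is a binary NumPy array and that the anchor and candidate points are already available in pixel coordinates. The helper names and the final intersection step are illustrative assumptions, not the published implementation.

```python
import numpy as np
from skimage.draw import line  # pixel coordinates along a straight segment

def line_sum(mask, p0, p1):
    """Sum of skeleton-mask pixels along the segment p0 -> p1 (row, col)."""
    rr, cc = line(p0[0], p0[1], p1[0], p1[1])
    return mask[rr, cc].sum()

def scan_best_point(mask, anchor, candidates):
    """Candidate point maximising the pixel sum along anchor -> candidate,
    i.e. the argmax operation used in Equations 1 and 2."""
    sums = [line_sum(mask, anchor, p) for p in candidates]
    return candidates[int(np.argmax(sums))]

# Hypothetical usage for a left turn (points in (row, col) pixel coordinates):
# P_t = scan_best_point(mask, A, points_on_roi_boundary)      # Eq. 1
# A_t = scan_best_point(mask, P_t, points_on_segment_A_to_L1)  # Eq. 2
# R   = intersection of the A_t-P_t line with the detected EOR line; the depth
#       value at R then gives the 3D re-entry coordinate and d_r.
```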
#### III-B2 A to B Transition
The TSM-based crop row navigation framework from our previous work [6] was shown to maintain the average heading of the robot relative to the crop row to within 1\({}^{\circ}\). Therefore, the relative heading angle between the robot and the crop row was assumed to be under 1\({}^{\circ}\) at state \(A\). The detected EOR line in Figure 3 demarcates the headland area and the crop row region within the RGB image obtained from the front-mounted camera.
\begin{table}
\begin{tabular}{|l|l|} \hline
**State** & **Description** \\ \hline \(A\) & Position of the robot during initial EOR detection. \\ \hline \(B\) & Robot reaches the EOR position. \\ \hline \(C\) & Robot enters the headland area completely, passing the EOR position. \\ \hline \(D\) & Robot turns 90\({}^{\circ}\) towards the direction of the next crop row from state \(C\). \\ \hline \(E\) & Robot aligns itself with the next crop row traversing forward from state \(D\). \\ \hline \(F\) & Robot turns 90\({}^{\circ}\), facing itself towards the starting point of the next crop row while residing within the headland buffer. \\ \hline \(G\) & Robot enters the next crop row. \\ \hline \end{tabular}
\end{table} TABLE I: States encountered during crop row switching manoeuvre.
Fig. 3: Regions of interests (ROI) for re-entry point scanning. Red: Left side ROI, Blue: Right side ROI, Green: Detected EOR line.
Fig. 2: Hexman Mark-1 robot in the Sugar Beet Field.
The RGB image was cropped below the EOR line and saved as a reference image \(I_{R}\) at state \(A\). The robot was then moved towards the EOR at a constant forward linear velocity while calculating the local feature similarity score (using the Scale Invariant Feature Transform [23]) between each new image captured by the robot camera and \(I_{R}\). The robot was stopped and assumed to have reached state \(B\) when the feature similarity score dropped below an experimentally determined threshold. The threshold value was determined by observing the feature similarity score while driving the robot to the actual EOR position using teleoperation.
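This similarity-based stop criterion could be sketched as follows with OpenCV (SIFT is available in OpenCV >= 4.4). The ratio-test score, the cropping helper and the drive loop are assumptions made for illustration; the actual threshold was tuned experimentally as described above.

```python
import cv2

def sift_similarity(img_ref, img_cur, ratio=0.75):
    """Fraction of reference SIFT keypoints with a good match in the current
    frame (Lowe's ratio test), used as a simple similarity score."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_ref, None)
    kp2, des2 = sift.detectAndCompute(img_cur, None)
    if des1 is None or des2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good) / max(len(kp1), 1)

# Hypothetical stop criterion for the A -> B transition:
# ref = frame_at_state_A[eor_row:, :]                 # crop below the EOR line
# while sift_similarity(ref, camera_frame()[eor_row:, :]) > THRESHOLD:
#     drive_forward(v_const)                          # constant linear velocity
# stop()                                              # robot assumed to be at state B
```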
#### III-B3 B to C Transition
The frontmost edge of the robot is coincident with the EOR at state \(B\). The minimum distance the robot must move in order to completely exit the crop row is the length of the robot (\(L_{R}\)) itself. The wheel odometry of the robot was used as feedback to move the robot forward towards the headland. The length of the Hexman Mark-1 robotic platform used during the field trials was 526 mm. The wheel odometry of the robot was assumed to be accurate enough to navigate the robot forward into the headland over such small distances.
### _U-Turn Step_
The U-turn step involves the robot taking two 90\({}^{\circ}\) turns with a linear navigation stage (\(D\to E\)) in between. The "headland buffer" region is the space in the headland directly in front of a given crop row, bounded by the EOR, the edge of the field and the two centre lines of the inter-row space between adjacent crop rows. The goal of the U-turn step is to bring the robot into the headland buffer region of the next crop row while facing towards the next crop row to be traversed. It was experimentally verified that the TSM-based navigation framework can resume its normal crop row navigation from this point (state \(F\)) onward to traverse into the crop row. The 90\({}^{\circ}\) turns and the \(D\to E\) transition are executed with wheel odometry feedback. The practical behaviour of the robot during the rotation stages (\(C\to D\) and \(E\to F\)) ensured that the robot would reach the headland buffer when the \(D\to E\) transition distance was set to \(d_{r}\).
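A possible odometry-only realisation of the U-turn step is sketched below. The velocity-command and odometry wrappers are hypothetical placeholders for the robot's own interface, and no compensation for unintended translational motion during the turns is included.

```python
import math

def drive_straight(distance, get_pose, send_cmd, v=0.1):
    """Drive forward until wheel odometry reports `distance` metres covered.
    get_pose() -> (x, y, yaw); send_cmd(linear, angular) publishes velocities."""
    x0, y0, _ = get_pose()
    while math.hypot(get_pose()[0] - x0, get_pose()[1] - y0) < distance:
        send_cmd(v, 0.0)
    send_cmd(0.0, 0.0)

def rotate_in_place(angle, get_pose, send_cmd, w=0.3):
    """Rotate by `angle` radians (sign gives direction) using the odometry yaw."""
    yaw0 = get_pose()[2]
    def turned():
        d = get_pose()[2] - yaw0
        return abs(math.atan2(math.sin(d), math.cos(d)))
    while turned() < abs(angle):
        send_cmd(0.0, math.copysign(w, angle))
    send_cmd(0.0, 0.0)

def u_turn(d_r, get_pose, send_cmd, direction=+1):
    """C -> F: 90-degree turn, traverse the inter-row distance d_r, 90-degree turn."""
    rotate_in_place(direction * math.pi / 2, get_pose, send_cmd)   # C -> D
    drive_straight(d_r, get_pose, send_cmd)                        # D -> E
    rotate_in_place(direction * math.pi / 2, get_pose, send_cmd)   # E -> F
```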
### _Re-Entry Step_
The robot is within the headland buffer region at state \(F\), and state \(G\) is reached when the robot enters the next crop row to be traversed. The re-entry step is the transition from state \(F\) to state \(G\), where the robot moves from the headland buffer into the crop row in front of it. This transition was realised by launching the TSM-based crop row following framework at state \(F\). The TSM was able to detect the crop row in front of the robot and navigate the robot into it. This behaviour of TSM-based re-entry navigation is verified by the experiment outlined in Section IV-C.
## IV Results and Discussion
The row switching manoeuvre presented in this paper is a three-step process that involves six state transitions, as illustrated in Figure 1. The accuracy and the performance of each transition are individually evaluated in the experiments. The Hexman Mark-1 robot was programmed to follow a crop row and automatically detect the EOR and re-entry positions. The proposed row-switching manoeuvre starts automatically based on the EOR detection trigger. The path of the robot during the row-switching manoeuvre was tracked using the onboard RTK GNSS tracker with sub-centimetre accuracy.
An experiment was set up in a selected area of 10 crop rows within a real sugar beet field. The GNSS coordinates of the 10 crop rows were recorded with sub-centimetre accuracy by driving the robot through each crop row at very slow speeds with an expert human driver. These ground truth GNSS coordinates of each crop row were then used to generate a regression line, which was used as a reference to calculate errors in autonomous navigation during the row switching manoeuvre. The robot was allowed to autonomously execute the row switching manoeuvre among the 10 crop rows, turning in both directions (left and right turns). A total of 18 row-switching trials were conducted during this experiment, as plotted in Figure 5. The GNSS coordinates were converted to Universal Transverse Mercator (UTM) coordinates for plotting and error calculations. The errors during each state transition of each row switching trial are illustrated in Figure 4.a, where distance errors and angular errors are normalised within a scatter plot. There are 9 x-ticks in Figure 4.a between each pair of adjacent states, each representing a pair of consecutive real-world crop rows where a trial took place. A box and whisker plot of the errors for each transition is shown in Figure 4.b. This normalised representation provides a comparative illustration of the error magnitudes in each state transition. The plots corresponding to translational steps stand above the zero line, while the box and whisker plots of the angular transitions are centred close to zero. Table II presents the median errors of traversal during each stage of the row-switching manoeuvre. The normalised median percentage error (\(\alpha=E_{median}/E_{max,T}\)) was calculated, where \(E_{max,T}\) is the maximum absolute error for the type \(T\) (T being a distance or an angle) across the entire manoeuvre and \(E_{median}\) is the median error for each transition. The vision-based transition \(A\to B\) records the highest \(\alpha\) value, representing the transition with the highest error. The remaining transitions record relatively lower \(\alpha\) values, representing accurate navigation relative to the vision-based transition.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Transition} & \multicolumn{2}{c|}{Median Errors} & \multirow{2}{*}{\(\alpha\)} \\ \cline{2-3} & Error & Absolute Error & \\ \hline \(A\to B\) & 23.40 cm & 31.63 cm & 40.20 \% \\ \hline \(B\to C\) & 8.87 cm & 8.87 cm & 15.24 \% \\ \hline \(C\to D\) & -1.09\({}^{\circ}\) & 6.91\({}^{\circ}\) & -2.88 \% \\ \hline \(D\to E\) & 12.47 cm & 17.26 cm & 21.41 \% \\ \hline \(E\to F\) & 2.51\({}^{\circ}\) & 6.62\({}^{\circ}\) & 6.64 \% \\ \hline \end{tabular}
\end{table} TABLE II: Linear and angular errors during each transition of the row switching maneuver.
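The normalised median percentage error \(\alpha\) reported in Table II could be computed along the following lines; the per-trial error arrays and the transition-type labels are assumed inputs.

```python
import numpy as np

def normalized_median_percentage_error(errors, types):
    """alpha_k = 100 * median(errors_k) / E_max,T, where E_max,T is the maximum
    absolute error over all transitions of the same type T (distance or angle)."""
    e_max = {t: max(np.max(np.abs(errors[k])) for k in errors if types[k] == t)
             for t in set(types.values())}
    return {k: 100.0 * float(np.median(errors[k])) / e_max[types[k]] for k in errors}

# Hypothetical usage (distances in cm, angles in degrees, one value per trial):
# alpha = normalized_median_percentage_error(
#     {"A->B": ab_errs, "B->C": bc_errs, "C->D": cd_errs,
#      "D->E": de_errs, "E->F": ef_errs},
#     {"A->B": "dist", "B->C": "dist", "C->D": "ang",
#      "D->E": "dist", "E->F": "ang"})
```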
### _Crop row exit and headland entry_
The row exit step (\(A\to B\to C\)) of the proposed row switching manoeuvre can be identified by the densely distributed GNSS coordinates in Figure 5 leading from within the crop row towards the headland in each trial. This dense distribution of GNSS coordinates is due to the slower speed of the row switching manoeuvre relative to the in-row navigation speed of the crop row navigation algorithm.
The crop row exit (\(A\to B\) transition) is a vision-based navigation stage, while the headland entry (\(B\to C\) transition) uses wheel odometry to guide the robot into the headland area. This difference in feedback modalities is reflected in the distance errors of each transition. The visual feedback in the \(A\to B\) transition ensures that the robot has reached the EOR position with visual confirmation. Therefore, it is vital for successfully reaching the EOR despite the higher error margins compared to wheel odometry. The majority of the errors in the \(A\to B\) and \(B\to C\) transitions are positive, which indicates that the robot always travels further into the headland area beyond the desired positions at states \(B\) and \(C\). This trend does not have a significantly adverse impact on the overall row switching manoeuvre, since the extra distance traversed into the headland would not cause the robot to damage crops during the U-turn step.
The robot would stop 52.6 cm (\(L_{robot}\)) away from the actual EOR position in an ideal \(C\) state. However, the overall maximum error in the row exit step (\(E_{ABC,max}\)) was recorded as 64.27 cm, the distance by which the robot moved beyond the desired position at state \(C\). Based on these observations, the minimum width \(W_{H,min}\) for the headland space was calculated to be 143.17 cm using Equation 3. The coefficient of the \(L_{robot}\) term in Equation 3 was set to \(1.85\) since the RTK-GNSS receiver used to measure the robot motion was mounted 45 cm behind the front of the robot. This coefficient must be changed to \((1+R)\), where \(R\) depends on the position of the RTK-GNSS receiver on the robot, if this experiment is repeated on a different robot. The robot was expected to be aligned with the crop row at state \(A\) within a heading error margin of \(2^{\circ}\). A heading error beyond \(2^{\circ}\) at state \(A\) leads the robot to cross into the headland buffer of an adjacent crop row at state \(C\), which would cause it to skip one crop row during switching or re-enter the same crop row it traversed, as seen in the unsuccessful attempts in Figure 5.
\[W_{H,min}=1.85\times L_{robot}+E_{ABC,max} \tag{3}\]
### _U-turn towards next crop row_
The U-turn step of the row switching manoeuvre is represented by the state transitions \(C\to D\to E\to F\). The angular error for the \(C\to D\) transition was calculated from the angle between the \(\overrightarrow{AC}\) and \(\overrightarrow{DE}\) vectors. The angle between the \(\overrightarrow{DE}\) and \(\overrightarrow{FF_{N}}\) vectors was used to calculate the angular error for the \(E\to F\) transition, where \(F_{N}\) is a point on the GNSS trajectory \(N\)(=5) points after the GNSS coordinate of state \(F\). The distance error for the \(D\to E\) transition was calculated by comparing the \(DE\) distance with
Fig. 4: Normalized state transition errors during the row switching manoeuvre. a: Scatter plot, b: Box and whisker plot.
Fig. 5: UTM projections of the GNSS trajectories from row switching experiments (Black: Regression lines and ground truth coordinates).
Fig. 6: GNSS trajectories from re-entry experiment (Black: Ground truth coordinates).
the inter-row distance between the adjacent crop rows between which the robot was being switched, obtained from the regressed ground truth lines.
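The angular errors described above reduce to angles between trajectory vectors in the UTM plane; a small sketch is given below, where the extraction of the state points from the GNSS log is assumed.

```python
import numpy as np

def angle_between(p0, p1, q0, q1):
    """Unsigned angle in degrees between vectors p0->p1 and q0->q1,
    all given as (easting, northing) UTM coordinates."""
    u = np.asarray(p1, float) - np.asarray(p0, float)
    v = np.asarray(q1, float) - np.asarray(q0, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# e.g. error for C -> D: angle_between(A, C, D, E)
#      error for E -> F: angle_between(D, E, F, F_N)  with F_N five samples after F
# The D -> E distance error compares ||E - D|| with the inter-row distance taken
# from the regressed ground-truth lines.
```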
The angular errors in both rotational transitions are evenly distributed with a near-zero mean. The absolute median angular errors during the rotational transitions were less than \(7^{\circ}\). These error margins can be considered acceptable for this application, since such small angular errors would not incur significant deviations to the \(\overline{DE}\) and \(\overline{FG}\) vectors. The robot would stay in the same place without any translational motion in an ideal rotational transition. However, it was evident from some of the recorded trajectories in Figure 5 that these rotations also incur some translational motion. Such motions push the robot towards or away from the direction of the next crop row. This can cause the robot to skip the next crop row to be traversed or to turn towards the same row it came from, despite achieving the desired \(\overline{DE}\) distance. This unintended translational motion is often caused by uneven terrain in the headland area, which is not detected by the wheel odometry. The headland buffer area of the \(4^{th}\) crop row had such uneven terrain, which led to the failure of both left and right turns originating from that crop row.
### _TSM based Re-Entry Validation_
The median error for the \(D\to E\) transitions which led to successful re-entry was always below 30 cm. This indicates that the robot can execute a successful re-entry when it faces the next crop row with a perpendicular offset of 30 cm or below at state \(F\). An experiment was set up to validate this hypothesis, where the robot was placed facing the crop row at different angles within the headland buffer of a given crop row. The TSM algorithm was executed on the robot such that it would detect the crop row in front of it and gradually drive the robot into that crop row. The path of the robot was recorded in GNSS coordinates, as plotted in Figure 6. All the recorded trajectories successfully entered the row in front, since the robot was initiated in the headland buffer facing towards the general direction of the crop row in front of it.
The re-entry failures in Figure 5 can be explained by the main findings of this experiment. There are two key factors governing the success of a re-entry into the next crop row during the \(F\to G\) transition. The first requirement is that the robot must be positioned within the headland buffer of the crop row it intends to enter. If the perpendicular offset between the crop row and the robot position at state \(F\) is beyond 30 cm, a re-entry failure occurs. The second factor is that the robot must be oriented towards the general direction of the crop row it intends to enter. The maximum deviation of the robot heading from the crop row was \(26^{\circ}\) in the experiment illustrated in Figure 6. Although the errors in the individual state transitions of the row switching manoeuvre are minimal, the combined outcome of all transitions will not lead to success when these two requirements are not met at state \(F\).
## V Conclusion and Future Work
The proposed row-switching manoeuvre can navigate the robot from one crop row to another without needing RTK-GNSS sensors or multiple cameras, using only a single front-mounted camera on the robot. The individual steps of the row-switching manoeuvre demonstrated excellent results within the context of each state transition and its functionality. The vision-based \(A\to B\) transition was the transition with the highest errors in the proposed manoeuvre. The rotational transitions of the manoeuvre yielded smaller percentage errors relative to the translational transitions. The success rate of the conducted row switching experiment was 55.5%, while the re-entry experiment yielded a 100% success rate. The row switching manoeuvre is also crop agnostic, since it does not rely on plant-specific visual features.
Two main unexpected behaviour patterns in the row exit and U-turn steps led to the failure of row switching: a large heading error at state \(A\) and translational motion during the rotational state transitions. These two shortcomings of the proposed row switching manoeuvre could be corrected by introducing a heading correction step at state \(A\), and by inertial measurement unit (IMU) based sensor fusion to track the unintended translational motion during the rotational transitions and correct the intended \(\overline{DE}\) distance.
|
2309.17320 | Development of a Deep Learning Method to Identify Acute Ischemic Stroke
Lesions on Brain CT | Computed Tomography (CT) is commonly used to image acute ischemic stroke
(AIS) patients, but its interpretation by radiologists is time-consuming and
subject to inter-observer variability. Deep learning (DL) techniques can
provide automated CT brain scan assessment, but usually require annotated
images. Aiming to develop a DL method for AIS using labelled but not annotated
CT brain scans from patients with AIS, we designed a convolutional neural
network-based DL algorithm using routinely-collected CT brain scans from the
Third International Stroke Trial (IST-3), which were not acquired using strict
research protocols. The DL model aimed to detect AIS lesions and classify the
side of the brain affected. We explored the impact of AIS lesion features,
background brain appearances, and timing on DL performance. From 5772 unique CT
scans of 2347 AIS patients (median age 82), 54% had visible AIS lesions
according to expert labelling. Our best-performing DL method achieved 72%
accuracy for lesion presence and side. Lesions that were larger (80% accuracy)
or multiple (87% accuracy for two lesions, 100% for three or more), were better
detected. Follow-up scans had 76% accuracy, while baseline scans 67% accuracy.
Chronic brain conditions reduced accuracy, particularly non-stroke lesions and
old stroke lesions (32% and 31% error rates respectively). DL methods can be
designed for AIS lesion detection on CT using the vast quantities of
routinely-collected CT brain scan data. Ultimately, this should lead to more
robust and widely-applicable methods. | Alessandro Fontanella, Wenwen Li, Grant Mair, Antreas Antoniou, Eleanor Platt, Paul Armitage, Emanuele Trucco, Joanna Wardlaw, Amos Storkey | 2023-09-29T15:28:16Z | http://arxiv.org/abs/2309.17320v1 | # Development of a Deep Learning Method to Identify Acute Ischemic Stroke Lesions on Brain CT
###### Abstract
Computed Tomography (CT) is commonly used to image acute ischemic stroke (AIS) patients, but its interpretation by radiologists is time-consuming and subject to inter-observer variability. Deep learning (DL) techniques can provide automated CT brain scan assessment, but usually require annotated images. Aiming to develop a DL method for AIS using labelled but not annotated CT brain scans from patients with AIS, we designed a convolutional neural network-based DL algorithm using routinely-collected CT brain scans from the Third International Stroke Trial (IST-3), which were not acquired using strict research protocols. The DL model aimed to detect AIS lesions and classify the side of the brain affected. We explored the impact of AIS lesion features, background brain appearances, and timing on DL performance. From 5772 unique CT scans of 2347 AIS patients (median age 82), 54% had visible AIS lesions according to expert labelling. Our best-performing DL method achieved 72% accuracy for lesion presence and side. Lesions that were larger (80% accuracy) or multiple (87% accuracy for two lesions, 100% for three or more), were better detected. Follow-up scans had 76% accuracy, while baseline scans 67% accuracy. Chronic brain conditions reduced accuracy, particularly non-stroke lesions and old stroke lesions (32% and 31% error rates respectively). DL methods can be designed for AIS lesion detection on CT using the vast quantities of routinely-collected CT brain scan data. Ultimately, this should lead to more robust and widely-applicable methods.
## 1 Introduction
Ischemic stroke occurs when blood flow is reduced in one of the arteries supplying the brain (Dirnagl et al., 1999) due to embolus or local thrombosis. Acute ischemic stroke is characterized by a sudden onset of neurological symptoms (Lees et al., 2000). Non-contrast-enhanced computed tomography (CT) is the most commonly used imaging modality for stroke assessment (Wintermark et al., 2015). While other imaging modalities, like MRI, can refine treatment decisions, CT is widely used due to its availability and speed (Mikhail et al., 2020). Stroke detection and accurate diagnosis are critical for successful treatment (Wardlaw et al., 2010), but depend on the reviewing clinicians' experience (e.g. stroke clinician versus radiologist) and scan timing (ischemic lesions become more visible with time). Computer-aided diagnosis can reduce delays and increase treatment success (Taylor et al., 2018). However, current techniques are still in development. While there are commercially available systems that predict features or representative scores of a CT scan, such as the Alberta Stroke Program Early CT Score (ASPECTS) (Nagel et al., 2017), to the best of our knowledge these systems were developed using annotated images which (due to the effort required to produce these annotations) necessarily limits the size (and representativeness) of the imaging dataset used for development.
In this study, we aim to develop a deep learning (DL) method for acute ischemic stroke lesion diagnosis using a large dataset of routinely-collected brain CT scans from an international multicentre clinical trial where expert readers have labelled the scans for lesion location and extent, without annotations. We also explore the interpretability of our model, the impact of different infarct sizes and background conditions on its performance, and quantify its agreement with the assessment of expert radiologists.
## 2 Methods
### Data Source and expert labelling of imaging data
We utilized CT data from the Third International Stroke Trial (IST-3) (Group et al., 2015, 2012), which was a randomised-controlled trial testing intravenous alteplase for
patients suffering from acute ischemic stroke (AIS). The study recruited a total of 3035 patients, and baseline CT brain imaging was acquired within 6 hours of stroke onset, followed by a 24-48-hour follow-up CT for most patients. All patients recruited in IST-3 were screened by experts using all available data, including imaging, to confirm the presence of genuine ischemic stroke and to exclude hemorrhage or stroke mimics.
The IST-3 imaging dataset consists of raw CT data in DICOM (Digital Imaging and Communications in Medicine) format, which were obtained from 156 different hospitals worldwide. The recruiting hospitals were instructed to submit all relevant imaging for each patient acquired according to their own stroke imaging protocols, with only minimal basic requirements imposed by the trial (for non-enhanced CT, the whole brain should be imaged, and the image slices should have a maximum thickness of 10 mm in the axial plane). Therefore, the IST-3 CT dataset is similar to the imaging acquired during routine clinical care.
All brain scans were centrally assessed by a single expert drawn from a panel of 10, and who had undergone prior assessment for consistency (inter-rater agreement greater than kappa 0.7 (Group et al., 2015)). The experts were masked to all other data except whether scans were acquired at baseline or follow-up. They provided labelling for a range of acute and chronic brain changes related to stroke, including acute ischemic brain lesions (Wardlaw & Sellar, 1994; Barber et al., 2000), acute arterial obstruction (on non-enhanced CT, presence of a hyperattenuating artery (Mair et al., 2015)), and at follow-up acute haemorrhage, all quantified by location and extent (1-4 with 1 being smallest, 4 the largest) using clinically validated methods. The expert imaging assessment included the identification and labelling of acute ischemic brain lesions, which can occur anywhere in the brain. In particular, AIS lesions were divided into seven categories based on global brain anatomy, arterial blood supply, and lesion type: major arterial territories of cerebral hemispheres (3 categories - anterior, middle and posterior cerebral - ACA, MCA and PCA respectively), cerebral border zones (1 category), posterior circulation (2 categories), and lacunar (1 category). The experts also assessed and labeled scans for chronic brain changes (Van Swieten et al., 1990), such as atrophy, leukoaraiosis, old stroke lesions, and
Figure 1: An example of a post-processed and standardised full-brain CT scan (a), right and left sides of the same scan (b).
other benign incidental abnormalities, which may impact the expert or DL assessment of the imaging.
### Pre-processing of CT scans
We previously developed a pipeline to clean and pre-process clinical CT data for deep learning development (Fontanella et al., 2023b). The pipeline included several preprocessing steps such as identifying axial images, converting DICOM data to NIFTI format (Neuroimaging Informatics Technology Initiative, 2021), removing localizers and poor quality scans, cropping redundant space, and normalizing image brightness. To account for varying slice numbers, a uniform sampling approach was applied, selecting 11 slices from each scan. The processed scans were standardized to the dimensions of \(500\times 400\times 11\) (height, width, and slices). A visual representation of a sample CT scan after this processing is provided in Figure 1(a).
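A simplified sketch of this slice sampling and standardisation step is shown below. The brain window and the centre-crop/zero-pad used here are assumptions made for illustration; the exact settings are those of the published pipeline (Fontanella et al., 2023b).

```python
import numpy as np

def preprocess_ct(volume, n_slices=11, out_hw=(500, 400), window=(0, 80)):
    """volume: (H, W, S) array of Hounsfield units for one axial CT series.
    Returns a (out_h, out_w, n_slices) float array in [0, 1].
    The 0-80 HU brain window and the simple crop/pad are assumptions."""
    h, w, s = volume.shape
    idx = np.linspace(0, s - 1, n_slices).round().astype(int)   # uniform slice sampling
    vol = volume[:, :, idx].astype(np.float32)
    lo, hi = window
    vol = np.clip((vol - lo) / (hi - lo), 0.0, 1.0)             # normalise brightness
    out = np.zeros((*out_hw, n_slices), dtype=np.float32)
    ch, cw = min(h, out_hw[0]), min(w, out_hw[1])
    out[:ch, :cw, :] = vol[:ch, :cw, :]                          # standardise to 500x400
    return out
```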
### Deep learning method
Our goal was to classify CT brain scans as either having an AIS lesion (positive) or not (negative) and, if positive, to predict which side of the brain is affected (left, right, or both). To study the impact of lesion location in the accuracy of the model, we also compared the performance of our method across different regions of the brain.
To achieve this, we employed PyTorch to design a deep learning method using a multi-task learning (MTL) convolutional neural network (CNN) with two heads and seven convolutional layers. We divided our dataset into training, validation, and test sets using a 70-15-15 split, with all the scans of each patient appearing in only one set.
We trained the algorithm to learn acute lesion features from each side of the brain separately. To accomplish this, we split all scans into two halves at the sagittal midline, creating half-brain inputs (Figure 1(b)). We then concatenated the extracted features from each side into a full-brain lesion
Figure 2: Multitask deep learning method logic (a), half brain CNN model architecture (b), and multi-task learning architecture (c). FC indicates fully connected layers.
feature vector, which was used by a multi-task classifier to predict lesion presence (Task 1) and, if positive, the side of the brain affected (Task 2). The logic of our MTL architecture is depicted in Figure 2(a).
In the first stage of training, to help prevent confounders, our model takes half brain inputs and is solely trained to classify if a lesion is present or absent. Each layer of the CNN performs 2D convolution, batch normalization, and average pooling on each slice. At the end of the seventh layer, we average each feature map across all 11 slices. The architecture of the 7-layer CNN model is illustrated in Figure 2(b).
In the second stage, we add a classifier with two heads, each comprising one fully connected layer and one output layer for the corresponding task. The complete architecture of our method is shown in Figure 2(c). In particular, we first trained the half-brain model on its own and then fine-tuned the whole architecture. The models were trained using eight NVIDIA GeForce RTX 2080 Ti GPUs. The hyper-parameters employed are listed in the Appendix, Table 4.
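A PyTorch sketch of the half-brain CNN and the two-headed classifier is given below; the channel widths, hidden sizes and pooling choices are assumptions for illustration (the actual hyper-parameters are those listed in the Appendix, Table 4).

```python
import torch
import torch.nn as nn

class HalfBrainCNN(nn.Module):
    """Seven 2D conv blocks applied per slice; features averaged over the 11 slices."""
    def __init__(self, channels=(16, 32, 32, 64, 64, 128, 128)):
        super().__init__()
        layers, in_ch = [], 1
        for out_ch in channels:
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1),
                       nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
                       nn.AvgPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):                 # x: (B, slices, H, W) for one half brain
        b, s, h, w = x.shape
        f = self.features(x.reshape(b * s, 1, h, w))
        f = self.pool(f).reshape(b, s, -1)
        return f.mean(dim=1)              # average features across the 11 slices

class MultiTaskHead(nn.Module):
    """Task 1: lesion present/absent. Task 2: side of lesion (left/right/both)."""
    def __init__(self, feat_dim=128, hidden=128):
        super().__init__()
        self.presence = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 2))
        self.side = nn.Sequential(nn.Linear(2 * feat_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 3))

    def forward(self, left_feat, right_feat):
        full = torch.cat([left_feat, right_feat], dim=1)   # full-brain feature vector
        return self.presence(full), self.side(full)
```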
### Agreement between DL Classification and Expert Readings
The accuracy and reliability of CT scan labelling can be influenced by the quality of the data and the experience of the clinicians. A previous reliability study (Mair et al., 2015) compared the assessments of seven expert contributors for CT and concurrent CT angiography (CTA) scans from 15 patients. The study showed substantial agreement between experts, as measured by the Krippendorff's-alpha (K-alpha) coefficient with bootstrapping.
Figure 3: Image with a clear lesion in the right MCA region (a) and corresponding saliency maps highlighting the lesion (b). In (c), the lesion in the left MCA region is less clear and therefore the model is less certain about the lesion location, as shown by the corresponding saliency maps in (d). For the saliency maps, the voxels in the 99th percentile are displayed.
To assess the agreement between our DL algorithm and the expert readings, we used 14 of the same 15 patient scans. One scan was excluded due to comprising two image sets, one through the skull base and one through the skull vault. To ensure fairness, we withheld the CT scans of these 14 patients from the training and validation datasets used to develop our DL method.
### Model interpretability and explanation
To gain insights into the factors driving the predictions of our DL model, we employed counterfactuals, a method for generating explanations for model outputs. Counterfactual explanations identify how an input image should be modified to produce a different prediction, enabling us to identify the most important features in the image for the classification outcome (Fontanella et al., 2023a). To accomplish this, we employed the method described by Cohen et al. (2021), later referred to as "gifsplanation".
In particular, we considered an image with a stroke lesion and reduced the predicted probability of a lesion to less than 0.01. By considering the difference between the original image and the counterfactual image, we obtained an attribution map of the most salient regions. Intuitively, the voxels that are most affected by the class change are the ones encoding more class-specific information and are therefore relevant for lesion detection. Examples are shown in Figure 3.
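A simplified sketch of this procedure is given below. Note that gifsplanation perturbs an autoencoder latent representation, whereas for brevity this sketch perturbs the input image directly; the step size and iteration budget are assumptions.

```python
import torch

def counterfactual_saliency(model, image, target_prob=0.01, step=0.05, max_iters=200):
    """image: (1, slices, H, W) tensor. model(image) -> logits, index 1 = lesion.
    Returns a saliency map keeping only the 99th-percentile voxels."""
    x = image.clone().requires_grad_(True)
    for _ in range(max_iters):
        prob = torch.softmax(model(x), dim=1)[0, 1]
        if prob.item() < target_prob:
            break
        prob.backward()
        with torch.no_grad():
            x -= step * x.grad.sign()      # push the input towards "no lesion"
        x.grad.zero_()
    diff = (image - x.detach()).abs()      # attribution: original minus counterfactual
    thr = torch.quantile(diff.flatten(), 0.99)
    return (diff >= thr).float() * diff    # keep the 99th-percentile voxels
```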
## 3 Results
### Data
A total of 5772 CT scans were included in the study, obtained from 2347 patients, with 1243 females and 1104 males. The median age of the patients was 82 years, with a lower quartile of 74 years and an upper quartile of 86 years. After excluding the 14 patients reserved for assessing algorithm-expert agreement, 5730 unique scans from 2333 patients were used in subsequent analyses.
Figure 4: Lesion location (a) and chronic conditions (b) distribution on the processed IST-3 dataset.
The dataset was split into three sets: 4031 scans from 1633 patients for training, 844 scans from 350 patients for validation, and 855 scans from 350 patients for testing. Of the 5772 total CT scans, approximately 54% (3102 scans) were positive for an AIS lesion according to experts. Of the positive scans, 54% (1667 scans) showed lesions on the left side of the brain, 45% (1386 scans) showed lesions on the right side, and the remaining (49 scans) showed lesions on both sides of the brain. However, the distribution of lesion locations was uneven, as shown in Figure 4(a). In addition, 5274 scans were labeled with background or chronic brain conditions, with the distribution of these conditions shown in Figure 4(b).
### Model selection
On the validation dataset, we investigated the optimal number of convolutional layers to employ in our model. Figure 5 displays the accuracy obtained with an increasing number of layers, which demonstrates an initial performance improvement followed by a plateau after six layers. Therefore, we determined that utilizing seven convolutional layers provides a favorable trade-off between performance and computational resources.
We also compare the performance of our architecture with a model directly trained on full brain scans. The latter achieves a validation accuracy of 71%, significantly inferior to our proposed approach (76%).
### Overall accuracy, precision, specificity of the DL model
The overall accuracy, precision, and specificity of the DL model were evaluated using 855 test scans, including 416 baseline scans and 439 follow-up scans. The model achieved an accuracy of 72% for classifying a given full brain CT scan into one of four classes: left-side brain lesion, right-side brain lesion, bilateral lesions, or no lesion. Notably, the accuracy (76%) on follow-up scans was significantly higher than the accuracy on baseline scans (67%).
For Task 1, which involves classifying an image as positive or negative for a lesion, the model achieved an accuracy of 75%. For Task 2, which involves classifying the side of the lesion for scans classified as positive in Task 1, the model achieved an accuracy of 91%.
On the entire test set, the model demonstrated higher specificity (80%) than sensitivity (70%). The sensitivity on follow-up scans was 78%, while that on baseline scans was 56%. The specificity of follow-up scans was 83%, compared to 79% on baseline scans.
### Accuracy by lesion location
Accuracy within brain regions was evaluated on 409 scans (out of the 416 positive ones) from the test dataset, which included both lesion side and location labels. Of the 409 images, 148 were baseline and 261 were follow-up scans. Our algorithm demonstrated high accuracy for lesions in the ACA region (21/28, 75%), followed by the MCA region (248/363, 68%) and PCA region (18/34, 53%). However, it
Figure 5: Validation accuracy by number of convolutional layers.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & MCA & ACA & PCA & Lacunar & Border zone & Cerebellar & Brain stem \\ \hline Baseline test scans (148) & 135 & 5 & 9 & 4 & 2 & 4 & 0 \\ Correct classification & 71 (53\%) & 3 (60\%) & 2 (22\%) & 2 (50\%) & 1 (50\%) & 0 (0\%) & N/A \\ \hline Follow-up test scans (261) & 228 & 23 & 25 & 11 & 5 & 5 & 5 \\ Correct classification & 177 (78\%) & 18 (78\%) & 16 (64\%) & 3 (27\%) & 5 (100\%) & 3 (60\%) & 1 (20\%) \\ \hline All test scans (409) & 363 & 28 & 34 & 15 & 7 & 9 & 5 \\ Correct classification & 248 (68\%) & 21 (75\%) & 18 (53\%) & 5 (33\%) & 6 (86\%) & 3 (33\%) & 1 (20\%) \\ \hline \hline \end{tabular} (a)
\begin{tabular}{l l c} \hline \hline & Region(s) affected & Accuracy \\ \hline 1 Lesion & Only MCA & 216/327 (66\%) \\ & Only ACA & 2/7 (29\%) \\ & Only PCA & 4/14 (29\%) \\ & Only Lacunar & 2/8 (25\%) \\ & Only Cerebellar & 2/7 (29\%) \\ & Only Brainstem & 0/4 (0\%) \\ \hline 2 Lesions & MCA+ACA & 15/17 (88\%) \\ & MCA+PCA & 9/11 (82\%) \\ & MCA+Border zone & 2/2 (100\%) \\ \hline 3 Lesions & MCA+ACA+PCA & 1/1 (100\%) \\ & MCA+ACA+Lacunar & 1/1 (100\%) \\ & MCA+PCA+Border zone & 1/1 (100\%) \\ \hline 4 Lesions & MCA+ACA+Lacunar+Border zone & 1/1 (100\%) \\ \hline 5 Lesions & MCA+ACA+PCA+Border zone+Brainstem & 1/1 (100\%) \\ \hline \hline \end{tabular} (b)
\begin{tabular}{l c c c} \hline \hline & Size 0 & Size 1-2 & Size 3-4 \\ \hline Baseline test scans (392) & 244 & 77 & 71 \\ Correct classification & 191 (78\%) & 29 (38\%) & 45 (63\%) \\ \hline Follow-up test scans (327) & 105 & 117 & 105 \\ Correct classification & 89 (85\%) & 65 (56\%) & 95 (90\%) \\ \hline All test scans (719) & 349 & 194 & 176 \\ Correct classification & 280 (80\%) & 95 (49\%) & 140 (80\%) \\ \hline \hline \end{tabular} (c)
\begin{tabular}{l c c c c} \hline \hline & Atrophy & Leukoaraiosis & Old stroke lesion & Non-stroke lesion \\ \hline Scans with other brain conditions (779) & 582 & 398 & 353 & 50 \\ Wrong classification & 164 (28\%) & 102 (26\%) & 111 (31\%) & 16 (32\%) \\ \hline \hline \end{tabular} (d)
\end{table}
Table 1: Accuracy by lesion location (a), number of lesions (b), infarct size (c) and background conditions (d) on the test set. Only scans with the necessary annotation were included in each table. As expected, the algorithm performs better when multiple or bigger lesions are present. Old stroke lesions and non-stroke lesions affect classification accuracy the most.
had lower accuracy for brain stem (1/5, 20%), lacunar (3/9, 33%), and cerebellar (5/15, 33%) lesions (see Table 1(a)). It should be noted that these types of lesions were extremely rare in the dataset, which hindered the generalisation capabilities of our model.
Some patients have multiple lesions affecting different regions. The accuracy of our model increased with an increasing number of lesions, as shown in Table 1(b). On average, scans with only one lesion had a classification accuracy of 62%, scans with two lesions had an accuracy of 87%, while scans with more than two lesions had 100% accuracy.
### Different infarct sizes and background conditions
The accuracy of our algorithm varies across different infarct sizes. The scans with the largest infarct size (3 and 4) and those with no infarct showed the highest accuracy (80%). The scans with infarct sizes 1 and 2 (small and very small lesions) are more difficult to classify, resulting in an accuracy of only 49%. We observed a higher accuracy in classifying AIS in follow-up scans compared to baseline scans, across scans with different lesion sizes (see Table 1(c)).
In addition, we found that 779 out of 855 test scans had background brain conditions. Among these scans, non-stroke lesions and old stroke lesions had the worst error rates, at 32% and 31% respectively, followed by atrophy (28%) and leukoaraiosis (26%) (Table 1(d)).
### Reliability compared to human experts
To evaluate the agreement between our model and expert readings, we compared the classifications of our algorithm with those of seven human experts on the same 14 scans. We calculated the k-alpha value of our algorithm's classification compared to each expert's reading and found an average value of 0.41, which is lower than the general k-alpha among the experts (0.72) (see Table 2). However, as depicted in Table 3, there were instances involving two scans (patients 7 and 12) where the consensus among experts diverged from the label present in our dataset; this label is regarded as the ground truth by our algorithm and was consequently matched by its predictions. Moreover, the expert agreement data we used was based on an assessment of both CT and corresponding CT angiography (CTA) data for each patient, whereas our DL method only utilised the CT images. Indeed, using data from another study 5, we also computed the K-alpha value from 8 experts each rating the same CT scans (without having access to CTA images). The K-alpha value in this analysis was lower than the one obtained when utilising both CT and CTA data: 0.51, with 95% CI of \([0.46,0.57]\).
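For reference, pairwise K-alpha values of this kind can be computed with the open-source `krippendorff` Python package; the label coding below is an assumption and the snippet is illustrative rather than the exact analysis script.

```python
import numpy as np
import krippendorff  # pip install krippendorff

CODE = {"N": 0, "L": 1, "R": 2, "B": 3}   # assumed coding of the four labels

def pairwise_alpha(model_labels, expert_labels):
    """K-alpha between the model and one expert over the same set of scans."""
    data = np.array([[CODE[x] for x in model_labels],
                     [CODE[x] for x in expert_labels]], dtype=float)
    return krippendorff.alpha(reliability_data=data,
                              level_of_measurement="nominal")

# Averaging pairwise_alpha(model, expert_i) over the seven experts corresponds to
# the 0.41 figure in Table 2; alpha over all seven expert rows corresponds to 0.72.
```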
### Saliency maps evaluation
Sample saliency maps are shown in Figure 3 for scans with lesions in the MCA region of the brain. For scans with a lesion that is easily distinguishable, the saliency maps usually highlight the relevant brain areas (Figure 3 (a), (b)). In cases where the lesions are less clear, the areas highlighted by the saliency maps are more scattered, a sign that the model is less certain about the lesion location, while nevertheless usually still highlighting the correct region (Figure 3 (c), (d)). A quantitative evaluation of the saliency maps is presented in Appendix B.
### Discussion
In this study, we developed a multitask deep learning algorithm capable of detecting AIS lesions of any type and in any brain location, using 5772 CT brain scans collected from stroke patients, and labelled but not annotated for lesion location/extent. Our best-performing method achieved an accuracy of 72% in correctly detecting ischemic lesions. We found that our algorithm performed better on follow-up scans compared to baseline scans, which is consistent with human performance where lesions become more visible with time. Our algorithm showed higher specificity than sensitivity, indicating that it may be better at screening true negative cases than identifying true positive ones.
We also investigated the impact of lesion location, lesion type, lesion size, and background brain changes on the performance of our DL system. However, training a DL model requires a large number of examples (Fontanella et al., 2020; Andreeva et al., 2020). In our study, the distribution and type of AIS lesions commonly encountered were highly skewed, with most cases showing lesions caused by large-medium vessel occlusion affecting the MCA territory of the brain. As a result, our algorithm was less successful in detecting less frequently occurring lesions such as brain stem lesions, lacunar lesions, and cerebellar lesions, which had fewer example cases. Furthermore, some AIS lesions are much smaller than others, affecting the performance of our model.
We also analyzed four types of background brain changes
\begin{table}
\begin{tabular}{l c} \hline \hline \multicolumn{2}{c}{K-alpha of our algorithm vs each expert} \\ \hline Expert 1 & 0.2646 \\ Expert 2 & 0.5574 \\ Expert 3 & 0.2895 \\ Expert 4 & 0.3672 \\ Expert 5 & 0.4622 \\ Expert 6 & 0.4622 \\ Expert 7 & 0.4622 \\ \hline Average & 0.4093 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average K-alpha values of our algorithm against each expert.
and found that our DL system had the highest classification error for scans with old stroke lesions and scans with other lesion types not related to stroke. However, a balanced dataset where each feature is represented equally would be required to determine the importance of DL system confounding by specific acute lesions or background brain changes. Further studies in the future are needed to address this issue.
The average agreement between our algorithm and seven experts was relatively low compared to the agreement among the seven experts. There are likely multiple reasons for this. Firstly, ground truth is not always obtainable in medical imaging, and our analysis was based on a clinical gold standard reference that was qualitatively assessed by a single expert, which is known to be imperfect and influenced heavily by clinician experience. In other words, our DL system learned from the best available data, but the data were imperfect. Secondly, the expert agreement data we used included both CT and corresponding CT angiography (CTA) data for each patient, while our DL method only utilized the CT images. The addition of CTA makes it more likely for our experts to reach the correct answer (and thus agree) for each scan. In fact, using data from a separate analysis, we observed lower agreement among experts when only CT images were provided, which was more similar to our expert-DL agreement.
Interpretability of deep learning models, particularly in the context of medical imaging, is a challenging topic due to the so-called "black box" nature of these models. However, understanding how these models arrive at their decisions is critical for ensuring their reliability and detecting any potential biases (Kim et al., 2018). To address this issue, we employed counterfactual explanations and generated saliency maps that highlight the most relevant parts of the images for our model's output. Our saliency maps showed that our DL algorithm was able to detect obvious AIS lesions with high accuracy, while also indicating that the model was less certain about the location of more subtle lesions and may highlight regions outside the true lesion. This behaviour is consistent with that of humans.
Other authors employed a two-stage network to combine local and global information for stroke detection (Wu et al., 2021), obtaining 87% accuracy. However, in addition to CT scans they also employed DWI images, and their dataset comprised only 277 patients. Mirajkar et al. (2015) also used a combination of CT and DWI images for the segmentation of stroke lesions. However, our study focuses solely on CT scans and involves a larger-scale investigation to establish a benchmark for this imaging modality. By doing so, we aim to provide valuable insights for the development and optimization of future stroke detection algorithms based on CT imaging.
A limitation of our study is that culprit AIS lesions may not be visible on CT scans, especially at baseline. This could lead to incorrect labelling of scans. Using healthy controls would have been an option, but it is not ethical to scan truly normal individuals with CT due to the associated radiation and other individuals with 'normal' CTs acquired for other reasons may include confounding features. The second limitation is that subgroup analyses exploring the impact of lesion location, lesion number, and other chronic features suffer from small numbers of cases in many of the categories.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & Exp. 1 & Exp. 2 & Exp. 3 & Exp. 4 & Exp. 5 & Exp. 6 & Exp. 7 & Exp. consensus & IST-3 label & Our model \\ \hline Patient 1 & L & L & L & L & L & L & L & L & L & L \\ Patient 2 & N & N & L & N & N & R & N & N & N & N \\ Patient 3 & L & L & L & L & L & L & L & L & L & L \\ Patient 4 & R & R & R & R & R & R & R & R & R \\ Patient 5 & L & L & L & L & L & L & L & L & L \\ Patient 6 & L & L & R & L & L & L & L & L & L & N \\ Patient 7 & R & R & R & R & R & R & R & R & N & N \\ Patient 8 & L & N & N & R & N & N & N & N & N & N \\ Patient 9 & N & N & N & N & N & N & N & N & N & N \\ Patient 10 & L & L & L & L & L & N & L & L & L & N \\ Patient 11 & R & R & R & R & R & R & R & R & R \\ Patient 12 & R & N & R & R & R & R & R & N & N \\ Patient 13 & R & R & B & R & R & R & R & R & R & N \\ Patient 14 & L & N & L & N & N & N & N & N & N & N \\ \hline \hline \end{tabular}
\end{table}
Table 3: Detailed comparison between our algorithm and the 7 experts on the 14 hold-out patients' CT images. For patients 7 and 12, the consensus agreement of the experts was different from the clinical gold standard in our dataset, which was matched by our method.
## 4 Conclusion
Our deep learning algorithm achieved an accuracy of 72% in detecting the presence of ischemic lesions in CT brain scans of patients with stroke symptoms and identifying the location of the lesion(s) on the left or right side of the brain (or both). Our algorithm performed best on follow-up scans where the lesions were more visible. We found that different lesion types, sizes, and chronic brain conditions affected the performance of our system. The deep learning visualisation methodology we used provided further evidence of the difficulty in detecting subtle ischemic brain lesions. Our results demonstrate the potential of deep learning algorithms for detecting AIS lesions on CT using a large number of routine-collected scans. This approach has the potential to develop deep learning systems from vast numbers of scans, not just those collected for research (as is currently the norm). Such algorithms would much better represent real-life patients with all their natural heterogeneity and ultimately, provide more accurate image interpretation for all patients with acute ischemic stroke.
## Acknowledgements
Early project development was funded by the Royal College of Radiologists' 2018 Pump Priming Grant and UK Dementia Research Institute. GM is the Stroke Association Edith Murphy Foundation Senior Clinical Lecturer (SA L-SMP 18/1000). JMW is partially funded by the UK DRI. AF is supported by the United Kingdom Research and Innovation (grant EP/S02431X/1), UKRI Centre for Doctoral Training in Biomedical AI at the University of Edinburgh, School of Informatics. The funders of this study had no role in the study design, data collection, data analysis, data interpretation, or writing of the report.
|
2309.11751 | How Robust is Google's Bard to Adversarial Image Attacks? | Multimodal Large Language Models (MLLMs) that integrate text and other
modalities (especially vision) have achieved unprecedented performance in
various multimodal tasks. However, due to the unsolved adversarial robustness
problem of vision models, MLLMs can have more severe safety and security risks
by introducing the vision inputs. In this work, we study the adversarial
robustness of Google's Bard, a competitive chatbot to ChatGPT that released its
multimodal capability recently, to better understand the vulnerabilities of
commercial MLLMs. By attacking white-box surrogate vision encoders or MLLMs,
the generated adversarial examples can mislead Bard to output wrong image
descriptions with a 22% success rate based solely on the transferability. We
show that the adversarial examples can also attack other MLLMs, e.g., a 26%
attack success rate against Bing Chat and a 86% attack success rate against
ERNIE bot. Moreover, we identify two defense mechanisms of Bard, including face
detection and toxicity detection of images. We design corresponding attacks to
evade these defenses, demonstrating that the current defenses of Bard are also
vulnerable. We hope this work can deepen our understanding on the robustness of
MLLMs and facilitate future research on defenses. Our code is available at
https://github.com/thu-ml/Attack-Bard.
Update: GPT-4V is available at October 2023. We further evaluate its
robustness under the same set of adversarial examples, achieving a 45% attack
success rate. | Yinpeng Dong, Huanran Chen, Jiawei Chen, Zhengwei Fang, Xiao Yang, Yichi Zhang, Yu Tian, Hang Su, Jun Zhu | 2023-09-21T03:24:30Z | http://arxiv.org/abs/2309.11751v2 | # How Robust is Google's Bard to Adversarial Image Attacks?
###### Abstract
Multimodal Large Language Models (MLLMs) that integrate text and other modalities (especially vision) have achieved unprecedented performance in various multimodal tasks. However, due to the unsolved adversarial robustness problem of vision models, MLLMs can have more severe safety and security risks by introducing the vision inputs. In this work, we study the adversarial robustness of Google's Bard, a competitive chatbot to ChatGPT that released its multimodal capability recently, to better understand the vulnerabilities of commercial MLLMs. By attacking white-box surrogate vision encoders or MLLMs, the generated adversarial examples can mislead Bard to output wrong image descriptions with a 22% success rate based solely on the transferability. We show that the adversarial examples can also attack other MLLMs, e.g., a 26% attack success rate against Bing Chat and a 86% attack success rate against ERNIE bot. Moreover, we identify two defense mechanisms of Bard, including face detection and toxicity detection of images. We design corresponding attacks to evade these defenses, demonstrating that the current defenses of Bard are also vulnerable. We hope this work can deepen our understanding on the robustness of MLLMs and facilitate future research on defenses. Our code is available at [https://github.com/thu-ml/Attack-Bard](https://github.com/thu-ml/Attack-Bard).
**Update:** GPT-4V became available in October 2023. We further evaluate its robustness under the same set of adversarial examples, achieving a 45% attack success rate.
## 1 Introduction
The recent progress of Large Language Models (LLMs) [2; 6; 10; 36; 38; 43; 50; 51] has demonstrated unprecedented levels of proficiency in language understanding, reasoning, and generation. Leveraging the powerful LLMs, numerous studies [1; 11; 27; 29; 62] have attempted to seamlessly integrate visual inputs into LLMs. They often employ pre-trained vision encoders (e.g., CLIP [42]) to extract image features and then align image and language embeddings. These Multimodal Large Language Models (MLLMs) have demonstrated impressive abilities in vision-related tasks, such as image description, visual reasoning, etc. Recently, Google's Bard [19] released its multimodal capability, which allows users to submit prompts containing both images and text, demonstrating superior performance over open-source MLLMs [46].
Despite these commendable achievements, the security and safety problems associated with these large-scale foundation models remain unresolved and pose a significant challenge [5; 16; 40; 52; 54; 63; 64]. These problems can be amplified for MLLMs, as the integration of vision inputs introduces a compelling attack surface due to the continuous and high-dimensional nature of images [7; 41]. It is a well-established fact that vision models are inherently susceptible to small adversarial perturbations [18; 48]. The adversarial vulnerability of vision encoders can be inherited by MLLMs, resulting in security and safety risks in practical applications of large models.
Some recent studies have explored the robustness of MLLMs to adversarial image attacks [4; 7; 41; 44; 61]. However, these works mainly focus on open-source MLLMs (e.g., MiniGPT4 [62]), leaving the robustness of commercial MLLMs (e.g., Bard) unexplored. It would be more challenging to attack commercial MLLMs because they are black-box models with unknown model configurations and training datasets, they have many more parameters and significantly better performance, and they are equipped with elaborate defense mechanisms. A common way of performing black-box attacks is based on adversarial transferability [30; 39], i.e., adversarial examples generated for white-box models are likely to mislead black-box models. Although extensive efforts have been devoted to improving adversarial transferability, they mainly consider image classification models [13; 28; 55]. Due to the large difference between MLLMs and conventional classifiers, it is worth exploring effective strategies to fool commercial MLLMs, with the purpose of fully understanding the vulnerabilities of these prominent models.
In this paper, we study the adversarial robustness of Google's Bard [19] as a representative example of commercial MLLMs. Firstly, we consider adversarial attacks for the image description task, where we generate adversarial images to make Bard output incorrect descriptions. We adopt state-of-the-art transfer-based attacks [9; 31] to push the image embedding of the adversarial image away from that of the original image (i.e., image embedding attack) or to make the model return a target sentence (i.e., text description attack), based on several surrogate models. Our attack leads to a _22% success rate and 5% rejection rate against Bard_ with \(\epsilon=16/255\) under the \(\ell_{\infty}\) norm. We show that these adversarial images are highly transferable and can fool other MLLMs, including _GPT-4V [37] with a 45% attack success rate, Bing Chat [34] with a 26% attack success rate and 30% rejection rate, and ERNIE Bot [3] with an 86% attack success rate_. Secondly, we identify two defense mechanisms of Bard - face detection and
Figure 1: Adversarial attacks against Google's Bard. We consider attacks on image description and two defenses of Bard - face detection and toxicity detection.
toxicity detection of images, which are used to protect face privacy and avoid abuse. We perform corresponding attacks against these two defenses, demonstrating that they can be easily evaded by our methods. The results show that the current defenses of Bard are themselves not strong enough.
Given the vulnerabilities of Bard identified in our experiments under adversarial image attacks, we further discuss broader impacts on the practical use of MLLMs and suggest some potential solutions to improve their robustness. We hope this work can provide a deeper understanding of the weaknesses of MLLMs with respect to adversarial robustness under the completely black-box setting, and facilitate future research to develop more robust and trustworthy multimodal foundation models.
## 2 Related work
**Multimodal large language models.** The breakthrough of Large Language Models (LLMs) in language-oriented tasks and the emergence of GPT-4 motivate researchers to harness the powerful capabilities of LLMs to assist in various tasks across multimodal scenarios, further leading to the new realm of Multimodal Large Language Models (MLLMs) [58]. There have been different strategies and models to bridge the gap between text and other modalities. Some works [1; 27] leverage learnable queries to extract visual information and generate language using LLMs conditioned on the visual features. Models including MiniGPT-4 [62], LLaVA [29] and PandaGPT [47] learn simple projection layers to align the visual features from visual encoders with text embeddings for LLMs. Also, parameter-efficient fine-tuning is adopted by introducing lightweight trainable adapters into models [17; 32]. Several benchmarks [25; 57] have verified that MLLMs show satisfactory performance on visual perception and comprehension.
**Adversarial robustness of MLLMs.** Despite achieving impressive performance, MLLMs still face issues of adversarial robustness due to their architecture based on deep neural networks [48]. Several preliminary attempts have been made to study the robustness of MLLMs from different aspects. [44] evaluates the adversarial robustness of MLLMs on image captioning under white-box settings, while [61] conducts both transfer-based and query-based attacks on MLLMs assuming black-box access. [7; 41] trigger LLMs to generate toxic content by imposing adversarial perturbations on the input images. [4] studies image hijacks to achieve specific string, leak context, and jailbreak attacks. These exploratory works demonstrate that MLLMs still face stability and security issues under adversarial perturbations. However, they only consider popular open-source models and do not study commercial MLLMs (e.g., Bard [19]). Not only are their model and training configurations unknown, but they are also equipped with multiple auxiliary modules to enhance performance and ensure safety, making them more challenging to attack.
**Black-box adversarial attacks.** Black-box adversarial attacks can generally be categorized into query-based [12; 23] and transfer-based [13; 30] methods. Query-based methods require repeatedly invoking the victim model for gradient estimation, incurring higher costs. In contrast, transfer-based methods only need local surrogate models, leveraging the transferability of adversarial samples across models to carry out the attack. Some methods [13; 28; 53] improve the optimization process by correcting gradients, similar to techniques used in model training to enhance generalization. Besides, incorporating diversity into the optimization, e.g., by applying various transformations to the inputs, can also raise transferability [14; 28; 56]. Ensemble-based attacks are also effective, generating adversarial samples on a group of surrogate models [9; 13] or adjusting one model to simulate diverse models [22; 31].
## 3 Attack on image description
Google's Bard [19] is a representative MLLM that allows users to access its multimodal capability through its API. This work aims to identify the adversarial vulnerabilities of Bard to highlight the risks associated with it and the importance of designing more robust models in the future. Specifically, we evaluate the ability of Bard to describe image contents perturbed by imperceptible adversarial noise. We choose the image description task since it is one of the fundamental tasks of MLLMs and it allows us to avoid the influence of instruction-following ability on our evaluation. As the model will evolve over time, we performed all evaluations from September 10th to 15th, 2023, using the latest update of Bard from July 13th, 2023.
### Attack method
MLLMs usually first extract image embeddings using vision encoders and then generate corresponding text based on image embeddings. Thus, we propose two attacks for MLLMs - **image embedding attack** and **text description attack**. As their names indicate, image embedding attack makes the embedding of the adversarial image diverge from that of the original image, based on the fact that if adversarial examples can successfully disrupt the image embeddings of Bard, the generated text will inevitably be affected. On the other hand, text description attack targets the entire pipeline directly to make the generated description different from the correct one.
Formally, let \(\mathbf{x}_{nat}\) denote a natural image and \(\{f_{i}\}_{i=1}^{N}\) denote a set of surrogate image encoders. The image embedding attack can be formulated as solving
\[\max_{\mathbf{x}}\sum_{i=1}^{N}\|f_{i}(\mathbf{x})-f_{i}(\mathbf{x}_{nat})\|_{2}^{2},\quad \text{s.t. }\|\mathbf{x}-\mathbf{x}_{nat}\|_{\infty}\leq\epsilon, \tag{1}\]
where we maximize the distance between the image embeddings of the adversarial example \(\mathbf{x}\) and the natural example \(\mathbf{x}_{nat}\), while also ensuring that the \(\ell_{\infty}\) distance between \(\mathbf{x}\) and \(\mathbf{x}_{nat}\) is smaller than \(\epsilon\).
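To make the optimization in Eq. (1) concrete, the sketch below performs a plain \(\ell_{\infty}\)-projected gradient ascent on the summed embedding distances over an ensemble of surrogate encoders. It is only a minimal illustration: the `encoders` list, the preprocessing to \([0,1]\) images, and the sign-gradient update with step size `alpha` are our assumptions, and the actual attack in this work uses the stronger SSA-CWA optimizer described later in this subsection rather than this simple loop.

```python
import torch

def image_embedding_attack(x_nat, encoders, eps=16/255, alpha=1/255, steps=500):
    """Maximize sum_i ||f_i(x) - f_i(x_nat)||^2 under ||x - x_nat||_inf <= eps
    (Eq. (1)), using a simple projected sign-gradient ascent."""
    with torch.no_grad():
        targets = [f(x_nat) for f in encoders]        # clean embeddings f_i(x_nat)
    x_adv = x_nat.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(((f(x_adv) - t) ** 2).sum() for f, t in zip(encoders, targets))
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                 # ascent step
            x_adv = x_nat + (x_adv - x_nat).clamp(-eps, eps)    # l_inf projection
            x_adv = x_adv.clamp(0.0, 1.0)                       # stay a valid image
    return x_adv.detach()
```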
For text description attack, we collect a set of surrogate MLLMs as \(\{g_{i}\}_{i=1}^{N}\), where \(g_{i}\) can predict a probability distribution of the next word \(w_{t}\) given the image \(\mathbf{x}\), text prompt \(\mathbf{p}\), and previously predicted words \(w_{<t}\) as \(p_{g_{i}}(w_{t}|\mathbf{x},\mathbf{p},w_{<t})\). The text description attack maximizes the log-likelihood of predicting a target sentence \(Y:=\{y_{t}\}_{t=1}^{L}\) as
\[\max_{\mathbf{x}}\sum_{i=1}^{N}\sum_{t=1}^{L}\log p_{g_{i}}(y_{t}|\mathbf{x},\mathbf{p},y _{<t}),\quad\text{s.t. }\|\mathbf{x}-\mathbf{x}_{nat}\|_{\infty}\leq\epsilon. \tag{2}\]
Note that we perform a targeted attack in Eq. (2) rather than an untargeted attack that minimizes the log-likelihood of the ground-truth description. This is because there are multiple correct descriptions of an image. If we only minimize the log-likelihood of predicting a single ground-truth description, the model can also output other correct descriptions given the adversarial example, making the attack ineffective.
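For the text description attack, the quantity being ascended in Eq. (2) is simply the teacher-forced log-likelihood of the target sentence under each surrogate MLLM. A minimal sketch of that loss term is given below; the shape convention for `logits` is an assumption, and hooking it up to BLIP-2, InstructBLIP, or MiniGPT-4 requires running their language-model heads with the target tokens as labels.

```python
import torch
import torch.nn.functional as F

def target_log_likelihood(logits, target_ids):
    """Sum_t log p(y_t | x, p, y_<t) for one surrogate MLLM, given logits of
    shape (L, vocab_size) produced under teacher forcing for the target
    sentence, and target_ids of shape (L,) aligned so that row t scores y_t."""
    log_probs = F.log_softmax(logits, dim=-1)
    return log_probs.gather(-1, target_ids.unsqueeze(-1)).sum()

# The attack then maximizes the sum of this quantity over all surrogates with
# the same l_inf-projected update loop as in the embedding-attack sketch above.
```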
To solve the optimization problems in Eq. (1) and Eq. (2), we adopt the state-of-the-art transfer-based attack methods [9; 31] in this paper. The spectrum simulation attack (SSA) [31] performs a spectrum transformation to the input to improve the adversarial transferability. The common weakness attack (CWA) [9] proposes to find the common weakness of an ensemble of surrogate models by promoting the flatness of loss landscapes and closeness between local optima of surrogate models. SSA and CWA can be combined as SSA-CWA, which demonstrates superior transferability for black-box models. Therefore, we adopt SSA-CWA as our attack. More details can be found in [9].
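As a rough illustration of the spectrum transformation used in SSA (our reading of [31]; the noise scale `sigma`, the rescaling factor `rho`, and the use of SciPy's DCT routines are assumptions, while the CWA ingredients, i.e. the flatness and closeness terms over the model ensemble, are omitted entirely), each gradient evaluation is performed on a randomly perturbed frequency-domain copy of the image and averaged over several such copies:

```python
import numpy as np
from scipy.fft import dctn, idctn

def spectrum_transform(x, sigma=16/255, rho=0.5, rng=np.random.default_rng()):
    """One random spectrum-domain augmentation of an image x with shape
    (H, W, C): add Gaussian noise, move to the DCT domain, rescale every
    frequency by a random factor in [1 - rho, 1 + rho], and transform back."""
    noisy = x + rng.normal(0.0, sigma, size=x.shape)
    spectrum = dctn(noisy, axes=(0, 1), norm="ortho")
    mask = rng.uniform(1.0 - rho, 1.0 + rho, size=x.shape)
    return idctn(spectrum * mask, axes=(0, 1), norm="ortho")
```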
### Experimental results
**Experimental settings. (1) Dataset:** We randomly select 100 images from the NIPS17 dataset1. **(2) Surrogate models:** For image embedding attack, we adopt the vision encoders of ViT-B/16 [15], CLIP [42], and BLIP-2 [27] as surrogate models. For text description attack, we choose BLIP-2 [27], InstructBLIP [11] and MiniGPT-4 [62] as surrogate models. **(3) Hyper-parameters:** We set the perturbation budget as \(\epsilon=16/255\) under the \(\ell_{\infty}\) norm. For SSA-CWA, we adopt the same settings as in [9], except that the number of attack iterations is 500. **(4) Evaluation metric:** We measure the attack success rate to evaluate the robustness of Bard. We consider an attack successful only when the main object in the image is predicted incorrectly, as shown in Fig. 1 (top). Other wrong details, such as hallucinations, object counting, color, or background, are considered unsuccessful attacks.
Footnote 1: [https://www.kaggle.com/competitions/nips-2017-non-targeted-adversarial-attack](https://www.kaggle.com/competitions/nips-2017-non-targeted-adversarial-attack)
**Results.** Tab. 1 shows the results. The image embedding attack achieves a 22% success rate while the text description attack achieves a 10% success rate against Bard. The superiority of the image embedding attack over the text description attack may be due to the similarity between vision encoders but the large differences between LLMs, as commercial models like Bard usually adopt much larger LLMs than the open-source ones used in our experiments. Note that some of the adversarial examples are wrongly rejected by the defenses of Bard. Fig. 2 shows two successful adversarial examples for which Bard provides incorrect descriptions, e.g., Bard describes a panda's face as a painting of a woman's face, as shown in Fig. 2(b). The experiment demonstrates that large vision-language models like Bard are vulnerable to adversarial attacks and can readily misidentify objects in adversarial images.
**Ablation study on model ensemble.** To demonstrate the effectiveness of the ensemble attack, we conduct an ablation study with different surrogate models. For simplicity, we only choose 20 images from the NIPS17 dataset to perform the image embedding attack. As illustrated in Tab. 2, the attack success rate increases with the number of surrogate models. Therefore, in this work, we choose to ensemble three surrogate models to strike a balance between efficacy and time complexity.
**Generalization across different prompts.** To assess the generalization of the adversarial examples across different prompts, we measure the attack success rate using the prompts in [29] (e.g., "Provide a brief description of the given image.", "Offer a succinct explanation of the picture presented.", "Take a look at this image and describe what you notice.", "Summarize the visual content of the image.", etc.). Remarkably, the adversarial examples that are successful given the original prompt "Describe this image" can also mislead Bard using the prompts given above, demonstrating good generalization of the adversarial examples across different prompts.
| | Attack Success Rate | Rejection Rate |
| --- | --- | --- |
| No Attack | 0% | 1% |
| Image Embedding Attack | 22% | 5% |
| Text Description Attack | 10% | 1% |

Table 1: Attack success rate of different methods against Bard's image description.
| ViT-B/16 | CLIP | BLIP-2 | Attack Success Rate |
| --- | --- | --- | --- |
| ✓ | | | 0% |
| | ✓ | | 5% |
| | | ✓ | 0% |
| ✓ | ✓ | | 15% |
| ✓ | | ✓ | 10% |
| ✓ | ✓ | ✓ | 20% |

Table 2: Black-box attack success rate against Bard using different surrogate image encoder(s).
Figure 2: Screenshots of successful attacks against Bard's image description.
### Attack on other MLLMs
We then examine the attack performance of our generated adversarial examples against other commercial MLLMs. GPT-4V [37] became accessible in October 2023, after the first version of this paper; we further evaluate its robustness on October 13th, 2023 in the second version of this paper. In the first version, we also considered two other commercial MLLMs, Bing Chat [34] and ERNIE Bot [3]. We adopt the 100 adversarial examples generated by the image embedding attack method to directly evaluate the performance of these models.
Tab. 3 shows the results of attacking GPT-4V, Bing Chat, and ERNIE Bot. Our attack achieves 45%, 26%, and 86% attack success rates against GPT-4V, Bing Chat, and ERNIE Bot, respectively, while most of the natural images can be correctly described. 30% of the adversarial images are rejected by Bing Chat since it detects noise in them. Based on the results, we find that **Bard is the most robust model among the commercial MLLMs we study**, and ERNIE Bot is the least robust one under our attack with an 86% success rate. We find that the attack success rate is higher for GPT-4V since it provides vague descriptions for adversarial images rather than rejecting them like Bing Chat. Fig. 3, Fig. 4, and Fig. 5 show successful examples of attacking GPT-4V, Bing Chat, and ERNIE Bot, respectively. The results indicate that commercial MLLMs have similar robustness issues under adversarial attacks and require further improvements in robustness.
## 4 Attack on defenses of Bard
In our evaluation of Bard, we found that Bard is equipped with (at least) two defense mechanisms: face detection and toxicity detection. Bard directly rejects images containing human faces or toxic content (e.g., violent, bloody, or pornographic images). These defenses may be deployed to protect human privacy and avoid abuse. However, the robustness of the defenses under adversarial attacks is unknown. Therefore, we evaluate their robustness in this section.
### Attack on face detection
Modern face detection models employ deep neural networks to identify human faces with impressive performance. To attack the face detection module of Bard, we select several face detectors as white-box surrogate models for ensemble attacks. Let \(\{D_{i}\}_{i=1}^{K}\) denote the set of surrogate face detectors. The output of a face detector \(D_{i}\) contains three elements: the anchor \(A\), the bounding box \(B\), and the face confidence score \(S\in[0,1]\). Therefore, our face attack minimizes the confidence score such that the model cannot detect the face, which can be formulated as
\[\min_{\mathbf{x}}\sum_{i=1}^{K}L(S_{D_{i}}(\mathbf{x}),\hat{y}),\quad\text{s.t.}\ \|\mathbf{x}-\mathbf{x}_{nat}\|_{\infty}\leq\epsilon, \tag{3}\]
where \(L\) is the binary cross-entropy (BCE) loss and \(\hat{y}=0\) (i.e., we minimize the confidence score \(S_{D_{i}}(\mathbf{x})\)). \(\mathbf{x}_{nat}\) is the natural image containing a human face, and we aim to generate an adversarial example \(\mathbf{x}\) that is not detected. We also adopt the SSA-CWA method to solve Eq. (3).
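A minimal sketch of the loss in Eq. (3) is shown below. The `face_score` method returning per-detection confidences in \([0,1]\) is a hypothetical stand-in for the score heads of PyramidBox/S3FD/DSFD, and the loss would be driven to its minimum with the same \(\ell_{\infty}\)-projected gradient loop (SSA-CWA in our experiments) used for the image description attacks.

```python
import torch
import torch.nn.functional as F

def face_evasion_loss(x_adv, detectors):
    """Sum of BCE losses pushing every surrogate detector's face confidence
    towards the label y_hat = 0, so that no face is detected in x_adv."""
    loss = x_adv.new_zeros(())
    for det in detectors:
        scores = det.face_score(x_adv)   # hypothetical: confidences in [0, 1]
        loss = loss + F.binary_cross_entropy(scores, torch.zeros_like(scores))
    return loss
```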
**Experimental settings. (1) Dataset:** The experiments are conducted on FFHQ [24] and LFW [21]. The FFHQ dataset comprises 70,000 images, each with a resolution of 1024 \(\times\) 1024. The LFW dataset contains 13,233 celebrity images with a resolution of 250 \(\times\) 250. We randomly select 100 images from each dataset for manual testing. **(2) Surrogate models:** We choose three public face detection models for ensemble attack, including PyramidBox [49], S3FD [60] and DSFD [26]. **(3) Hyper-parameters:** We consider perturbation budgets \(\epsilon=16/255\) and \(\epsilon=32/255\). **(4) Evaluation metric:** We consider an attack successful if Bard does not reject the image and provides a description.
**Experimental results and analyses.** In Fig. 6, we present examples of successful attacks on the FFHQ dataset. The quantitative results are summarized in Tab. 4. The experimental results suggest that even
| Dataset | Attack Success Rate (\(\epsilon=16/255\)) | Attack Success Rate (\(\epsilon=32/255\)) |
| --- | --- | --- |
| 100 images of FFHQ | 4% | 7% |
| 100 images of LFW | 8% | 38% |

Table 4: Attack success rate with different settings against Bard's face detection.
Figure 5: Screenshots of successful attacks against ERNIE Bot's image description (in Chinese).
if the detailed model configurations of Bard are unknown, we can still successfully attack the face detector of Bard under the black-box setting based on the transferability of adversarial examples. In addition, the attack success rate appears to be positively correlated with the perturbation budget and negatively correlated with the image resolution. In other words, the attack success rate is higher when \(\epsilon\) is larger and the image resolution is lower.
### Attack on toxicity detection
To prevent providing descriptions for toxic images, Bard employs a toxicity detector to filter out such images. To attack it, we need to select certain white-box toxicity detectors as surrogate models. We find that some existing toxicity detectors [45] are linear-probed versions of pre-trained vision models like CLIP [42]. To target these surrogate models, we only need to perturb the features of these pre-trained models. Therefore, we employ the exact same objective function as given in Eq. (1) and use the same attack method SSA-CWA. Note that this procedure could also affect the description of the image, as shown in Sec. 3. But as the attack success rate on image description is not very high, we can find successful examples that not only evade the toxicity detector but also lead to a correct description of the image.
**Experiment.** We manually collect a set of 100 toxic images containing violent, bloody, or pornographic content. The other experimental settings are the same as in Sec. 3.2. We achieve a 36% attack success rate against Bard's toxicity detector. As shown in Fig. 7, the toxicity detector fails to identify the toxic images with adversarial noise. Consequently, Bard provides inappropriate descriptions for these images. This experiment underscores the potential for malicious adversaries to exploit Bard to generate unsuitable descriptions for harmful content.
Figure 6: Screenshots of successful attacks against Bard's face detection.
Figure 7: Screenshots of successful attacks against Bard's toxicity detection.
## 5 Discussion and Conclusion
In this paper, we analyzed the robustness of Google's Bard to adversarial attacks on images. By using state-of-the-art transfer-based attacks to optimize the objectives on image embeddings or text descriptions, we achieved a 22% attack success rate against Bard on the image description task. The adversarial examples can also mislead other commercial MLLMs, including Bing Chat with a 26% attack success rate and ERNIE Bot with an 86% attack success rate. The results demonstrate the vulnerability of commercial MLLMs under black-box adversarial attacks. We also found that the current defense mechanisms of Bard can be easily evaded by adversarial examples.
As large-scale foundation models (e.g., ChatGPT, Bard) are increasingly used by humans for various purposes, their security and safety problems have become a major public concern. Adversarial robustness is an important aspect of model security. Although we consider adversarial attacks on the typical image description task, which is not very harmful in some sense, some works demonstrate that adversarial attacks can be used to break the alignment of LLMs [64] or MLLMs [7, 41]. For example, by attaching an adversarial suffix to harmful prompts, LLMs would produce objectionable responses. This problem will be more severe for MLLMs since attacks can be conducted on images, and it will be harder to defend against adversarial image perturbations than adversarial text perturbations due to the continuous space of images. Although previous works [7, 41] have studied this problem for MLLMs, they only consider white-box attacks. We will study black-box attacks against the alignment of commercial MLLMs in future work.
Defending against adversarial attacks on vision models is still an open problem despite extensive research. Adversarial training (AT) [33] is arguably the most effective defense method. However, AT may not be suitable for large-scale foundation models for several reasons. First, AT leads to a trade-off between accuracy and robustness [59]. The performance of MLLMs could be degraded when employing AT. Second, AT is much more computationally expensive, often requiring an order of magnitude longer training time than standard training. As training foundation models is already time- and resource-consuming, it is hard to apply AT to these models. Third, AT is not generalizable across different threats, e.g., a model robust to \(\ell_{\infty}\) perturbations could still be broken by \(\ell_{2}\) perturbations. Thus, an adversary can still find ways to evade AT models.
Given the problems of AT, we think that preprocessing-based defenses are more suitable for large-scale foundation models as they can be used in a plug-and-play manner. Some recent works leverage advanced generative models (e.g., diffusion models [20]) to purify adversarial perturbations (e.g., diffusion purification [35], likelihood maximization [8]), which could serve as promising strategies to defend against adversarial examples. We hope this work can motivate future research on developing more effective defense strategies for large-scale foundation models.
|
2301.13770 | Energy-Conserving Neural Network for Turbulence Closure Modeling | In turbulence modeling, we are concerned with finding closure models that
represent the effect of the subgrid scales on the resolved scales. Recent
approaches gravitate towards machine learning techniques to construct such
models. However, the stability of machine-learned closure models and their
abidance by physical structure (e.g. symmetries, conservation laws) are still
open problems. To tackle both issues, we take the `discretize first, filter
next' approach. In this approach we apply a spatial averaging filter to
existing fine-grid discretizations. The main novelty is that we introduce an
additional set of equations which dynamically model the energy of the subgrid
scales. Having an estimate of the energy of the subgrid scales, we can use the
concept of energy conservation to derive stability. The subgrid energy
containing variables are determined via a data-driven technique. The closure
model is used to model the interaction between the filtered quantities and the
subgrid energy. Therefore the total energy should be conserved. Abiding by this
conservation law yields guaranteed stability of the system. In this work, we
propose a novel skew-symmetric convolutional neural network architecture that
satisfies this law. The result is that stability is guaranteed, independent of
the weights and biases of the network. Importantly, as our framework allows for
energy exchange between resolved and subgrid scales it can model backscatter.
To model dissipative systems (e.g. viscous flows), the framework is extended
with a diffusive component. The introduced neural network architecture is
constructed such that it also satisfies momentum conservation. We apply the new
methodology to both the viscous Burgers' equation and the Korteweg-De Vries
equation in 1D. The novel architecture displays superior stability properties
when compared to a vanilla convolutional neural network. | Toby van Gastelen, Wouter Edeling, Benjamin Sanderse | 2023-01-31T17:13:17Z | http://arxiv.org/abs/2301.13770v5 | # Energy-Conserving Neural Network for Turbulence Closure Modeling
###### Abstract
In turbulence modeling, and more particularly in the Large-Eddy Simulation (LES) framework, we are concerned with finding closure models that represent the effect of the unresolved subgrid scales on the resolved scales. Recent approaches gravitate towards machine learning techniques to construct such models. However, the stability of machine-learned closure models and their abidance by physical structure (e.g. symmetries, conservation laws) are still open problems. To tackle both issues, we take the 'discretize first, filter next' approach, in which we apply a spatial averaging filter to existing energy-conserving (fine-grid) discretizations. The main novelty is that we extend the system of equations describing the filtered solution with a set of equations that describe the evolution of (a compressed version of) the energy of the subgrid scales. Having an estimate of the energy of the subgrid scales, we can use the concept of energy conservation and derive stability of the discrete representation. The compressed variables are determined via a data-driven technique in such a way that the energy of the subgrid scales is matched. For the extended system, the closure model should be energy-conserving, and a new skew-symmetric convolutional neural network architecture is proposed that has this property. Stability is thus guaranteed, independent of the actual weights and biases of the network. Importantly, our framework allows energy exchange between resolved scales and compressed subgrid scales and thus enables backscatter. To model dissipative systems (e.g. viscous flows), the framework is extended with a diffusive component. The introduced neural network architecture is constructed such that it also satisfies momentum conservation. We apply the new methodology to both the viscous Burgers' equation and the Korteweg-De Vries equation in 1D and show superior stability properties when compared to a vanilla convolutional neural network.
**Keywords**: Turbulence modeling, Neural networks, Energy conservation, Structure preservation, Burgers' equation, Korteweg-de Vries equation
## 1 Introduction
Simulating turbulent flows with direct numerical simulations (DNSs) is often infeasible due to the high computational requirements: with increasing Reynolds number, increasingly fine computational meshes are required to resolve all the relevant scales. Especially for applications in design and uncertainty quantification, where typically many simulations are required, this rapidly becomes computationally infeasible [1; 2]. To tackle this issue, several different approaches have been proposed, such as reduced order models [3], Reynolds-averaged Navier-Stokes (RANS) [4], and Large Eddy Simulation (LES) [5]. These approaches differ in how much of the physics is simulated and how much is modelled. Here we will focus on the LES approach.
In LES the large-scale physics is modelled directly by a numerical discretization of the governing equations on a coarse grid. However, because the filter does not commute with the nonlinear terms in the equations, a commutator error arises. This prevents one from obtaining an accurate solution without knowledge of the subgrid-scale (SGS) content. This commutator error is typically referred to as the closure term, and modeling this term is the main concern of the LES community. A major difficulty in the modeling of this closure term, by a corresponding closure model, is dealing with the exchange of energy from the small to the large scales (backscatter) [6; 7], as the SGS energy content is unknown during the simulation. This makes accounting for backscatter difficult without leading to numerical instabilities
[8]. Classical physics-based closure models are therefore often represented by a dissipative model, e.g. of eddy-viscosity type [9], ensuring a net decrease in energy, or clipped such that backscatter is removed [10]. Even though the assumption of a global net decrease in energy is valid [9], explicit modeling of backscatter is still important, as locally the effect of backscatter can be of great significance [11; 12]. Closure models that explicitly model the global kinetic energy present in the small scales at a given point in time, to allow for backscatter without sacrificing stability, also exist [13]. Recently, machine learning approaches, or more specifically neural networks (NNs), have also become a viable option for the modeling of this closure term, as they show potential for outperforming the classical approaches in different use cases [14; 15; 16; 17]. However, stability remains an important issue along with abidance by physical structure such as mass, momentum, and energy conservation [18; 19; 20; 16].
In [18] the case of homogeneous isotropic turbulence for the compressible Navier-Stokes equations was investigated. A convolutional neural network (CNN) was trained to reproduce the closure term from high-resolution flow data. Although _a priori_ cross-correlation analysis on the training data showed promising results, stable models could only be achieved by projecting onto an eddy-viscosity basis. In [19] a gated recurrent NN was applied to the same test case, which showed even higher cross-correlation values with the actual closure term, but still yielded unstable models, even after employing stability training on data with artificial noise [20]. In [16] the case of incompressible turbulent channel flow was treated. Here NNs with varying dimensions of input space were employed to construct a closure model. They showed that increasing the size of the input space of the NN improves _a priori_ performance. However, _a posteriori_ analysis showed that this increased input space also led to instabilities. Even after introducing a form of backscatter clipping to stabilize these larger models, they were still outperformed by NN closure models with a small input space, for which only the solution at neighboring grid points was provided to the NN. Two other recent promising approaches to improving the stability of NN closure models are 'trajectory fitting' [14; 15; 21; 22; 23] and reinforcement learning [24; 25]. Both of these approaches have in common that instead of fitting the NN to match the exact closure term (which is what we will refer to as 'derivative fitting'), one optimizes directly with respect to how well the solution is reproduced when carrying out a simulation with the closure model embedded into the solver. This has been shown to lead to more accurate and stable models [14; 15; 26]. The main difference between the two is that trajectory fitting requires the implementation of the spatial and temporal discretization to be differentiable with respect to the NN parameters. In this way one can determine the gradients of the solution error with respect to the NN parameters, such that gradient-based optimizers can be applied to the corresponding optimization problem. Reinforcement learning, on the other hand, does not require these gradients, which makes it suitable for non-differentiable processes such as chess and self-driving cars [27]. However, neither of these approaches leads to a provably stable NN closure model without some form of clipping, and neither guarantees abidance by the underlying conservation laws. The latter is something that, to our knowledge, does not yet exist for LES closure models.
To resolve the issues of stability and lack of physical structure, we present _a new NN closure model that satisfies both momentum and kinetic energy conservation and is therefore stable by design_, while still allowing for backscatter of energy into the resolved scales. As stated earlier, the difficulty of this task mainly lies in the fact that: (i) the kinetic energy conservation law includes terms which depend on the SGS content which is too expensive to simulate directly, and consequently (ii) kinetic energy of the large scales is not a conserved quantity (in the limit of vanishing viscosity). In order to tackle these issues we propose to take the 'discretize first, filter next' approach [22; 26]. This means that we start from a high-resolution solution with \(N\) degrees of freedom (on a fine computational grid), to which we apply a discrete filter (a spatial average) that projects the solution onto a coarse computational grid of dimension \(I\), with \(I\ll N\). Given the discrete filter the exact closure term can be computed from the high-resolution simulation by calculating the commutator error. The main advantage of this approach is that the closure term now also accounts for the discretization error. Based on the filter's properties we then derive an energy conservation law that can be split into two components: one that depends solely on the large, or resolved, scales (resolved energy) and another that solely depends on the SGS content (SGS energy) [13]. Like in existing works the closure model is represented by a NN, however, we include an additional set of SGS variables that represent the SGS energy in our simulation. The key insight is that the resulting total system of equations should still conserve energy in the inviscid limit, and we choose our NN approximation such that it is consistent with
this limit. In this way we still allow for backscatter without sacrificing stability.
The paper is structured in the following way. In section 2 we discuss Burgers' and Korteweg-de Vries equation and their energy and momentum conservation properties. We introduce the discrete filter, the resulting closure problem, and derive a new energy conservation law that describes an energy exchange between the resolved energy and the SGS energy. In section 3 we introduce our new machine learning approach for modeling the closure term, satisfying the derived energy conservation law using a set of SGS variables to represent the SGS energy. In addition, we show how to also satisfy momentum conservation. In section 4 we study the convergence properties and stability of our closure model with respect to the coarse grid resolution and compare this to a vanilla CNN. We also analyze the structure-preserving properties in terms of momentum and energy conservation and the ability of the trained closure models to extrapolate in space and time. In section 5 we conclude our work.
## 2 Governing equations, discrete filtering, and closure problem
Before constructing a machine learning closure on the discrete level, we formulate a description of the closure problem and the machinery required (e.g. discrete filters and reconstruction operators) at the discrete level, and we discuss the effect of filtering on the physical structure.
### Spatial discretization
We consider an initial value problem (IVP) of the following form:
\[\frac{\partial u}{\partial t} =f(u), \tag{1}\] \[u(\mathbf{x},0) =u_{0}(\mathbf{x}), \tag{2}\]
which describes the evolution of some quantity \(u(\mathbf{x},t)\) in space \(\mathbf{x}\in\Omega\) and time \(t\) on the spatial domain \(\Omega\subseteq\mathbb{R}^{d}\), given initial state \(u_{0}\). The dynamics of the system is governed by right-hand side (RHS) \(f(u)\), which typically involves partial derivatives of \(u\). After spatial discretization (method of lines), we obtain the vector \(\mathbf{u}(t)\in\mathbb{R}^{N}\) which approximates the value of \(u\) at each of the \(N\) grid points \(\mathbf{x}_{i}\in\Omega\) for \(i=1,\ldots,N\), such that \(\mathrm{u}_{i}\approx u(\mathbf{x}_{i})\). The discrete analogue of the IVP is then
\[\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t} =f_{h}(\mathbf{u}), \tag{3}\] \[\mathbf{u}(0) =\mathbf{u}_{0}, \tag{4}\]
where \(f_{h}\) represents a spatial discretization of \(f\). It is assumed that all the physics described by equation (1) is captured in the discrete solution \(\mathbf{u}\). This means that whenever the physics involves a wide range of spatial scales, a very large number of degrees of freedom \(N\) is needed to adequately resolve all these scales. This places a heavy (or even insurmountable) burden on the computational effort that is required to numerically solve the considered equations.
### Burgers' and Korteweg-de Vries equation and physical structure
We are interested in the modeling and simulation of turbulent flows. For this purpose, we first consider Burgers' equation, a 1D simplification of the Navier-Stokes equations. Burgers' equation describes the evolution of the velocity \(u(x,t)\) according to partial differential equation (PDE)
\[\frac{\partial u}{\partial t}=-\frac{1}{2}\frac{\partial u^{2}}{\partial x}+ \nu\frac{\partial^{2}u}{\partial x^{2}}, \tag{5}\]
where the first term on the RHS represents non-linear convection and the second term diffusion, weighted by the viscosity parameter \(\nu\geq 0\). These processes are somewhat analogous to 3-D turbulence in the fact that smaller scales are created by nonlinear convective terms which are then dissipated by diffusion [28]. We
will be interested in two properties of Burgers' equation, which we collectively call 'structure'. Firstly, momentum \(P\) is conserved on periodic domains:
\[\frac{\mathrm{d}P}{\mathrm{d}t}=\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}u \mathrm{d}\Omega=\int_{\Omega}-\frac{1}{2}\frac{\partial u^{2}}{\partial x}+ \nu\frac{\partial^{2}u}{\partial x^{2}}\mathrm{d}\Omega=-\frac{1}{2}[u^{2}]_{a }^{b}+\nu[\frac{\partial u}{\partial x}]_{a}^{b}=0, \tag{6}\]
Secondly, on periodic domains (kinetic) energy is conserved in the absence of viscosity:
\[\frac{\mathrm{d}E}{\mathrm{d}t}=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int _{\Omega}u^{2}\mathrm{d}\Omega=\int_{\Omega}-\frac{u}{2}\frac{\partial u^{2}} {\partial x}+u\nu\frac{\partial^{2}u}{\partial x^{2}}\mathrm{d}\Omega=-\frac{ 1}{3}[u^{3}]_{a}^{b}+\nu[u\frac{\partial u}{\partial x}]_{a}^{b}-\nu\int_{ \Omega}\left(\frac{\partial u}{\partial x}\right)^{2}\mathrm{d}\Omega=-\underbrace {\nu\int_{\Omega}\left(\frac{\partial u}{\partial x}\right)^{2}\mathrm{d} \Omega}_{\geq 0}, \tag{7}\]
where we used integration by parts.
These properties can be preserved in a discrete setting by employing a structure-preserving scheme [29] on a uniform grid with grid-spacing \(h\). The convective term is approximated by the following skew-symmetric scheme:
\[\mathbf{G}(\mathbf{u})=-\frac{1}{3}\mathbf{D}_{1}\mathbf{u}^{2}-\frac{1}{3} \mathrm{diag}(\mathbf{u})\mathbf{D}_{1}\mathbf{u}, \tag{8}\]
where \(\mathbf{D}_{1}\) is the central difference operator corresponding to the stencil \((\mathbf{D}_{1}\mathbf{u})_{i}=(\mathrm{u}_{i+1}-\mathrm{u}_{i-1})/(2h)\), \(\mathbf{u}^{2}\) is to be interpreted element-wise, and \(\mathbf{D}_{2}\) is the diffusive operator with stencil \((\mathbf{D}_{2}\mathbf{u})_{i}=(\mathrm{u}_{i+1}-2\mathrm{u}_{i}+\mathrm{u}_ {i-1})/h^{2}\). We assume periodic boundary conditions (BCs). The spatial discretization leads to a system of ordinary differential equations (ODEs):
\[\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}=\underbrace{\mathbf{G}(\mathbf{u})+ \nu\mathbf{D}_{2}\mathbf{u}}_{=f_{h}(\mathbf{u})}. \tag{9}\]
We march this system forward in time using an explicit RK4 scheme [30]. The structure is preserved because the discretization conserves the discrete momentum \(P_{h}=h\mathbf{1}^{T}\mathbf{u}\) (for periodic BCs):
\[\frac{\mathrm{d}P_{h}}{\mathrm{d}t}=h\mathbf{1}^{T}f_{h}(\mathbf{u})=0, \tag{10}\]
where \(\mathbf{1}\) is a column vector with all entries equal to one. Furthermore, due to the skew-symmetry of the convection operator the evolution of the discrete kinetic energy \(E_{h}=\frac{h}{2}\mathbf{u}^{T}\mathbf{u}\) (which we will refer to simply as energy) is given by:
\[\text{Burgers' equation:}\qquad\frac{\mathrm{d}E_{h}}{\mathrm{d}t}=h\mathbf{u}^{ T}f_{h}(\mathbf{u})=h\nu\mathbf{u}^{T}\mathbf{D}_{2}\mathbf{u}=-\nu|| \mathbf{Q}\mathbf{u}||_{2}^{2}. \tag{11}\]
Here we used the fact that \(\mathbf{D}_{2}\) can be written as the Cholesky decomposition \(-\mathbf{Q}^{T}\mathbf{Q}\) [3], where \(\mathbf{Q}\) is a simple forward difference approximation of the first-order derivative. The norm \(\|.\|_{2}\) represents the conventional two-norm, further detailed in section 2.5. This discretization ensures net energy dissipation, and conservation in the inviscid limit.
In addition to Burgers' equation we will consider the Korteweg-de Vries (KdV) equation:
\[\frac{\partial u}{\partial t}=-\frac{\varepsilon}{2}\frac{\partial u^{2}}{ \partial x}-\mu\frac{\partial^{3}u}{\partial x^{3}}, \tag{12}\]
where \(\varepsilon\) and \(\mu\) are parameters. The KdV equation conserves momentum and (kinetic) energy irrespective of the values of \(\varepsilon\) and \(\mu\). We discretize the nonlinear term in the same way as for Burgers' equation, using the skew-symmetric scheme. The third-order spatial derivative is approximated by the skew-symmetric central difference operator \(\mathbf{D}_{3}\) corresponding to the stencil \((\mathbf{D}_{3}\mathbf{u})_{i}=(-\mathrm{u}_{i-2}+2\mathrm{u}_{i-1}-2\mathrm{u }_{i+1}+\mathrm{u}_{i+2})/(2h^{3})\), see [31]. The resulting discretization is then not only momentum conserving, but also energy conserving in the case of periodic BCs:
\[\text{KdV equation:}\qquad\frac{\mathrm{d}E_{h}}{\mathrm{d}t}=0. \tag{13}\]
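To make the stencils and the skew-symmetric convection term (8) concrete, a minimal NumPy sketch of the periodic right-hand sides for Burgers' and KdV is given below (uniform grid and periodic BCs assumed; this is an illustration, not the solver used for the experiments). The printed quantities confirm the discrete momentum and energy properties (10), (11) and (13) up to round-off.

```python
import numpy as np

def shift(u, k):
    """Periodic shift: shift(u, k)[i] = u[(i + k) % N]."""
    return np.roll(u, -k)

def d1(u, h):
    """Central difference D1: (u_{i+1} - u_{i-1}) / (2h)."""
    return (shift(u, 1) - shift(u, -1)) / (2 * h)

def conv_skew(u, h):
    """Skew-symmetric convection (8): -1/3 D1(u^2) - 1/3 diag(u) D1 u."""
    return -d1(u**2, h) / 3.0 - u * d1(u, h) / 3.0

def f_burgers(u, h, nu):
    """RHS (9): skew-symmetric convection plus diffusion D2."""
    d2u = (shift(u, 1) - 2.0 * u + shift(u, -1)) / h**2
    return conv_skew(u, h) + nu * d2u

def f_kdv(u, h, eps, mu):
    """RHS of (12): eps times the skew-symmetric convection minus mu D3 u."""
    d3u = (-shift(u, -2) + 2 * shift(u, -1) - 2 * shift(u, 1) + shift(u, 2)) / (2 * h**3)
    return eps * conv_skew(u, h) - mu * d3u

N = 200
h = 2 * np.pi / N
x = h * np.arange(N)
u = np.sin(x) + 0.1 * np.cos(3 * x)

print(h * np.sum(f_burgers(u, h, nu=0.01)))      # dP_h/dt = 0  (momentum, Eq. (10))
print(h * u @ f_burgers(u, h, nu=0.0))           # dE_h/dt = 0  (inviscid energy, Eq. (11))
print(h * u @ f_burgers(u, h, nu=0.01) <= 0.0)   # net dissipation for nu > 0
print(h * u @ f_kdv(u, h, eps=6.0, mu=1.0))      # dE_h/dt = 0  (KdV, Eq. (13))
```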
### Discrete filtering
In order to tackle the issue of high computational expenses for large \(N\) we apply a spatial averaging filter to the fine-grid solution \(\mathbf{u}\), resulting in the coarse-grid solution \(\bar{\mathbf{u}}\). The coarse grid follows from dividing \(\Omega\) into \(I\) non-overlapping cells \(\Omega_{i}\) with cell centers \(\mathbf{X}_{i}\). The coarse grid is refined into the fine grid by splitting each \(\Omega_{i}\) into \(J(i)\) subcells \(\omega_{ij}\) with cell centers \(\mathbf{x}_{ij}\). This subdivision is intuitively pictured in the upper grid of Figure 1, for a 1D grid. Given the coarse and fine grid, we define the mass matrices \(\boldsymbol{\omega}\in\mathbb{R}^{N\times N}\) and \(\boldsymbol{\Omega}\in\mathbb{R}^{I\times I}\) which contain the volumes of the fine and coarse cells on the main diagonal, respectively.
To reduce the degrees of freedom of the system we apply a discrete spatial averaging filter \(\mathbf{W}\in\mathbb{R}^{I\times N}\), \(I<N\), to the fine-grid solution \(\mathbf{u}\) in order to obtain the filtered solution \(\bar{\mathbf{u}}\):
\[\bar{\mathbf{u}}=\mathbf{W}\mathbf{u}. \tag{14}\]
The spatial averaging filter is defined as
\[\mathbf{W}:=\boldsymbol{\Omega}^{-1}\mathbf{O}. \tag{15}\]
with overlap matrix \(\mathbf{O}\in\mathbb{R}^{I\times N}\):
\[\mathbf{O}:=\begin{bmatrix}|\omega_{11}|&\dots&|\omega_{1J(1)}|&&&&\\ &&\ddots&\ddots&\ddots&&\\ &&&&|\omega_{I1}|&\dots&|\omega_{IJ(I)}|\end{bmatrix}. \tag{16}\]
Here \(|.|\) represents the volume of the considered subcell. The overlap matrix essentially contains the volume of the overlap between coarse-grid cell \(\Omega_{i}\) and fine-grid subcell \(\omega_{ij}\) at the appropriate locations. Note that each column of \(\mathbf{W}\) and \(\mathbf{O}\) only contains one non-zero entry.
The filter reduces the number of unknowns at each time step from \(N\) to \(I\). Next to the filter, we define a reconstruction operator \(\mathbf{R}\in\mathbb{R}^{N\times I}\) which relates to \(\mathbf{W}\) as
\[\mathbf{R}:=\boldsymbol{\omega}^{-1}\mathbf{W}^{T}\boldsymbol{\Omega}= \boldsymbol{\omega}^{-1}\mathbf{O}^{T}. \tag{17}\]
The matrix \(\mathbf{R}\) is essentially a simple approximation of the inverse of \(\mathbf{W}\) by a piece-wise constant function [32]. This is intuitively pictured in Figure 2. An important property of the filter/reconstruction pair, which will be used in subsequent derivations, is that
\[\mathbf{W}\mathbf{R}=\boldsymbol{\Omega}^{-1}\mathbf{O}\boldsymbol{\omega}^{ -1}\mathbf{O}^{T}=\begin{bmatrix}\ddots&&\\ &\sum_{j=1}^{J(i)}\frac{|\omega_{ij}|}{|\Omega_{i}|}&\\ &&\ddots\end{bmatrix}=\mathbf{I}. \tag{18}\]
Consequently, filtering a reconstructed solution \(\mathbf{R}\bar{\mathbf{u}}\) leaves \(\bar{\mathbf{u}}\) unchanged, i.e.
\[\bar{\mathbf{u}}=\underbrace{(\mathbf{W}\mathbf{R})^{p}}_{=\mathbf{I}}\mathbf{ W}\mathbf{u} \tag{19}\]
for \(p\in\mathbb{N}_{0}\). We will refer to this property as the 'projection' property, as it is similar to how repeated application of a projection operator leaves a vector unchanged. By subtracting the reconstructed solution \(\mathbf{R}\bar{\mathbf{u}}\) from \(\mathbf{u}\) we can define the subgrid-scale (SGS) content \(\mathbf{u}^{\prime}\in\mathbb{R}^{N}\):
\[\mathbf{u}^{\prime}:=\mathbf{u}-\mathbf{R}\bar{\mathbf{u}}. \tag{20}\]
Figure 1: Subdivision of the spatial grid where the dots represent cell centers \(x_{ij}\) and \(X_{i}\) for \(J(1)=J(2)=3\) and \(J(3)=4\).
In addition, we will refer to the SGS content in a single coarse cell \(\Omega_{i}\) as \(\mathbf{\mu}_{i}\in\mathbb{R}^{J(i)}\), see Figure 2. Applying the filter to \(\mathbf{u}^{\prime}\) yields zero:
\[\mathbf{W}\mathbf{u}^{\prime}=\mathbf{W}\mathbf{u}-\underbrace{\mathbf{W} \mathbf{R}}_{=\mathrm{I}}\bar{\mathbf{u}}=\bar{\mathbf{u}}-\bar{\mathbf{u}}= \mathbf{0}_{\Omega}, \tag{21}\]
where \(\mathbf{0}_{\Omega}\) is a vector with all entries equal to zero defined on the coarse grid. This can be seen as the discrete equivalent of a property of a Reynolds operator [5]. As illustration we show each of the introduced quantities, calculated for a 1D sinusoidal wave, in Figure 2.
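A small NumPy sketch of the averaging filter (15), the reconstruction operator (17), and the SGS content (20) is given below for the special case of a uniform 1D periodic grid in which every coarse cell contains the same number of fine subcells (an assumption made only to keep the example short). The printed checks correspond to the projection property (18), the Reynolds-like property (21), and, anticipating section 2.5, the energy decomposition (30).

```python
import numpy as np

def build_filter(N, I, L=1.0):
    """Averaging filter W = Omega^{-1} O and reconstruction R = omega^{-1} O^T
    for a uniform grid with J = N // I fine subcells per coarse cell."""
    assert N % I == 0
    J = N // I
    h, H = L / N, L / I                            # fine / coarse cell volumes
    O = np.zeros((I, N))
    for i in range(I):
        O[i, i * J:(i + 1) * J] = h                # overlap volumes |omega_ij|
    return O / H, O.T / h, h, H                    # W, R, h, H

W, R, h, H = build_filter(N=300, I=20)
x = np.arange(300) / 300.0
u = np.sin(2 * np.pi * x) + 0.3 * np.sin(18 * np.pi * x)

u_bar = W @ u                                      # filtered solution, Eq. (14)
u_prime = u - R @ u_bar                            # SGS content, Eq. (20)

print(np.allclose(W @ R, np.eye(20)))              # projection property, Eq. (18)
print(np.allclose(W @ u_prime, 0.0))               # filtered SGS content vanishes, Eq. (21)
print(np.isclose(0.5 * h * u @ u,                  # E_h = resolved + SGS energy, Eq. (30)
                 0.5 * H * u_bar @ u_bar + 0.5 * h * u_prime @ u_prime))
```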
### Discrete closure problem
After having defined the filter we describe the time evolution of \(\bar{\mathbf{u}}\). Since we employ a spatial filter that does not depend on time, filtering and time-differentiation commute: \(\mathbf{W}\frac{\mathrm{d}\mathbf{u}}{\mathrm{d}t}=\frac{\mathrm{d}(\mathbf{W}\mathbf{u})}{\mathrm{d}t}=\frac{\mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t}\). The closure problem arises because such a commutation property is not true for the spatial discretization, i.e.
\[\mathbf{W}f_{h}(\mathbf{u})\neq f_{H}(\mathbf{W}\mathbf{u})=f_{H}(\bar{ \mathbf{u}}), \tag{22}\]
where \(f_{H}\) represents the same spatial discretization scheme as \(f_{h}\), but on the coarse grid. The closure problem is that the equations for \(\bar{\mathbf{u}}\) are 'unclosed', meaning that we require the fine-grid solution \(\mathbf{u}\) to be able to evolve the coarse-grid solution \(\bar{\mathbf{u}}\) in time. The filtered system can be rewritten in closure model form as
\[\frac{\mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t}=f_{H}(\bar{\mathbf{u}})+ \underbrace{(\mathbf{W}f_{h}(\mathbf{u})-f_{H}(\bar{\mathbf{u}}))}_{=: \mathbf{c}(\mathbf{u})}, \tag{23}\]
where \(\mathbf{c}(\mathbf{u})\in\mathbb{R}^{I}\) is the closure term. \(\mathbf{c}(\mathbf{u})\) is essentially the discrete equivalent of the commutator error in LES [5]. One advantage of having first discretized the problem is that \(\mathbf{c}(\mathbf{u})\) now also includes the discretization error. The aim in closure modeling is generally to approximate \(\mathbf{c}(\mathbf{u})\) by a closure model \(\tilde{\mathbf{c}}(\bar{\mathbf{u}};\mathbf{\Theta})\). In section 3 we choose to represent \(\tilde{\mathbf{c}}\) by a neural network (NN), whose parameters \(\mathbf{\Theta}\) are to be trained to make the approximation accurate. In constructing such approximations, we will also use the equation describing the evolution of the SGS content \(\frac{\mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t}\):
\[\frac{\mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t}=\frac{\mathrm{d}\mathbf{u}} {\mathrm{d}t}-\mathbf{R}\frac{\mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t}. \tag{24}\]
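Using the `build_filter` and `f_burgers` sketches above, the exact closure term (23) can be evaluated directly from fine-grid data; the coarse-grid RHS \(f_{H}\) is simply the same discretization evaluated with the coarse spacing \(H\). This is the quantity a data-driven closure model is trained to approximate (again only a minimal sketch under the uniform-grid assumption).

```python
def closure_term(u, W, h, H, nu):
    """Exact discrete commutator error c(u) = W f_h(u) - f_H(W u), Eq. (23),
    for the Burgers discretization; reuses f_burgers from the earlier sketch."""
    return W @ f_burgers(u, h, nu) - f_burgers(W @ u, H, nu)
```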
### Inner products and energy decomposition
To describe the energy that is present in the system at any given time, we define the following inner products and norms:
\[(\mathbf{a},\mathbf{b})_{\boldsymbol{\xi}} :=\mathbf{a}^{T}\boldsymbol{\xi}\mathbf{b} \tag{25}\] \[||\mathbf{a}||_{\boldsymbol{\xi}}^{2} :=(\mathbf{a},\mathbf{a})_{\boldsymbol{\xi}} \tag{26}\]
for \(\boldsymbol{\xi}\in\{\boldsymbol{\omega},\boldsymbol{\Omega}\}\). With this notation we can represent the inner product on the fine grid, \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{N}\), as well as the coarse grid, \(\mathbf{a},\mathbf{b}\in\mathbb{R}^{I}\), respectively. For \(\boldsymbol{\xi}=\mathbf{I}\) we simply obtain the conventional inner product and two-norm, denoted as \((\mathbf{a},\mathbf{b})=\mathbf{a}^{T}\mathbf{b}\) and \(||\mathbf{a}||_{2}^{2}\), respectively. We also define a joint inner product as the following sum of inner products:
\[(\begin{bmatrix}\mathbf{a}_{1}\\ \vdots\\ \mathbf{a}_{M}\end{bmatrix},\begin{bmatrix}\mathbf{b}_{1}\\ \vdots\\ \mathbf{b}_{M}\end{bmatrix})_{\boldsymbol{\xi}_{M}}:=\begin{bmatrix}\mathbf{a}_{1}\\ \vdots\\ \mathbf{a}_{M}\end{bmatrix}^{T}\underbrace{\begin{bmatrix}\boldsymbol{\xi}&&\\ &\ddots&\\ &&\boldsymbol{\xi}\end{bmatrix}}_{=\boldsymbol{\xi}_{M}}\begin{bmatrix}\mathbf{b}_{1}\\ \vdots\\ \mathbf{b}_{M}\end{bmatrix}, \tag{27}\]
where vectors \(\mathbf{a}_{i}\) and \(\mathbf{b}_{i}\) (\(i=1,\ldots,M\)) have the appropriate dimensions and are concatenated into a column vector. Furthermore, \(\boldsymbol{\xi}_{M}\) is the extended mass matrix. This notation is introduced in order to later extend our system of equations with additional equations for the subgrid content. Besides the projection property (19) an additional characteristic of the filter/reconstruction pair is that the inner product is conserved under reconstruction (see Appendix A):
\[(\mathbf{R}\bar{\mathbf{a}},\mathbf{R}\bar{\mathbf{b}})_{\omega}=(\bar{ \mathbf{a}},\bar{\mathbf{b}})_{\Omega}. \tag{28}\]
The total energy \(E_{h}\) of the fine-grid solution in terms of inner products reads
\[E_{h}:=\frac{1}{2}||\mathbf{u}||_{\omega}^{2}, \tag{29}\]
which can be decomposed using (20):
\[E_{h} =\frac{1}{2}||\mathbf{u}||_{\omega}^{2}=\frac{1}{2}||\mathbf{R} \bar{\mathbf{u}}+\mathbf{u}^{\prime}||_{\omega}^{2}\] \[=\frac{1}{2}||\mathbf{R}\bar{\mathbf{u}}||_{\omega}^{2}+( \mathbf{R}\bar{\mathbf{u}},\mathbf{u}^{\prime})_{\omega}+\frac{1}{2}|| \mathbf{u}^{\prime}||_{\omega}^{2}.\]
We can simplify this decomposition by noting that the cross-term is zero, i.e. \(\mathbf{R}\bar{\mathbf{u}}\) is orthogonal to \(\mathbf{u}^{\prime}\), see Appendix A. Combining this orthogonality property with property (28) leads to the following important energy decomposition:
\[E_{h}=\underbrace{\frac{1}{2}||\bar{\mathbf{u}}||_{\Omega}^{2}}_{=:\bar{E}_{h}}+\underbrace{\frac{1}{2}||\mathbf{u}^{\prime}||_{\omega}^{2}}_{=:E_{h}^{\prime}}. \tag{30}\]
In other words, our choice of filter and reconstruction operators is such that the total energy of the system can be split into one part (the resolved energy \(\bar{E}_{h}\)) that exclusively depends on the filtered \(\bar{\mathbf{u}}\) and another part (the SGS energy \(E_{h}^{\prime}\)) that depends only on the SGS content \(\mathbf{u}^{\prime}\). The energy conservation law can also be decomposed into a resolved and SGS part:
\[\frac{\mathrm{d}E_{h}}{\mathrm{d}t}=\frac{\mathrm{d}\bar{E}_{h}}{\mathrm{d}t} +\frac{\mathrm{d}E_{h}^{\prime}}{\mathrm{d}t}=(\bar{\mathbf{u}},\frac{ \mathrm{d}\bar{\mathbf{u}}}{\mathrm{d}t})_{\Omega}+(\mathbf{u}^{\prime},\frac{ \mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t})_{\omega}=0, \tag{31}\]
where we used the product rule to arrive at this relation. For Burgers' equation with \(\nu>0\), the last equality sign changes to \(\leq\). This means that even for dissipative systems the resolved energy could in principle increase (so-called 'backscatter'), as long as the total energy is decreasing.
We illustrate the energy decomposition using simulations of the KdV equation. Figure 3 shows the exchange of energy between the subgrid and filtered solutions. Clearly, the energy of the filtered solution is _not_ a conserved quantity.
### Momentum conservation
Next to the energy, we formulate the total discrete momentum in terms of an inner product and investigate if it is conserved upon filtering. The total discrete momentum is given by
\[P_{h}=(\mathbf{1}_{\omega},\mathbf{u})_{\omega}, \tag{32}\]
where \(\mathbf{1}_{\omega}\) is a vector with all entries equal to one, defined on the fine grid. From this definition we can show (see Appendix A) that the discrete momentum does not change upon filtering, i.e.
\[P_{h}=(\mathbf{1}_{\omega},\mathbf{u})_{\omega}=(\mathbf{1}_{\Omega},\bar{ \mathbf{u}})_{\Omega}. \tag{33}\]
This relation allows us to derive a momentum conservation condition on the closure term:
\[\frac{\mathrm{d}P_{h}}{\mathrm{d}t}=(\mathbf{1}_{\omega},f_{h}(\mathbf{u}))_{ \omega}=(\mathbf{1}_{\Omega},\mathbf{W}f_{h}(\mathbf{u}))_{\Omega}=(\mathbf{1 }_{\Omega},f_{H}(\bar{\mathbf{u}})+\mathbf{c}(\mathbf{u}))_{\Omega}=(\mathbf{ 1}_{\Omega},\mathbf{c}(\mathbf{u}))_{\Omega}=0, \tag{34}\]
where we used the fact that the coarse discretization is already momentum conserving.
## 3 Structure-preserving closure modeling framework
The derived discrete energy and momentum balances, before and after filtering, will be used to construct a novel structure-preserving closure model in this section. We will also discuss how to fit the parameters of the model. The ideas will be presented for periodic BCs in 1D, whereas different types of boundary conditions (BCs) are discussed in Appendix C.
### Framework
Many existing closure approaches aim at approximating \(\mathbf{c}(\mathbf{u})\) by a closure model \(\tilde{\mathbf{c}}(\bar{\mathbf{u}};\mathbf{\Theta})\), where \(\mathbf{\Theta}\) are parameters to be determined such that the approximation is accurate. In this work, we propose a novel
Figure 3: Simulation of KdV equation (12) with periodic BCs before and after filtering (left) and corresponding energy decomposition (right).
formulation, in which we extend the system of equations for the \(I\) filtered variables \(\bar{\mathbf{u}}\) with a set of \(I\) auxiliary SGS variables \(\mathbf{s}\in\mathbb{R}^{I}\) that locally model the SGS energy. This extended system of equations has the form
\[\frac{\mathrm{d}}{\mathrm{d}t}\begin{bmatrix}\bar{\mathbf{u}}\\ \mathbf{s}\end{bmatrix}=\begin{bmatrix}f_{H}(\bar{\mathbf{u}})\\ \mathbf{0}\end{bmatrix}+\mathbf{\Omega}_{2}^{-1}(\mathcal{K}-\mathcal{K}^{T}) \begin{bmatrix}\bar{\mathbf{u}}\\ \mathbf{s}\end{bmatrix}-\mathbf{\Omega}_{2}^{-1}\mathcal{Q}^{T}\mathcal{Q} \begin{bmatrix}\bar{\mathbf{u}}\\ \mathbf{s}\end{bmatrix}, \tag{35}\]
where \(\mathcal{K}=\mathcal{K}(\bar{\mathbf{u}},\mathbf{s},\mathbf{\Theta})\in \mathbb{R}^{2I\times 2I}\) and \(\mathcal{Q}=\mathcal{Q}(\bar{\mathbf{u}},\mathbf{s},\mathbf{\Theta})\in \mathbb{R}^{2I\times 2I}\), and \(\mathbf{\Theta}\) represents the parameters. Note that this system is an approximation of the true dynamics. Next to the introduction of the SGS variables \(\mathbf{s}\), the second main novelty in this work is to formulate the closure model in terms of a skew-symmetric term and a dissipative term. The skew-symmetric term is introduced to allow for a local energy exchange between the filtered solution and the SGS variables, and the dissipative term to provide additional dissipation. These operators will be modelled in terms of neural networks (NNs) with trainable parameters (contained in \(\mathbf{\Theta}\)). So even though the notation in (35) suggests linearity of the closure model in \(\bar{\mathbf{u}}\) and \(\mathbf{s}\), the dependence of \(\mathcal{K}\) and \(\mathcal{Q}\) on \(\bar{\mathbf{u}}\) and \(\mathbf{s}\) makes the model non-linear. The construction of the introduced operators will be detailed in sections 3.3 and 3.4. Note the presence of \(\mathbf{\Omega}_{2}^{-1}\) in (35), which is due to the fact that our energy definition includes \(\mathbf{\Omega}\).
The SGS variables \(\mathbf{s}\) are used to represent the SGS energy _on the coarse grid_, such that
\[\frac{1}{2}\mathbf{s}^{2}\approx\frac{1}{2}\mathbf{W}(\mathbf{u}^{\prime})^{2}, \tag{36}\]
where the notation \((.)^{2}\) is again to be interpreted element-wise. In section 3.2 we present how we achieve this approximation. By adding these SGS variables as unknowns into equation (35), we are able to include an approximation of the SGS energy into the simulation, while still significantly reducing the system size (from \(N\) to \(2I\)). Our key insight is that _by explicitly including an approximation of the SGS energy we are able to satisfy the energy conservation balance, equation (31)_. The energy balance serves not only as an important constraint that restrains the possible forms that the closure model (represented by a NN) can take, but also guarantees stability of our closure model, since the (kinetic) energy is a norm of the solution which is bounded in time.
Given the extended system of equations, the total energy is approximated as
\[E_{h}\approx E_{s}:=\frac{1}{2}||\bar{\mathbf{U}}||_{\Omega_{2}}^{2}=\underbrace{\frac{1}{2}(\bar{\mathbf{u}},\bar{\mathbf{u}})_{\Omega}}_{=\bar{E}_{h}}+\underbrace{\frac{1}{2}(\mathbf{s},\mathbf{s})_{\Omega}}_{=:S}, \tag{37}\]
with \(S\) approximating the SGS energy
\[S\approx E_{h}^{\prime}, \tag{38}\]
with evolution
\[\frac{\mathrm{d}E_{s}}{\mathrm{d}t}=\left(\bar{\mathbf{U}},\frac{\mathrm{d} \bar{\mathbf{U}}}{\mathrm{d}t}\right)_{\Omega_{2}}, \tag{39}\]
where we used the joint inner product notation introduced in (27) and concatenated the filtered solution and the SGS variables into a single vector \(\bar{\mathbf{U}}\in\mathbb{R}^{2I}\):
\[\bar{\mathbf{U}}:=\begin{bmatrix}\bar{\mathbf{u}}\\ \mathbf{s}\end{bmatrix}. \tag{40}\]
Upon substituting the closure model form, equation (35), the following evolution equation for the approximated total energy results:
\[\frac{\mathrm{d}E_{s}}{\mathrm{d}t}=(\bar{\mathbf{u}},f_{H}(\bar{\mathbf{u}})) _{\Omega}-||\mathcal{Q}\bar{\mathbf{U}}||_{2}^{2}, \tag{41}\]
as the skew-symmetric term involving \(\mathcal{K}-\mathcal{K}^{T}\) cancels. This equation can be further simplified when choosing a specific \(f_{H}\). For example, if we substitute the structure-preserving discretization of Burgers' equation (9) for \(f_{H}\) (with grid-spacing \(H\)) we obtain
\[\text{Burgers' equation:}\qquad\frac{\mathrm{d}E_{s}}{\mathrm{d}t}=-H\nu||\bar{ \mathbf{Q}}\bar{\mathbf{u}}||_{2}^{2}-||\mathcal{Q}\bar{\mathbf{U}}||_{2}^{2} \leq 0, \tag{42}\]
i.e. energy is dissipated from the system by two terms: the coarse-grid diffusion operator, and an additional (trainable) dissipation term. Here \(\bar{\mathbf{Q}}\) represents the forward difference approximation of the first-order derivative on the coarse grid. This additional dissipation term is required as the diffusion operator, discretized on the fine grid, is more dissipative than on the coarse grid, see Appendix B.
For energy-conserving systems, such as KdV, we set \(\mathcal{Q}\) to zero, and we obtain:
\[\text{KdV equation:}\qquad\frac{\mathrm{d}E_{s}}{\mathrm{d}t}=0. \tag{43}\]
We stress again that by having added an approximation of the subgrid energy into the equation system, we are able to use the concept of energy conservation (or dissipation) in constructing a closure model. Furthermore, as energy is dissipated or conserved the resulting model is stable by design.
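This stability mechanism does not depend on the particular network: for arbitrary matrices, the skew-symmetric part contributes nothing to the energy rate, and the \(\mathcal{Q}^{T}\mathcal{Q}\) part can only remove energy. A minimal sketch with random stand-ins for the trained, solution-dependent operators (uniform coarse grid assumed):

```julia
using LinearAlgebra, Random

# Energy rate of the closure form (35) for arbitrary K and Q (random stand-ins
# for the CNN-parameterized operators). Ω₂ = H*I for a uniform coarse grid;
# sizes are illustrative.
Random.seed!(1)
Ic = 20; H = 2π / Ic
n  = 2Ic                                  # size of the extended state Ū = [ū; s]
Ω2 = H * Matrix(1.0I, n, n)

U = randn(n)
K = randn(n, n); Q = randn(n, n)

dU_skew = Ω2 \ ((K - K') * U)             # skew-symmetric contribution to dŪ/dt
dU_diss = -(Ω2 \ (Q' * Q * U))            # dissipative contribution to dŪ/dt

dE_skew = dot(U, Ω2 * dU_skew)            # energy rate of the skew term: exactly 0
dE_diss = dot(U, Ω2 * dU_diss)            # equals -||Q Ū||² ≤ 0

println("skew term energy rate ≈ 0: ", abs(dE_skew) < 1e-9)
println("dissipative rate ≤ 0:      ", dE_diss ≤ 0)
```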
### SGS variables
To represent the SGS variables we propose a data-driven linear compression of the SGS content (assuming uniform coarse and fine grids such that \(J(i)=J\)):
\[\mathrm{s}_{i}=\mathbf{t}^{T}\boldsymbol{\mu}_{i},\qquad i=1,\ldots,I, \tag{44}\]
where we recall that \(\boldsymbol{\mu}_{i}\in\mathbb{R}^{J}\) represents the SGS content in a single coarse cell \(\Omega_{i}\). The SGS variable \(\mathrm{s}_{i}\) is a representation of the SGS content within cell \(\Omega_{i}\) encoded by learnable compression parameters \(\mathbf{t}\in\mathbb{R}^{J}\). This linear compression can be written for all coarse-grid points as the following matrix vector product:
\[\mathbf{s}=\mathbf{T}\mathbf{u}^{\prime}, \tag{45}\]
with \(\mathbf{T}(\mathbf{t})\in\mathbb{R}^{I\times N}\) being the (sparse) compression matrix fully defined by the parameters \(\mathbf{t}\). Note that \(\mathbf{T}\) has the same sparsity pattern as \(\mathbf{W}\). Using this notation (40) can be written as
\[\bar{\mathbf{U}}=\mathbf{W}_{\mathbf{T}}\mathbf{u}, \tag{46}\]
where
\[\mathbf{W}_{\mathbf{T}}:=\begin{bmatrix}\mathbf{W}\\ \mathbf{T}(\mathbf{I}-\mathbf{R}\mathbf{W})\end{bmatrix}. \tag{47}\]
The main advantage of defining the compression as a linear operation is that, if we have reference data for \(\mathbf{u}^{\prime}\), we can easily obtain the evolution of \(\mathbf{s}\) as
\[\frac{\mathrm{d}\mathbf{s}}{\mathrm{d}t}=\frac{\partial\mathbf{s}}{\partial \mathbf{u}^{\prime}}\frac{\mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t}=\mathbf{ T}\frac{\mathrm{d}\mathbf{u}^{\prime}}{\mathrm{d}t}. \tag{48}\]
Another advantage is that the Jacobian \(\frac{\partial\mathbf{s}}{\partial\mathbf{u}^{\prime}}=\mathbf{T}\) does not depend on \(\mathbf{u}^{\prime}\), such that we avoid the problem that arises when taking the 'natural' choice of \(\mathbf{s}\), which would be \(\mathbf{s}=\sqrt{\mathbf{W}(\mathbf{u}^{\prime})^{2}}\), namely that the Jacobian
\[\left(\frac{\partial\mathbf{s}}{\partial\mathbf{u}^{\prime}}\right)_{ij}= \frac{\mathrm{W}_{ij}\mathrm{u}_{j}^{\prime}}{\sqrt{\sum_{j=1}^{N}\mathrm{W}_{ ij}(\mathrm{u}_{j}^{\prime})^{2}}}\]
becomes undefined when the denominator is zero. A third advantage is that the linear compression allows us to calculate the contribution of a forcing term to \(\frac{\mathrm{d}\mathbf{s}}{\mathrm{d}t}\) (this will be explained in section 3.5). The parameters \(\mathbf{t}\) are chosen such that the SGS energy is accurately represented on the coarse grid, i.e. we determine the elements of \(\mathbf{t}\) such that they minimize the error made in approximation (36), leading to the loss function
\[\mathcal{L}_{s}(\mathcal{D};\mathbf{t})=\frac{1}{|\mathcal{D}|}\sum_{d\in \mathcal{D}}\frac{1}{|\Omega|}||\frac{1}{2}(\mathbf{T}(\mathbf{t})\mathbf{u}_ {d}^{\prime})^{2}-\frac{1}{2}\mathbf{W}(\mathbf{u}_{d}^{\prime})^{2}||_{ \Omega}^{2}, \tag{49}\]
where the notation \((.)^{2}\) is again to be interpreted element-wise. Here the subscript \(d\) represents a sample from the training dataset \(\mathcal{D}\) containing \(|\mathcal{D}|\) samples. Note that, due to the way \(\mathbf{t}\) appears in the loss function,
negative values for \(\mathbf{s}\) are allowed. To overcome the saddle point at \(\mathbf{t}=\mathbf{0}\) we initialize the elements of \(\mathbf{t}\) with random noise (see Appendix D). For \(J=2\) this minimization problem has an exact solution (see Appendix E).
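A minimal sketch of this fitting step, assuming uniform grids and the averaging filter of section 2; synthetic SGS snapshots and a simple finite-difference gradient descent with backtracking stand in for the actual data and optimizer of Appendix D:

```julia
using LinearAlgebra, Random

# Sketch of learning the SGS compression t of (44)-(45) by minimizing (49).
# W/R are the averaging filter and a piecewise-constant reconstruction (uniform
# grids); the data and the optimizer below are illustrative stand-ins.
Random.seed!(2)
Ic, J = 16, 8
N  = Ic * J
H  = 2π / Ic
W  = kron(Matrix(1.0I, Ic, Ic), fill(1.0 / J, 1, J))
R  = kron(Matrix(1.0I, Ic, Ic), ones(J, 1))

data = [(u = randn(N); u - R * (W * u)) for _ in 1:32]        # synthetic SGS snapshots u'

Tmat(t) = kron(Matrix(1.0I, Ic, Ic), t')                      # compression matrix (45)
loss(t) = sum(H * sum(abs2, 0.5 .* (Tmat(t) * up) .^ 2 .- 0.5 .* (W * (up .^ 2)))
              for up in data) / (length(data) * 2π)           # loss (49), |Ω| = 2π

function fit_t(t0; iters = 100, ε = 1e-6)
    t = copy(t0)
    for _ in 1:iters
        g = [(loss(t .+ ε .* ((1:length(t)) .== j)) - loss(t .- ε .* ((1:length(t)) .== j))) / (2ε)
             for j in 1:length(t)]
        η = 1.0
        while loss(t .- η .* g) > loss(t) && η > 1e-12        # backtracking line search
            η /= 2
        end
        if loss(t .- η .* g) < loss(t)
            t .-= η .* g
        end
    end
    return t
end

t0 = randn(J) ./ sqrt(J)       # random initialization (avoids the saddle at t = 0)
t  = fit_t(t0)
println("loss before / after fitting: ", loss(t0), " / ", loss(t))
```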
To illustrate how the compression works in practice we consider a snapshot from a simulation of Burgers' equation (\(\nu=0.01\)) with periodic BCs, see Figure 4. We observe that \(\mathbf{s}\) serves as an energy storage for the SGS content, which is mainly present near shocks.
### Skew-symmetric closure term \(\mathcal{K}\)
Having defined the SGS variables \(\mathbf{s}\), we continue to detail the construction of \(\mathcal{K}\) appearing in equation (35). We propose the following decomposition:
\[\mathcal{K}=\begin{bmatrix}\mathbf{K}_{11}&\mathbf{K}_{12}\\ \mathbf{0}&\mathbf{K}_{22}\end{bmatrix}\quad\rightarrow\quad\mathcal{K}- \mathcal{K}^{T}=\begin{bmatrix}\mathbf{K}_{11}-\mathbf{K}_{11}^{T}&\mathbf{K} _{12}\\ -\mathbf{K}_{12}^{T}&\mathbf{K}_{22}-\mathbf{K}_{22}^{T}\end{bmatrix}, \tag{50}\]
with submatrices \(\mathbf{K}_{ij}(\bar{\mathbf{U}};\boldsymbol{\Theta})\in\mathbb{R}^{I\times I}\), which will depend on the solution \(\bar{\mathbf{U}}\) and trainable parameters \(\boldsymbol{\Theta}\). This decomposition is chosen such that the upper-left submatrix \(\mathbf{K}_{11}\) allows for an energy exchange within the resolved scales, the upper-right submatrix \(\mathbf{K}_{12}\) for an energy exchange between the resolved scales and the SGS variables, and the final submatrix \(\mathbf{K}_{22}\) for an energy exchange within the SGS variables. If all entries of each \(\mathbf{K}_{ij}\) would be taken as parameters, one would have \(\mathcal{O}(I^{2})\) parameters, which is too large for practical problems of interest. Instead, we propose to represent each \(\mathbf{K}_{ij}\) in terms of a matrix \(\boldsymbol{\Phi}_{ij}\in\mathbb{R}^{I\times I}\) of only \(2D+1\) diagonals \(\boldsymbol{\phi}_{d}^{ij}\in\mathbb{R}^{I}\) (\(d=-D,\ldots,D\)), where each diagonal is given by an output channel of a convolutional neural network (CNN, [33]):
\[\boldsymbol{\Phi}_{ij}=\begin{bmatrix}\ddots&&\ddots&\ddots&\ddots&&\ddots&&\\ &\boldsymbol{\phi}_{-D}^{ij}&\cdots&\boldsymbol{\phi}_{-1}^{ij}&\boldsymbol{ \phi}_{0}^{ij}&\boldsymbol{\phi}_{1}^{ij}&\cdots&\boldsymbol{\phi}_{D}^{ij} \\ &&\ddots&&\ddots&\ddots&\ddots&&\ddots\end{bmatrix}. \tag{51}\]
The hyperparameter \(D\) determines the sparsity of \(\boldsymbol{\Phi}_{ij}\) and is taken such that \(D\ll I/2\) to reduce computational costs. In this way only a local neighbourhood is included in the approximation. As the input channels of the CNN we take \(\{\bar{\mathbf{u}},\mathbf{s},f_{H}(\bar{\mathbf{u}})\}\). The dependence of \(\boldsymbol{\phi}_{d}\) on \(\bar{\mathbf{U}}\) through the CNN adds non-linearity to the
Figure 4: Learned SGS compression applied to Burgersβ equation for \(N=1000\), with \(I=20\) and \(J=50\). By filtering and applying the SGS compression the degrees of freedom of this system are effectively reduced from \(N=1000\) to \(2I=40\).
closure model. Multiplying some vector \(\mathbf{v}\) by \(\mathbf{\Phi}_{ij}\) thus corresponds to the following non-linear stencil
\[(\mathbf{\Phi}_{ij}\mathbf{v})_{k}=\sum_{d=-D}^{D}\phi_{dk}^{ij}(\bar{\mathbf{U} };\mathbf{\Theta})\mathrm{v}_{k+d}. \tag{52}\]
A CNN is chosen to represent the diagonals as it is invariant with respect to translations of the input channels. In this way our final closure model inherits this property. In total, the CNN thus consists of three input channels, an arbitrary number of hidden channels (to be specified in the results section), and \(3(2D+1)\) output channels:
\[\mathrm{CNN}:\bar{\mathbf{u}},\mathbf{s},f_{H}(\bar{\mathbf{u}})\mapsto \boldsymbol{\phi}_{d}^{11},\boldsymbol{\phi}_{d}^{12},\boldsymbol{\phi}_{d}^ {22}\qquad d=-D,\ldots D. \tag{53}\]
In the case of periodic BCs we apply circular padding to the input channels of the CNN to connect both ends of the domain. Different BC types are discussed in Appendix C.
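The action of such a banded, solution-dependent operator is simply a local stencil with spatially varying coefficients. A minimal sketch, with random stand-ins for the CNN output channels and periodic wrap-around:

```julia
# Sketch of the non-linear stencil (52): the 2D+1 diagonals φ_d, here random
# stand-ins for CNN output channels, define a banded matrix whose action on v
# involves only a local, periodic neighbourhood.
function apply_stencil(ϕ, v, D)
    n = length(v)
    w = zeros(n)
    for k in 1:n, (col, d) in enumerate(-D:D)
        w[k] += ϕ[k, col] * v[mod1(k + d, n)]   # φ_{d,k} * v_{k+d}, wrapped periodically
    end
    return w
end

n, D = 20, 1
ϕ = randn(n, 2D + 1)      # columns correspond to offsets d = -D,...,D
v = randn(n)
w = apply_stencil(ϕ, v, D)
println(length(w) == n)
```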
Although in principle the matrices \(\mathbf{K}_{ij}\) could be represented directly by matrices of the form (51), such a construction is not momentum-conserving. In the next subsection we will propose an approach to express \(\mathbf{K}_{ij}\) in terms of \(\mathbf{\Phi}_{ij}\) which _is_ momentum conserving.
#### 3.3.1 Momentum-conserving transformation
Requiring momentum conservation for the extended system (35) leads to the following condition (see also (34)):
\[\left(\begin{bmatrix}\mathbf{1}_{\Omega}\\ \mathbf{0}_{\Omega}\end{bmatrix},\mathbf{\Omega}_{2}^{-1}(\mathcal{K}- \mathcal{K}^{T})\bar{\mathbf{U}}\right)_{\Omega_{2}}=\mathbf{1}_{\Omega}^{T}( \mathbf{K}_{11}-\mathbf{K}_{11}^{T})\bar{\mathbf{u}}+\mathbf{1}_{\Omega}^{T} \mathbf{K}_{12}\mathbf{s}=0, \tag{54}\]
such that we impose the following constraints on the \(\mathbf{K}\) matrices:
\[\mathbf{1}_{\Omega}^{T}\mathbf{K}_{11}=\mathbf{1}_{\Omega}^{T}\mathbf{K}_{11} ^{T}=\mathbf{1}_{\Omega}^{T}\mathbf{K}_{12}=\mathbf{0}_{\Omega}. \tag{55}\]
To satisfy conditions (55) we first define the linear operator \(\mathbf{B}\in\mathbb{R}^{I\times I}\) corresponding to the stencil
\[(\mathbf{B}\mathbf{v})_{i}=\sum_{j=-B}^{B}b_{j}\mathrm{v}_{i+j} \tag{56}\]
with \(2B+1\) parameters \(b_{j}\) (\(j=-B,\ldots,B\)), applied to some vector \(\mathbf{v}\). In addition, we define the matrix \(\bar{\mathbf{B}}\in\mathbb{R}^{I\times I}\) whose stencil coefficients are given by
\[\bar{b}_{j}=b_{j}-\frac{1}{2B+1}\sum_{k=-B}^{B}b_{k}, \tag{57}\]
corresponding to the stencil
\[(\bar{\mathbf{B}}\mathbf{v})_{i}=\sum_{j=-B}^{B}\bar{b}_{j}\mathrm{v}_{i+j}. \tag{58}\]
In the periodic case this matrix satisfies
\[\mathbf{1}_{\Omega}^{T}\bar{\mathbf{B}}=\mathbf{1}_{\Omega}^{T}\bar{\mathbf{B }}^{T}=\mathbf{0}_{\Omega}, \tag{59}\]
independent of the choice of underlying parameters \(b_{i}\). A simple example of a matrix \(\bar{\mathbf{B}}\) that satisfies such conditions is the second order finite difference representation of a first-order derivative: \(B=1\), \(\bar{b}_{-1}=-1/(2H)\), \(\bar{b}_{0}=0\), \(\bar{b}_{1}=1/(2H)\). Our framework allows for more general stencils which are trained based on fine-grid simulations.
These \(\mathbf{B}\) matrices can be used to enforce momentum conservation on the \(\mathbf{\Phi}\) matrices by pre- and post-multiplication. This will be denoted by a superscript, e.g.
\[\mathbf{K}_{12}=\mathbf{\Phi}_{12}^{\bar{\mathbf{B}}\mathbf{B}}=\bar{\mathbf{B}}_ {1}^{\mathbf{\Phi}_{12}}\mathbf{\Phi}_{12}\mathbf{B}_{2}^{\mathbf{\Phi}_{12}} \tag{60}\]
such that \(\mathbf{1}_{\Omega}^{T}\mathbf{K}_{12}=0\) is satisfied. Note that satisfying this condition only requires a \(\bar{(.)}\) over the pre-multiplying \(\mathbf{B}\) matrix. The matrices \(\bar{\mathbf{B}}_{1}^{\mathbf{\Phi}_{12}},\mathbf{B}_{2}^{\mathbf{\Phi}_{12}} \in\mathbb{R}^{I\times I}\) each contain their own unique set of \(2B+1\) underlying parameters. The hyperparameter \(B\) is taken such that \(B\ll I/2\) to enforce sparsity and thus reduce computational costs. Similarly,
\[\mathbf{K}_{11}=\mathbf{\Phi}_{11}^{\bar{\mathbf{B}}\bar{\mathbf{B}}}=\bar{ \mathbf{B}}_{1}^{\mathbf{\Phi}_{11}}\mathbf{\Phi}_{11}\bar{\mathbf{B}}_{2}^{ \mathbf{\Phi}_{11}} \tag{61}\]
such that the constraints \(\mathbf{1}_{\Omega}^{T}\mathbf{K}_{11}=\mathbf{1}_{\Omega}^{T}\mathbf{K}_{11} ^{T}=0\) are met. The additional \(\mathbf{B}\) matrices of \(\mathbf{K}_{11}\) add another set of \(2(2B+1)\) parameters to the framework.
The full matrix \(\mathcal{K}\) follows as
\[\mathcal{K}=\begin{bmatrix}\mathbf{\Phi}_{11}^{\bar{\mathbf{B}}\bar{\mathbf{B}}}&\mathbf{\Phi}_{12}^{\bar{\mathbf{B}}\mathbf{B}}\\ \mathbf{0}&\mathbf{\Phi}_{22}^{\mathbf{B}\mathbf{B}}\end{bmatrix}, \tag{62}\]
where we used a momentum-conserving matrix \(\bar{\mathbf{B}}\) where appropriate. We thus have \(6(2B+1)\) parameters that fully describe the \(\mathbf{B}\) matrices.
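The effect of this transformation can be verified directly: in the periodic case, subtracting the stencil mean as in (57) makes all row and column sums of \(\bar{\mathbf{B}}\) vanish, so the constraints (55) hold for any \(\mathbf{\Phi}\). A minimal sketch with random stand-ins for the trained quantities:

```julia
using LinearAlgebra, Random

# Check of the momentum-conserving transformation (60)-(61) in the periodic case.
# B matrices are circulants built from 2B+1 stencil parameters; the barred version
# subtracts the stencil mean (57). Φ is a random stand-in for the CNN-generated matrix.
Random.seed!(3)
n, Bw = 20, 1                                      # coarse grid size, stencil half-width B

function circulant(b, n, Bw)                       # periodic stencil matrix from coefficients b_j
    M = zeros(n, n)
    for i in 1:n, (col, j) in enumerate(-Bw:Bw)
        M[i, mod1(i + j, n)] += b[col]
    end
    return M
end

b1, b2 = randn(2Bw + 1), randn(2Bw + 1)
Bbar1 = circulant(b1 .- sum(b1) / (2Bw + 1), n, Bw)   # mean-subtracted stencil (57)
Bbar2 = circulant(b2 .- sum(b2) / (2Bw + 1), n, Bw)
B2    = circulant(b2, n, Bw)

Φ   = randn(n, n)
K11 = Bbar1 * Φ * Bbar2                            # as in (61)
K12 = Bbar1 * Φ * B2                               # as in (60)
e   = ones(n)

println(norm(e' * K11)  < 1e-10)                   # 1ᵀ K11  = 0
println(norm(e' * K11') < 1e-10)                   # 1ᵀ K11ᵀ = 0
println(norm(e' * K12)  < 1e-10)                   # 1ᵀ K12  = 0
```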
### Dissipative term \(\mathcal{Q}\)
In a similar fashion as \(\mathcal{K}\) we decompose \(\mathcal{Q}\) as
\[\mathcal{Q}=\begin{bmatrix}\mathbf{Q}_{11}&\mathbf{Q}_{12}\\ \mathbf{Q}_{21}&\mathbf{Q}_{22}\end{bmatrix}. \tag{63}\]
As for the \(\mathcal{K}\) matrix, we do not represent the entire matrix by parameters but instead use the output channels of the CNN to represent the diagonals of the submatrices. However, in this case we only construct the main and \(D\) upper diagonals. The reason for this will be explained later. The diagonals are again represented by CNN output channels \(\boldsymbol{\psi}^{ij}\in\mathbb{R}^{I}\) defining the matrix \(\mathbf{\Psi}_{ij}\in\mathbb{R}^{I\times I}\). The CNN of section 3.3 is thus extended and represents the mapping
\[\text{CNN}:\bar{\mathbf{u}},\mathbf{s},f_{H}(\bar{\mathbf{u}})\mapsto \boldsymbol{\phi}_{d_{1}}^{11},\boldsymbol{\phi}_{d_{1}}^{12},\boldsymbol{\phi }_{d_{1}}^{22},\boldsymbol{\psi}_{d_{2}}^{11},\boldsymbol{\psi}_{d_{2}}^{12}, \boldsymbol{\psi}_{d_{2}}^{21},\boldsymbol{\psi}_{d_{2}}^{22},\qquad d_{1}=-D, \ldots D,\quad d_{2}=0,\ldots D. \tag{64}\]
The underlying CNN now consists of three input channels, an arbitrary number of hidden channels, and \(3(2D+1)+4(D+1)\) output channels.
Again, like in case of \(\mathbf{\Phi}\), a mapping is needed to make the \(\mathbf{\Psi}\) matrices momentum-conserving. Substituting decomposition (63) into the momentum conservation constraint (34) results in
\[-\left(\begin{bmatrix}\mathbf{1}_{\Omega}\\ \mathbf{0}_{\Omega}\end{bmatrix},\mathbf{\Omega}_{2}^{-1}(\mathcal{Q}^{T} \mathcal{Q})\bar{\mathbf{U}}\right)_{\Omega_{2}}=-\mathbf{1}_{\Omega}^{T}( \mathbf{Q}_{11}^{T}\mathbf{Q}_{11}+\mathbf{Q}_{21}^{T}\mathbf{Q}_{21})\bar{ \mathbf{u}}-\mathbf{1}_{\Omega}^{T}(\mathbf{Q}_{11}^{T}\mathbf{Q}_{12}+ \mathbf{Q}_{21}^{T}\mathbf{Q}_{22})\mathbf{s}=0, \tag{65}\]
leading to the constraints
\[\mathbf{1}_{\Omega}^{T}\mathbf{Q}_{11}^{T}=\mathbf{1}_{\Omega}^{T}\mathbf{Q} _{21}^{T}=\mathbf{0}_{\Omega}. \tag{66}\]
The matrix \(\mathcal{Q}\) that satisfies these constraints follows as
\[\mathcal{Q}=\begin{bmatrix}\mathbf{\Psi}_{11}^{\mathbf{I}\bar{\mathbf{B}}}&\mathbf{\Psi}_{12}^{\mathbf{I}\mathbf{B}}\\ \mathbf{\Psi}_{21}^{\mathbf{I}\bar{\mathbf{B}}}&\mathbf{\Psi}_{22}^{\mathbf{I}\mathbf{B}}\end{bmatrix}, \tag{67}\]
where we used a momentum-conserving matrix \(\bar{\mathbf{B}}\) where appropriate and replaced the pre-multiplying \(\mathbf{B}\) matrix by the identity matrix. The latter, in addition to only constructing the main and upper diagonals of the \(\mathbf{\Psi}\) matrices, ensures that the sparsity pattern of \(\mathcal{Q}^{T}\mathcal{Q}\) matches that of \(\mathcal{K}-\mathcal{K}^{T}\). With the addition of this dissipative term all the \(\mathbf{B}\) matrices combined contain in total \(10(2B+1)\) parameters that are to be trained.
Figure 5: Example of a simulation of Burgersβ equation with periodic BCs using our trained structure-preserving closure model for \(I=20\) (left), along with the DNS solution for \(N=1000\) (right).
An example application of the framework is shown in Figure 5, where we simulate Burgers' equation using our structure-preserving closure modeling framework and compare it to a direct numerical simulation (DNS). It is again interesting to see that \(\mathbf{s}\) is largest at the shocks, indicating the presence of significant SGS content there. When comparing the magnitude of the different terms in (35) (see Figure 6), we observe that the \(\mathcal{K}\) term, that is responsible for redistributing the energy, is most important, and in fact more important than the coarse-grid discretization operator \(f_{H}(\bar{\mathbf{u}})\). In other words, our closure model has learned dynamics that are highly significant to correctly predict the evolution of the filtered system.
### Forcing
Our proposed closure modeling framework allows for the presence of a forcing term \(\mathrm{F}_{i}(t)\approx F(\mathbf{x}_{i},t)\) in the RHS of our discretized PDE (3), with \(\mathbf{F}\in\mathbb{R}^{N}\). As long as this term does not depend on the solution \(\mathbf{u}\) the forcing commutes with \(\mathbf{W}\). This means we can simply add \(\bar{\mathbf{F}}=\mathbf{W}\mathbf{F}\) to the RHS of (23) without any contribution to the closure term. In addition, we can account for its contribution to the evolution of \(\mathbf{s}\) by first computing its contribution \(\mathbf{F}^{\prime}\) to the evolution of the SGS content (see (24)) as
\[\mathbf{F}^{\prime}:=\mathbf{F}-\mathbf{R}\bar{\mathbf{F}}. \tag{68}\]
The contribution to the evolution \(\mathbf{s}\) is then given by \(\mathbf{T}\mathbf{F}^{\prime}\), see (48).
The full closure modeling framework is thus summarized by
\[\frac{\mathrm{d}\bar{\mathbf{U}}}{\mathrm{d}t}=\mathcal{G}_{\mathbf{\Theta}}( \bar{\mathbf{U}}):=\begin{bmatrix}f_{H}(\bar{\mathbf{u}})\\ \mathbf{0}\end{bmatrix}+\mathbf{\Omega}_{2}^{-1}(\mathcal{K}-\mathcal{K}^{T} )\bar{\mathbf{U}}-\mathbf{\Omega}_{2}^{-1}\mathcal{Q}^{T}\mathcal{Q}\bar{ \mathbf{U}}+\mathbf{W}_{\mathbf{T}}\mathbf{F}, \tag{69}\]
depending on parameters \(\mathbf{\Theta}\). Note that we separated the forcing from \(f_{H}\) (the RHS of the coarse discretization). In the results section we use a forcing term in some of the Burgers' equation simulations.
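The extended system (69) is marched forward in time with an explicit scheme (RK4 in the experiments of section 4). A minimal sketch of such a stepper, applied to a skew-symmetric stand-in for the conservative closure, so that near-conservation of the approximated total energy, cf. (43), can be observed up to the time-discretization error:

```julia
using LinearAlgebra, Random

# Generic explicit RK4 step for dŪ/dt = G(Ū), together with a short integration
# of a skew-symmetric stand-in RHS: the approximated total energy is then
# conserved up to the time-discretization error.
function rk4_step(G, U, dt)
    k1 = G(U)
    k2 = G(U .+ dt / 2 .* k1)
    k3 = G(U .+ dt / 2 .* k2)
    k4 = G(U .+ dt .* k3)
    return U .+ dt / 6 .* (k1 .+ 2 .* k2 .+ 2 .* k3 .+ k4)
end

function integrate(G, U0, dt, nsteps)
    U = copy(U0)
    for _ in 1:nsteps
        U = rk4_step(G, U, dt)
    end
    return U
end

Random.seed!(4)
Ic = 20; H = 2π / Ic; n = 2Ic
A = randn(n, n); K = A - A'               # skew-symmetric stand-in for K - Kᵀ
rhs(U) = (K * U) ./ H                     # Ω₂⁻¹ (K - Kᵀ) Ū with Ω₂ = H*I

U0 = randn(n)
U1 = integrate(rhs, U0, 1e-3, 1000)
energy(U) = 0.5 * H * dot(U, U)
println("relative energy drift: ", abs(energy(U1) - energy(U0)) / energy(U0))
```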
### Finding the optimal parameter values
The optimal parameter values \(\mathbf{\Theta}^{*}\), where \(\mathbf{\Theta}\) includes the weights of the CNN along with the parameters of the \(\mathbf{B}\) matrices, can be obtained numerically by minimizing
\[\mathcal{L}(\mathcal{D};\mathbf{\Theta}):=\frac{1}{|\mathcal{D}|}\sum_{d\in \mathcal{D}}\frac{1}{2|\Omega|}||\mathcal{G}_{\mathbf{\Theta}}(\mathbf{W}_{ \mathbf{T}}\mathbf{u}_{d})-\mathbf{W}_{\mathbf{T}}f_{h}(\mathbf{u}_{d})||_{ \Omega_{2}}^{2} \tag{70}\]
Figure 6: Magnitude of each of the different terms present in (35) corresponding to the simulation in Figure 5.
with respect to \(\mathbf{\Theta}\) for the training set \(\mathcal{D}\) containing \(|\mathcal{D}|\) samples. We will refer to this approach as 'derivative fitting', as we minimize the residual between the predicted and the true RHS. In (70) the true RHS is obtained by applying \(\mathbf{W_{T}}\) to the fine-grid RHS \(f_{h}(\mathbf{u}_{d})\). The subscript \(d\) indicates a sample from the training set.
We will combine this method with a different approach in which we directly optimize \(\mathbf{\Theta}\) such that the solution itself is accurately reproduced. To achieve this we minimize
\[\mathcal{L}_{n}(\mathcal{D};\mathbf{\Theta}):=\frac{1}{|\mathcal{D}|}\sum_{d \in\mathcal{D}}\frac{1}{n}\sum_{i=1}^{n}\frac{1}{2|\Omega|}||\mathcal{S}^{i}_{ \mathbf{\Theta}}(\mathbf{W_{T}}\mathbf{u}_{d})-\mathbf{W_{T}}\mathcal{S}^{i( \overline{\Delta t}/\Delta t)}(\mathbf{u}_{d})||^{2}_{\Omega_{2}}, \tag{71}\]
where \(\mathcal{S}^{i}_{\mathbf{\Theta}}(\mathbf{W_{T}}\mathbf{u}_{d})\) represents the successive application of an explicit time integration scheme for \(i\) time steps, with step size \(\overline{\Delta t}\), starting from initial condition \(\mathbf{W_{T}}\mathbf{u}_{d}\), using the introduced closure model. The fine-grid counterpart is indicated by \(\mathcal{S}^{i(\overline{\Delta t}/\Delta t)}(\mathbf{u}_{d})\), with step size \(\Delta t\), starting from initial condition \(\mathbf{u}_{d}\). Note the appearance of the ratio \(\overline{\Delta t}/\Delta t\), as the coarser grid for \(\bar{\mathbf{u}}\) allows us to take larger time steps [34]. This further reduces the required computational resources. We will refer to this method of finding the optimal parameters as 'trajectory fitting'. This approach has been shown to yield more accurate and stable closure models [14; 15; 21; 22; 23], as it also accounts for the time discretization error.
In practice, we employ a hybrid approach in which we first use derivative fitting and subsequently continue with trajectory fitting, as the latter requires more computational effort.
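A minimal sketch of the derivative-fitting objective (70) is given below; the closure right-hand side, the combined operator \(\mathbf{W_{T}}\), the fine-grid right-hand side \(f_{h}\) and the snapshot set are generic placeholders, and trajectory fitting (71) simply replaces the right-hand-side mismatch by a mismatch of time-stepped solutions.

```julia
# Sketch of the derivative-fitting loss (70) for a uniform coarse grid
# (||v||²_{Ω₂} = H * Σ v²). `Gmodel`, `WT`, `fh` and `snapshots` are placeholders
# for the closure RHS, the filter/compression matrix (47), the fine-grid RHS,
# and a set of fine-grid training states.
function derivative_loss(Gmodel, WT, fh, snapshots, H, Ωsize)
    total = 0.0
    for u in snapshots
        r = Gmodel(WT * u) .- WT * fh(u)       # RHS residual on the compressed state
        total += 0.5 / Ωsize * H * sum(abs2, r)
    end
    return total / length(snapshots)
end
```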
## 4 Results
To test our closure modeling framework we consider the previously introduced Burgers' equation with \(\nu=0.01\) on the spatial domain \(\Omega=[0,2\pi]\) for two test cases: (i) periodic BCs without forcing and (ii) inflow/outflow (I/O) BCs with time-independent forcing. The implementation of BCs is discussed in Appendix C. We also consider a third test case: (iii) the KdV equation with \(\varepsilon=6\) and \(\mu=1\) on the spatial domain \(\Omega=[0,32]\) for periodic BCs. Parameter values for Burgers' and KdV are taken from [35]. Reference simulations are carried out on a uniform grid of \(N=1000\) for Burgers' and \(N=600\) for KdV up to time \(t=T=10\). The data that is generated from these reference simulations is split into a training set and a validation set. The simulation conditions (initial conditions, BCs, and forcing) for training and testing purposes are generated randomly, as described in Appendix D. In addition to this, the construction of a training and validation set, the training procedure, and the chosen hyperparameters are also described in Appendix D.
For the analysis, we will compare our structure-preserving framework (SP) to a vanilla CNN that models the closure term as \(\mathbf{c}(\mathbf{u})\approx\bar{\mathbf{Q}}\text{CNN}(\bar{\mathbf{u}},f_{ H}(\bar{\mathbf{u}});\boldsymbol{\theta})\) (with parameters \(\boldsymbol{\theta}\)). Multiplication of the CNN output channel by the coarse-grid forward difference operator \(\bar{\mathbf{Q}}\) takes care of the momentum conservation condition (this has been shown to yield more accurate closure models [26]). The same trick is not applied for our SP closure, as it would destroy the derived evolution of the (approximated) total energy, see (42) and (43). Instead we resort to the described pre- and post-multiplication by the parameterized \(\mathbf{B}\) matrices to satisfy momentum conservation. Furthermore, we consider the no closure (NC) case, i.e. \(\bar{\mathbf{c}}=\mathbf{0}_{\Omega}\), which corresponds to a coarse-grid solution of the PDEs. To make a fair comparison we compare closure models with the same number of degrees of freedom (DOF). For SP we have \(\text{DOF}=2I\), as we obtain an additional set of \(I\) degrees of freedom corresponding to the addition of the SGS variables. For the CNN and NC we simply have \(\text{DOF}=I\).
To march the solution forward in time we employ an explicit RK4 scheme [30] with \(\overline{\Delta t}=0.01\) (\(4\times\) larger than the DNS) for use cases (i) and (ii) and \(\overline{\Delta t}=5\times 10^{-3}\) (\(50\times\) larger than the DNS) for use case (iii). The SP closure models contain in total 7607 parameters (two hidden layers of 30 channels each and a kernel size of 5 for the underlying CNN) for use cases (i) and (ii) and 3905 parameters (two hidden layers of 20 channels each and a kernel size of 5) for use case (iii). The purely CNN-based closure models consist of 3261 parameters (two hidden layers of 20 channels each and a kernel size of 7) for every use case. These settings are based on the hyperparameter tuning procedure in Appendix D. Between hidden layers we employ the ReLU activation function, whereas we apply a linear activation function to the final
layer for both SP and the vanilla CNN. For SP we choose \(D=B=1\) for the construction of the \(\mathbf{B}\) and \(\mathbf{\Phi}/\mathbf{\Psi}\) matrices for use cases (i) and (ii) matching the width of the coarse discretization \(f_{H}(\bar{\mathbf{u}})\). For (iii) we do the same and therefore take \(D=B=2\). Note that the same set of compression matrices and closure models are used for (i) and (ii), as they both correspond to the same equation. These closure models are thus trained on a dataset containing both simulation conditions. As stated earlier, the model parameters are optimized by first derivative fitting and then trajectory fitting. This is specified in Appendix D. We implement our closure models in the Julia programming language [36] using the Flux.jl package [37; 38]. The code can be found at [https://github.com/tobyvg/ECNCM_1D](https://github.com/tobyvg/ECNCM_1D).
### Closure model performance
We first examine the performance of the trained closure models based on how well the filtered DNS solution is reproduced for cases (i)-(iii) and unseen simulation conditions. During our comparison we will make extensive use of the normalized root-mean-squared error (NRMSE) metric, defined as
\[\text{NRMSE }\bar{\mathbf{u}}(t)=\sqrt{\frac{1}{|\Omega|}||\bar{\mathbf{u}}(t)- \bar{\mathbf{u}}^{\text{DNS}}(t)||_{\Omega}^{2}}, \tag{72}\]
to compare the approximated solution \(\bar{\mathbf{u}}\) at time \(t\), living on the coarse grid, to the ground truth \(\bar{\mathbf{u}}^{\text{DNS}}\) obtained from the DNS. We will refer to this metric as the solution error. In addition, we define the integrated-NRMSE (I-NRMSE) as
\[\text{I-NRMSE }\bar{\mathbf{u}}(t)=\frac{1}{t}\sum_{i}\overline{\Delta t} \text{ NMRSE }\bar{\mathbf{u}}(i\overline{\Delta t}),\qquad 0\leq i\overline{\Delta t}\leq t, \tag{73}\]
such that the sum represents integrating the solution error in time. We will refer to this metric as the integrated solution error.
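For a uniform coarse grid these metrics reduce to a plain root-mean-square difference and its running time average; a minimal sketch:

```julia
# Error metrics (72)-(73) for a uniform coarse grid, where (1/|Ω|)||v||²_Ω reduces
# to the mean of the squared entries.
nrmse(ubar, ubar_dns) = sqrt(sum(abs2, ubar .- ubar_dns) / length(ubar))

# integrated solution error: time average of the NRMSE over snapshots stored at
# times i*Δt̄, i = 1, ..., t/Δt̄
i_nrmse(ubars, udnss, Δt, t) = sum(Δt * nrmse(u, ud) for (u, ud) in zip(ubars, udnss)) / t
```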
#### 4.1.1 Convergence
As we refine the resolution of the coarse grid, and with this increase the number of DOF, we expect convergence of both the compression error \(\mathcal{L}_{s}\) (defined in equation (49)) and the solution error. We consider \(\text{DOF}\in\{20,30,40,50,60,70,80,90,100\}\), each with a different set of trained closure models. If the fine-grid resolution \(N\) is not divisible by the coarse-grid resolution \(I\) we first project the fine-grid solution on a grid with a resolution that is divisible by \(I\) to generate reference data. This is necessary for constructing the spatial averaging filter (see section 2.3). In total 36 closure models are trained: two (SP and CNN) for each combination of the 9 considered coarse-grid resolutions and equation (Burgers' or KdV). Closure models corresponding to Burgers' equation are applied to both use case (i) periodic and (ii) I/O conditions.
The SGS compression error evaluated over the validation set is shown in Figure 7. We observe monotonic convergence of the compression error as we refine the grid. We expect the compression error to further converge to zero until the exact solution is reached at \(\text{DOF}=N\) (\(J=2\)), see Appendix E. The faster convergence for the KdV equation is likely caused by the lower fine-grid resolution of \(N=600\), as opposed to \(N=1000\) for Burgers' equation.
Next, we look at the integrated solution error averaged over 20 simulations with unseen simulation conditions, generated as described in Appendix D, for each of the considered numbers of DOF, see Figure 8. For test cases (i) and (ii) we observe, for both SP and NC, almost monotonic convergence of the solution error as we increase the number of DOF in the simulation, with SP improving upon NC with roughly one order of magnitude. On the other hand, the solution error for the CNN behaves quite erratically: sometimes more accurate than SP, sometimes unstable (e.g. in case (ii) and DOF = 80, all 20 simulations were unstable), and sometimes less accurate than NC (case (i), \(\text{DOF}=90\)).
For test case (iii) we find that for most numbers of DOF the CNN outperforms SP, while not resulting in stable closure models for \(\text{DOF}\in\{90,100\}\). Overall, the conclusion is that our proposed SP closure model leads to much more robust simulations while being on par in terms of accuracy with a CNN closure model. Furthermore, for the lower numbers of DOF we observe similar performance for SP and the CNN. From this we conclude that the compression error (see Figure 7) is likely not the limiting factor of the closure model performance.
#### 4.1.2 Consistency of the training procedure
It is important to note that the closure models trained in the previous section possess a degree of randomness, caused by the (random) initialization of the network weights and the random selection of the mini-batches. This can possibly lead to the irregular convergence behavior shown in the previous section. In order to evaluate this effect, we train 10 separate replica models for \(\text{DOF}=60\), which only differ in the random seed.
The trained models are evaluated in terms of stability (number of unstable simulations) and integrated solution error. A simulation is considered unstable when it produces NaN values for \(\bar{\mathbf{u}}(t)\) (\(t\leq T\)). In total 20 simulations per closure model are carried out using the same simulation conditions as in the convergence study. The results are depicted in Figure 9. With regards to stability we observe that all trained SP closure models produced exclusively stable simulations. This is in accordance with the earlier derived stability conditions (42) and (43) for the periodic cases. In addition, for the non-periodic test case (ii) we also observe a clear stability advantage, as all of the trained SP closure models still produced only stable simulations with a consistently low integrated solution error.
Regarding this integrated solution error, we observe that the SP closure models all perform very consistently
Figure 8: Integrated solution error evaluated at \(T=10\) averaged over 20 simulations for the different use cases (i)-(iii) and an increasing number of \(\text{DOF}\). Only stable simulations are considered for the depicted averages. Absence of a scatter point indicates no stable simulations.
Figure 7: Convergence of the SGS compression error when refining the coarse grid, evaluated on the validation set for Burgersβ equation (\(N=1000\)) and KdV equation (\(N=600\)).
(errors are almost overlapping). The CNNs sometimes outperform SP for test cases (i) and (iii), but also show very large outliers. This confirms our conclusion of the previous section that our SP closure models are much more robust than the CNNs, which can be 'hit or miss' depending on the randomness in the training procedure.
#### 4.1.3 Error behavior in time
To further exemplify how structure preservation aids in robustness and accuracy we consider a single simulation of Burgers' equation with periodic BCs. We choose DOF = 90 (the value for which the CNN closure model performed poorly during the convergence study) and randomly select one of the simulations from the convergence study for the analysis. The resulting solution error trajectory and energy trajectories for this simulation are displayed in Figure 10. We find that the resolved energy for the CNN starts showing erratic behavior around the time the solution error surpasses the one of NC. Around \(t=4\) the resolved energy even increases drastically. The other three methods show no increase in energy. This is in accordance with the derived evolution of the energy: equation (11) for NC and the DNS, and equation (42) for SP. From this we conclude that there is a clear stability and accuracy benefit to adhering to physical structure, as compared to using a standard CNN.
Figure 10: Solution error (left) and resolved energy (right) trajectories for a simulation of Burgersβ equation with periodic BCs starting from an unseen initial condition. The presented results correspond to DOF = 90. For SP and the DNS the (approximated) total energy is displayed, as the SGS energy is small. These trajectories overlap for the entirety of the simulation.
Figure 9: Integrated solution error evaluated at \(T=10\) averaged over 20 simulations and % of unstable simulations for each closure model in the trained ensemble of closure models (DOF = 60). Use cases (i)-(iii) are considered. For (ii) two CNN closure models produced 100% unstable simulations and are therefore omitted from the graph.
### Structure preservation
To analyze how well the SP closure models adhere to physical structure we consider a single simulation of Burgers' and KdV with periodic BCs, i.e. use case (i) and (iii), and unseen simulation conditions. For the purpose of this analysis we stick to closure models corresponding to \(\text{DOF}=40\).
#### 4.2.1 Burgers' equation
For Burgers' equation the results are depicted in Figure 11. With regards to momentum conservation we find that each of the considered closures preserves momentum within machine precision. NC and the DNS achieve this through a structure-preserving discretization, the CNN achieves this through the multiplication by the forward difference operator \(\bar{\mathbf{Q}}\), and the SP model through the construction of \(\mathcal{K}\) and \(\mathcal{Q}\).
With regards to the energy, both the resolved energy \(\bar{E}_{h}\) as well as the (approximated) total energy \(E_{s}/E_{h}\) are considered. The first observation is that the energy of NC is strictly decreasing but remains at a too high level as compared to the DNS, which is consistent with our analysis in Appendix B. For SP the approximated total energy is also always decreasing, as derived in (42), thus successfully mimicking the property that the total energy should be decreasing for viscous flows and periodic BCs, in the absence of forcing. Furthermore, when looking only at the resolved energy we find that SP nicely captures the back and forth energy transfer between the resolved and SGS energy, similar to the DNS result. This means that it successfully allows for backscatter, without sacrificing stability. The CNN is omitted from this analysis, as earlier we observed that it is not strictly dissipative, see Figure 10.
#### 4.2.2 Korteweg-de Vries equation
Next, we study the KdV equation. With regards to momentum we observe that it is again conserved up to machine precision for each of the closures, see Figure 12. However, in contrast to Burgers' equation with viscosity, the total energy should now be exactly conserved. We mimic this by not including the dissipative \(\mathcal{Q}\) term in the SP closure model. We find that the approximated total energy is indeed conserved up to a time integration error, due to the use of an explicit RK4 integration scheme [30] instead of a structure-preserving time integration method such as implicit midpoint. This is done as implicit time integration schemes are incompatible with trajectory fitting. The energy error decreases with \(\mathcal{O}(\Delta t^{4})\) when the time step is decreased and is at machine precision for \(\overline{\Delta t}=10^{-4}\).
Based on the results for Burgers' and KdV equation, we conclude that our proposed SP closure model successfully achieves stability by mimicking the energy conservation law of the full system, while still allowing for backscatter to be modelled correctly.
Figure 11: Change in momentum \(\Delta_{t}P_{h}=P_{h}(t)-P_{h}(0)\) (left) and evolution of resolved and total energy (right) for a simulation of Burgersβ equation with periodic BCs starting from an unseen initial condition. The presented results correspond to \(\text{DOF}=40\).
### Extrapolation in space and time
As a final experiment we evaluate how well the closure models are capable of extrapolating in space and time. We consider the KdV equation on an extended spatial domain \(\Omega=[0,96]\), which is three times the size of the domain in the training data, and run the simulation until \(T=50\) (five times longer than present in the training data). As closure models, we use the ones trained during the convergence study that correspond to the grid-spacing of the employed grid. The resulting DNS (\(N=3\times 600\)), and absolute error (AE) for the NC, CNN, and SP simulations (\(\mathrm{DOF}=3\times 40\)) are shown in Figure 13. We observe that SP and the CNN both improve upon NC in the earlier stages of the simulation (\(t\leq 20\)), but less so for longer time spans. However, since the absolute error is sensitive to small translations in the solution (as observed in the later stages of the simulation), we require a more thorough analysis to further compare the two machine learning-based closure models.
For this purpose we first look at the trajectory of the resolved energy. This is presented in Figure 14. We find that for SP the resolved energy (in black) stays in close proximity to its corresponding filtered DNS simulation (in green). This is in contrast to the CNN (in red) which starts to diverge from the DNS (in brown) around \(t=5\). The resolved energy for the CNN also exceeds the maximum allowed total energy \(E_{h}\) (in orange) at different points in the simulation, which is unphysical. We thus conclude that adding the SGS variables and conserving the total energy helps with capturing the delicate energy balance between resolved and SGS energy that characterizes the DNS. It is also interesting to note that NC conserves the resolved energy, as the coarse discretization conserves the discrete energy. However, this is not desired, as the resolved energy is not a conserved quantity, see Figure 3.
To make a more quantitative analysis of this phenomenon we investigate the trajectory of the solution error and the Gaussian kernel density estimate (KDE) [39] of the resolved energy distribution, for both the CNN and SP. The latter analysis is carried out to assess whether the closure models capture the correct energy balance between the resolved and SGS energy. The results for \(\mathrm{DOF}\in\{40,60,80\}\) are depicted in Figure 15. Looking at the solution error trajectories we find that at the earlier stages of the simulation the CNN outperforms SP (for \(\mathrm{DOF}=60\) and \(\mathrm{DOF}=80\)). However, SP slowly overtakes the CNN past the training region (\(t\leq 10\)). For \(\mathrm{DOF}=40\), SP outperforms the CNN roughly throughout the entire simulation. With regards to the resolved energy distribution we find that for each of the considered numbers of \(\mathrm{DOF}\) SP is capable of reproducing the DNS distribution. On the other hand, the CNN closure models struggle to capture this distribution. For \(\mathrm{DOF}=40\) a significant part of the distribution even exceeds the total energy present in the DNS, i.e. a nonphysical influx of energy occurs.
From this we conclude that both the SP and CNN closure models are capable of extrapolating beyond
Figure 12: Change in momentum \(\Delta_{t}P_{h}=P_{h}(t)-P_{h}(0)\) (left) and change in (approximated) total energy \(\Delta_{t}E_{s/h}=E_{s/h}(t)-E_{s/h}(0)\) (right) for a simulation of KdV equation with periodic BCs starting from an unseen initial condition. The presented results correspond to \(\mathrm{DOF}=40\).
the training data. However, the fact that SP is capable of correctly capturing the energy balance between the resolved and unresolved scales allows it to more accurately capture the statistics of the DNS results. This in turn leads to more robust long-term solution error behavior.
## 5 Conclusion
In this paper we proposed a novel way of constructing machine learning-based closure models in a structure-preserving fashion by taking the 'discretize first and filter next' approach. We started off by applying a spatial averaging filter to a fine-grid discretization and writing the resulting filtered system in closure model form, where the closure term requires modeling. Next, we showed that by applying the filter we effectively remove part of the energy. We then introduced a linear compression of the subgrid-scale (SGS) content into a set of SGS variables living on the coarse grid. These SGS variables serve as a means of reintroducing the removed energy back into the system, allowing us to use the concept of kinetic energy conservation. In turn we introduced an extended system of equations that models the evolution of the filtered solution as well as the evolution of the compressed SGS variables. For this extended system we propose a structure-preserving closure modeling framework that allows for energy exchange between the filtered solution and the SGS variables, in addition to dissipation. This framework serves to constrain the underlying convolutional neural network (CNN) such that no additional energy enters the system for periodic boundary conditions (BCs). In this way we achieve stability by abiding by the underlying energy conservation law, while still allowing for backscatter through the energy present in the SGS variables. The framework is constructed such that momentum conservation is also satisfied.
Figure 13: Absolute errors for the simulations produced by the NC, CNN, and SP closures, as well as the DNS solution, for solving the KdV equation on an extended spatial \(\Omega=[0,96]\) and temporal domain \(t=[0,50]\). The grid resolutions correspond to \(\text{DOF}=3\times 40\) for the closure models and \(N=3\times 600\) for the DNS. The area enclosed within the dashed lines indicates the size of the domain used for training.
A convergence study showed that the learned SGS variables are able to accurately match the original SGS energy content, with accuracy consistently improving when refining the coarse-grid resolution.
Given the SGS compression operator, our proposed structure-preserving framework (SP) was compared to a vanilla CNN (adapted to be momentum-conserving). Overall, the SP method performed on par with the CNN in terms of accuracy, _provided that the CNN produced stable results_. However, the results for the CNN were typically inconsistent, not showing clear convergence of the integrated solution error upon increasing the degrees of freedom, in addition to suffering from stability issues. On the other hand, our SP method produced stable results in all cases, while also consistently improving upon the 'no closure model' results by roughly an order of magnitude in terms of the integrated solution error.
This conclusion was further strengthened by training an ensemble of closure models, where we investigated the consistency of the closure model performance with respect to the randomness inherent in the neural network training procedure. We observed that the trained vanilla CNNs differed significantly in performance and stability, whereas the different SP models performed very similarly to each other and displayed no stability issues. Our SP model is therefore more robust and successfully resolves the stability issues that plague conventional CNNs.
Our numerical experiments confirmed the structure-preserving properties of our method: exact momentum conservation, energy conservation (in the absence of dissipation) up to a time discretization error, and strict energy decrease in the presence of dissipation. We also showed that our method succeeds in accurately modeling backscatter. Furthermore, when extrapolating in space and time, the advantage of including the SGS variables and embedding structure-preserving properties became even more apparent: our method is much better at capturing the delicate energy balance between the resolved and SGS energy. This in turn yielded better long-term error behavior.
Based on these results we conclude that including the SGS variables, as well as adherence to the underlying energy conservation law, has the important advantages of stability and long-term accuracy, in addition to consistent performance. This work therefore serves as an important starting point for building physical constraints into machine learning-based turbulence closure models. In the future we aim to apply our SP framework to the Navier-Stokes equations in 2D and 3D, locally modeling the turbulent kinetic energy by a set of SGS variables. More generally, our framework is potentially applicable to a wide range of systems that possess multiscale behavior while also possessing a secondary conservation law, for example incompressible
Figure 14: Trajectory of the resolved energy \(\bar{E}_{h}\) for the simulation presented in Figure 13 for each of the different models corresponding to DOF = 40. The DNS resolved energy is depicted for both \(I=\) DOF (to compare with the CNN) and \(I=\) DOF/2 (to compare with SP).
pipe flow [40] and the streamfunction-vorticity formulation of Navier-Stokes in 2D [41].
**CRediT authorship contribution**
**T. van Gastelen:** Conceptualization, Methodology, Software, Writing - original draft. **W. Edeling:** Writing - review & editing. **B. Sanderse:** Conceptualization, Methodology, Writing - review & editing, Funding acquisition.
**Data availability**
The code used to generate the training data and the implementation of the neural networks can be found at [https://github.com/tobyvg/ECNCM_1D](https://github.com/tobyvg/ECNCM_1D).
**Acknowledgements**
This publication is part of the project "Unraveling Neural Networks with Structure-Preserving Computing" (with project number OCENW.GROOT.2019.044 of the research programme NWO XL which is financed by the Dutch Research Council (NWO)). In addition, part of this publication is funded by Eindhoven University of Technology.
Figure 15: Solution error trajectory (top) and KDEs estimating the distribution of \(\bar{E}_{h}\) (bottom) for the trained closure models corresponding to different numbers of DOF. These quantities are computed for a simulation of the KdV equation with the same initial condition on the extended spatial and temporal domain. In the top row the vertical black line indicates the maximum time present in the training data, while in the bottom row it indicates the total energy of the DNS (which should not be exceeded). The DNS resolved energy is again depicted for both \(I=\) DOF (to compare with the CNN) and \(I=\) DOF/2 (to compare with SP). |
2309.00648 | Extragradient method with feasible inexact projection to variational
inequality problem | The variational inequality problem in finite-dimensional Euclidean space is
addressed in this paper, and two inexact variants of the extragradient method
are proposed to solve it. Instead of computing exact projections on the
constraint set, as in previous versions of the extragradient method, the proposed
methods compute feasible inexact projections on the constraint set using a
relative error criterion. The first version of the proposed method provided is
a counterpart to the classic form of the extragradient method with constant
steps. In order to establish its convergence we need to assume that the
operator is pseudo-monotone and Lipschitz continuous, as in the standard
approach. For the second version, instead of a fixed step size, the method
presented finds a suitable step size in each iteration by performing a line
search. Like the classical extragradient method, the proposed method does just
two projections into the feasible set in each iteration. A full convergence
analysis is provided, with no Lipschitz continuity assumption of the operator
defining the variational inequality problem. | R. Díaz Millán, O. P. Ferreira, J. Ugon | 2023-08-31T08:01:28Z | http://arxiv.org/abs/2309.00648v2 | # Extragradient method with feasible inexact projection to variational inequality problem
###### Abstract
The variational inequality problem in finite-dimensional Euclidean space is addressed in this paper, and two inexact variants of the extragradient method are proposed to solve it. Instead of computing exact projections on the constraint set, as in previous versions of the extragradient method, the proposed methods compute feasible inexact projections on the constraint set using a relative error criterion. The first version of the proposed method is a counterpart to the classic form of the extragradient method with constant steps. In order to establish its convergence we need to assume that the operator is pseudo-monotone and Lipschitz continuous, as in the standard approach. For the second version, instead of a fixed step size, the method finds a suitable step size in each iteration by performing a line search. Like the classical extragradient method, the proposed method performs just two projections onto the feasible set in each iteration. A full convergence analysis is provided, with no Lipschitz continuity assumption on the operator defining the variational inequality problem.
**keywords:** Variational inequality problem, Extragradient method, Frank-Wolfe algorithm, conditional gradient method, feasible inexact projection. **MSC 2020:** 65K05, 90C30, 90C25
## 1 Introduction
This paper addresses the variational inequality problem in finite-dimensional Euclidean space. This problem is formally stated as follows: Let \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) be an operator and \(\mathcal{C}\subset\mathbb{R}^{n}\) be a nonempty and closed convex set. The variational
inequality problem (\(\mathrm{VIP}(F,\mathcal{C})\)) associated with \(F\) and \(\mathcal{C}\) consists in finding \(x^{*}\in\mathcal{C}\) such that
\[\langle F(x^{*}),x-x^{*}\rangle\geq 0,\qquad\forall\ x\in\mathcal{C}. \tag{1}\]
We denote by \(\mathcal{C}^{*}\) the _solution set_ of problem 1, which we will assume to be nonempty. The variational inequality problem has attracted the interest of the mathematical programming community not only in its own right but also because it is an abstract model for several families of problems in nonlinear analysis and its applications. For instance, if \(F=\nabla f\), where \(f:\mathbb{R}^{n}\to\mathbb{R}\) is a differentiable function, then \(\mathrm{VIP}(F,\mathcal{C})\) corresponds to the problem of minimizing the function \(f\) over the set \(\mathcal{C}\). When \(\mathcal{C}\) is a cone \(\mathcal{K}\), the \(\mathrm{VIP}(F,\mathcal{C})\) is a complementarity problem, which is stated in the following form: Compute \(x^{*}\in\mathbb{R}^{n}\) such that \(x^{*}\in\mathcal{K}\), \(F(x^{*})\in\mathcal{K}^{*}\) and \(\langle F(x^{*}),x^{*}\rangle=0\), where \(\mathcal{K}^{*}\) denotes the dual cone of \(\mathcal{K}\). For a comprehensive study of the theory and applications of variational inequalities, see [17, 16].
The extragradient method was proposed in [24] in the 1970s and continues to attract the interest of variational inequality experts; see [10, 25, 29, 3] and the references therein. The method is attractive because it requires only two operator evaluations per iteration, making it numerically stable and hence potentially suited for addressing large-scale problems. Apart from the projections required in its definition, which account for essentially all of the computational effort whenever projecting onto the constraint set is difficult, it is a relatively simple method. In addition, the method converges under mild assumptions. All these features motivate its study, and many different versions of the method have been proposed throughout the years, culminating in a large body of literature on the subject, including [16, 17, 12, 31, 7, 9] and the references therein.
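For reference, a minimal sketch of the classical extragradient iteration with exact projections and a constant step size is given below; the operator and the box constraint are illustrative stand-ins, and the methods proposed in this paper replace the exact projection by a feasible inexact one.

```julia
using LinearAlgebra

# Classical extragradient iteration (constant step size, exact projections).
# Here C is a box, so the exact projection is a componentwise clamp; F is an
# affine monotone operator chosen only for illustration.
proj_box(x, lo, hi) = clamp.(x, lo, hi)

function extragradient(F, proj, x0; α = 0.1, iters = 1000)
    x = copy(x0)
    for _ in 1:iters
        y = proj(x .- α .* F(x))   # predictor step
        x = proj(x .- α .* F(y))   # corrector step
    end
    return x
end

F(x) = [4.0 1.0; -1.0 2.0] * x .+ [-1.0, -1.0]
xsol = extragradient(F, z -> proj_box(z, 0.0, 1.0), [0.5, 0.5])
println(xsol)    # ≈ [1/9, 5/9], the zero of F, which lies inside the box
```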
Another issue that has motivated the development of numerical methods for constrained problems is the computation of the projection, the step that accounts for practically all of the computational demands of projection-based methods such as the extragradient method. In general, computing the projection requires solving a quadratic problem constrained to the feasible set at each iteration, which can significantly raise the cost per iteration if the number of unknowns is large. In light of this, it may not be reasonable to compute exact projections when the iterates of the method are distant from the solution of the problem under consideration. Through the years, various inexact procedures that become increasingly accurate as the solution is approached have been proposed in an effort to reduce the computational cost of the projections, leading to more effective projection-based methods; see for example [5, 6, 19, 20, 27, 30, 26, 15, 14].
The purpose of this paper is to present two variants of the extragradient method which employ the use of feasible inexact projections. In the variants of the extragradient method that we propose, we will employ a version of the scheme proposed in [30, Example 1] in which the inexact projection over the feasible set is calculated allowing an appropriate relative error tolerance. Firstly, we present a variation of the extragradient method with constant stepsize and
show that it preserves the same convergence result as the classic method, see [17, 24]. We show that if \(F\) is a pseudo monotone operator on \(\mathcal{C}\) with respect to \(\mathcal{C}^{*}\) and Lipschitz continuous, the generated sequence converges to a solution of \(\mathrm{VIP}(F,\mathcal{C})\). It is important to note that in this version the Lipschitz constant is required to compute the stepsize. Considering that the Lipschitz constant is not available or is difficult to compute in almost every application, we also propose and analyse a feasible inexact projection version of the extragradient method using an Armijo-type line search. It is worth noting that, like the classical extragradient method, this method performs just two projections onto the feasible set in each iteration. The full convergence of the sequence to a solution is shown, with \(F\) being a pseudo monotone operator on \(\mathcal{C}\) with respect to \(\mathcal{C}^{*}\) and no Lipschitz continuity assumption, which are the same results as for the version with exact projection, see [3, 8, 32, 23, 28].
The organization of the paper is as follows. In Section 2, we present some notation and basic results used throughout the paper. In Section 3 we revisit the concept of feasible inexact projection onto a closed and convex set and describe some new properties of the feasible inexact projection. Section 4 describes and analyzes the extragradient method with a feasible inexact projection for solving problem (1). In Section 5, an inexact variant of the extragradient method with line search for solving \(\mathrm{VIP}(F,\mathcal{C})\) is introduced and analyzed. Section 6 reports some numerical experiments. Finally, some concluding remarks are made in Section 7.
## 2 Preliminaries
In this section, we present some preliminary results used throughout the paper. We denote \(\mathbb{N}:=\{1,2,3,\ldots\}\), \(\langle\cdot,\cdot\rangle\) is the usual inner product and \(\|\cdot\|\) is the Euclidean norm. Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a nonempty, closed and convex set; the _projection_ is the map \(\mathcal{P}_{\mathcal{C}}:\mathbb{R}^{n}\to\mathcal{C}\) defined by
\[\mathcal{P}_{\mathcal{C}}(v):=\arg\min_{z\in\mathcal{C}}\|v-z\|.\]
In the next lemma, we present some important properties of the projection mapping.
**Lemma 1**.: _Given a convex and closed set \(\mathcal{C}\subset\mathbb{R}^{n}\) and \(v\in\mathbb{R}^{n}\), the following properties hold:_
1. \(\langle v-\mathcal{P}_{\mathcal{C}}(v),z-\mathcal{P}_{\mathcal{C}}(v)\rangle\leq 0\)_, for all_ \(z\in\mathcal{C}\)_;_
2. \(\|\mathcal{P}_{\mathcal{C}}(v)-z\|^{2}\leq\|v-z\|^{2}-\|\mathcal{P}_{\mathcal{ C}}(v)-v\|^{2}\)_, for all_ \(z\in\mathcal{C}\)_._
Proof.: The item (i) is proved in [2, Theorem 3.14]. For item (ii), combine \(\|v-z\|^{2}=\|\mathcal{P}_{\mathcal{C}}(v)-v\|^{2}+\|\mathcal{P}_{\mathcal{C} }(v)-z\|^{2}-2\langle\mathcal{P}_{\mathcal{C}}(v)-v,\mathcal{P}_{\mathcal{C} }(v)-z\rangle\) with item (i).
For the formula in the next proposition see, for example, [2, Example 3.21].
**Proposition 2**.: _Let \(a,v\in\mathbb{R}^{n}\) and \(H=\{x\in\mathbb{R}^{n}:\ \langle v,x-a\rangle\leq 0\}\). If \(\bar{x}\notin H\), then_
\[\mathcal{P}_{H}(\bar{x})=\bar{x}-\frac{1}{\|v\|^{2}}\langle v,\bar{x}-a\rangle v.\]
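For illustration, the formula in Proposition 2 is immediate to implement. The following sketch (in Python with NumPy, which is our choice for illustration and not part of the original text) returns \(\bar{x}\) itself whenever \(\bar{x}\in H\):

```python
import numpy as np

def project_halfspace(x_bar, v, a):
    """Projection of x_bar onto H = {x : <v, x - a> <= 0}, cf. Proposition 2."""
    slack = np.dot(v, x_bar - a)
    if slack <= 0:                       # x_bar already belongs to H
        return x_bar
    return x_bar - (slack / np.dot(v, v)) * v
```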
Let \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be an operator, \(\mathcal{C}\subset\mathbb{R}^{n}\) be a nonempty and closed convex set. The operator \(F\) is said to be _pseudo monotone on \(\mathcal{C}\) with respect to the solution set \(\mathcal{C}^{*}\)_ of problem 1 if the set \(\mathcal{C}^{*}\) is nonempty and, for every \(x^{*}\in\mathcal{C}^{*}\), there holds:
\[\langle F(x),x-x^{*}\rangle\geq 0,\qquad\forall x\in\mathcal{C}.\]
**Definition 1**.: _Let \(S\) be a nonempty subset of \(\mathbb{R}^{n}\). A sequence \((v_{k})_{k\in\mathbb{N}}\subset\mathbb{R}^{n}\) is said to be quasi-Fejer convergent to \(S\), if and only if, for all \(v\in S\) there exists \(\bar{k}\geq 0\) and a summable sequence \((\epsilon_{k})_{k\in\mathbb{N}}\), such that \(\|v_{k+1}-v\|^{2}\leq\|v_{k}-v\|^{2}+\epsilon_{k}\) for all \(k\geq\bar{k}\)._
In the following lemma, we state the main properties of quasi-Fejer sequences that we will need; a comprehensive study on this topic can be found in [11].
**Lemma 3**.: _Let \(S\) be a nonempty subset of \(\mathbb{R}^{n}\) and \((v_{k})_{k\in\mathbb{N}}\) be a quasi-Fejer sequence convergent to \(S\). Then, the following conditions hold:_
1. _the sequence_ \((v_{k})_{k\in\mathbb{N}}\) _is bounded;_
2. _if a cluster point_ \(\bar{v}\) _of_ \((v_{k})_{k\in\mathbb{N}}\) _belongs to_ \(S\)_, then_ \((v_{k})_{k\in\mathbb{N}}\) _converges to_ \(\bar{v}\)_._
## 3 Feasible inexact projection
In this section, we will revisit the concept of feasible inexact projection onto a closed and convex set. This concept has already been utilized in [1, 13, 14, 15]. We also describe some new properties of the feasible inexact projection, which is employed throughout the work. The definition of feasible inexact projection is as follows.
**Definition 2**.: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a closed convex set and let \(\gamma\in\mathbb{R}_{+}\) be a given error tolerance. The feasible inexact projection mapping relative to \(u\in\mathcal{C}\) with error tolerance \(\gamma\), denoted by \(\mathcal{P}_{\mathcal{C}}^{\gamma}(u,\cdot):\mathbb{R}^{n}\rightrightarrows \mathcal{C}\), is the set-valued mapping defined as follows_
\[\mathcal{P}_{\mathcal{C}}^{\gamma}(u,v):=\left\{w\in\mathcal{C}:\ \left\langle v-w,y-w \right\rangle\leq\gamma\|w-u\|^{2},\ \forall y\in\mathcal{C}\right\}. \tag{2}\]
_Each point \(w\in\mathcal{P}_{\mathcal{C}}^{\gamma}(u,v)\) is called a feasible inexact projection of \(v\) onto \(\mathcal{C}\) relative to \(u\) with error tolerance \(\gamma\)._
The feasible inexact projection generalizes the concept of usual projection. In the following, we present some remarks about this concept.
**Remark 1**.: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\), \(u\in\mathcal{C}\) and the error tolerance \(\gamma\in\mathbb{R}_{+}\) be as in Definition 2. For all \(v\in\mathbb{R}^{n}\), it follows from (2) that \(\mathcal{P}_{\mathcal{C}}^{0}(u,v)\) is the exact projection of \(v\) onto \(\mathcal{C}\); see [4, Proposition 2.1.3, p. 201]. Moreover, \(\mathcal{P}_{\mathcal{C}}^{0}(u,v)\subset\mathcal{P}_{\mathcal{C}}^{\gamma}(u,v)\), which implies that \(\mathcal{P}_{\mathcal{C}}^{\gamma}(u,v)\neq\varnothing\), for all \(u\in\mathcal{C}\) and \(v\in\mathbb{R}^{n}\). In general, if \(\gamma\leq\bar{\gamma}\) then \(\mathcal{P}_{\mathcal{C}}^{\gamma}(u,v)\subset\mathcal{P}_{\mathcal{C}}^{\bar{\gamma}}(u,v)\)._
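To make the definition concrete, note that the requirement in (2) is equivalent to \(\max_{y\in\mathcal{C}}\langle v-w,y-w\rangle\leq\gamma\|w-u\|^{2}\), and this maximum is exactly the Frank-Wolfe gap of the quadratic problem \(\min_{z\in\mathcal{C}}\frac{1}{2}\|z-v\|^{2}\) at \(w\). Hence a feasible inexact projection can be produced by running a conditional gradient (Frank-Wolfe) loop started at \(u\) and stopping as soon as the gap satisfies the relative criterion. The sketch below (in Python with NumPy; the linear minimization oracle `lmo` is an assumed input, and this is only one possible inner procedure, not a prescription of the paper) illustrates this:

```python
import numpy as np

def feasible_inexact_projection(u, v, gamma, lmo, max_iter=10_000):
    """Return w in C with  <v - w, y - w> <= gamma * ||w - u||^2  for all y in C  (Definition 2).

    lmo(c) must return argmin_{y in C} <c, y>.  Frank-Wolfe is applied to
    min_z 0.5 * ||z - v||^2, started at the feasible point u.
    """
    w = np.array(u, dtype=float)
    for _ in range(max_iter):
        grad = w - v                          # gradient of 0.5 * ||w - v||^2
        y = lmo(grad)                         # vertex of C minimizing <grad, y>
        gap = np.dot(grad, w - y)             # equals max_{y in C} <v - w, y - w>
        if gap <= gamma * np.dot(w - u, w - u):
            break                             # relative error criterion (2) holds
        step = min(1.0, gap / np.dot(w - y, w - y))   # exact line search for the quadratic
        w = w + step * (y - w)
    return w
```

With \(\gamma=0\) the loop only stops at the exact projection, while larger \(\gamma\) allows an earlier, cheaper exit; this is precisely the effect observed in the numerical experiments of Section 6.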
Below we present a particular counterpart of the firm non-expansiveness of the projection operator for the feasible inexact projection operator; its proof follows the same idea as in [14].
**Proposition 4**.: _Let \(u\in\mathcal{C}\), \(v,\bar{v}\in\mathbb{R}^{n}\) and \(\gamma\geq 0\). If \(w\in\mathcal{P}_{\mathcal{C}}^{\gamma}(u,v)\) and \(\bar{w}=\mathcal{P}_{\mathcal{C}}(\bar{v})\), then_
\[\|w-\bar{w}\|^{2}\leq\|v-\bar{v}\|^{2}-\|(v-\bar{v})-(w-\bar{w})\|^{2}+2\gamma \|w-u\|^{2}.\]
Proof.: Since \(w\in\mathcal{P}_{C}^{\gamma}(u,v)\) and \(\bar{w}=\mathcal{P}_{C}(\bar{v})\), it follows from (2) and Lemma 1 that
\[\big{\langle}v-w,\bar{w}-w\big{\rangle}\leq\gamma\|w-u\|^{2},\qquad\big{\langle} \bar{v}-\bar{w},w-\bar{w}\big{\rangle}\leq 0\]
By adding the last two inequalities, some algebraic manipulations yield
\[-\big{\langle}\bar{v}-v,\bar{w}-w\big{\rangle}+\|w-\bar{w}\|^{2}\leq\gamma\| w-u\|^{2}.\]
Since \(\|(\bar{v}-v)-(\bar{w}-w)\|^{2}=\|\bar{v}-v\|^{2}-2\big{\langle}\bar{v}-v, \bar{w}-w\big{\rangle}+\|\bar{w}-w\|^{2}\), the desired inequality follows by combination with the last inequality.
**Lemma 5**.: _Let \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be an operator, \(\mathcal{C}\subset\mathbb{R}^{n}\) be a nonempty, closed, and convex set, \(x\in\mathcal{C}\) and \(0\leq\gamma<1\). Take \(z\in\mathbb{R}^{n}\) and any inexact projection_
\[w(\alpha)\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(z)),\qquad\alpha \in(0,+\infty).\]
_Then, there hold:_
1. \(\big{\langle}F(z),w(\alpha)-x\big{\rangle}\leq\frac{\gamma-1}{\alpha}\|w( \alpha)-x\|^{2}\)_;_
2. \(\|w(\alpha)-x\|\leq\frac{\alpha}{1-\gamma}\|F(z)\|\)_._
Proof.: Since \(w(\alpha)\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(z))\) we obtain \(\big{\langle}x-\alpha F(z)-w(\alpha),x-w(\alpha)\big{\rangle}\leq\gamma\|w( \alpha)-x\|^{2}\), which after some algebraic manipulations yields
\[\|w(\alpha)-x\|^{2}-\alpha\big{\langle}F(z),x-w(\alpha)\big{\rangle}\leq \gamma\|w(\alpha)-x\|^{2}.\]
Thus, item \((i)\) follows from the last inequality.
We proceed to prove item \((ii)\). For that, first note that item \((i)\) is equivalent to
\[0\leq\frac{1-\gamma}{\alpha}\|w(\alpha)-x\|^{2}\leq\big{\langle}F(z),x-w( \alpha)\big{\rangle}. \tag{3}\]
If \(w(\alpha)=x\), then the inequality holds trivially. Assume that \(w(\alpha)\neq x\). Thus, the inequality in item (ii) follows by combining the inequality (3) with \(\langle F(z),x-w(\alpha)\rangle\leq\|F(z)\|\|x-w(\alpha)\|\).
**Corollary 6**.: _Let \(x\in\mathcal{C}\) and \(0\leq\gamma<1\). The following statements are equivalent:_
1. \(x\) _is a solution of the VIP(F,_\(\mathcal{C}\)_);_
2. \(x\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(x))\)_, for all_ \(\alpha\in(0,+\infty)\)_;_
3. _there exists_ \(\bar{\alpha}>0\) _such that_ \(\langle F(x),w(\bar{\alpha})-x\rangle\geq 0\) _for_ \(w(\bar{\alpha})\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\bar{\alpha}F(x))\)_._
Proof.: Proof of equivalence between item \((i)\) and item \((ii)\): We first assume that item \((i)\) holds, i.e., \(x\) is a solution for problem (1). In this case, by taking \(w(\alpha)\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(x))\), we find that \(w(\alpha)\in\mathcal{C}\). Consequently, we have \(\big{\langle}F(x),w(\alpha)-x\big{\rangle}\geq 0\). Considering that \(\alpha>0\) and \(0\leq\gamma<1\), the last inequality, along with item \((i)\) of Lemma 5 for \(z=x\), implies that \(w(\alpha)=x\). Hence, \(x\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(x))\), and item \((ii)\) also holds. Reciprocally, assuming that item \((ii)\) holds, if \(x\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(x))\), then applying (2) with \(w=x\), \(v=x-\alpha F(x)\), and \(u=x\) yields \(\big{\langle}x-\alpha F(x)-x,y-x\big{\rangle}\leq 0\), for all \(y\in\mathcal{C}\). Given that \(\alpha>0\), the last inequality is equivalent to \(\big{\langle}F(x),y-x\big{\rangle}\geq 0\), for all \(y\in\mathcal{C}\). Thus, \(x\) is a solution for problem (1), and item \((i)\) holds as well.
Proof of equivalence between item \((ii)\) and item \((iii)\): Let us assume that item \((ii)\) holds. Thus, item \((i)\) also holds, and \(x\) is a solution for problem (1), which implies that \(\langle F(x),y-x\rangle\geq 0\) for all \(y\in\mathcal{C}\). Considering that for any \(w(\bar{\alpha})\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\bar{\alpha}F(x))\), we have \(w(\bar{\alpha})\in\mathcal{C}\), it follows that \(\langle F(x),w(\bar{\alpha})-x\rangle\geq 0\), and item \((iii)\) holds. Conversely, we assume, for contradiction, that item \((ii)\) does not hold. Therefore, \(x\notin\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(x))\), and considering that \(w(\alpha)\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(x))\), we conclude that \(x\neq w(\alpha)\). As a result, because \(\alpha>0\) and \(0\leq\gamma<1\), it follows from item \((i)\) of Lemma 5 that \(\big{\langle}F(x),w(\alpha)-x\big{\rangle}<0\), for all \(\alpha\in(0,+\infty)\). Thus, item \((iii)\) does not hold, which leads to a contradiction. Therefore, \((iii)\) implies \((ii)\).
**Corollary 7**.: _Let \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be an operator, \(\mathcal{C}\subset\mathbb{R}^{n}\) be a nonempty, closed, and convex set, \(x\in\mathcal{C}\), \(0\leq\gamma<1\) and \(\alpha>0\). If \(y\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(x))\) and \(x^{+}\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(y))\), then the following inequalities hold:_
1. \(\|y-x\|\leq\frac{\alpha}{1-\gamma}\|F(x)\|\)_;_
2. \(\|x^{+}-x\|\leq\frac{\alpha}{1-\gamma}\|F(y)\|\)_._
_As a consequence, if \(F\) is Lipschitz continuous on \(\mathcal{C}\) with constant \(L>0\), then it holds:_
\[\|x^{+}-x\|\leq\frac{\alpha(1-\gamma+\alpha L)}{(1-\gamma)^{2}}\|F(x)\|. \tag{4}\]
Proof.: Applying item \((ii)\) of Lemma 5 with \(w(\alpha)=y\) and \(z=x\) we obtain item \((i)\), and with \(w(\alpha)=x^{+}\) and \(z=y\) we obtain item (ii). We proceed to prove (4). For that, first note that since \(F\) is Lipschitz continuous on \(\mathcal{C}\) with constant \(L>0\), we obtain that
\[\|F(y)\|\leq\|F(y)-F(x)\|+\|F(x)\|\leq L\|y-x\|+\|F(x)\|.\]
Thus, considering that \(y\in\mathcal{P}_{\mathcal{C}}^{\gamma}(x,x-\alpha F(x))\), we can apply the item \((i)\) to obtain
\[\|F(y)\|\leq\frac{\alpha L}{1-\gamma}\|F(x)\|+\|F(x)\|=\frac{1-\gamma+\alpha L }{1-\gamma}\|F(x)\|.\]
By combining the last inequality with item \((ii)\), the desired inequality follows.
## 4 Extragradient inexact method with constant step size
In this section, we describe the extragradient method with a feasible inexact projection for solving problem (1). It should be noted that the proposed method uses appropriate relative error criteria to compute inexact projections on the constraint set, unlike the extragradient method which uses exact projections on the constraint set.
The inexact version of the classical extragradient method is stated as follows:
```
1: Take \(\alpha>0\), \(0<\bar{\gamma}<1/2\) and \((a_{k})_{k\in\mathbb{N}}\) satisfying \(\sum_{k\in\mathbb{N}}a_{k}<+\infty\). Let \(x^{1}\in\mathcal{C}\) and set \(k=1\).
2: Choose an error tolerance \(\gamma_{k}\) such that \[0\leq\gamma_{k}\|F(x^{k})\|^{2}\leq a_{k},\qquad 0\leq\gamma_{k}<\bar{\gamma},\] (5) and compute the following feasible inexact projections: \[y^{k} \in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\alpha F(x^{k})\big{)};\] (6) \[x^{k+1} \in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\alpha F(y^{k})\big{)}.\] (7)
3: If \(x^{k}=y^{k}\) or \(y^{k}=x^{k+1}\), stop, else set \(k\gets k+1\), and go to step 2.
```
**Algorithm 4** Extragradient inexact projection method (EInexPM)
Let us examine the main features of EInexPM. To begin, we select a constant step size \(\alpha>0\), an upper bound for the error tolerances \(\bar{\gamma}\) such that \(0<\bar{\gamma}<1/2\), and an exogenous summable sequence \((a_{k})_{k\in\mathbb{N}}\) to control the error tolerance. At the current iterate \(x^{k}\), a non-negative error tolerance \(\gamma_{k}\) that fulfils the requirements (5) is selected. By using an inner procedure, \(y^{k}\) is computed as any feasible inexact projection of \(x^{k}-\alpha F(x^{k})\) onto the feasible set \(\mathcal{C}\) relative to \(x^{k}\), i.e. \(y^{k}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\alpha F(x^{k})\big{)}\). Then, using again an inner procedure, the next iterate \(x^{k+1}\) is computed as any feasible inexact projection of \(x^{k}-\alpha F(y^{k})\) onto the feasible set \(\mathcal{C}\) relative to \(x^{k}\), i.e., \(x^{k+1}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\alpha F(y^{k})\big{)}\). The method stops when \(x^{k}=y^{k}\) or \(y^{k}=x^{k+1}\).
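For readers who prefer code, a minimal sketch of EInexPM is given below (Python with NumPy; the oracle `inexact_projection(u, v, gamma)`, the operator `F`, and the summable sequence `a_seq` are assumed inputs, and the exact stopping tests of Step 3 are relaxed to a numerical tolerance, which is our choice rather than part of the method's statement):

```python
import numpy as np

def einexpm(x1, F, inexact_projection, alpha, gamma_bar, a_seq, max_iter=1000, tol=1e-8):
    """Sketch of Algorithm 4 (EInexPM) with constant step size alpha."""
    x = np.array(x1, dtype=float)
    for k in range(1, max_iter + 1):
        Fx = F(x)
        # error tolerance satisfying (5): gamma_k * ||F(x^k)||^2 <= a_k and gamma_k < gamma_bar
        gamma_k = min(0.99 * gamma_bar, a_seq(k) / max(np.dot(Fx, Fx), 1e-16))
        y = inexact_projection(x, x - alpha * Fx, gamma_k)          # (6)
        x_next = inexact_projection(x, x - alpha * F(y), gamma_k)   # (7)
        if np.linalg.norm(x - y) <= tol or np.linalg.norm(y - x_next) <= tol:
            return x_next
        x = x_next
    return x
```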
It is worth noting that if \(\gamma_{k}\equiv 0\), then Remark 1 implies that inexact projections are the exact ones. Hence, EInexPM corresponds to the classical extragradient method introduced in [24]. It is important to note that \(\gamma_{k}\) in (5) can be selected as any nonnegative real number fulfilling \(0\leq\gamma_{k}\|F(x^{k})\|^{2}\leq a_{k}\), for a prefixed sequence \((a_{k})_{k\in\mathbb{N}}\). In this case, we have
\[\sum_{k\in\mathbb{N}}\big{(}\gamma_{k}\|F(x^{k})\|^{2}\big{)}<+\infty. \tag{8}\]
Since \(x^{1}\in\mathcal{C}\) and, for all \(k\in\mathbb{N}\), \(x^{k+1}\) is a feasible inexact projection onto \(\mathcal{C}\)
we conclude \((x^{k})_{k\in\mathbb{N}}\subset\mathcal{C}\). As a consequence of \(\mathcal{C}\) being a closed set, any cluster point of \((x^{k})_{k\in\mathbb{N}}\), if any exists, belongs to \(\mathcal{C}\).
Next we present two examples of sequences \((a_{k})_{k\in\mathbb{N}}\) satisfying \(\sum_{k\in\mathbb{N}}a_{k}<+\infty\).
**Example 1**.: _Sequences \((a_{k})_{k\in\mathbb{N}}\) satisfying \(\sum_{k\in\mathbb{N}}a_{k}<+\infty\) are obtained by taking \(a_{k}:=b_{k-1}-b_{k}\) and \(\bar{b}>0\) satisfying one of the following conditions: (i) \(b_{0}=2\bar{b}\), \(b_{k}=\bar{b}/k\), for all \(k=1,2,\ldots\); (ii) \(b_{0}=2\bar{b}\), \(b_{k}=\bar{b}/\ln(k+1)\), for all \(k=1,2,\ldots\)._
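Indeed, in both cases the sequence \((b_{k})_{k\geq 0}\) is positive and non-increasing, so \(a_{k}=b_{k-1}-b_{k}\geq 0\) and the partial sums telescope:
\[\sum_{k=1}^{K}a_{k}=\sum_{k=1}^{K}(b_{k-1}-b_{k})=b_{0}-b_{K}\leq b_{0}=2\bar{b},\qquad\forall\ K\in\mathbb{N},\]
hence \(\sum_{k\in\mathbb{N}}a_{k}\leq 2\bar{b}<+\infty\).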
The convergence analysis of the sequence \((x^{k})_{k\in\mathbb{N}}\) produced by EInexPM will be discussed in the following sections.
### Convergence analysis
We will show in this section that the sequence \((x^{k})_{k\in\mathbb{N}}\) generated by EInexPM converges to a solution of VIP(F, \(\mathcal{C}\)) for Lipschitz continuous pseudo-monotone operator \(F\) with Lipschitz constant \(L\geq 0\). To state our first result, let us recall that the solution set of VIP(F, \(\mathcal{C}\)) is denoted by \(\mathcal{C}^{*}\) and define the following constants:
\[\bar{\eta}:=1-\alpha^{2}L^{2}-2\bar{\gamma},\qquad\qquad\bar{\nu}:=\frac{ \alpha^{2}(1-\bar{\gamma}+\alpha L)^{2}}{(1-\bar{\gamma})^{4}}. \tag{9}\]
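As a purely numerical illustration of these constants (our own example, not taken from the paper): with \(L=2\), \(\bar{\gamma}=0.1\) and \(\alpha=0.3\), one has \(\alpha<\sqrt{1-2\bar{\gamma}}/L=\sqrt{0.8}/2\approx 0.447\), which is the stepsize condition (13) required below, and the constants evaluate to \(\bar{\eta}=1-0.36-0.2=0.44>0\) and \(\bar{\nu}=0.09\,(1.5)^{2}/(0.9)^{4}\approx 0.31\).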
**Lemma 8**.: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a nonempty and closed convex set and \((x^{k})_{k\in\mathbb{N}}\) the sequence generated by Algorithm 4. Assume that \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a pseudo monotone operator on \(\mathcal{C}\) with respect to \(\mathcal{C}^{*}\) and Lipschitz continuous on \(\mathcal{C}\) with constant \(L>0\). Then, for any \(x^{*}\in\mathcal{C}^{*}\), there holds:_
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\bar{\eta}\|x^{k}-y^{k}\|^{2}+\bar {\nu}\gamma_{k}\|F(x^{k})\|^{2},\qquad k=1,2,\ldots.\]
Proof.: First note that
\[\big{\langle}x^{k}-\alpha F(y^{k})-y^{k},x^{k+1}-y^{k}\big{\rangle} =\big{\langle}x^{k}-\alpha F(x^{k})-y^{k},x^{k+1}-y^{k}\big{\rangle}\\ +\alpha\big{\langle}F(x^{k})-F(y^{k}),x^{k+1}-y^{k}\big{\rangle}.\]
Since (6) implies that \(y^{k}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\alpha F(x^{k })\big{)}\), we conclude that
\[\big{\langle}x^{k}-\alpha F(y^{k})-y^{k},x^{k+1}-y^{k}\big{\rangle}\leq\gamma _{k}\|y^{k}-x^{k}\|^{2}+\alpha\big{\langle}F(x^{k})-F(y^{k}),x^{k+1}-y^{k} \big{\rangle}. \tag{10}\]
For simplicity, set \(z^{k}:=x^{k}-\alpha F(y^{k})\). After some algebraic manipulations, we arrive at the conclusion that
\[\|x^{k+1}-x^{*}\|^{2} =\|x^{k+1}-z^{k}\|^{2}+\|z^{k}-x^{*}\|^{2}-2\langle x^{k+1}-z^{k}, x^{*}-z^{k}\rangle\] \[=-\|x^{k+1}-z^{k}\|^{2}+\|z^{k}-x^{*}\|^{2}+2\langle z^{k}-x^{k+1 },x^{*}-x^{k+1}\rangle.\]
Using that \(x^{k+1}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},z^{k}\big{)}\) we conclude that \(\langle z^{k}-x^{k+1},x^{*}-x^{k+1}\rangle\leq\gamma_{k}\|x^{k+1}-x^{k}\|^{2}\), which combined with the previous equality give us
\[\|x^{k+1}-x^{*}\|^{2}\leq\|z^{k}-x^{*}\|^{2}-\|x^{k+1}-z^{k}\|^{2}+2\gamma_{k} \|x^{k+1}-x^{k}\|^{2} \tag{11}\]
Taking into account that \(z^{k}=x^{k}-\alpha F(y^{k})\), some calculations show that
\[\|z^{k}-x^{*}\|^{2}-\|x^{k+1}-z^{k}\|^{2} =\|x^{k}-x^{*}-\alpha F(y^{k})\|^{2}-\|x^{k}-x^{k+1}-\alpha F(y^{k} )\|^{2}\] \[=\|x^{k}-x^{*}\|^{2}-\|x^{k}-x^{k+1}\|^{2}+2\alpha\big{<}F(y^{k}), x^{*}-x^{k+1}\big{>}. \tag{12}\]
On the other hand, considering that \(x^{*}\in\mathcal{C}^{*}\), \(y^{k}\in\mathcal{C}\) and \(F\) is pseudo monotone operator on \(\mathcal{C}\) with respect to \(\mathcal{C}^{*}\) we have \(\big{<}F(y^{k}),y^{k}-x^{*}\big{>}\geq 0\). Thus, we conclude that
\[\big{<}F(y^{k}),x^{*}-x^{k+1}\big{>}\leq\big{<}F(y^{k}),y^{k}-x^{k+1}\big{>}.\]
The last inequality, when combined with (12), implies that
\[\|z^{k}-x^{*}\|^{2}-\|x^{k+1}-z^{k}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\|x^{k}-x^{k+ 1}\|^{2}+2\alpha\big{<}F(y^{k}),y^{k}-x^{k+1}\big{>}.\]
The previous inequality is now combined with (11) to provide the following inequality
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\|x^{k}-x^{k+1}\|^{2}+\gamma_{k} \|x^{k+1}-x^{k}\|^{2}+2\alpha\big{<}F(y^{k}),y^{k}-x^{k+1}\big{>}.\]
Since \(\|x^{k}-x^{k+1}\|^{2}=\|x^{k}-y^{k}\|^{2}+\|y^{k}-x^{k+1}\|^{2}+2\langle x^{k} -y^{k},y^{k}-x^{k+1}\rangle\), the last inequality is equivalent to
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\|x^{k}-y^{k}\|^{2}-\|y^{k}-x^{k+ 1}\|^{2}+\gamma_{k}\|x^{k+1}-x^{k}\|^{2}+2\big{<}x^{k}-\alpha F(y^{k})-y^{k},x ^{k+1}-y^{k}\big{>}.\]
The last inequality together with (10) yield
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\|x^{k}-y^{k}\|^{2}- \|y^{k}-x^{k+1}\|^{2}+\gamma_{k}\|x^{k+1}-x^{k}\|^{2}\\ +2\gamma_{k}\|y^{k}-x^{k}\|^{2}+2\alpha\big{<}F(x^{k})-F(y^{k}),x ^{k+1}-y^{k}\big{>}.\]
Considering that \(F\) is Lipschitz continuous on \(\mathcal{C}\) with constant \(L>0\), we have
\[\langle F(x^{k})-F(y^{k}),x^{k+1}-y^{k}\rangle\leq L\|x^{k}-y^{k}\|\|x^{k+1}- y^{k}\|,\]
which combined with the last inequality yields
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\|x^{k}-y^{k}\|^{2}- \|y^{k}-x^{k+1}\|^{2}+\gamma_{k}\|x^{k+1}-x^{k}\|^{2}\\ +2\gamma_{k}\|y^{k}-x^{k}\|^{2}+2\alpha L\|x^{k}-y^{k}\|\|x^{k+1}- y^{k}\|.\]
or equivalently,
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-(1-\alpha^{2}L^{2}- 2\gamma_{k})\|x^{k}-y^{k}\|^{2}\\ +\gamma_{k}\|x^{k+1}-x^{k}\|^{2}-(\alpha L\|x^{k}-y^{k}\|-\|x^{k+ 1}-y^{k}\|)^{2}.\]
Hence, we have
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-(1-\alpha^{2}L^{2}-2\gamma_{k})\| x^{k}-y^{k}\|^{2}+\gamma_{k}\|x^{k+1}-x^{k}\|^{2}.\]
Thus, taking into account that \(x^{k+1}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\alpha F(y^{k} )\big{)}\), by applying Corollary 7 with \(x^{+}=x^{k+1}\), \(x=x^{k}\) and \(\gamma=\gamma_{k}\) we obtain that
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-(1-\alpha^{2}L^{2}-2\gamma_{k})\| x^{k}-y^{k}\|^{2}+\frac{\alpha^{2}(1-\gamma_{k}+\alpha L)^{2}}{(1-\gamma_{k})^{4}} \gamma_{k}\|F(x^{k})\|^{2}.\]
Therefore, using (9) and considering that \(0\leq\gamma_{k}<\bar{\gamma}\), the desired inequality follows.
**Theorem 9**.: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a nonempty and closed convex set and \((x^{k})_{k\in\mathbb{N}}\) the sequence generated by Algorithm 4. Assume that \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a pseudo monotone operator on \(\mathcal{C}\) with respect to \(\mathcal{C}^{*}\) and Lipschitz continuous on \(\mathcal{C}\) with constant \(L>0\). If_
\[0<\alpha<\frac{\sqrt{1-2\bar{\gamma}}}{L}, \tag{13}\]
_then the sequence \((x^{k})_{k\in\mathbb{N}}\) converges to a solution of the VIP(F,\(\mathcal{C}\))._
Proof.: Let \(x^{*}\in\mathcal{C}^{*}\) be an arbitrary solution of the VIP(F,\(\mathcal{C}\)). The condition (13) and \(0<\bar{\gamma}<1/2\) imply that \(\bar{\eta}>0\) and \(\bar{\nu}>0\). Thus, it follows from Lemma 8 that
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}+\bar{\nu}\gamma_{k}\|F(x^{k})\|^{ 2},\qquad k=1,2,\ldots.\]
The last inequality together with (8) implies that \((x^{k})_{k\in\mathbb{N}}\) is quasi-Fejer convergent to \(\mathcal{C}^{*}\). Considering that \(\mathcal{C}^{*}\) is nonempty, the item (i) of Lemma 3 implies that \((x^{k})_{k\in\mathbb{N}}\) is bounded. Let \(\bar{x}\) be a cluster point of \((x_{k})_{k\in\mathbb{N}}\) and \((x_{k_{j}})_{j\in\mathbb{N}}\) a subsequence of \((x_{k})_{k\in\mathbb{N}}\) such that \(\lim_{j\to+\infty}x_{k_{j}}=\bar{x}\). To continue the proof, keep in mind that Lemma 8 also implies that
\[\bar{\eta}\|x^{k}-y^{k}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\|x^{k+1}-x^{*}\|^{2}+ \bar{\nu}\gamma_{k}\|F(x^{k})\|^{2},\qquad k=1,2,\ldots.\]
Summing the previous inequality over \(k\) and using (8), we arrive at the conclusion that
\[\bar{\eta}\sum_{k=0}^{+\infty}\|x^{k}-y^{k}\|^{2}\leq\|x^{1}-x^{*}\|^{2}+\bar {\nu}\sum_{k=0}^{+\infty}\gamma_{k}\|F(x^{k})\|^{2}<+\infty.\]
Hence, we have \(\lim_{k\to+\infty}\|x^{k}-y^{k}\|=0\). Thus, taking into account that \(\lim_{j\to+\infty}x_{k_{j}}=\bar{x}\), we conclude that \(\lim_{j\to+\infty}y_{k_{j}}=\bar{x}\). It follows from (6) that
\[y^{k_{j}}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k_{j}}}\big{(}x^{k_{j}},x^{k_{ j}}-\alpha F(x^{k_{j}})\big{)}.\]
Considering that \(\gamma_{k_{j}}<\bar{\gamma}\), the last inclusion and Definition 2 imply that
\[\big{\langle}x^{k_{j}}-\alpha F(x^{k_{j}})-y^{k_{j}},y-y^{k_{j}}\big{\rangle}\leq\bar{\gamma}\|y^{k_{j}}-x^{k_{j}}\|^{2},\qquad\qquad\forall y\in\mathcal{C}.\]
Since \(\lim_{j\to+\infty}x_{k_{j}}=\bar{x}\) and \(\lim_{j\to+\infty}y_{k_{j}}=\bar{x}\), taking the limit in the previous inequality as \(j\) tending to infinity yields
\[\big{\langle}\bar{x}-\alpha F(\bar{x})-\bar{x},y-\bar{x}\big{\rangle}\leq\bar{ \gamma}\|\bar{x}-\bar{x}\|,\qquad\qquad\forall y\in\mathcal{C}.\]
which, by using that \(\alpha>0\), is equivalent to \(\big{\langle}F(\bar{x}),y-\bar{x}\big{\rangle}\geq 0\), for all \(y\in\mathcal{C}\). Hence, \(\bar{x}\in\mathcal{C}^{*}\). Given that \(\bar{x}\) is a cluster point of \((x_{k})_{k\in\mathbb{N}}\), item (ii) of Lemma 3 implies that \(\lim_{k\to+\infty}x_{k}=\bar{x}\), and the proof is complete.
## 5 Extragradient inexact method with line search
In this section, we introduce an inexact variant of the extragradient method for VIP\((F,\mathcal{C})\) with \(F\) a pseudo monotone operator on \(\mathcal{C}\) with respect to the solution set \(\mathcal{C}^{*}\) of problem (1), see for example [21, 3, 23]. Instead of a fixed step size, the method presented finds a suitable step size in each iteration by performing an Armijo-type line search. It is worth noting that, like the classical extragradient method, the method performs just two projections onto the feasible set in each iteration. A full convergence analysis is provided, with no Lipschitz continuity assumption on the operator defining the variational inequality problem.
The proposed inexact version of the extragradient method is stated as follows:
```
1:Take \(0<\hat{\beta}\leq\bar{\beta}\) and \((\beta_{k})_{k\in\mathbb{N}}\) satisfying \(0<\hat{\beta}\leq\beta_{k}\leq\bar{\beta}\). Take also \(\sigma\), \(\rho\) and \(\alpha\in(0,1)\), and \[0<\bar{\gamma}<\min\big{\{}1-\rho,2-\sqrt{3}\big{\}}.\] (14) Let \(x^{1}\in\mathcal{C}\) and set \(k=1\).
2: Choose an error tolerance \(\gamma_{k}\) such that \(0\leq\gamma_{k}<\bar{\gamma}\) and compute the following feasible inexact projection: \[y^{k}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\beta_{k}F(x^{k})\big{)}.\] (15)
3: If \(y^{k}=x^{k}\), then stop; otherwise, compute \[i_{k}:=\min\Big{\{}i\in\mathbb{N}:\ \big{\langle}F\big{(}x^{k}+\sigma\alpha^{i}(y ^{k}-x^{k})\big{)},y^{k}-x^{k}\big{\rangle}\leq\rho\big{\langle}F(x^{k}),y^{k }-x^{k}\big{\rangle}\Big{\}},\] (16) and set \[z^{k}:=x^{k}+\sigma\alpha^{i_{k}}(y^{k}-x^{k}).\] (17)
4: Compute the next iteration as a feasible inexact projection: \[x^{k+1}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\lambda_{k }F(z^{k})\big{)},\qquad\quad\lambda_{k}:=-\frac{1}{\|F(z^{k})\|^{2}}\big{\langle} F(z^{k}),z^{k}-x^{k}\big{\rangle}.\] (18)
5: Update \(k\gets k+1\) and go to Step 2.
```
**Algorithm 5** Extragradient inexact projection method with line search (EInexPMLS)
Let us go through the main features of EInexPMLS. First, we must select some parameters that will control the behaviour of the algorithm and will be essential in its convergence analysis. The most important of these parameters is the upper bound \(\bar{\gamma}\) for the error tolerances \(\gamma_{k}\), which is connected to the line search parameter \(\rho\). In step 2 of the algorithm, an inner procedure computes \(y^{k}\) as any feasible inexact projection of \(x^{k}-\beta_{k}F(x^{k})\) onto the feasible set
relative to \(x^{k}\), i.e. (15). Then, in step 3, the conceptual stopping criterion \(y^{k}=x^{k}\) is evaluated at the current iterate. If this stopping criterion is not satisfied, a line search in the segment between the points \(x^{k}\) and \(y^{k}\) is performed in order to decrease the mapping
\[(0,1)\ni t\mapsto\langle F(x^{k}+t(y^{k}-x^{k})),y^{k}-x^{k}\rangle.\]
In step 4, the line search resultant point \(z^{k}\) is utilized to define the following half space
\[H_{k}:=\Big{\{}x\in\mathbb{R}^{n}:\ \langle F(z^{k}),x-z^{k}\rangle\leq 0\Big{\}}, \tag{19}\]
whose boundary separates the current iterate \(x^{k}\) from the solution set \(\mathcal{C}^{*}\). Then, by applying Proposition 2, the projection of \(x^{k}\) onto \(H_{k}\) is computed as follows
\[\mathcal{P}_{H_{k}}(x^{k})=x^{k}-\lambda_{k}F(z^{k}),\qquad\quad\lambda_{k}:= -\frac{1}{\|F(z^{k})\|^{2}}\big{\langle}F(z^{k}),z^{k}-x^{k}\big{\rangle}. \tag{20}\]
Finally, using again an inner procedure, the next iterate \(x^{k+1}\) is computed as any feasible inexact projection of \(\mathcal{P}_{H_{k}}(x^{k})\) onto the feasible set \(\mathcal{C}\) relative to \(x^{k}\). Thus, by using (20), we conclude that (18) is equivalently stated as follows
\[x^{k+1}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},\mathcal{P}_{H_{k}}(x^{k})\big{)}. \tag{21}\]
It is noteworthy that if \(\gamma_{k}\equiv 0\), then Remark 1 implies that the inexact projection is the exact one. Consequently, EInexPMLS corresponds to a version of the extragradient method addressed in [21], see also [3, 8].
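A minimal sketch of one EInexPMLS iteration in code reads as follows (Python with NumPy; `inexact_projection(u, v, gamma)` is an assumed oracle, the equality test \(y^{k}=x^{k}\) is replaced by a numerical tolerance, and the parameter names mirror Algorithm 5):

```python
import numpy as np

def einexpmls_step(x, F, inexact_projection, beta_k, gamma_k, sigma, rho, alpha):
    """One iteration of Algorithm 5 (EInexPMLS).  Returns (next iterate, stopped?)."""
    y = inexact_projection(x, x - beta_k * F(x), gamma_k)            # (15)
    if np.allclose(y, x):
        return x, True                                               # conceptual stop: x solves VIP(F, C)
    # Armijo-type search (16): smallest i in {1, 2, ...} with
    # <F(x + sigma * alpha**i * (y - x)), y - x> <= rho * <F(x), y - x>
    rhs = rho * np.dot(F(x), y - x)
    i = 1
    while np.dot(F(x + sigma * alpha**i * (y - x)), y - x) > rhs:
        i += 1
    z = x + sigma * alpha**i * (y - x)                               # (17)
    Fz = F(z)                                                        # assumed nonzero here
    lam = -np.dot(Fz, z - x) / np.dot(Fz, Fz)                        # stepsize in (18)
    x_next = inexact_projection(x, x - lam * Fz, gamma_k)            # (18)
    return x_next, False
```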
**Remark 2**.: _The stopping criterion is well defined, i.e., if \(x^{k}=y^{k}\), then \(x^{k}\) is a solution of Problem (1). In fact, \(y^{k}=x^{k}\) and (15) imply that \(x^{k}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\beta_{k}F(x^{k})\big{)}.\) Thus, it follows from Corollary 6 that \(x^{k}\in\mathcal{C}^{*}.\)_
Next, we show that Algorithm 5 is well defined, namely that there exists \(i_{k}\) fulfilling (16).
**Proposition 10**.: _Step 3 is well-defined, i.e., there exists \(i_{k}\) satisfying (16)._
Proof.: Assume by contradiction that \(\big{\langle}F\big{(}x^{k}+\sigma\alpha^{i}(y^{k}-x^{k})\big{)},y^{k}-x^{k}\big{\rangle}>\rho\big{\langle}F(x^{k}),y^{k}-x^{k}\big{\rangle},\) for all \(i\in\mathbb{N}\) and \(y^{k}\neq x^{k}\). Since \(F\) is continuous and \(0<\alpha<1\), taking the limit in the last inequality as \(i\) tends to infinity, we conclude that \(\big{\langle}F(x^{k}),y^{k}-x^{k}\big{\rangle}\geq\rho\big{\langle}F(x^{k}),y^{k}-x^{k}\big{\rangle}.\) Thus, taking into account that \(0<\rho<1\), the last inequality implies that
\[\big{\langle}F(x^{k}),y^{k}-x^{k}\big{\rangle}\geq 0. \tag{22}\]
Considering that \(y^{k}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}(x^{k},x^{k}-\beta_{k}F(x^{k}))\) and \(x^{k}\in\mathcal{C}\), it follows from Definition 2 that
\[\langle x^{k}-\beta_{k}F(x^{k})-y^{k},x^{k}-y^{k}\rangle\leq\gamma_{k}\|y^{k} -x^{k}\|^{2}.\]
Since \(0<\hat{\beta}\leq\beta_{k}\) and \(0<\bar{\gamma}<1\), we can deduce from some algebraic manipulations in the preceding inequality that
\[0\leq\left\langle F(x^{k}),y^{k}-x^{k}\right\rangle\leq\frac{\gamma_{k}-1}{\beta _{k}}\|y^{k}-x^{k}\|^{2}<0,\]
which is a contradiction. Therefore, there exists \(i_{k}\) satisfying (16).
Let \((x^{k})_{k\in\mathbb{N}}\) be the sequence generated by Algorithm 5. Since \(x^{1}\in\mathcal{C}\) and, for any \(k\in\mathbb{N}\), (18) implies that \(x^{k+1}\) is a feasible inexact projection onto \(\mathcal{C}\), we conclude that \((x^{k})_{k\in\mathbb{N}}\subset\mathcal{C}\). Consequently, since \(\mathcal{C}\) is a closed set, each cluster point of \((x^{k})_{k\in\mathbb{N}}\), if any exists, belongs to \(\mathcal{C}\).
### Convergence analysis
We have shown above that EInexPMLS generates a sequence \((x^{k})_{k\in\mathbb{N}}\) belonging to the set \(\mathcal{C}\). In this section we will show that \((x^{k})_{k\in\mathbb{N}}\) converges to a solution of VIP(F, \(\mathcal{C}\)). To this purpose, we will begin by establishing a few initial results. First, we show that the boundary of the half space \(H_{k}\) defined as in (19) separates the current iterate \(x^{k}\) from the solution set \(\mathcal{C}^{*}\).
**Proposition 11**.: _Let \(H_{k}\) be defined as in (19). Then, \(x^{k}\in H_{k}\) if and only if \(x^{k}\in\mathcal{C}^{*}\)._
Proof.: First, we assume that \(x^{k}\in H_{k}\). Thus, taking into account (17), we conclude that
\[0\geq\left\langle F(z^{k}),x^{k}-z^{k}\right\rangle=\sigma\alpha^{i_{k}}\left \langle F(z^{k}),x^{k}-y^{k}\right\rangle,\]
which implies that \(\left\langle F(z^{k}),y^{k}-x^{k}\right\rangle\geq 0\). Hence, by using (16) and (17), we conclude that \(\left\langle F(x^{k}),y^{k}-x^{k}\right\rangle\geq 0\). Therefore, using (15) together with Corollary 6 we conclude that \(x^{k}\in\mathcal{C}^{*}\).
Conversely, we assume that \(x^{k}\in\mathcal{C}^{*}\). Since \(x^{k}\) and \(y^{k}\) belong to \(\mathcal{C}\), it follows from (17) that \(z^{k}\in\mathcal{C}\) by convexity of \(\mathcal{C}\), for all \(k\in\mathbb{N}\). Thus, due to \(x^{k}\in\mathcal{C}^{*}\), we have \(\left\langle F(x^{k}),z^{k}-x^{k}\right\rangle\geq 0\). Since \(F\) is a pseudo-monotone operator, \(\left\langle F(x^{k}),z^{k}-x^{k}\right\rangle\geq 0\) implies \(\left\langle F(z^{k}),z^{k}-x^{k}\right\rangle\geq 0\), which means that \(x^{k}\in H_{k}\).
**Proposition 12**.: _The following inequality holds:_
\[\left\langle F(x^{k}),x^{k}-y^{k}\right\rangle\geq\frac{\max\{\rho,\sqrt{3}-1 \}}{\bar{\beta}}\|y^{k}-x^{k}\|^{2},\qquad k=1,2,\ldots. \tag{23}\]
Proof.: Keeping in mind that \(y^{k}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}(x^{k},x^{k}-\beta_{k}F(x^{k}))\) and \(x^{k}\in\mathcal{C}\), from Definition 2 we have
\[\left\langle x^{k}-\beta_{k}F(x^{k})-y^{k},x^{k}-y^{k}\right\rangle\leq\gamma _{k}\|y^{k}-x^{k}\|^{2},\]
which, after some algebraic manipulation, is rewritten as follows
\[\left\langle F(x^{k}),x^{k}-y^{k}\right\rangle\geq\frac{1-\gamma_{k}}{\beta_{k }}\|y^{k}-x^{k}\|^{2}. \tag{24}\]
Considering that \(0<\beta_{k}\leq\bar{\beta}\) and \(0\leq\gamma_{k}<\bar{\gamma}<\min\left\{1-\rho,2-\sqrt{3}\right\}\), we conclude that
\[\frac{1-\gamma_{k}}{\beta_{k}}\geq\frac{\max\{\rho,\sqrt{3}-1\}}{\bar{\beta}}.\]
The combination of (24) with the previous inequality yields the desired inequality.
Next, we are going to establish two important inequalities to show the convergence of Algorithm 5.
**Lemma 13**.: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a nonempty and closed convex set and \((x^{k})_{k\in\mathbb{N}}\) the sequence generated by Algorithm 5. Assume that \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a pseudo monotone operator on \(\mathcal{C}\) with respect to the solution set \(\mathcal{C}^{*}\) of problem 1 and \(x^{k}\notin\mathcal{C}^{*}\), for all \(k=1,2,\ldots\). Then, for any \(x^{*}\in\mathcal{C}^{*}\), there holds:_
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\frac{1}{(1-\bar{\gamma})^{2}} \left(\bar{\gamma}^{2}-4\bar{\gamma}+1\right)\lambda_{k}^{2}\|F(z^{k})\|^{2},\qquad k=1,2,\ldots. \tag{25}\]
_As a consequence, \((x^{k})_{k\in\mathbb{N}}\) is Fejer convergent to \(\mathcal{C}^{*}\), i.e., for any \(x^{*}\in\mathcal{C}^{*}\) there holds_
\[\|x^{k+1}-x^{*}\|\leq\|x^{k}-x^{*}\|,\qquad k=1,2,\ldots. \tag{26}\]
Proof.: Take \(x^{*}\in\mathcal{C}^{*}\) and denote the boundary of \(H_{k}\) by \(L_{k}\), which is given by
\[L_{k}:=\{x\in\mathbb{R}^{n}:\ \left\langle F(z^{k}),x-z^{k}\right\rangle=0\}. \tag{27}\]
Since \(x^{k}\) and \(y^{k}\) belong to the convex set \(\mathcal{C}\), we obtain from (17) that \(z^{k}\in\mathcal{C}\), for all \(k\in\mathbb{N}\). Thus, since \(F\) is pseudo monotone on \(\mathcal{C}\) with respect to the solution set \(\mathcal{C}^{*}\) of problem (1), we have
\[\left\langle F(z^{k}),z^{k}-x^{*}\right\rangle\geq 0.\]
The last inequality and the definition of \(H_{k}\) in (19) imply that \(x^{*}\in H_{k}\). Thus, we conclude that
\[\mathcal{P}_{H_{k}}(x^{*})=x^{*}. \tag{28}\]
Applying Proposition 4 with \(v=x^{k}-\lambda_{k}F(z^{k})\), \(u=x^{k}\), \(\gamma=\gamma_{k}\), \(w=x^{k+1}\) and \(\bar{w}=\bar{v}=x^{*}\), we obtain that
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-\lambda_{k}F(z^{k})-x^{*}\|^{2}-\|(x^{k}- \lambda_{k}F(z^{k})-x^{*})-(x^{k+1}-x^{*})\|^{2}+2\gamma_{k}\|x^{k+1}-x^{k}\|^ {2},\]
which implies that
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-\lambda_{k}F(z^{k})-x^{*}\|^{2}+2\gamma_{k} \|x^{k+1}-x^{k}\|^{2}. \tag{29}\]
Using (28) and the item \((ii)\) of Lemma 1 we have
\[\|\mathcal{P}_{H_{k}}(x^{k})-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\|\mathcal{P} _{H_{k}}(x^{k})-x^{k}\|^{2}.\]
Since \(x^{k}\notin\mathcal{C}^{*}\), Proposition 11 implies that \(x^{k}\notin H_{k}\). Hence, the last inequality together with the first equality in (20) yield
\[\|x^{k}-\lambda_{k}F(z^{k})-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\lambda_{k}^{2}\| F(z^{k})\|^{2}.\]
As a result of combining the last inequality with (29), we arrive at the conclusion that
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\lambda_{k}^{2}\|F(z^{k})\|^{2}+2 \gamma_{k}\|x^{k+1}-x^{k}\|^{2}. \tag{30}\]
Since \(x^{k+1}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\lambda_{k }F(z^{k})\big{)}\), applying item (ii) of Lemma 5 with \(\gamma=\gamma_{k}\), \(x=x^{k}\), \(z=z^{k}\), \(\alpha=\lambda_{k}\) and \(w(\alpha)=x^{k+1}\) we obtain that
\[\|x^{k+1}-x^{k}\|\leq\frac{\lambda_{k}}{1-\gamma_{k}}\|F(z^{k})\|,\]
which combined with (30) yields
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\lambda_{k}^{2}\|F(z^{k})\|^{2}+2 \gamma_{k}\frac{\lambda_{k}^{2}}{(1-\gamma_{k})^{2}}\|F(z^{k})\|^{2},\]
or equivalently,
\[\|x^{k+1}-x^{*}\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\frac{1}{(1-\gamma_{k})^{2}}\left( \gamma_{k}^{2}-4\gamma_{k}+1\right)\lambda_{k}^{2}\|F(z^{k})\|^{2}. \tag{31}\]
Since \(0<\bar{\gamma}<2-\sqrt{3}\) and the function \([0,\bar{\gamma}]\ni\gamma\mapsto\left(\gamma^{2}-4\gamma+1\right)/(1-\gamma)^{2}\) is decreasing and positive, the inequality (25) follows from (31). As a consequence, (26) follows from (25) and the proof is complete.
**Theorem 14**.: _Let \(\mathcal{C}\subset\mathbb{R}^{n}\) be a nonempty and closed convex set and \((x^{k})_{k\in\mathbb{N}}\) the sequence generated by Algorithm 5. Assume that \(F:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is a pseudo monotone operator on \(\mathcal{C}\) with respect to the solution set \(\mathcal{C}^{*}\) of problem 1. If \(\mathcal{C}^{*}\neq\varnothing\), then Algorithm 5 either ends at iteration \(k\), in which case \(x^{k}\in\mathcal{C}^{*}\), or generates an infinite sequence \((x^{k})_{k\in\mathbb{N}}\) that converges to a point belonging to \(\mathcal{C}^{*}\)._
Proof.: First, we assume that Algorithm 5 ends at iteration \(k\). In this case, we have \(y^{k}=x^{k}\) and Remark 2 implies that \(x^{k}\in\mathcal{C}^{*}\). Now, we assume that the sequence \((x^{k})_{k\in\mathbb{N}}\) is infinite. Hence, we have \(x^{k}\notin\mathcal{C}^{*}\), for all \(k=1,2,\ldots\).
Since \((x^{k})_{k\in\mathbb{N}}\) satisfies (26) in Lemma 13, it is quasi-Fejer convergent to \(\mathcal{C}^{*}\) in the sense of Definition 1. Thus, since \(\mathcal{C}^{*}\neq\varnothing\), it follows from item \((i)\) of Lemma 3 that \((x^{k})_{k\in\mathbb{N}}\) is bounded. Using (25) of Lemma 13 we have
\[0<\frac{1}{(1-\bar{\gamma})^{2}}\left(\bar{\gamma}^{2}-4\bar{\gamma}+1\right) \lambda_{k}^{2}\|F(z^{k})\|^{2}\leq\|x^{k}-x^{*}\|^{2}-\|x^{k+1}-x^{*}\|^{2}, \qquad k=1,2,\ldots. \tag{32}\]
On the other hand, (26) implies that the sequence \((\|x^{k}-x^{*}\|)_{k\in\mathbb{N}}\) is monotone non-increasing and bounded from below. Thus, \((\|x^{k}-x^{*}\|)_{k\in\mathbb{N}}\) converges.
Hence, taking the limit in (32) as \(k\) tends to infinity, we have \(\lim_{k\to+\infty}\lambda_{k}\|F(z^{k})\|=0.\) And, in view of (18) we conclude that
\[\lim_{k\to+\infty}\lambda_{k}\|F(z^{k})\|=\lim_{k\to+\infty}\left(-\frac{1}{\|F(z^{k})\|}\big{\langle}F(z^{k}),z^{k}-x^{k}\big{\rangle}\right)=0. \tag{33}\]
Since \(y^{k}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\beta_{k}F(x ^{k})\big{)}\), applying item \((i)\) of Corollary 7 with \(\gamma=\gamma_{k}\), \(x=x^{k}\), \(\alpha=\beta_{k}\) and \(y=y^{k}\) we obtain that
\[\|y^{k}-x^{k}\|\leq\frac{\beta_{k}}{1-\gamma_{k}}\|F(x^{k})\|.\]
Because \(0<\hat{\beta}<\beta_{k}<\bar{\beta}\), \(0\leq\gamma_{k}<\bar{\gamma}\), \((x^{k})_{k\in\mathbb{N}}\) is bounded and \(F\) is continuous (so that \((F(x^{k}))_{k\in\mathbb{N}}\) is bounded), the latter inequality implies that \((y^{k})_{k\in\mathbb{N}}\) is bounded. Hence, it follows from (17) that \((z^{k})_{k\in\mathbb{N}}\) is also bounded. In addition, since \(F\) is continuous, we conclude that \((F(z^{k}))_{k\in\mathbb{N}}\) is bounded. Thus, from (33) we have \(\lim_{k\to+\infty}\big{\langle}F(z^{k}),z^{k}-x^{k}\big{\rangle}=0\). Therefore, it follows from the last equality and (17) that
\[\lim_{k\to+\infty}\sigma\alpha^{i_{k}}\big{\langle}F(z^{k}),y^{k}-x^{k}\big{ \rangle}=0. \tag{34}\]
Since the sequences \((x^{k})_{k\in\mathbb{N}}\subset\mathcal{C}\), \((y^{k})_{k\in\mathbb{N}}\subset\mathcal{C}\) and \((z^{k})_{k\in\mathbb{N}}\subset\mathcal{C}\) are bounded, we can take subsequences \((x^{k_{j}})_{j\in\mathbb{N}}\), \((y^{k_{j}})_{j\in\mathbb{N}}\) and \((z^{k_{j}})_{j\in\mathbb{N}}\) of them, respectively, and \(\bar{x}\in\mathcal{C}\), \(\bar{y}\in\mathcal{C}\) and \(\bar{z}\in\mathcal{C}\) such that \(\lim_{j\to+\infty}x^{k_{j}}=\bar{x}\), \(\lim_{j\to+\infty}y^{k_{j}}=\bar{y}\) and \(\lim_{j\to+\infty}z^{k_{j}}=\bar{z}\). Furthermore, since \(0<\alpha<1\), \(0\leq\gamma_{k}<\bar{\gamma}\) and \(0<\hat{\beta}<\beta_{k}<\bar{\beta}\) for all \(k\in\mathbb{N}\), we can also assume without loss of generality that \(\lim_{j\to+\infty}\alpha^{i_{k_{j}}}=\bar{\alpha}\in[0,1]\), \(\lim_{j\to+\infty}\gamma_{k_{j}}=\hat{\gamma}\leq\bar{\gamma}\) and \(\lim_{j\to+\infty}\beta_{k_{j}}=\tilde{\beta}\geq\hat{\beta}\). We have two possibilities for \(\bar{\alpha}\): \(\bar{\alpha}>0\) or \(\bar{\alpha}=0\).
First we assume that \(\bar{\alpha}>0\). In this case, it follows from (34) that
\[0=\lim_{j\to+\infty}\sigma\alpha^{i_{k_{j}}}\big{\langle}F(z^{k_{j}}),y^{k_{j }}-x^{k_{j}}\big{\rangle}=\sigma\bar{\alpha}\big{\langle}F(\bar{z}),\bar{y}- \bar{x}\big{\rangle}.\]
Because we are assuming that \(\bar{\alpha}>0\), we conclude that \(\langle F(\bar{z}),\bar{y}-\bar{x}\rangle=0\). Using, (16) and (17) together with Proposition 12 we conclude that
\[\big{\langle}F(z^{k_{j}}),y^{k_{j}}-x^{k_{j}}\big{\rangle}\leq\rho\big{\langle} F(x^{k_{j}}),y^{k_{j}}-x^{k_{j}}\big{\rangle}\leq-\rho\frac{\max\{\rho,\sqrt{3}-1 \}}{\bar{\beta}}\|y^{k_{j}}-x^{k_{j}}\|^{2}.\]
Taking the limit in the previous inequality as \(j\) tending to infinity and taking into account \(\lim_{j\to+\infty}x^{k_{j}}=\bar{x}\), \(\lim_{j\to+\infty}y^{k_{j}}=\bar{y}\) and \(\langle F(\bar{z}),\bar{y}-\bar{x}\rangle=0\), we conclude that \(\bar{y}=\bar{x}\). Considering that \(y^{k}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\beta_{k}F(x ^{k})\big{)}\), it follows from Definition 2 that
\[\big{\langle}x^{k_{j}}-\beta_{k_{j}}F(x^{k_{j}})-y^{k_{j}},y-y^{k_{j}}\big{ \rangle}\leq\gamma_{k_{j}}\|y^{k_{j}}-x^{k_{j}}\|^{2},\qquad\quad\forall y\in \mathcal{C}.\]
Thus, taking the limit in the previous inequality as \(j\) tending to infinity, using that \(\bar{y}=\bar{x}\) and \(\lim_{j\to+\infty}\beta_{k_{j}}=\tilde{\beta}>0\), we obtain that
\[\langle F(\bar{x}),y-\bar{x}\rangle\geq 0,\qquad\forall\ y\in\mathcal{C},\]
which implies that \(\bar{x}\in\mathcal{C}^{*}\). Since \(\bar{x}\) is a cluster point of \((x_{k})_{k\in\mathbb{N}}\) and the sequence \((x_{k})_{k\in\mathbb{N}}\) is Fejer convergent to \(\mathcal{C}^{*}\), item (ii) of Lemma 3 implies that \(\lim_{k\to+\infty}x_{k}=\bar{x}\).
Now, let us assume that \(\bar{\alpha}=0\). We proceed to prove that in this case, \((x^{k})_{k\in\mathbb{N}}\) likewise converges to some point belonging to the set \(\mathcal{C}^{*}\). For that, we consider the auxiliary sequence \((\hat{z}_{k})_{k\in\mathbb{N}}\) defined by
\[\hat{z}^{k}:=x^{k}+\sigma\frac{\alpha^{i_{k}}}{\alpha}(y^{k}-x^{k}),\qquad \qquad k=1,2,\ldots, \tag{35}\]
where \(i_{k}\) is defined in (16). Since \(\lim_{j\to+\infty}x^{k_{j}}=\bar{x}\), \(\lim_{j\to+\infty}y^{k_{j}}=\bar{y}\) and \(\lim_{j\to+\infty}\alpha^{i_{k_{j}}}=\bar{\alpha}=0\), it follows from (35) that \(\lim_{j\to+\infty}\hat{z}^{k_{j}}=\bar{x}\). The definition of \(i_{k_{j}}\) in (16) implies that
\[\big{\langle}F\big{(}\hat{z}^{k_{j}}\big{)},y^{k_{j}}-x^{k_{j}}\big{\rangle}> \rho\big{\langle}F(x^{k_{j}}),y^{k_{j}}-x^{k_{j}}\big{\rangle}. \tag{36}\]
Thus, (36) implies that \(\big{\langle}F(\bar{x}),\bar{y}-\bar{x}\big{\rangle}\geq\rho\big{\langle}F( \bar{x}),\bar{y}-\bar{x}\big{\rangle}\), and since \(\rho<1\) we conclude that
\[\big{\langle}F(\bar{x}),\bar{y}-\bar{x}\big{\rangle}\geq 0. \tag{37}\]
Given that \(y^{k}\in\mathcal{P}_{\mathcal{C}}^{\gamma_{k}}\big{(}x^{k},x^{k}-\beta_{k}F(x^{k})\big{)}\), it follows from Definition 2 that
\[\big{\langle}x^{k_{j}}-\beta_{k_{j}}F(x^{k_{j}})-y^{k_{j}},y-y^{k_{j}}\big{\rangle}\leq \gamma_{k_{j}}\|y^{k_{j}}-x^{k_{j}}\|^{2},\qquad\qquad\forall y\in\mathcal{C}.\]
Taking the limit in the previous inequality as \(j\) going to infinity, and using that \(\hat{\gamma}\leq\bar{\gamma}\), we have
\[\langle\bar{x}-\bar{y},y-\bar{y}\rangle-\tilde{\beta}\big{\langle}F(\bar{x}), y-\bar{y}\big{\rangle}\leq\bar{\gamma}\|\bar{y}-\bar{x}\|^{2},\qquad\qquad \forall y\in\mathcal{C}. \tag{38}\]
Substituting \(y\in\mathcal{C}\) for \(\bar{x}\in\mathcal{C}\) in the last inequality, after some algebraic manipulations yields
\[\tilde{\beta}\big{\langle}F(\bar{x}),\bar{y}-\bar{x}\big{\rangle}\leq(\bar{ \gamma}-1)\|\bar{y}-\bar{x}\|^{2}. \tag{39}\]
Combining (37) with the latter inequality we obtain that \((1-\bar{\gamma})\|\bar{y}-\bar{x}\|^{2}\leq 0\). Hence, because (14) implies that \(1-\bar{\gamma}>0\), we conclude that \(\bar{y}=\bar{x}\). Therefore, due to \(\tilde{\beta}>0\) and \(\bar{y}=\bar{x}\), it follows from (38) that
\[\big{\langle}F(\bar{x}),y-\bar{x}\big{\rangle}\geq 0,\qquad\qquad\forall y\in \mathcal{C},\]
which also implies that \(\bar{x}\in\mathcal{C}^{*}\). Again, because \(\bar{x}\) is a cluster point of \((x_{k})_{k\in\mathbb{N}}\) and the sequence \((x_{k})_{k\in\mathbb{N}}\) is Fejer convergent to \(\mathcal{C}^{*}\), item (ii) of Lemma 3 implies that \(\lim_{k\to+\infty}x_{k}=\bar{x}\) and the proof is concluded.
## 6 Numerical Results
In order to demonstrate the behaviour of the proposed algorithms, we now present the results of numerical experiments. We implemented Algorithms 4 and 5 and applied them to two test problems adapted from [8]. In both cases we slightly change the feasible set \(\mathcal{C}\) to be the norm-10 unit ball defined by \(\mathcal{C}=\{x\in\mathbb{R}^{2}:{x_{1}}^{10}+{x_{2}}^{10}\leq 1\}\). Projections onto this set are more challenging than the projections onto the feasible sets of the original problems (the Euclidean ball and the set \([0,1]\times[0,1]\)). The algorithms were implemented in the Julia language. The code can be obtained from [https://github.com/ugonj/extragradient](https://github.com/ugonj/extragradient).
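The paper does not spell out the inner solver beyond mentioning the Frank-Wolfe method, so the following Python sketch is our own reading of the setup (not a transcription of the authors' Julia code): it provides a membership test and a linear minimization oracle for the \(\ell^{10}\) unit ball, which is all that a Frank-Wolfe-based feasible inexact projection needs.

```python
import numpy as np

P = 10                        # the feasible set is the unit ball of the l^10 norm
Q = P / (P - 1)               # conjugate exponent, 1/P + 1/Q = 1

def in_C(x, tol=1e-12):
    """Membership test for C = {x : sum_i |x_i|^10 <= 1}."""
    return np.sum(np.abs(x) ** P) <= 1.0 + tol

def lmo(c):
    """argmin_{y in C} <c, y>.  By Hoelder's inequality the minimum value is -||c||_Q,
    attained at y_i = -sign(c_i) * (|c_i| / ||c||_Q)**(Q - 1)."""
    norm_q = np.sum(np.abs(c) ** Q) ** (1.0 / Q)
    if norm_q == 0.0:
        return np.zeros_like(c)
    return -np.sign(c) * (np.abs(c) / norm_q) ** (Q - 1)
```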
### Lipschitz operator
In this section, we show the convergence of Algorithm 4 on a Lipschitz continuous operator.
Consider the operator
\[T_{1}(x)=\begin{bmatrix}-1&-1\\ 1&-1\end{bmatrix}x+\begin{bmatrix}3/2\\ 1/2\end{bmatrix}\]
We applied Algorithm 4 on VIP(\(T_{1}\),\(\mathcal{C}\)). The iterates are depicted in Figure 1.

Figure 1: Convergence of Algorithm 4 with exact and inexact projections. On this problem the number of steps is the same in both cases.

Table 1 shows how the algorithm performed for various values of \(\alpha\) and \(\gamma\). The third column reports the number of steps taken by the extragradient algorithm to reach the solution, and the last column reports the total number of linear searches applied by the Frank-Wolfe method. It is interesting to compare the case when \(\gamma\) is small (say, \(\gamma=0.01\)) to the cases when \(\gamma\) is larger: it can be seen that although the algorithm takes the same number of steps to reach the solution, when \(\gamma\approx 0\) (that is, when the projection is almost exact), significantly more Frank-Wolfe iterations were performed. In other words, since performing approximate projections does not increase the total number of steps in the extragradient method, it is beneficial to use them, as each step of this method requires less work.
### Non-Lipschitz operator
Let \(t(x_{1},x_{2})=(x_{1}+\sqrt{x_{1}^{2}+4x_{2}})/2\) and define \(T_{2}(x_{1},x_{2})=-t/(1+t)(1,1)\). The operator \(T_{2}\) is quasimonotone, and it is pseudomonotone with respect to the solution set, as it has a unique solution \((1,1)\) (see [8]). However, it is not Lipschitz. We applied Algorithm 5 on VIP(\(T_{2}\),\(\mathcal{C}\)), with initial point \((0,1)\). The iterates are depicted in Figure 2.
## 7 Conclusions
In this paper we investigated the extragradient method for solving variational inequality problems using inexact projections onto the feasible set. We expect that our study can contribute to further research on the subject, particularly to solving large-scale problems, where the computational effort of each iteration is associated with projections onto the feasible set.
| \(\alpha\) | \(\gamma\) | N. steps | N. linear searches |
| ---: | ---: | ---: | ---: |
| 0.01 | 0.01 | 109 | 1.0317e7 |
| 0.01 | 0.106 | 109 | 6.62431e6 |
| 0.01 | 0.49 | 109 | 6.57129e6 |
| 0.11 | 0.01 | 16 | 12997 |
| 0.11 | 0.106 | 16 | 1085 |
| 0.11 | 0.394 | 16 | 966 |
| 0.21 | 0.01 | 11 | 2444 |
| 0.21 | 0.106 | 11 | 237 |
| 0.21 | 0.394 | 11 | 239 |
| 0.31 | 0.01 | 9 | 935 |
| 0.31 | 0.106 | 9 | 129 |
| 0.31 | 0.298 | 9 | 126 |
| 0.41 | 0.01 | 9 | 476 |
| 0.41 | 0.106 | 9 | 113 |

Table 1: Behaviour of the algorithm for various values of \(\alpha\) and \(\gamma\).
Figure 2: Convergence of Algorithm 5.
Indeed, the idea of employing inexactness in the projection rather than exactness is very attractive from a computational perspective. It is worth noting that the Frank-Wolfe approach has a low computational cost per iteration, resulting in good computational performance on various types of compact sets, as reported in [18, 22]. Searching for new efficient methods, such as Frank-Wolfe-type schemes that generate inexact projections, is a subject that calls for attention.
## 8 Acknowledgements
The first and last authors were supported by the Australian Research Council (ARC), Solving hard Chebyshev approximation problems through nonsmooth analysis (Discovery Project DP180100602).
The second author was supported in part by CNPq grant 304666/2021-1.
|
2309.11745 | PIE: Simulating Disease Progression via Progressive Image Editing | Disease progression simulation is a crucial area of research that has
significant implications for clinical diagnosis, prognosis, and treatment. One
major challenge in this field is the lack of continuous medical imaging
monitoring of individual patients over time. To address this issue, we develop
a novel framework termed Progressive Image Editing (PIE) that enables
controlled manipulation of disease-related image features, facilitating precise
and realistic disease progression simulation. Specifically, we leverage recent
advancements in text-to-image generative models to simulate disease progression
accurately and personalize it for each patient. We theoretically analyze the
iterative refining process in our framework as a gradient descent with an
exponentially decayed learning rate. To validate our framework, we conduct
experiments in three medical imaging domains. Our results demonstrate the
superiority of PIE over existing methods such as Stable Diffusion Walk and
Style-Based Manifold Extrapolation based on CLIP score (Realism) and Disease
Classification Confidence (Alignment). Our user study collected feedback from
35 veteran physicians to assess the generated progressions. Remarkably, 76.2%
of the feedback agrees with the fidelity of the generated progressions. To our
best knowledge, PIE is the first of its kind to generate disease progression
images meeting real-world standards. It is a promising tool for medical
research and clinical practice, potentially allowing healthcare providers to
model disease trajectories over time, predict future treatment responses, and
improve patient outcomes. | Kaizhao Liang, Xu Cao, Kuei-Da Liao, Tianren Gao, Wenqian Ye, Zhengyu Chen, Jianguo Cao, Tejas Nama, Jimeng Sun | 2023-09-21T02:46:32Z | http://arxiv.org/abs/2309.11745v2 | # PIE: Simulating Disease Progression via Progressive Image Editing
###### Abstract
The trajectories of disease progression could greatly affect the quality and efficacy of clinical diagnosis, prognosis, and treatment. However, one major challenge is the lack of longitudinal medical imaging monitoring of individual patients over time. To address this issue, we develop a novel framework termed Progressive Image Editing (PIE) that enables controlled manipulation of disease-related image features, facilitating precise and realistic disease progression simulation in imaging space. Specifically, we leverage recent advancements in text-to-image generative models to simulate disease progression accurately and personalize it for each patient. We also theoretically analyze the iterative refining process in our framework as a gradient descent with an exponentially decayed learning rate. To validate our framework, we conduct experiments in three medical imaging domains. Our results demonstrate the superiority of PIE over existing methods such as Stable Diffusion Video and Style-Based Manifold Extrapolation based on CLIP score (Realism) and Disease Classification Confidence (Alignment). Our user study collected feedback from 35 veter physicians to assess the generated progressions. Remarkably, \(76.2\%\) of the feedback agrees with the fidelity of the generated progressions. PIE can allow healthcare providers to model disease imaging trajectories over time,
Figure 1: Illustrative examples of disease progression simulation using PIE. The top progression sequence depicts a patient's heart increasing in size (red), indicating Cardiomegaly. The bottom sequence demonstrates the expanding mass areas (blue) in a patient's lung, indicating Edema.
predict future treatment responses, fill in missing imaging data in clinical records, and improve medical education. *
Footnote *: Equal Contribution. Code and checkpoints for replicating our results can be found at github.com/IrohXu/PIE and huggingface.co/IrohXu/stable-diffusion-mimic-cxr-v0.1.
## 1 Introduction
Disease progression refers to how an illness develops over time in an individual. By studying the progression of diseases, healthcare professionals can create effective treatment strategies and interventions. It allows them to predict the disease's course, identify possible complications, and adjust treatment plans accordingly. Furthermore, monitoring disease progression allows healthcare providers to assess the efficacy of treatments, measure the impact of interventions, and make informed decisions about patient care. A comprehensive understanding of disease progression is essential for improving patient outcomes, advancing medical knowledge, and finding innovative approaches to prevent and treat diseases.
However, disease progression modeling in the imaging space poses a formidable challenge primarily due to the lack of continuous monitoring of individual patients over time and the high cost to collect such longitudinal data (Sukkar et al., 2012; Wang et al., 2014; Liu et al., 2015; Cook and Bies, 2016; Severson et al., 2020). The intricate and multifaceted dynamics of disease progression, combined with the lack of comprehensive and continuous image data of individual patients, result in the absence of established methodologies (Hinrichs et al., 2011; Ray, 2011; Lee et al., 2019). Moreover, disease progression exhibits significant variability and heterogeneity across patients and disease sub-types, rendering a uniform approach impracticable.
Past disease progression simulation research has limitations in terms of its ability to incorporate clinical textual information, generate individualized predictions based on individualized conditions, and utilize non-longitudinal data. This highlights the need for more advanced and flexible simulation frameworks to accurately capture the complex and dynamic nature of disease progression in imaging data. To incorporate the generation model into a conditioned simulation of disease progression, we propose a progressive framework PIE, for disease progression simulation that combines text and image modalities. Specifically, we aim to progressively add and subtract disease-related features, controlled by a text encoder, to conditionally progress the disease without significantly altering the original base image features (see Figure 1). Our framework is built based on the invertibility of denoising diffusion probabilistic models (Ho et al., 2020; Song et al., 2020). Our theoretical analysis shows PIE can be viewed as a gradient descent toward the objective maximum log-likelihood of given text conditioning. The learning rate in this iterative process is decaying exponentially with each iteration forward, which means that the algorithm is effectively exploring the solution space while maintaining a balance between convergence speed and stability. This theoretical analysis guarantees that our framework is moving the instance toward the targeted manifold and ensures modification is bounded.
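As a purely schematic illustration of this idea (not the authors' implementation, which together with its checkpoints is linked in the footnote to the abstract), repeated text-conditioned image-to-image passes with a latent diffusion model can be written with the `diffusers` library as follows; the prompt, `strength`, and `guidance_scale` values are placeholders, and the checkpoint name is the one released by the authors:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline

# Checkpoint name taken from the paper's footnote; all other settings are illustrative guesses.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "IrohXu/stable-diffusion-mimic-cxr-v0.1", torch_dtype=torch.float16
).to("cuda")

def simulate_progression(image, prompt, num_steps=5, strength=0.3, guidance_scale=7.5):
    """Iteratively re-edit the image toward the text condition, one small step at a time."""
    frames = [image]
    for _ in range(num_steps):
        image = pipe(prompt=prompt, image=image,
                     strength=strength, guidance_scale=guidance_scale).images[0]
        frames.append(image)
    return frames

# Hypothetical usage: frames = simulate_progression(xray_image, "chest X-ray with worsening edema")
```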
We evaluate PIE on three distinct medical imaging datasets with non-longitudinal disease progression data, including CheXpert (Irvin et al., 2019), Diabetic Retinopathy Detection (CHF, 2015) and ISIC 2018 (Codella et al., 2019). We demonstrate that our framework leads to more accurate and individualized disease progression predictions on these datasets, which can improve clinical diagnosis and treatment planning, enhance patient records by filling in missing imaging data, and potentially aid medical education. We also conducted a user study with physicians to evaluate the effectiveness of PIE for disease progression simulation. The study presented physicians with a set of simulated disease images and progressions, and then asked them to assess the accuracy and quality of each generated image and progression. Our main contributions are summarized as follows:
* We propose a temporal medical imaging simulation framework PIE, which allows for more precise and controllable manipulation of disease-related image features and leads to more accurate and individualized longitudinal disease progression simulation.
* We provide theoretical evidence that our iterative refinement process is equivalent to gradient descent with an exponentially decaying learning rate, which helps to establish a deeper understanding of the underlying mechanism and provides a basis for further improvement.
* We demonstrate the superior performance of PIE over baselines in disease progression prediction across three medical domains. The results show that PIE produces more accurate and higher-quality disease progression predictions.
* We also conducted a user study with physicians to evaluate the effectiveness of our proposed framework for disease progression simulation. The participating physicians agreed that the simulated disease progressions generated by PIE closely matched their expectations \(76.2\%\) of the time, indicating high accuracy and quality.
## 2 Related Works
**Disease Progression Simulation** Longitudinal disease progression data derived from individual electronic health records offer an exciting avenue to investigate the nuanced differences in the progression of diseases over time (Schulam & Arora, 2016; Stankeviciute et al., 2021; Chen et al., 2022; Mikhael et al., 2023; Koval et al., 2021). Most previous works are based on HMMs (Wang et al., 2014; Liu et al., 2015; Alaa et al., 2017) and deep probabilistic models (Alaa & van der Schaar, 2019). Some recent works address disease progression simulation with deep generative models: Ravi et al. (2022) utilized a GAN-based model and a linear regressor with individuals' sequential monitoring data for Alzheimer's disease progression simulation in MRI imaging space. However, all these methods have to use full sequential images and fail to address personalized healthcare in the imaging space. The lack of such time-series data in practice poses a significant challenge for disease progression simulation (Xue et al., 2020; Chen, 2022; Berrevoets et al., 2023).
**Generative Models** Generative models like Variational Autoencoders (VAEs) (Kingma & Welling, 2013) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2020) have been widely employed in medical imaging applications (Nie et al., 2017; Isola et al., 2017; Cao et al., 2020). Recent GAN models (Kang et al., 2023; Patashnik et al., 2021) have harnessed the power of CLIP (Radford et al., 2021) embedding to guide image editing based on contextual prompts. However, GAN-based models are unstable and difficult to optimize in general. Denoising Diffusion Models (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020; Rombach et al., 2022; Karras et al., 2022) have become increasingly popular in recent years due to their ability to create photo-realistic images from textual descriptions. One major advantage of these models is their ability to learn from large-scale datasets. Among the various text-to-image models, Stable Diffusion (Rombach et al., 2022) has received considerable attention because of its impressive performance in generating high-quality images and its relatively low cost to fine-tune. Its denoising process works similarly to the diffusion models but in a latent space, and this process results in a final image that is highly consistent with the input text, making it an excellent tool for text-guided image editing. Diffusion models can also be effortlessly incorporated into an image-to-image editing pipeline (Brooks et al., 2022; Parmar et al., 2023; Orgad et al., 2023), thus providing users the ability to edit scenarios across multiple modalities and assess potential imaging progressive editing paths. However, existing image-to-image methods can only be used for single-step editing, which makes it difficult to simulate personalized time-series progression data in the medical domain.
## 3 Problem Statement
In the traditional disease progression simulation setting, one assumes sequential time-series image-text data pairs \(\{(\mathbf{x_{0}},y_{0}),(\mathbf{x_{1}},y_{1}),...,(\mathbf{x_{T}},y_{T})\}\) from each patient. The clinical image-text data pair \((\mathbf{x},y)\in\mathcal{X}\times\mathcal{Y}\) is sampled from a non-i.i.d. distribution, where \(\mathcal{Y}=\mathbb{R}^{n}\) denotes the medical report space and \(\mathcal{X}=\mathbb{R}^{m}\) denotes the medical imaging space. Prior works either rely heavily on probabilistic modeling, \(f_{\theta}(y_{0:t-1})\to y_{t}\) (Liu et al., 2015; Alaa & van der Schaar, 2019), or rely on longitudinal data to train regression models for imaging simulation, \(f_{\theta}(x_{0:t-1},y_{t-1})\to y_{t}\) (Han et al., 2022; Ravi et al., 2022). However, sequential longitudinal data are hard to obtain, as most patients may not return to the same hospital for follow-up treatment, and hospitals often lack medical imaging and clinical reports from the early stages of the disease.
In this paper, we redefine disease progression simulation using a data-driven generative model that requires neither sequential time-series data nor clinical prior knowledge. Anyone with access to discrete imaging and medical report data can train the model to predict disease progression without deep medical expertise, significantly reducing the amount of work required for feature engineering and data collection.
**Definition 1** (**Simulate disease progression with non-sequential data**): _Assume \(h_{\phi}\) is a generative model learned from the data space \(\Omega=\{(\mathbf{x},y)\in\chi\times\Gamma\}\), where the data are assumed to be independent and identically distributed and each \((\mathbf{x},y)\) comes from a different individual. In the training phase, \(h_{\phi}\) models the mapping \(\Gamma\rightarrow\chi\). In the inference phase, given an initial test data sample \((x_{t},y_{t})\) at progression stage \(t\), \(h_{\phi}\) converts the input imaging \(x_{t}\) and the input clinical context \(y_{T}\) into \(x_{T}\), where \(y_{T}\) is the final-stage clinical report inferred by a language model from \(y_{0}\), and \(x_{t},x_{t+1},...,x_{T}\) is the simulated sequential imaging progression._
In the following sections, we pick DDIM as the base step of our proposed method because of its theoretical reversibility, which allows smooth transitions and convergence under Definition 1. The proof is given in the supplementary material.
## 4 Progressive Image Editing (PIE)
Progressive image editing (PIE) is a novel framework proposed to refine and enhance images in an iterative and discrete manner, allowing the use of additional prompts for small and precise adjustments to simulate semantic modification while keeping realism. Unlike traditional image editing techniques, PIE involves a multi-stage process where each step builds upon the previous one, with the aim of achieving a final result that is more refined and smooth than if all changes were made at once. The approach also enables precise control over specific semantic features of the image to be adjusted without significant impacts on other regions. The main purpose of PIE is to simulate disease progression from multi-modal input data.
**Procedure.** The inputs to PIE are a discrete medical image \(x_{0}^{(0)}\) depicting any start or middle stage of a disease and a corresponding clinical report latent \(y\) as the text conditioning (Rombach et al., 2022). \(y\) is generated from a pretrained text encoder from CLIP (Radford et al., 2021) [clip-vit-large-patch14], where the raw text input could either be a real report or synthetic report, providing the potential hint of the patient's disease progression. The output generated is a sequence of images, \(\{x_{0}^{(0)},x_{0}^{(1)},...,x_{0}^{(N)}\}\), illustrating the progression of the disease as per the input report. The iterative PIE procedure is defined as follows:
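To make the iterative procedure concrete, the following is a minimal Python sketch of the outer PIE loop. Here `pie_step` stands for the single-step editing operation detailed in Algorithm 1 below, and `text_encoder` is a stand-in for the pretrained CLIP text encoder; both names and the default of \(N=10\) are illustrative assumptions rather than part of the released implementation.

```python
def simulate_progression(x0, report_text, text_encoder, pie_step, N=10):
    """Run PIE recursively to produce the sequence x_0^{(0)}, ..., x_0^{(N)}."""
    y = text_encoder(report_text)        # clinical-report conditioning latent
    trajectory = [x0]
    x = x0
    for n in range(1, N + 1):
        x = pie_step(x, y, x0)           # x_0^{(n)} = PIE^{(n)}(x_0^{(n-1)}, y)
        trajectory.append(x)
    return trajectory
```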
Figure 2: Overview of the PIE inference pipeline. PIE is illustrated using an example of disease progression editing X-ray from a healthy state to cardiomegaly. For any given step \(n\) in PIE, we first utilize DDIM inversion to procure an inverted noise map. Subsequently, we denoise it using clinical reports imbued with progressive cardiomegaly information. The output of DDIM denoising serves as the input for step \(n+1\), thus ensuring a gradual and controllable disease progression simulation. After simulating \(N\) steps, the image is converged to the final state.
**Proposition 1**: _Let \(x_{0}^{(N)}\sim\chi\), where \(\chi\) is distribution of photo-realistic images, \(y\) be the text conditioning, running \(PIE^{(n)}(\cdot,\cdot)\) recursively is denoted as following, where \(N\geq n\geq 1\),_
\[x_{0}^{(n)}=PIE^{(n)}(x_{0}^{(n-1)},y) \tag{1}\]
_Then, the resulting output \(x_{0}^{(N)}\) maximizes the posterior probability \(p(x_{0}^{(N)}|\,x_{0}^{(0)},y)\)._
With each round of editing as shown in Figure 2, the image gets closer to the objective by moving in the direction of \(-\nabla\log p(x|y)\). Due to the properties of DDIM, the step size would gradually decrease with a constant factor. Additional and more detailed proofs will be available in Supplementary B.
**Proposition 2**: _Assuming \(\|x_{0}^{(0)}\|\leq C_{1}\) and \(\|\epsilon_{\theta}(x,y)\|\leq C_{2}\), \((x,y)\in(\chi,\Gamma)\), for any \(\delta>0\), if_
\[n>\frac{2}{\log(\alpha_{0})}\cdot(\log(\delta)-C) \tag{2}\]
_then,_
\[\|x_{0}^{(n+1)}-x_{0}^{(n)}\|<\delta \tag{3}\]
_where, \(\lambda=\frac{\sqrt{\alpha_{0}-\alpha_{0}\alpha_{1}}-\sqrt{\alpha_{1}-\alpha_{ 0}\alpha_{1}}}{\sqrt{\alpha_{1}}}\), \(\chi\) is the image distribution, \(\Gamma\) is the text condition distribution, and \(C=\log((\frac{1}{\sqrt{\alpha_{0}}}-1)\cdot C_{1}+\lambda\cdot C_{2})\)_
**Proposition 3**: _For all \(N>1\), \(\|x_{0}^{(N)}-x_{0}^{(0)}\|\leq[(\frac{1}{\sqrt{\alpha_{0}}}-1)\cdot C_{1}+ \lambda\cdot C_{2}]\)_
In addition, Propositions 2 and 3 show that as \(n\) grows, the changes between steps become smaller; eventually, the difference between steps becomes arbitrarily small. Hence, the convergence of \(PIE\) is guaranteed and the modification to any input is bounded by a constant.
```
Input: Original input image \(x_{0}^{(0)}\) at the start point; input image \(x_{0}^{(n-1)}\) at stage \(n\); number of diffusion steps \(T\); text conditioning vector \(y\); noise strength \(\gamma\); Stable Diffusion parameterized denoiser \(\epsilon_{\theta}\); ROI mask \(M_{ROI}\) with \(M_{ROI}^{i,j}\in[0,1]\)
Output: Modified image \(x^{\prime}\) as \(x_{0}^{(n)}\)
1: \(x^{\prime}\leftarrow x_{0}^{(n-1)}\)
2: \(k\leftarrow\gamma\cdot T\)
3: \(\epsilon\leftarrow\mathcal{N}(0,\mathcal{I})\)
4: \(x^{\prime}\leftarrow\sqrt{\alpha_{k}}\cdot x^{\prime}+\sqrt{1-\alpha_{k}}\cdot\epsilon\)
5: for \(t=k\) to \(1\) do
6:     \(x^{\prime}\leftarrow\sqrt{\alpha_{t-1}}\left(\frac{x^{\prime}-\sqrt{1-\alpha_{t}}\,\epsilon_{\theta}^{(t)}(x^{\prime},y)}{\sqrt{\alpha_{t}}}\right)+\sqrt{1-\alpha_{t-1}}\cdot\epsilon_{\theta}^{(t)}(x^{\prime},y)\)
7: end for
8: \(x^{\prime}\leftarrow(\beta_{1}\cdot(x^{\prime}-x_{0}^{(0)})+x_{0}^{(0)})\cdot(1-M_{ROI})+(\beta_{2}\cdot(x^{\prime}-x_{0}^{(0)})+x_{0}^{(0)})\cdot M_{ROI}\)
9: return \(x^{\prime}\) as \(x_{0}^{(n)}\)
```
**Algorithm 1** Progressive Image Editing, \(n\)-th step (_PIE\({}^{(n)}\)_)
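For readers who prefer code, below is a minimal NumPy sketch of Algorithm 1. It assumes a fine-tuned denoiser `eps_theta(x, y, t)` and a DDIM cumulative noise schedule `alphas` indexed \(0,\ldots,T\); these names, and the default values of \(\gamma\), \(\beta_{1}\), \(\beta_{2}\), are assumptions made for illustration rather than the released implementation. In practice the denoising runs in the latent space of Stable Diffusion; the array notation below is kept only to mirror Algorithm 1.

```python
import numpy as np

def pie_step(x_prev, y, x0, eps_theta, alphas, M_roi,
             gamma=0.6, beta1=0.1, beta2=0.9):
    """One PIE step following Algorithm 1 (hypothetical helper names)."""
    T = len(alphas) - 1
    k = int(gamma * T)                                   # noise strength: how far to re-noise
    eps = np.random.randn(*x_prev.shape)
    x = np.sqrt(alphas[k]) * x_prev + np.sqrt(1.0 - alphas[k]) * eps
    for t in range(k, 0, -1):                            # deterministic DDIM denoising, conditioned on y
        e = eps_theta(x, y, t)
        x0_hat = (x - np.sqrt(1.0 - alphas[t]) * e) / np.sqrt(alphas[t])
        x = np.sqrt(alphas[t - 1]) * x0_hat + np.sqrt(1.0 - alphas[t - 1]) * e
    # Line 8: ROI-guided interpolation back toward the original image x_0^{(0)}
    x = (beta1 * (x - x0) + x0) * (1.0 - M_roi) + (beta2 * (x - x0) + x0) * M_roi
    return x
```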
## 5 Experiments and Results
In this section, we present experiments on various disease progression tasks. Experimental results demonstrate that PIE can simulate disease-changing trajectories influenced by different medical conditions. Notably, PIE also preserves unrelated visual features from the original medical imaging, even as it progressively edits the disease representation. Figure 5 showcases a set of disease progression simulation examples across three distinct types of medical imaging. Details of Stable Diffusion fine-tuning and of the pretrained models used for the confidence metric are available in Supplementary D.
### Experimental Setups
**Implementation Details.** We present the details of single-step PIE in Algorithm 1. For _PIE\({}^{(n)}\)_, we define \(\alpha_{k}\) according to the DDIM case. Line 8 in Algorithm 1 ensures progressive and limited modifications between the original input image \(x_{0}^{(0)}\), the single-step edited output \(x^{\prime}\), and the region guide selector \(M_{ROI}\) through the utilization of interpolation average parameters \(\beta_{1}\) and \(\beta_{2}\). These parameters dictate the modification ratio between the ROI mask-guided space and the original input space. As \(\beta_{1}\) increases, the multi-step editing process becomes smoother, though it may sacrifice some degree of realism.
**Datasets for Disease Progression.** We validate the disease progression analysis through end-to-end medical domain-specific image inference. Specifically, we evaluate the pretrained domain-specific stable diffusion model on three different types of disease datasets in classification tasks: CheXpert for chest X-ray classification (Irvin et al., 2019), ISIC 2018 / HAM10000 (Codella et al., 2019; Tschandl et al., 2018) for skin cancer prediction, and the Kaggle Diabetic Retinopathy Detection Challenge (CHF, 2015). Each of these datasets presents unique challenges and differs in scale, making them suitable for testing the robustness and versatility of PIE. We also collected over 30 healthy samples from the test sets of these datasets; these samples were used for disease progression simulation. Three groups of progression visualization results can be found in Figure 5.
**Evaluation Metrics.** The assessment of generated disease progression images relies on two crucial aspects: alignment to edited disease feature and subject fidelity. To measure these characteristics, we utilize two primary metrics: the CLIP-I score and the classification confidence score. The CLIP-I
| **Method** | **Chest X-ray** Conf (\(\uparrow\)) | **Chest X-ray** CLIP-I (\(\uparrow\)) | **Retinopathy** Conf (\(\uparrow\)) | **Retinopathy** CLIP-I (\(\uparrow\)) | **Skin Lesion** Conf (\(\uparrow\)) | **Skin Lesion** CLIP-I (\(\uparrow\)) |
| --- | --- | --- | --- | --- | --- | --- |
| Stable Diffusion Video | 0.389 | 0.923 | 0.121 | 0.892 | 0.201 | 0.886 |
| Extrapolation | 0.0543 | **0.972** | 0.0742 | 0.991 | 0.226 | 0.951 |
| PIE | **0.690** | 0.968 | **0.807** | **0.992** | **0.453** | **0.958** |

Table 1: Comparisons with multi-step editing simulations. The backbones of PIE and the baseline approaches are Stable Diffusion with the same pre-trained weights.
Figure 4: Using PIE, SD Video, Extrapolation to simulate Edema progression with clinical reports as input prompt.
Figure 3: Cardiomegarly disease progression absolute difference heatmap simulated by PIE. The highlighted red portion illustrates the progression of the pathology at each step.
score (which ranges over [-1, 1] in theory) represents the average pairwise cosine similarity between the CLIP embeddings of generated and real images (Radford et al., 2021; Ruiz et al., 2022). The classification confidence score is determined using deep networks trained in a supervised manner for binary classification between negative (healthy) and positive (disease) samples. It is denoted as \(\textbf{Conf}=Sigmoid(f_{\theta}(x))\) and represents whether the simulation results are aligned with the target disease. In our experiments, we train the DeepAUC maximization method (Yuan et al., 2021) (state of the art on CheXpert and ISIC 2018 Task 3) using DenseNet121 (Huang et al., 2017) as the backbone to compute the classification confidence score.
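As a concrete illustration, both metrics can be computed from precomputed CLIP embeddings and classifier logits roughly as follows; the array shapes and function names here are assumptions for the sketch, not the exact evaluation code.

```python
import numpy as np

def clip_i_score(gen_emb, real_emb):
    """Average pairwise cosine similarity between CLIP embeddings
    of generated images (n x d) and real images (m x d)."""
    g = gen_emb / np.linalg.norm(gen_emb, axis=1, keepdims=True)
    r = real_emb / np.linalg.norm(real_emb, axis=1, keepdims=True)
    return float((g @ r.T).mean())

def confidence_score(logit):
    """Conf = Sigmoid(f_theta(x)) for the binary healthy-vs-disease classifier."""
    return 1.0 / (1.0 + np.exp(-logit))
```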
**Baselines** To our knowledge, there are no existing image editing models specifically designed for simulating disease progression without sequential training data. To underscore the unique strengths of PIE, we compare it against two of the most promising state-of-the-art baseline methods. The first is Stable Diffusion Video (SD Video) (Raw, 2022) for short video generation; SD Video is a code implementation based on recent latent-based video generation methods (Blattmann et al., 2023; Wu et al., 2022). The second is Style-Based Manifold Extrapolation (Extrapolation) (Han et al., 2022) for generating progressive medical imaging, which does not need diagnosis-labelled data (Ravi et al., 2019; Han et al., 2022) and is thus similar to PIE's problem setting, but it requires a progression inference prior. In the comparison, all baseline methods use the same fine-tuned Stable Diffusion weights and also apply \(M_{ROI}\) for region guidance.
### Progression Simulation Comparison
To demonstrate the superior performance of PIE in disease progression simulation over other single-step editing methods, we perform experiments on the three datasets previously mentioned. For each disease in these datasets, we use 10 healthy samples from the test set as simulation starting points and run PIE, SD Video, and Extrapolation with 5 random seeds, obtaining at least 50 disease imaging trajectories for each patient. Table 1 shows that PIE consistently surpasses both SD Video and Extrapolation in terms of disease confidence scores while maintaining high CLIP-I scores. For the CheXpert dataset, the final confidence score of 0.690 is the average over 5 classes. For the Diabetic Retinopathy and ISIC 2018 datasets, we compare PIE with SD Video and Extrapolation on editing images toward the most commonly seen class, since these datasets are highly imbalanced. Figure 6 illustrates the evolution of disease confidence scores at each step of the progression simulation. We observe that PIE produces more faithful and realistic progressive editing compared to the other two baselines. Interestingly, while the CLIP-I score of Extrapolation is comparable to that of PIE, it fails to effectively edit the key disease features of the input images, as its confidence scores are low throughout and at the end of the progression. We also visualize the absolute differences between the initial stage and each progression stage of Cardiomegaly in Figure 3.
Figure 5: Disease Progression Simulation of PIE. The top progression is for Cardiomegaly. The middle progression is for Diabetic Retinopathy. The bottom progression is for Melanocytic Nevus.
Figure 4 showcases a group of progression simulation results for Edema in chest X-rays with a CheXpert clinical report as the prompt. We observe that while SD Video can significantly alter the input image in the initial step, it fails to identify the proper direction of progression in the manifold after a few steps and easily creates uncontrollable noise. Conversely, Extrapolation only brightens the chest X-ray without making substantial modifications. PIE, on the other hand, not only convincingly simulates the disease trajectory but also preserves unrelated visual features from the original medical imaging. Further visual comparisons across the different datasets are presented in Supplementary E.
### Ablation Study
**Medical heuristic guidance.** During the PIE simulation, the region guide masks play an important role as prior information. Unlike random inpainting tasks (Lugmayr et al., 2022), the ROI mask for medical imaging can be extracted from real or synthetic clinical reports (Boag et al., 2020; Lovelace and Mortazavi, 2020) using domain-specific Segment Anything models (Kirillov et al., 2023; Ma and Wang, 2023). It helps keep unrelated regions consistent through the progressive changes made by PIE or the baseline models. To generate sequential disease imaging data, PIE uses the noise strength \(\gamma\) to control the influence of the patient's clinically reported and expected treatment regimen at time \(n\), and \(N\) controls the duration of the disease occurrence or treatment regimen. PIE allows the user to exert such control over the iterative process, and running \(\textit{PIE}^{(n)}\) multiple times can improve the accuracy of disease imaging tracking and reduce the likelihood of missed or misinterpreted changes. Related ablation study results for \(M_{ROI}\), \(\gamma\), \(N\), \(\beta_{1}\), and \(\beta_{2}\) are available in Supplementary E.
**Comparison with real longitudinal medical imaging sequences.** Lack of longitudinal data is a common problem in current chest X-ray datasets. However, due to the spread of COVID, some recently released datasets contain limited longitudinal data. To validate that PIE's simulated disease sequences can match real disease trajectories, we conduct experiments on generating edema disease progression for 10 patients in the BrixIA COVID-19 Dataset (Signoroni et al., 2021). The input image is the day-1 image, and we use PIE to generate future disease progression based on real clinical reports for edema.
Figure 6: PIE excels in comparison to all the baseline methods across six different disease progression simulations. The inputs utilized are genuine healthy images from the test sets. For each image, we apply five random seeds to simulate disease progression over ten steps. The confidence score, a value that ranges from 0 to 1, signifies the classification confidence for a specific disease.
**Case study: co-occurring diseases.** PIE is capable of generating images for co-occurring diseases, although the performance trails slightly behind that of single-disease generation. To evaluate this ability, we use 10 chest X-ray reports for co-occurring Cardiomegaly, Edema, and Pleural Effusion. Six cases yielded successful co-occurring disease simulation sequences that agreed with the assessments of experienced clinicians. Figure 8 illustrates an example of disease progression simulation. After 10 steps, all diseases achieve a high confidence score, indicating successful simulation.
### User Study
To further assess the quality of our generated images, we surveyed 35 physicians and radiologists with \(14.4\) years of experience on average to answer a questionnaire on chest X-rays. The questionnaire includes disease classifications on the generated and real X-ray images and evaluations of the realism of generated disease progression sequences of Cardiomegaly, Edema, and Pleural Effusion. More details of the questionnaire and the calculation of the statistics are presented in Supplementary F.1. The participating physicians have agreed with a probability of \(\mathbf{76.2}\%\) that the simulated progressions on the targeted diseases fit their expectations.
Table 2 provides an interesting insight into experienced physicians' performance in predicting the pathology on real and generated X-rays. Surprisingly, we find that users' performance on generated X-rays is superior to their performance on real images, with substantially higher recall and F1. In addition, a statistical test suggests that the F1 scores on generated scans are significantly higher (p-value of \(0.0038\)) than on the real scans. One plausible explanation is that, by the nature of PIE, progressive image editing makes pathological features more evident. The aggregated results from the user study demonstrate our framework's ability to simulate disease progression that meets real-world standards.
## 6 Conclusion
In conclusion, our proposed framework, Progressive Image Editing (PIE), holds great potential as a tool for medical research and clinical practice in simulating disease progression. By leveraging recent
| **Data** | **Precision** | **Recall** | **F1** |
| --- | --- | --- | --- |
| Real | 0.505 | 0.415 | 0.455 |
| PIE | 0.468 | 0.662 | 0.549 |

Table 2: To quantitatively analyze the responses of experienced physicians, we consider each pathology class as independent and calculate the precision, recall, and F1 score across all diseases and physicians.
Figure 8: PIE can successfully simulate co-occurring disease progression (the patient's clinical report shows a high probability of Cardiomegaly, Edema, and Pleural Effusion at the same time).
Figure 7: Evaluating the confidence scores of PIE progression trajectories highlights the alignment with realistic progression. The mean absolute error between two trajectories is approximately \(0.0658\).
advancements in text-to-image generative models, PIE achieves high-fidelity and personalized disease progression simulations. The theoretical analysis shows that the iterative refining process is equivalent to gradient descent with an exponentially decayed learning rate, and practical experiments on three medical imaging datasets demonstrate that PIE surpasses baseline methods on several quantitative metrics. Furthermore, a user study conducted with veteran physicians confirms that the simulated disease progressions generated by PIE meet real-world standards. Despite current limitations due to the lack of large amounts of longitudinal data and detailed medical reports, our framework has vast potential for modeling disease trajectories over time, restoring missing data from previous records, predicting future treatment responses, and improving clinical education. Moving forward, we aim to incorporate more data with richer descriptions and different monitoring modalities, such as chemical biomarkers and physiological recordings, into fine-tuning generative models, enabling more precise control over disease simulation through text conditioning.
|
2309.06239 | Risk-Aware Reinforcement Learning through Optimal Transport Theory | In the dynamic and uncertain environments where reinforcement learning (RL)
operates, risk management becomes a crucial factor in ensuring reliable
decision-making. Traditional RL approaches, while effective in reward
optimization, often overlook the landscape of potential risks. In response,
this paper pioneers the integration of Optimal Transport (OT) theory with RL to
create a risk-aware framework. Our approach modifies the objective function,
ensuring that the resulting policy not only maximizes expected rewards but also
respects risk constraints dictated by OT distances between state visitation
distributions and the desired risk profiles. By leveraging the mathematical
precision of OT, we offer a formulation that elevates risk considerations
alongside conventional RL objectives. Our contributions are substantiated with
a series of theorems, mapping the relationships between risk distributions,
optimal value functions, and policy behaviors. Through the lens of OT, this
work illuminates a promising direction for RL, ensuring a balanced fusion of
reward pursuit and risk awareness. | Ali Baheri | 2023-09-12T13:55:01Z | http://arxiv.org/abs/2309.06239v1 | # Risk-Aware Reinforcement Learning through Optimal Transport Theory
###### Abstract
In the dynamic and uncertain environments where reinforcement learning (RL) operates, risk management becomes a crucial factor in ensuring reliable decision-making. Traditional RL approaches, while effective in reward optimization, often overlook the landscape of potential risks. In response, this paper pioneers the integration of Optimal Transport (OT) theory with RL to create a risk-aware framework. Our approach modifies the objective function, ensuring that the resulting policy not only maximizes expected rewards but also respects risk constraints dictated by OT distances between state visitation distributions and the desired risk profiles. By leveraging the mathematical precision of OT, we offer a formulation that elevates risk considerations alongside conventional RL objectives. Our contributions are substantiated with a series of theorems, mapping the relationships between risk distributions, optimal value functions, and policy behaviors. Through the lens of OT, this work illuminates a promising direction for RL, ensuring a balanced fusion of reward pursuit and risk awareness.
## I Introduction
Reinforcement learning (RL) has witnessed remarkable advancements in recent years, fueling innovations across diverse fields such as robotics, finance, aviation, and intelligent transportation systems [1, 2, 3, 4]. While traditional RL methods are focused on maximizing cumulative rewards, real-world applications often demand a more comprehensive approach that considers the inherent risks associated with decision-making. Specifically, in scenarios where actions may lead to high-stake consequences or where the environment is intrinsically uncertain, simply aiming for reward maximization without considering risk can lead to suboptimal or even catastrophic outcomes [5].
Safety in RL is instrumental to its advancements. Prominent techniques include model-based strategies for assessing action safety [6, 7, 8], shielding mechanisms to counter unsafe decisions [9, 10, 11], constrained optimization for policy adherence [12, 13, 14], and formal methods underpinning rigorous safety with mathematical constructs [15, 16, 17]. Amid this landscape, risk-aware RL stands out. Techniques for risk-aware RL range from incorporating financial risk metrics like Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) [18, 19, 20], to embracing Distributional RL that models the entire return distribution [21, 22, 23], to formulating risk-sensitive policies that inherently favor safer actions [24, 25]. This adaptation in strategy ensures that agents are not only aiming for high rewards but are also cautious of rare yet consequential adverse events, striking a balance between reward-seeking and prudence in complex environments.
Building on the foundations of risk-sensitive RL, our work proposes a novel perspective by leveraging the powerful mathematical framework of Optimal Transport (OT). The OT provides tools to measure the distance between probability distributions in a geometrically meaningful way [26]. In the context of RL, this allows us to treat risk as a divergence between the desired (or target) distribution of outcomes and the distribution induced by the agent's policy. By framing risk management as an OT problem, we can inherently consider the entire distribution of returns, capturing both the expected rewards and the associated risks. At its core, our approach aims to minimize the OT distance between the state distribution generated by the policy and a _predefined_ target risk distribution. Such a formulation fosters a balanced trade-off between reward maximization and risk mitigation. It accounts for the variability in outcomes, promoting policies that not only achieve high expected rewards but also align closely with the desired risk profile. The contributions of this paper are twofold:
* We present a formulation for risk-aware RL, harnessing the capabilities of OT theory. This formulation integrates risk considerations into the RL paradigm, charting a novel direction for risk-sensitive decision-making.
* We elucidate this framework with a series of theorems that highlight the interplay between risk distributions, value functions, and policy dynamics. These theorems reveal the balance between seeking rewards and navigating risks, emphasizing that the minimization of OT costs can pave the way for the derivation of policies that optimize rewards while maintaining safety.
## II Preliminaries
### _Reinforcement Learning_
RL is a framework for decision-making problems where an agent interacts with an environment in order to achieve a certain goal [27]. The environment is typically modeled as a Markov Decision Process (MDP), denoted by a tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\), where \(\mathcal{S}\) is the state space, representing all possible states the agent could inhabit in the environment. \(\mathcal{A}\) is the action space, indicating all possible actions the agent can take. \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) is the transition probability function, where \(\mathcal{P}(s^{\prime}|s,a)\) represents the probability of transitioning to state \(s^{\prime}\) when action \(a\) is taken in state \(s\). \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function, with \(\mathcal{R}(s,a)\) denoting the expected immediate reward for taking action \(a\) in state \(s\). \(\gamma\in[0,1]\) is the discount factor, which determines the present value of future rewards.
The agent's behavior is defined by a policy \(\pi:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\), which is a probability distribution over actions given the current state. The goal of the agent is to learn an optimal policy \(\pi^{*}\) that maximizes the expected cumulative discounted reward, defined as:

\[\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}\,\mathcal{R}\left(s_{t},a_{t}\right)\right] \tag{1}\]
where the expectation is taken over the trajectory of states and actions \((s_{0},a_{0},s_{1},a_{1},...)\) generated by following policy \(\pi\).
### _Optimal Transport Theory_
OT theory provides a means of comparing different probability measures by computing the minimum cost required to transform one distribution into another. Originally developed by Gaspard Monge in the 18th century and later extended by Leonid Kantorovich, OT theory has found applications in numerous fields including economics, computer graphics, and machine learning [28]. Let \(\mathscr{P}(\mathscr{S})\) denote the set of probability measures over the state space \(\mathscr{S}\). An OT plan between two probability measures \(\mu,\nu\in\mathscr{P}(\mathscr{S})\) is a joint distribution \(\gamma\) over \(\mathscr{S}\times\mathscr{S}\) with marginal distributions \(\mu\) and \(\nu\). In other words, for all \(A,B\subseteq\mathscr{S}\), we have:
\[\gamma(A\times\mathscr{S})=\mu(A),\quad\gamma(\mathscr{S}\times B)=\nu(B) \tag{2}\]
The cost of a transport plan \(\gamma\) under a cost function \(c:\mathscr{S}\times\mathscr{S}\rightarrow\mathbb{R}\) is given by:
\[\int_{\mathscr{S}\times\mathscr{S}}c\left(s,s^{\prime}\right)d\gamma\left(s,s ^{\prime}\right) \tag{3}\]
The OT problem involves finding the transport plan that minimizes this cost:
\[\gamma^{*}=\arg\min_{\gamma\in\Gamma(\mu,\nu)}\int_{\mathscr{S}\times\mathscr{ S}}c\left(s,s^{\prime}\right)d\gamma\left(s,s^{\prime}\right) \tag{4}\]
where \(\Gamma(\mu,\nu)\) is the set of all transport plans between \(\mu\) and \(\nu\). The OT cost or distance is the cost of the OT plan:
\[D_{OT}(\mu,\nu)=\min_{\gamma\in\Gamma(\mu,\nu)}\int_{\mathscr{S}\times\mathscr{ S}}c\left(s,s^{\prime}\right)d\gamma\left(s,s^{\prime}\right) \tag{5}\]
This cost can be interpreted as a distance metric between probability distributions, which induces a metric space structure on \(\mathscr{P}(\mathscr{S})\).
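For small discrete state spaces, the minimization in Eqs. (4)-(5) is a linear program and can be solved directly, for example with SciPy's `linprog`; dedicated OT libraries would be preferable for larger problems. The toy distributions and grid below are made up purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ot_cost(mu, nu, C):
    """Discrete optimal transport cost D_OT(mu, nu) for cost matrix C (Eq. 5)."""
    n, m = C.shape
    c = C.reshape(-1)                                 # objective <C, gamma> over flattened plans
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0              # sum_j gamma[i, j] = mu[i]
    for j in range(m):
        A_eq[n + j, j::m] = 1.0                       # sum_i gamma[i, j] = nu[j]
    b_eq = np.concatenate([mu, nu])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

# Example: squared-Euclidean cost between states on a 1-D grid.
states = np.linspace(0.0, 1.0, 5)
C = (states[:, None] - states[None, :]) ** 2
mu = np.array([0.7, 0.1, 0.1, 0.05, 0.05])   # state distribution under a policy
nu = np.array([0.2, 0.2, 0.2, 0.2, 0.2])     # target risk distribution
print(ot_cost(mu, nu, C))
```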
## III Risk-Aware Reinforcement Learning with Optimal Transport
In this section, we propose a novel approach to risk-sensitive RL that leverages the mathematical theory of OT. Our approach aims to guide the learning process of an RL agent not only by the expected return but also by the similarity between the state distribution under the current policy and a given risk distribution.
**Problem Formulation.** Consider an RL agent interacting with an environment defined by an MDP. We define a risk metric that assigns a risk value to each state, and form a risk distribution \(P_{r}:\mathscr{S}\rightarrow[0,1]\) over the states. The risk distribution represents the agent's prior knowledge or preferences regarding the safety of different states. The state distribution under a policy \(\pi\), denoted by \(P_{\pi}\), is the stationary distribution of the Markov chain induced by \(\pi\) in the MDP. The state distribution reflects the likelihood of the agent visiting different states under policy \(\pi\). Our objective is to find a policy that not only maximizes the expected return but also minimizes the OT cost between the state distribution under the policy and the risk distribution. The OT cost serves as a measure of the risk associated with the policy. A low OT cost indicates that the policy is aligned with the risk distribution, i.e., the agent is more likely to visit safe states and avoid risky states. The OT cost between the state distribution \(P_{\pi}\) and the risk distribution \(P_{r}\) is defined as:
\[D_{OT}(P_{\pi},P_{r})=\inf_{\gamma\in\Pi(P_{\pi},P_{r})}\mathbb{E}_{(s,s^{ \prime})\sim\gamma}[c(s,s^{\prime})], \tag{6}\]
where \(\Pi(P_{\pi},P_{r})\) is the set of all joint distributions on \(S\times S\) with \(P_{\pi}\) and \(P_{r}\) as marginals, and \(c:S\times S\rightarrow\mathbb{R}\) is a cost function that measures the cost of transporting probability mass from state \(s\) to state \(s^{\prime}\). In this work, we consider the squared Euclidean distance as the cost function, i.e., \(c(s,s^{\prime})=||s-s^{\prime}||^{2}\). The agent's objective is to find a policy \(\pi\) that maximizes the expected discounted reward while minimizing the OT cost. This leads to the following optimization problem:
\[\max_{\pi}\mathbb{E}_{\pi}[G_{t}]-\lambda D_{OT}(P_{\pi},P_{r}), \tag{7}\]
where \(G_{t}=\sum_{k=0}^{\infty}\gamma^{k}R_{t+k+1}\) is the return at time \(t\), and \(\lambda>0\) is a risk sensitivity coefficient that determines the trade-off between reward maximization and risk minimization. We propose a modified Q-learning algorithm to solve this optimization problem. The Q-function is updated as follows:
\[Q(s,a) \gets Q(s,a)+\alpha\bigg{[}R(s,a)-\lambda C(s)\] \[+\gamma\max_{a^{\prime}\in\mathscr{A}}Q(s^{\prime},a^{\prime})- Q(s,a)\bigg{]}, \tag{8}\]
where \(C(s)=D_{OT}(P_{\pi},P_{r})\) is the OT cost from state \(s\), \(\alpha\) is the learning rate, and \(s^{\prime}\) is the next state. The above formulation presents a novel approach to risk-sensitive RL, providing a means to incorporate safety considerations directly into the learning process. The following sections will provide theoretical analysis to demonstrate the performance and advantages of this approach.
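A tabular sketch of the modified update in Eq. (8) is shown below. The environment interface (`reset`, `step`, `n_actions`), the use of empirical visit counts to estimate \(P_{\pi}\), and the reuse of the `ot_cost` helper from the previous sketch are assumptions made for illustration; recomputing the OT cost at every step is also far from efficient and would be amortized in practice.

```python
import numpy as np

def risk_aware_q_learning(env, P_r, C, lam=0.5, alpha=0.1, gamma=0.99,
                          eps=0.1, episodes=500):
    """Tabular Q-learning with the OT penalty of Eq. (8)."""
    nS, nA = C.shape[0], env.n_actions
    Q = np.zeros((nS, nA))
    visits = np.ones(nS)                      # running state-visit counts
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = np.random.randint(nA) if np.random.rand() < eps else int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)
            visits[s_next] += 1
            P_pi = visits / visits.sum()       # empirical state distribution under the current policy
            penalty = ot_cost(P_pi, P_r, C)    # OT cost D_OT(P_pi, P_r), as sketched above
            target = r - lam * penalty + gamma * np.max(Q[s_next]) * (not done)
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```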
## IV Theoretical Results
In this section, we delve into the mathematical underpinnings of risk-aware RL using OT theory. The objective is to provide a comprehensive understanding of how risk, as captured by OT metrics, interacts with fundamental concepts in RL:
**Safety of Policy (Theorem 1):** Theorem 1 postulates the relationship between the policy that minimizes OT costs and its intrinsic safety. Specifically, by minimizing the OT
distance between the induced state distribution of a policy and a given risk distribution, the policy can be intuitively understood as "safer".
**Optimal Value Function and OT (Theorem 2):** Building on the implications of safety, Theorem 2 presents the impacts of embedding OT costs into the objective function of an MDP. The theorem presents a comparative analysis between the optimal value functions with and without the consideration of the OT metric, emphasizing the conservative nature of the risk-aware formulation.
**Sensitivity Analysis of Optimal Policies (Theorem 3):** Expanding the discourse to the dynamics of risk sensitivity, Theorem 3 investigates how variations in the risk sensitivity parameter influence the derived optimal policies. The results underscore a systematic relationship between risk sensitivity and OT distances for respective optimal policies.
**State Visits and Risk Distribution (Theorem 4):** Finally, our discourse culminates with Theorem 4, which offers a perspective on state visitation patterns. By focusing on states proximate to a target risk distribution, this theorem bridges the gap between policy safety and state distribution, highlighting how an optimal policy in the OT sense also maximizes the expectation of visiting states that align closely with the risk distribution.
**Theorem 1.**_Given an MDP and a risk distribution \(p_{\pi}\), the policy \(\pi\) that minimizes the OT cost \(D_{OT}(p_{\pi},p_{r})\) is a "safer" policy in the sense that it induces a state distribution closer to the risk distribution._
**PROOF.** We will prove this by contradiction. Suppose there exists a policy \(\pi^{\prime}\) such that \(\pi^{\prime}\) is safer than \(\pi\), i.e., the state distribution \(p_{\pi^{\prime}}\) induced by \(\pi^{\prime}\) is closer to the risk distribution \(p_{r}\) than \(p_{\pi}\), but \(\pi\) minimizes the OT cost \(D_{OT}(p_{\pi},p_{r})\). In mathematical terms, this means that \(D_{OT}(p_{\pi^{\prime}},p_{r})<D_{OT}(p_{\pi},p_{r})\), but \(D_{OT}(p_{\pi},p_{r})\leq D_{OT}(p_{\pi^{\prime}},p_{r})\), where the second inequality comes from the assumption that \(\pi\) minimizes the OT cost. However, this leads to a contradiction because it would imply that \(D_{OT}(p_{\pi^{\prime}},p_{r})\) is both less than and greater than or equal to \(D_{OT}(p_{\pi},p_{r})\), which is not possible. Therefore, our assumption must be wrong, and there cannot exist a policy \(\pi^{\prime}\) that is safer than \(\pi\). This means that \(\pi\) is the safest policy in the sense that it induces a state distribution closer to the risk distribution.
This formalizes the intuition that if a policy \(\pi\) minimizes the OT cost between the state distribution under the policy and the risk distribution, then the state distribution under the policy must be closer to the risk distribution (in the sense of the OT cost) than any state distribution induced by a different policy. This is what we mean when we say that \(\pi\) is a "safer" policy.
**Implications.** Theorem 1 establishes a foundational bridge between the concept of risk minimization in RL and the OT metric. In essence, it provides a formal justification for the use of OT as an effective means to quantify and address the risk in RL. The theorem showcases that when we optimize for OT in the context of RL, we are inherently driving our policy towards safer behaviors. This reinforces the rationale behind introducing OT in RL frameworks, especially for safety-critical tasks.
**Theorem 2 (Impact of OT on the Value Function.)**_Given an MDP and a risk sensitivity parameter \(\lambda\), the optimal value function \(V^{*}\) that incorporates the OT cost as a part of the objective function, is less than or equal to the optimal value function \(V^{*}_{0}\) that does not consider the OT cost, i.e., \(V^{*}\leq V^{*}_{0}\)._
**PROOF.** By definition, the optimal value function \(V^{*}_{0}\) for an MDP is given by the maximum expected discounted reward over all policies, i.e., \(V^{*}_{0}=\max_{\pi}\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})]\), where the expectation is taken over the randomness in the transitions and the policy. The optimal value function \(V^{*}\) that incorporates the OT cost as a part of the objective function is given by:
\[V^{*}=\max_{\pi}\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}(R(s_{t},a_{t})- \lambda D_{OT}(p_{\pi},p_{r}))] \tag{9}\]
Since \(D_{OT}(p_{\pi},p_{r})\geq 0\) for all policies \(\pi\) and \(\lambda\geq 0\), we have:
\[R(s_{t},a_{t})-\lambda D_{OT}(p_{\pi},p_{r})\leq R(s_{t},a_{t}) \tag{10}\]
for all states \(s_{t}\) and actions \(a_{t}\). Therefore,
\[\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}(R(s_{t},a_{t})-\lambda D_{OT}(p_{\pi},p_{r}))]\leq\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})] \tag{11}\]
for all policies \(\pi\). Taking the maximum over all policies on both sides, we get \(V^{*}\leq V^{*}_{0}\).
This result intuitively makes sense because adding the OT cost to the objective function can only decrease the maximum expected discounted reward. If the OT cost were zero for some policy, then that policy would achieve the same expected discounted reward as in the standard MDP without the OT cost. However, if the OT cost is positive for a policy, then that policy would achieve a lower expected discounted reward compared to the standard MDP. Therefore, the maximum expected discounted reward over all policies is lower when the OT cost is incorporated into the objective function.
**Implications.** The theorem mathematically validates that the incorporation of the OT cost can serve as a mechanism to steer the agent's behavior. As the penalty due to deviating from the desired risk distribution increases, the agent is more inclined to select actions that conform to the risk profile, even if those actions may not yield the highest immediate reward.
**Theorem 3.**_Given an MDP and a risk sensitivity parameter \(\lambda\), the optimal policy \(\pi^{*}\) is non-decreasing in \(\lambda\), i.e., if \(\lambda_{1}>\lambda_{2}\), then \(D_{OT}(p_{\pi^{*}_{1}},p_{r})\leq D_{OT}(p_{\pi^{*}_{2}},p_{r})\), where \(\pi^{*}_{1}\) and \(\pi^{*}_{2}\) are the optimal policies for \(\lambda_{1}\) and \(\lambda_{2}\), respectively._
**PROOF.** This theorem could be proved by showing that a higher \(\lambda\) leads to a higher penalty for deviation from the risk distribution in the objective function, thus leading to a policy that induces a state distribution closer to the risk distribution. Let's assume for contradiction that the statement is not true. This would mean that there exists a \(\lambda_{1}>\lambda_{2}\) such
that \(D_{OT}(p_{\pi^{*}_{1}},p_{r})>D_{OT}(p_{\pi^{*}_{2}},p_{r})\), where \(\pi^{*}_{1}\) and \(\pi^{*}_{2}\) are the optimal policies for \(\lambda_{1}\) and \(\lambda_{2}\), respectively. Since \(\pi^{*}_{1}\) is optimal for \(\lambda_{1}\), we know that \(J(\pi^{*}_{1},\lambda_{1})\geq J(\pi^{*}_{2},\lambda_{1})\), where \(J(\pi,\lambda)\) is the objective function. Expanding this gives:
\[E_{\pi^{*}_{1}}[R]-\lambda_{1}D_{OT}(p_{\pi^{*}_{1}},p_{r})\geq E_{\pi^{*}_{2} }[R]-\lambda_{1}D_{OT}(p_{\pi^{*}_{2}},p_{r}) \tag{12}\]
Rearranging the terms gives:
\[\lambda_{1}(D_{OT}(p_{\pi^{*}_{2}},p_{r})-D_{OT}(p_{\pi^{*}_{1}},p_{r}))\geq E _{\pi^{*}_{2}}[R]-E_{\pi^{*}_{1}}[R] \tag{13}\]
Since \(\lambda_{1}>\lambda_{2}\), we can multiply both sides by \(\frac{\lambda_{2}}{\lambda_{1}}\) (which is less than 1) to get:
\[\lambda_{2}(D_{OT}(p_{\pi^{*}_{2}},p_{r})-D_{OT}(p_{\pi^{*}_{1}},p_{r}))\geq \frac{\lambda_{2}}{\lambda_{1}}(E_{\pi^{*}_{2}}[R]-E_{\pi^{*}_{1}}[R]) \tag{14}\]
Adding \(E_{\pi^{*}_{1}}[R]-\lambda_{2}D_{OT}(p_{\pi^{*}_{1}},p_{r})\) to both sides gives:
\[E_{\pi^{*}_{1}}[R]-\lambda_{2}D_{OT}(p_{\pi^{*}_{1}},p_{r})\geq E_{\pi^{*}_{2} }[R]-\lambda_{2}D_{OT}(p_{\pi^{*}_{2}},p_{r}) \tag{15}\]
This contradicts the assumption that \(\pi^{*}_{2}\) is optimal for \(\lambda_{2}\), as it implies that \(\pi^{*}_{1}\) is at least as good as \(\pi^{*}_{2}\) under \(\lambda_{2}\). Thus, our initial assumption was wrong, and \(D_{OT}(p_{\pi^{*}_{1}},p_{r})\leq D_{OT}(p_{\pi^{*}_{2}},p_{r})\) when \(\lambda_{1}>\lambda_{2}\). This concludes the proof.
**Implications.** This theorem demonstrates how the proposed risk-aware RL with OT allows for flexible risk-aversion by adjusting the risk sensitivity parameter \(\lambda\). As \(\lambda\) increases, the optimal policy becomes more risk-averse, as evidenced by the decrease in the OT distance to the risk distribution.
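A toy numerical check of this monotonicity: enumerate a handful of candidate policies by their expected return and OT distance to \(p_{r}\), and select the maximizer of \(E_{\pi}[R]-\lambda D_{OT}(p_{\pi},p_{r})\) for increasing \(\lambda\); the selected policy's OT distance never increases. The numbers below are made up purely for illustration.

```python
import numpy as np

# (expected return, OT distance to the risk distribution) for a few candidate policies
candidates = [(10.0, 4.0), (8.0, 2.5), (6.0, 1.0), (5.0, 0.3)]

def best_policy(lam):
    scores = [r - lam * d for r, d in candidates]
    return candidates[int(np.argmax(scores))]

for lam in [0.0, 0.5, 1.0, 2.0, 5.0]:
    r, d = best_policy(lam)
    print(f"lambda={lam}: expected return={r}, OT distance={d}")
```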
**Theorem 4**.: _Given an MDP, let \(p_{r}\) be a target risk distribution, and let \(B_{\delta}\left(p_{r}\right)=\left\{s:D_{OT}\left(p_{s},p_{r}\right)\leq\delta\right\}\) be the set of states that are within OT distance \(\delta\) of the risk distribution. Then a policy \(\pi\) that minimizes \(D_{OT}(p_{\pi},p_{r})\) also maximizes \(E_{\pi}[N_{B_{\delta}(p_{r})}]\), the expected number of visits to states in \(B_{\delta}(p_{r})\), where the expectation is taken over trajectories generated by policy \(\pi\)._
**PROOF.** Consider the probability simplex \(\Delta_{S}\) over the state space \(S\) of the MDP. This is a convex and compact set. Each point in \(\Delta_{S}\) represents a probability distribution over states. Let \(p_{\pi}\in\Delta_{S}\) be the state distribution induced by a policy \(\pi\).
For each policy \(\pi\), we can define a visit frequency vector \(v_{\pi}\in\Delta_{S}\) such that \(v_{\pi}(s)\) is the expected proportion of time that the agent spends in state \(s\) under policy \(\pi\). By definition of the state distribution and the law of large numbers, we have \(p_{\pi}=\lim_{T\rightarrow\infty}v_{\pi}^{T}\), where \(v_{\pi}^{T}\) is the visit frequency vector over a time horizon of length \(T\). Now, suppose \(\pi^{*}\) minimizes the OT distance \(D_{OT}(p_{\pi},p_{r})\) to the risk distribution \(p_{r}\). By the properties of the OT distance, we have:
**Contraction property:**
\(D_{OT}(p_{\pi}^{T},p_{r})\leq D_{OT}(p_{\pi},p_{r})\) for all \(T\). This is because the OT distance is a metric and therefore satisfies the triangle inequality.
**Convergence property:**
As \(T\rightarrow\infty\), we have \(D_{OT}(p_{\pi}^{T},p_{r})\to D_{OT}(p_{\pi},p_{r})\). This is because \(p_{\pi}^{T}\to p_{\pi}\) as \(T\rightarrow\infty\). From these two properties, we can deduce that:
\[D_{OT}(v_{\pi^{*}}^{T},p_{r})\leq D_{OT}(p_{\pi^{*}},p_{r}) \tag{16}\]
for all \(T\). In other words, the visit frequency vector under the optimal policy \(\pi^{*}\) is always close to the risk distribution. Now, let \(B_{\delta}(p_{r})=\{s:D_{OT}(p_{s},p_{r})\leq\delta\}\) be the set of states that are within OT distance \(\delta\) of the risk distribution. The visit frequency to states in \(B_{\delta}(p_{r})\) under policy \(\pi\) can be written as:
\[N_{B_{\delta}(p_{r})}(\pi)=\sum_{s\in B_{\delta}(p_{r})}v_{\pi}(s) \tag{17}\]
By the definition of \(B_{\delta}(p_{r})\) and the properties of the OT distance, we have:
\[N_{B_{\delta}(p_{r})}(\pi^{*})\geq N_{B_{\delta}(p_{r})}(\pi) \tag{18}\]
for all \(\pi\). Therefore, a policy that minimizes the OT distance to the risk distribution also maximizes the expected number of visits to states close to the risk distribution. This formally establishes the desired result.
**Implications.** Theorem 4 validates the intuitive idea that when a policy reduces its OT distance to a desired risk distribution, it inherently increases its visits to states that align closely with that risk distribution. This suggests a natural mechanism for RL agents to exhibit safer behaviors: by minimizing the OT distance to a target risk profile, the agent is steered towards states that are deemed safer.
## V Discussion
The integration of risk-aware RL with OT has inaugurated a new direction in how we approach risk management in uncertain environments. The OT framework offers a panoramic and robust risk measurement that captures the entire distribution of states, transcending traditional risk metrics that often rely on isolated statistics. Such an approach ensures a more comprehensive understanding of risk while preserving the dynamism of states. Moreover, the inherent adaptability of our method permits the agent to adjust its risk perception based on specific contexts or tasks. However, the very richness of OT also brings forth challenges, especially regarding computational complexity in high-dimensional environments, potentially hampering its real-time utility in certain domains. Additionally, the efficacy of the approach is tightly coupled with the choice of risk distribution, which, though flexible, may introduce complexities in decision-making. As we look towards the horizon, it becomes imperative to address these computational challenges, possibly through algorithmic innovations that marry efficiency with the core benefits of OT. Empirical validations across a plethora of RL scenarios stand paramount to not only corroborate our theoretical insights but also to refine the approach for varied applications.
## VI Conclusions
In this work, we have proposed a formulation for risk-aware RL grounded in the mathematical framework of OT theory. This approach seeks to incorporate risk considerations into the heart of RL algorithms. We have provided a
series of theorems that clarify the relationship between risk distributions, optimal value functions, and policy behaviors. These theorems highlight the trade-offs between maximizing rewards and safeguarding against risks. Importantly, our theorems demonstrate that minimizing OT costs can yield policies that are not only reward-optimal but also intrinsically safer.
|
2309.05730 | What Multiple Images Say About the Large-Scale Mass Maps of Galaxy
Clusters | All lens modeling methods, simply-parametrized, hybrid, and free-form, use
assumptions to reconstruct galaxy clusters with multiply imaged sources, though
the nature of these assumptions (priors) can differ considerably between
methods. This raises an important question in strong lens modeling: how much
information about the mass model comes from the lensed images themselves, and
how much is a consequence of model priors. One way to assess the relative
contributions of the lensing data vs. model priors is to estimate global lens
properties through images alone, without any prior assumptions about the mass
distribution. This is our approach. We use 200 mock cluster lenses, half of
which have substructures which vary from clumpy and compact to smooth and
extended; a simulated cluster Ares; and real clusters Abell 1689 and
RXJ1347.5-1145 to show that the center, ellipticity, and position angle can be
estimated quite well, and nearly perfectly for weakly substructured clusters,
implying that the recovery of these properties is largely driven by the images,
not priors. However, the correlation between the true and image-estimated
amount of substructure has a lot of scatter, suggesting that multiple images do
not uniquely constrain substructure. Therefore in general, lens model priors
have a stronger effect on smaller scales. Our analysis partly explains why
reconstructions using different methodologies can produce qualitatively
different mass maps on substructure scales. Our analysis is not meant to aide
or replace lens inversion methods, but only to investigate what cluster
properties are constrained with multiple images. | Kekoa Lasko, Liliya L. R. Williams, Agniva Ghosh | 2023-09-11T18:01:54Z | http://arxiv.org/abs/2309.05730v1 | # What multiple images say about the large-scale mass maps of galaxy clusters
###### Abstract
All lens modeling methods, simply-parametrized, hybrid, and free-form, use assumptions to reconstruct galaxy clusters with multiply imaged sources, though the nature of these assumptions (priors) can differ considerably between methods. This raises an important question in strong lens modeling: how much information about the mass model comes from the lensed images themselves, and how much is a consequence of model priors. One way to assess the relative contributions of the lensing data vs. model priors is to estimate global lens properties through images alone, without any prior assumptions about the mass distribution. This is our approach. We use 200 mock cluster lenses, half of which have substructures which vary from clumpy and compact to smooth and extended; a simulated cluster Ares; and real clusters Abell 1689 and RXJ1347.5-1145 to show that the center, ellipticity, and position angle can be estimated quite well, and nearly perfectly for weakly substructured clusters, implying that the recovery of these properties is largely driven by the images, not priors. However, the correlation between the true and image-estimated amount of substructure has a lot of scatter, suggesting that multiple images do not uniquely constrain substructure. Therefore in general, lens model priors have a stronger effect on smaller scales. Our analysis partly explains why reconstructions using different methodologies can produce qualitatively different mass maps on substructure scales. Our analysis is not meant to aide or replace lens inversion methods, but only to investigate what cluster properties are constrained with multiple images.
gravitational lensing: strong - galaxies: clusters: general
## 1. Introduction
Mass distribution in galaxy clusters is important for constraining the properties of dark matter (e.g., Andrade et al., 2022; Vega-Ferrero et al., 2021; Harvey et al., 2015), and for using clusters as natural telescopes to observe the very high redshift Universe (e.g., Bouwens et al., 2022; Salmon et al., 2020; Livermore et al., 2017). Most of the existing work on galaxy cluster lensing reconstructs their sky-projected mass distributions, using one or more lens inversion methods. The methods range from light-traces-mass, where the distribution of cluster stellar light is the basis for reconstructing their total mass (Zitrin et al., 2015; Broadhurst et al., 2005), to parametric, where simple functional forms describe individual galaxy members and a few cluster-scale dark matter halos (Laporte et al., 2021; Niemiec et al., 2020; Grillo et al., 2015; Oguri, 2010; Jullo and Kneib, 2009), to free-form, which do not have a strict relation between galaxies' light and mass so lensed images have a greater say in how cluster-scale mass is distributed (Torres-Ballesteros and Castaneda, 2022; Cha and Jee, 2022; Lam, 2019; Bradac et al., 2008; Liesenborgs et al., 2006, 2007), to hybrid, which combine free-form and parametric features (Diego et al., 2018). All types of methods have their strengths as well as weaknesses.
With the increasing amount and precision of the lensing data, from _Hubble Space Telescope's_ Frontier Fields (Lotz et al., 2017), and BUFFALO (Steinhardt et al., 2020), _James Webb Space Telescope_(Mahler et al., 2022; Golubchik et al., 2022), as well as spectroscopic data from the _Multi-Unit Spectroscopic Explorer_(MUSE, e.g., Richard et al., 2021; Jauzac et al., 2021; Mahler et al., 2018; Lagattuta et al., 2017), it has become apparent that clusters have complex mass distributions. Parametric methods have adapted by including more flexible variables in their models (e.g., Beauchesne et al., 2021), and some free-form methods have increased their resolution on small scales (Liesenborgs et al., 2020).
Despite the quantity and quality of the data, the clusters are still underconstrained, and all lens inversion methods need to make modeling assumptions: the main difference is in the type of assumptions. As a result of differing model priors, the details of reconstructed mass distributions differ between models. This is the case not just for parametric vs. free-form methods, but even within parametric models created using the same software (Priewe et al., 2017; Limousin et al., 2016). It follows that the lensing data by itself does not completely determine the lens mass distribution. In other words, lens mass reconstructions, when based on the same high quality data, suffer from degeneracies: a range of mass models can reproduce observed images, even when 100-200 images are used as input. The degeneracies are reduced with 1000 images (Ghosh et al., 2020), but observations
are not there yet, though the early spectacular results from JWST may change that.
In view of this situation, it is important to determine how much information about the mass models comes from the lensed images themselves, and how much is a consequence of model priors. This is the question we try to address in this paper. One way to address this question is by finding out how much information can be estimated from the lensing data alone, without resorting to any mass modeling. This is the approach we take. It is model-free in the sense that we do not carry out lens inversion to recover mass models of our mock clusters. Our analysis to estimate cluster properties is based solely on the multiple images.
Our approach is supported by the analysis of Meneghetti et al. (2007), who concludes that ellipticity, asymmetry and substructure in clusters are important in determining clusters' strong lensing properties. We turn that around, and ask how well can strong lensing properties--without a lens model--constrain clusters' center, ellipticity and substructure. Our definition of substructure is very broad: they range from compact clumps to smoothly varying density perturbations; in other words, if a cluster's mass distribution deviates from elliptical, we call it substructured.
Several papers in the literature aim to recover lens structure using multiple images, under certain specific conditions. For example, model-independent analysis can extract maximum information about cluster properties, without assuming a mass model, but only locally, in small lens plane regions covered by extended lensed images (e.g., Wagner, 2022; Griffiths et al., 2021; Wagner, 2019, 2017). Other works seek to find deviations from pure ellipticity using single point-like quads (Witt, 1996) in galaxy lenses, or constrain isolated substructures using extended images near critical curves (Alard, 2008). Clarkson (2016) explores how a systematic decomposition of large extended images using certain basis sets can potentially constrain lens properties.
The aim of this paper is to determine how well one can estimate _global_ cluster properties, namely, the center, the amplitude and position angle of ellipticity, and the amount of substructure or deviations from ellipticity, using several multiply-imaged point sources. In this regard our analysis is different from other published work. It is based on the work by Williams et al. (2008); Woldesenbet & Williams (2012, 2015), and Gomer & Williams (2018) on galaxy-scale lenses, which used observed properties of quad images.
Their analysis method is based on a curious property of lensing mass distributions: smooth, purely elliptical lenses, with ellipticity that is at least approximately constant with radius and a density profile within the range of astrophysically plausible shapes, generate quads whose polar angles around the lens center obey a certain well-defined relation. These angles are shown in Figure 1: they are the relative image angles \(\theta_{12},\theta_{23}\), and \(\theta_{34}\). The subscripts indicate the arrival sequence of images, \(1\to 4\), which for nearly all quads can be determined by quad morphology alone, without time delay data (Saha & Williams, 2003). The relation between these 3 angles--the Fundamental Surface of Quads (FSQ)--is shown in Figure 2. The FSQ is defined by a specific analytical lensing potential (Woldesenbet & Williams, 2012). Quads from other elliptical potentials and mass distributions follow the FSQ closely, but not exactly.1 Each quad is a single point in the 3D space of angles. Since all quads from any elliptical lens lie very close to the FSQ, it follows that deviations from the FSQ signal deviations from pure ellipticity, i.e., the presence of substructure.
Footnote 1: For example, the average separation of quads from a Singular Isothermal Elliptical mass distribution with ellipticity \(\epsilon=0.3\) from the FSQ is \(\sim 0.6^{\circ}\), which for a cluster with an Einstein radius of \(30"\) translates into image positions on the sky that differ from those predicted by the FSQ by \(\sim 0.3"\). For a circular de Vaucouleurs profile with external shear \(\gamma=0.4\) the corresponding value is \(\sim 0.024"\).
The FSQ-based technique to characterize substructure has already been used on galaxies. Using a population of 40 quads, Gomer & Williams (2018) concluded that \(\Lambda\)CDM substructure alone cannot account for the image properties, and other deviations from elliptical symmetry must be present in lensing galaxies, either in the lens plane or along the line of sight.
Figure 1. Definition of relative image angles in a quad lens system, illustrated using a smooth elliptical lens. _Left:_ Black contours represent the mass distribution, and magenta dots are the 4 images of a quad. _Right:_ The same images, numbered by arrival order, with the relative angles labelled.
Figure 2. Two-dimensional slightly curved surface in 3D space of relative image angles (in radians) of quad lenses, called the Fundamental Surface of Quads (FSQ) (Woldesenbet & Williams, 2012). All quads of elliptical mass distributions with a wide range of density profiles lie either on or very close to this surface. Deviations of the quad distribution from the FSQ indicate deviations from ellipticity in the lens. (The units on the axes are in radians.)
In addition to the difference in angular resolution, the difference between galaxies and clusters in this context is that the former have simpler mass distributions because they are dynamically older, and their multiple image regions are quite small compared to their virial radii--only a few percent--therefore galaxies are closer to being relaxed. The clusters, on the other hand, are dynamically younger, and their multiple image regions sample a much larger central portion of the cluster, which is expected to be more abundant in substructure.
In this paper we generalize the quad-based technique, and apply it to smooth and substructured clusters to characterize global cluster properties--center, ellipticity, position angle, and the amount of substructure. Unlike nearly all published papers on cluster lensing we do not reconstruct mass distributions in clusters; our estimation is independent of any lens modeling technique.
All our estimators, including those that do not use FSQ directly, rely only on the relative image angles of quads. Not using image distances from the cluster lens center is a limitation of our method, because we are not using all the positional information provided by the quads. However, we are not aware of any model-free, global (i.e., cluster-wide) method that makes use of all positional information of multiple images.
Because we use only quads, not all clusters are amenable to our analysis. Merging clusters, which can be highly elongated, are often dominated by triply-imaged systems--naked cusp systems--which we cannot use.
The paper is organized as follows. The construction of our mock clusters is described in Section 2. Our quad-based estimators of cluster properties are presented in Section 3. How we measure the corresponding true properties, i.e., those measured directly from the known mass distribution in the mock clusters, is described in Section 4. In Sections 5.1 and 5.2 we examine how our quad-based estimators correlate with the true cluster properties, for 100 smooth and 100 substructured clusters, respectively. In Section 6 we apply our method to one simulated cluster, Ares (Meneghetti et al., 2017), generated using a semi-analytic code (Giocoli et al., 2012), and two observed clusters, Abell 1689, and RX J1347.5-1145. The discussion and conclusions are presented in Section 7.
The projected surface mass densities of our clusters are quoted in dimensionless units of convergence \(\kappa\), which is the surface mass density normalized by the critical surface mass density for lensing, \(\Sigma_{\rm crit}=\frac{c^{2}}{4\pi G}\frac{D_{s}}{D_{l}D_{ls}}\), where \(D\)'s are angular diameter distances between the observer, lens, and source. For the standard cosmology with \(\Omega_{m}=0.3\), \(\Omega_{\Lambda}=0.7\), \(h=0.7\), with a cluster at redshift \(z=0.3\), and a source at \(z=2.5\), \(\Sigma_{\rm crit}=0.476\) g cm\({}^{-2}\).
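As a quick, illustrative cross-check of this number (not part of the analysis in this paper), \(\Sigma_{\rm crit}\) for the quoted cosmology and redshifts can be evaluated with astropy; the snippet below is only a sketch using standard library calls.

```python
# Illustrative check of the quoted Sigma_crit; not part of the paper's pipeline.
import numpy as np
from astropy import constants as const, units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)          # Omega_Lambda = 0.7 by flatness
z_lens, z_src = 0.3, 2.5

D_s = cosmo.angular_diameter_distance(z_src)
D_l = cosmo.angular_diameter_distance(z_lens)
D_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_src)

sigma_crit = (const.c**2 / (4 * np.pi * const.G)) * D_s / (D_l * D_ls)
print(sigma_crit.to(u.g / u.cm**2))            # ~0.48 g cm^-2, as quoted in the text
```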
Before we proceed we stress that there are 3 distinct pieces in our analysis: (i) making our mock clusters (Section 2), (ii) measuring their true properties (Section 4), and (iii) estimating their properties from multiple images (Section 3). Mass models are used in (i) only to construct the mocks; no mass modeling is done in (ii) and (iii), making our estimation analysis model-free.
## 2. Mock Galaxy Clusters
We apply our method of estimating cluster properties to a set of 100 mock smooth, purely elliptical clusters, and 100 mock substructured clusters. Smooth clusters without any small-scale or cluster-scale substructure are not realistic; we use them as a control sample for substructured ones. The latter were obtained by adding mass clumps (see below) to the corresponding purely elliptical clusters. We construct these to approximate real clusters, but with a wider range of substructure properties, to stress test our method.
Smooth mock clusters have just the elliptical lensing potential, while substructured ones also have superimposed mass clumps. The smooth part is described by the alphapot potential, \(\psi=b\big{(}s^{2}+x^{2}+\frac{y^{2}}{q^{2}}+K^{2}xy\big{)}^{\frac{\alpha}{2}}\) (Keeton, 2001), with slope \(\alpha=1.1\) (or projected density slope of \(-0.9\)), and core radius ranging from \(0.25\,r_{\rm Ein}\) to \(0.70\,r_{\rm Ein}\), where \(r_{\rm Ein}\) is the Einstein radius of the cluster lens. This range is consistent with the recent findings of Limousin et al. (2022). (We estimate the Einstein radius2, \(r_{\rm Ein}\), of each cluster as the average distance from the true cluster center to the 4 images of all its quads, which come from sources at a range of redshifts; see below.)
Footnote 2: For circularly symmetric lenses the Einstein radius is a model-independent quantity, and establishes the mass scale of the lens. In this paper we focus on morphological features of clusters, not scalings.
The slope of \(\alpha=1.1\) is chosen to be somewhat shallower than isothermal (\(\alpha=1\)) to approximate the typical slope of clusters in the region where the images usually form. We give our clusters a range of ellipticities of the lensing potential, \(\psi\), with axis ratios between 0.77 and 0.95, which correspond to mass axis ratios between 0.48 and 0.87.3 We avoid circular clusters as these do not generate any quads. The ellipticity position angles were chosen randomly.
Footnote 3: An approximate relation between the axis ratio of the lensing potential iso-contours, \((b/a)_{\psi}\), and mass density iso-contours, \((b/a)_{\kappa}\), is \((b/a)_{\kappa}\approx{(b/a)_{\psi}}^{2.75}\).
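A minimal numerical sketch of this smooth component is given below: it evaluates the alphapot potential on a grid and obtains the deflection and convergence by finite differences. The parameter values are illustrative placeholders, not those of any particular mock cluster, and the finite-difference derivatives are only meant to convey the structure of the calculation.

```python
# Sketch of the smooth "alphapot" component (Keeton 2001):
#   psi = b * (s^2 + x^2 + y^2/q^2 + K^2*x*y)^(alpha/2)
# Parameter values are illustrative, not those used for any specific mock cluster.
import numpy as np

def alphapot(x, y, b=1.0, s=0.4, q=0.85, K=0.0, alpha=1.1):
    """Lensing potential at image-plane positions (in units of r_Ein)."""
    return b * (s**2 + x**2 + y**2 / q**2 + K**2 * x * y) ** (alpha / 2.0)

n, half_width = 512, 2.0
xs = np.linspace(-half_width, half_width, n)
X, Y = np.meshgrid(xs, xs)
psi = alphapot(X, Y)

# Deflection angle is the gradient of the potential (numerical here);
# axis 0 of the grid runs along y, axis 1 along x.
dpsi_dy, dpsi_dx = np.gradient(psi, xs, xs)

# Convergence kappa = 0.5 * Laplacian(psi), again by finite differences.
d2psi_dx2 = np.gradient(dpsi_dx, xs, axis=1)
d2psi_dy2 = np.gradient(dpsi_dy, xs, axis=0)
kappa = 0.5 * (d2psi_dx2 + d2psi_dy2)
```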
Our substructures have a wide range of properties: compactness, normalization, and number density. Depending on their specific properties they can describe massive galaxies, or large-scale deviations from ellipticity in the dark matter, that are the result of past mergers.
The amount of substructure varies considerably in observed clusters: equilibrium clusters can be described by an overall elliptical distribution with some modest amount of superimposed mass clumps, whereas some merging clusters can be so distorted in shape that describing them as elliptical is an oversimplification.
The substructure clumps are modeled by circularly symmetric Einasto profiles (Dhar, 2021), with a range of parameters, picked randomly from within the following ranges: the Einasto shape parameter, \(\alpha_{E}=2-5\), and the Einasto scale radius, \(r_{s}=0.005\,r_{\rm Ein}-0.6\,r_{\rm Ein}\). The number of substructures in each cluster varies between 20 and 80. The distribution of Einasto substructure centers is random in polar angle with respect to the cluster center, and random in distance, \(r\), from center. That means their projected number density decreases as \(1/r\). Because of the wide range of parameters and because they are combined randomly in any given cluster, the range of substructure types we generate--scale, concentration, abundance, etc.--is wider than real clusters probably have. It is important to keep in mind that many of our substructures are not subhalos, but instead add to
the cluster-scale mass perturbations and deviations from pure ellipticity.
Sources are placed at a range of redshifts, such that the critical surface mass density for lensing spans a factor of 2. For a typical lens redshift, \(z_{l}\sim 0.4\), this covers \(z_{s}\) between \(\sim 0.7\) and \(\sim 4\), typical of observed sources. Sources at different redshifts are valued for cluster reconstruction because they usually (but not always; see Priewe et al., 2017) break the mass sheet degeneracy (MSD, Gorenstein et al., 1988; Saha, 2000). In this paper we are not concerned with MSD; for us, sources at different redshifts result in image distributions that span a wider range of radii within clusters, which helps in extracting properties we are interested in. All lensing mass is assumed to be at the same redshift, in a thin lens plane.
We then scatter sources randomly in the source plane, and collect only those that produce quads around the cluster center. (Quads that are formed by individual substructures are not used.) We generate 300 such quads for each of the smooth and substructured clusters.
## 3. Estimation of Cluster Properties Using Quads
The center of mass of a cluster is an important property. If it does not coincide with the brightest cluster galaxy (BCG) that would imply that the BCG is wobbling in the potential of the cluster (e.g., Zitrin et al., 2012; Lauer et al., 2014; Seppi et al., 2023). This wobbling may be consistent with the expectation of the standard \(\Lambda\)CDM, or, if large offsets are observed then self-interacting dark matter is a possibility (Harvey et al., 2019; Fischer et al., 2023). The ellipticity position angle is also important for the understanding of the relation between clusters and their environment, for example, connection to cosmic filaments that feed clusters with infalling mass, or nearby merging clusters (e.g., Tam et al., 2020; Cho et al., 2022; Furtak et al., 2023).
We first describe how to find global cluster properties based solely on the relative image angles of its quadruply imaged background sources.
### Cluster center
The 3D space of quad image angles, FSQ, is defined by 3 polar relative image angles, \(\theta_{12}\), \(\theta_{23}\), and \(\theta_{34}\), with respect to the lens center (Figure 1). Figure 3 shows two examples (discussed later in the paper) each with 100 quads (red points), alongside the FSQ (pink shaded surface). The left panel shows a purely elliptical cluster, while the right panel shows a substructured cluster. As expected, the quads in the former case are located very close to the FSQ, while in the latter case they are dispersed around it. We use such distribution of quads to estimate the cluster center.
The principle behind the method is that for a purely elliptical lens all its quads will lie on, or very near the FSQ. Therefore to find the center of a purely elliptical lens, one needs to find the point in the lens plane that, when used as the center of the polar image distribution, results in the smallest dispersion of quads around the FSQ. Because to first order clusters are elliptical, the center of substructured cluster can also be found as the point that minimizes the deviations of its quads from the FSQ. The corresponding rms dispersion is called \(\delta_{\rm FSQ}\), and will be used later, in Section 3.3.
Observed galaxy clusters can have up to \(\sim 15\) quads per cluster; the three clusters we discuss in Section 6 have 13 quads each. Aiming at more average clusters, we use 10 quads per mock cluster to carry out the procedure described in the previous paragraph. We stress again that our cluster property estimation is done not for its own sake, but to figure out what global properties are already determined by the quads, even without modeling. For each mock cluster we repeat the procedure for 30 independent sets of 10 quads each. Using each of these determinations we calculate mean and standard deviation. We refer to the center generated using the mean of all 30 subset centers, with coordinates \((x_{Q},y_{Q})\), as the quad-estimated center of the cluster, and allow the standard deviation to become our estimate of the uncertainty. In other words, we use bootstrapping without replacement to estimate uncertainty.
We considered instead obtaining uncertainties from the same optimization algorithm used to minimize deviations from the FSQ. Ultimately, we chose to use bootstrapping to define our uncertainties because this method provided us estimates which were larger by orders of magnitude and which better predicted separation of the cluster centers between subsets of quads.
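The center-finding procedure can be summarized by the following sketch. The analytic form of the FSQ (Woldesenbet & Williams, 2012) is not reproduced here, so `fsq_distance` is a user-supplied callable giving the deviation of a point \((\theta_{12},\theta_{23},\theta_{34})\) from the surface; the angle conventions in `relative_angles` are our assumption and should be matched to the original definition.

```python
# Sketch of the quad-based center estimator (Sec. 3.1). `fsq_distance` must be
# supplied by the user; it is assumed to return the deviation (in radians) of a
# point (theta12, theta23, theta34) from the Fundamental Surface of Quads.
import numpy as np
from scipy.optimize import minimize

def relative_angles(quad_xy, center):
    """theta12, theta23, theta34 for one quad (images given in arrival order)."""
    d = np.asarray(quad_xy) - np.asarray(center)
    phi = np.arctan2(d[:, 1], d[:, 0])                  # polar angle of each image
    rel = np.diff(phi)                                  # consecutive arrival pairs
    return np.abs((rel + np.pi) % (2 * np.pi) - np.pi)  # wrapped to [0, pi]

def rms_fsq(center, quads, fsq_distance):
    devs = [fsq_distance(*relative_angles(q, center)) for q in quads]
    return np.sqrt(np.mean(np.square(devs)))

def estimate_center(quads, fsq_distance, guess=(0.0, 0.0)):
    """Quad-estimated center: the point minimizing the rms deviation from the FSQ."""
    res = minimize(rms_fsq, x0=np.asarray(guess, dtype=float),
                   args=(quads, fsq_distance), method="Nelder-Mead")
    return res.x, res.fun          # (x_Q, y_Q) and delta_FSQ at that point
```

The value returned as `res.fun` is the rms deviation \(\delta_{\rm FSQ}\) at the best-fit center, i.e., the same quantity used as the substructure estimator \(s_{Q}\) in Section 3.3.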
### Ellipticity and position angle
Once the center has been located using the method described above, quads can be used to find the position angle of ellipticity of a cluster. Williams et al. (2008) have shown that the bisector of the angle between the 1st and 2nd arriving images is well aligned with the ellipticity position angle of the lens' mass distribution. The average of all bisectors from the 300 quads provides us with our quad-estimated position angle, which we call \(\theta_{Q}\). We use the 30 independent samples of 10 quads per mock cluster to calculate the rms, which provides us with our uncertainty.
To measure the amplitude of ellipticity we use another property of quads. The ellipticity is reflected in the distribution of quad images around the center. For high ellipticity clusters (mass axis ratio \(\lesssim 0.75\)), most of the 1st and 2nd arriving images are along the minor cluster axis, and so tend to be across from each other around the cluster center. Very elliptical clusters tend to have \(\theta_{12}\sim 180^{\circ}\), while less elliptical ones will have smaller \(\theta_{12}\). We use \(\theta_{12}\) as an indicator of cluster ellipticity. We calculate the average \(\theta_{12}\) for 30 independent sets of 10 quads each, per mock cluster. The average and dispersion of these 30 measurements give us a quantity, \(\langle\theta_{12}\rangle=\epsilon_{Q}\), which we call elongation, and the associated uncertainty. Note that this measure of elongation is expressed in radians.
We again tested our choices of uncertainty estimation used above by comparing them against propagation of the uncertainty on the cluster center. We found that the rms deviations from subsets of quads were the larger estimate of the two methods, and that, for position angle, they better matched the difference between the quad-estimated value \(\theta_{Q}\) and the true position angle of the cluster, \(\theta_{T}\), as described in Section 4.2. To better represent the distribution of quad-estimated values in this method, we chose to use bootstrapping without replacement as our method of determining uncertainties.
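A compact sketch of these two estimators is given below. The bisector convention (vector-sum bisector of the first- and second-arriving image directions, taken modulo \(\pi\)) is our reading of Williams et al. (2008) and may need adjusting; the circular mean for axial data is a standard choice rather than the paper's exact averaging.

```python
# Sketch of the quad-based PA and elongation estimators (Sec. 3.2).
import numpy as np

def pa_and_elongation(quads, center):
    """quads: iterable of (4, 2) arrays of image positions in arrival order."""
    bisectors, theta12s = [], []
    for q in quads:
        d = np.asarray(q) - np.asarray(center)
        phi = np.arctan2(d[:, 1], d[:, 0])
        # Angle between the 1st and 2nd arriving images, wrapped to [0, pi].
        t12 = np.abs((phi[1] - phi[0] + np.pi) % (2 * np.pi) - np.pi)
        theta12s.append(t12)
        # Bisector of that angle, used as a proxy for the ellipticity PA
        # (assumed convention; ill-defined when the two images are nearly opposite).
        bis = np.arctan2(np.sin(phi[0]) + np.sin(phi[1]),
                         np.cos(phi[0]) + np.cos(phi[1]))
        bisectors.append(bis)
    # Circular mean for axial data (position angles are defined modulo pi).
    theta_Q = 0.5 * np.angle(np.mean(np.exp(2j * np.asarray(bisectors))))
    eps_Q = np.mean(theta12s)                 # elongation <theta_12>, in radians
    return theta_Q % np.pi, eps_Q
```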
### Substructure, or deviations from ellipticity
We define substructure as deviations from a purely elliptical mass distribution. Because quads of purely elliptical lenses lie very near the FSQ (indistinguishable from it given the astrometric errors), the amount of substructure can be estimated as the rms dispersion of cluster quads (in the 3D space of angles) around the FSQ, \(\delta_{\rm FSQ}\), which we defined in Section 3.1. The value of \(\delta_{\rm FSQ}\) generated by all 300 quads (using the quad-estimated center) is our quad-based estimate of substructure, \(s_{Q}\). We also calculate the average \(\delta_{\rm FSQ}\) for 30 independent samples of 10 quads each, per mock cluster, and the rms dispersion of these gives us the uncertainty. \(s_{Q}\) will be very near 0 for smooth elliptical lenses, and increases with increasing prevalence of substructure.
## 4. Measuring true cluster properties
Here, we describe how we quantify true global properties of clusters, using their known mass distribution. As we describe in Section 2, we use 100 purely elliptical, smooth clusters, and 100 substructured clusters, generated by superimposing mass clumps onto purely elliptical clusters.
### Cluster center
The true cluster center, \((x_{T},y_{T})\), is taken to be the center of the mass distribution of the purely elliptical part of the cluster. We checked that for substructured clusters this center is nearly identical (median difference is \(0.01\,r_{\rm Ein}\)) to the center of mass of the cluster, determined using the mass distribution within the Einstein radius.
### Ellipticity and position angle
Because many of our clusters are substructured, and in some cases the added substructure changes the appearance of clusters considerably, ellipticity and its position angle (PA) are not just those of the purely elliptical part of the cluster, but are due to the total projected mass distribution. The latter is quantified by dimensionless convergence \(\kappa(x,y)\).
We measure the ellipticity and PA using the first moment of the distribution of \(\kappa(x,y)\) around different trial axes, assumed to go through the true cluster center, \((x_{T},y_{T})\). For each trial PA, we calculate the average distance from the center, along the trial PA axis, \(\langle r_{||}\rangle\):
\[\langle r_{||}\rangle=\epsilon_{T}=\frac{\iint r_{||}\,\kappa(x,y)\,dx\,dy}{ \iint\,\kappa(x,y)\,dx\,dy}, \tag{1}\]
where each location in the lens plane is weighted by its surface mass density, \(\kappa\). The integrals are evaluated as discrete summations over small mass pixels, \(\sim 0.005\) of the Einstein radius. The subscript of \(r_{||}\) means that only the distance along the trial PA axis is used; the perpendicular component of the distance from the center is disregarded. The axis with the largest value of \(\langle r_{||}\rangle\) is the true PA, which we call \(\theta_{T}\), and the corresponding value of \(\langle r_{||}\rangle\) is a measure of ellipticity, \(\epsilon_{T}\). Note that because this measure is the first moment of a distribution of distances, it is not straightforwardly related to the usual definition of ellipticity, and is not dimensionless. It is expressed in units of the Einstein radius, \(r_{\rm Ein}\), for each cluster lens. Since we are interested only in how well the true and quad-estimated properties correlate, the units of the measures are not important.
Footnote 4: Our method of finding the PA and ellipticity is very similar to the one used in SExtractor (see [https://sextractor.readthedocs.io/en/latest/Position.html#basic-shape-parameters-a-b-theta](https://sextractor.readthedocs.io/en/latest/Position.html#basic-shape-parameters-a-b-theta)), but we use \(r_{\rm Ein}\) as a length measure, instead of a semi-major axis.
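A sketch of this measurement, assuming a pixelized convergence map \(\kappa\) already centered on \((x_{T},y_{T})\) with pixel coordinates `xs`, `ys` in units of \(r_{\rm Ein}\), could look as follows; the number of trial position angles is an arbitrary choice.

```python
# Sketch of the "true" PA and ellipticity measurement (Sec. 4.2, Eq. 1).
# kappa[i, j] is assumed to be the convergence at (xs[j], ys[i]).
import numpy as np

def true_pa_and_ellipticity(kappa, xs, ys, n_trials=180):
    X, Y = np.meshgrid(xs, ys)
    w = kappa / kappa.sum()                       # mass-weighted average
    best_pa, best_r = 0.0, -np.inf
    for pa in np.linspace(0.0, np.pi, n_trials, endpoint=False):
        # |distance along the trial axis|; the perpendicular component is ignored.
        r_par = np.abs(X * np.cos(pa) + Y * np.sin(pa))
        mean_r = np.sum(r_par * w)
        if mean_r > best_r:
            best_pa, best_r = pa, mean_r
    return best_pa, best_r                        # theta_T, epsilon_T (in r_Ein)
```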
### Substructure, or deviations from ellipticity
There is no unique way to separate the cluster mass distribution into the smooth component and substructure without making assumptions (e.g., Wagner, 2018). Therefore there is no unique definition of substructure in a cluster. Various definitions have been used in the literature. In parametric models substructures are mostly the cluster's member galaxies and their associated dark matter halos. These are represented by mass clumps at or very near the locations of the galaxies, and can be
Figure 3. One hundred quads of the clusters shown in the third rows of Figures 5 (_left_; purely elliptical clusters) and 6 (_right_; substructured clusters). (Our calculations use a total of 300 quads per cluster.) The FSQ is the magenta surface. The 3D space of relative image angles has been rotated such that it appears as edge-on as possible, to show the quads' dispersion around the FSQ. Quads from the smooth, purely elliptical cluster have visibly less deviation from the FSQ, while the quads from the substructured cluster deviate from the FSQ.
described by a subhalo mass function (e.g., Natarajan et al., 2017). However, some cluster-scale dark matter halos that are not centered on the cluster center can also be considered substructure. In free-form methods, substructure is less tightly associated with member galaxies, and is better represented by a power spectrum of cluster's projected mass distribution (Mohammed et al., 2016).
Because ours is the first attempt to quantify globally distributed substructure using point-like lensed images only, without mass modeling, we seek a simple, one-parameter measure of the amount of substructure. Due to the differences in galaxies and clusters described in Section 1, we use a different measure of substructure from the one used for galaxies.5 We also want this measure to quantify deviations from pure ellipticity, similar to what \(\delta_{\rm FSQ}\) does; see Section 3.3.
Footnote 5: The subhalo mass fraction, which is commonly used to quantify the amount of substructure in galaxies, is well suited when substructures are compact, like \(\Lambda\)CDM substructure (e.g., Despali & Vegetti, 2017; O'Riordan et al., 2023). In the case of clusters, which are not as relaxed as centers of galaxies, compact substructure is only one type of deviation from ellipticity, therefore the metric we use is different from the subhalo mass fraction.
We use the fact that all types of substructure produce local deviations from smooth elliptical mass distribution. Such local deviations will introduce local density gradients, beyond those due to the smooth ellipticity. To quantify these we cover clusters with short line segments, oriented tangentially with respect to the center, \((x_{T},y_{T})\); see Figure 4. (In our analysis, we use many more segments than shown in the Figure.) We measure the absolute value of the density difference, \(|\Delta\kappa(r,\theta)|\) between the two ends of each of these segments. Each line segment is labelled by the location of its center, \((r,\theta)\), where \(r\) and \(\theta\) are with respect to the center of the cluster. To take out the global cluster ellipticity or any other inversion symmetric component of the mass distribution, we use the difference between \(\Delta\kappa\)'s on the opposite sides of the cluster, \(|\Delta\kappa(r,\theta)|-|\Delta\kappa(r,\theta+\pi)|\). Our measure of substructure is the average over radii and angles,
\[s_{T}=\left\langle\left|\left[|\Delta\kappa(r,\theta)|-|\Delta\kappa(r,\theta +\pi)|\right]\right|\right\rangle_{r>0,\theta=[0,\pi]}, \tag{2}\]
which is dimensionless. We tried a range of different line segment lengths, from 0.05 to 0.5 of the Einstein radius. While the values of \(s_{T}\) depended on the choice of line segment length, the important metric for us is the nature of the correlation between true and quad-estimated amount of substructure. That relation, discussed in Section 5.2, did not depend on the length of the line segment used. We use \(0.2\,r_{\rm Ein}\).
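The statistic of Eq. 2 can be sketched as follows, assuming a callable `kappa_interp(x, y)` (e.g., built from a pixelized map with scipy's `RegularGridInterpolator`); the grids of radii and angles, like the default segment length, are illustrative choices.

```python
# Sketch of the true-substructure statistic s_T (Sec. 4.3, Eq. 2).
# `kappa_interp(x, y)` is assumed to return the convergence at (x, y),
# with coordinates in units of r_Ein and the cluster center at the origin.
import numpy as np

def true_substructure(kappa_interp, r_grid, n_theta=90, seg_len=0.2):
    half = 0.5 * seg_len
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)

    def abs_dkappa(r, angle):
        # |kappa difference| across a short segment centered at (r, angle),
        # oriented tangentially with respect to the cluster center.
        cx, cy = r * np.cos(angle), r * np.sin(angle)
        tx, ty = -np.sin(angle), np.cos(angle)          # tangential unit vector
        k1 = kappa_interp(cx + half * tx, cy + half * ty)
        k2 = kappa_interp(cx - half * tx, cy - half * ty)
        return np.abs(k1 - k2)

    vals = [np.abs(abs_dkappa(r, th) - abs_dkappa(r, th + np.pi))
            for r in r_grid for th in thetas]
    return np.mean(vals)          # dimensionless, like kappa itself
```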
In the following sections we will show how well global cluster parameters can be estimated, by comparing the quad-estimated properties (Section 3) vs. true properties (Section 4).
## 5. Comparing true and quad-estimated properties of clusters
### Smooth elliptical clusters
We first carry out a test of our proposed method: we apply it to smooth elliptical clusters, with no substructure. We use 100 mock clusters; Figure 5 shows a representative subset of four. The contours in the left and middle columns are contours of equal surface mass density. The colored points in the left panel show images from 300 quads: blue, red, green and magenta points represent 1st, 2nd, 3rd, and 4th arriving images, respectively. The wide radial distribution of images is the consequence of a wide range of source redshifts used. The red solid dot and red line in the middle panels shows the true center and position angle of the cluster.
In the right panels of Figure 5 the red solid line shows our quad-estimated position angle using all 300 quads per cluster, while the dashed red lines show its rms dispersion, which was calculated from 30 subsets of 10 quads, with every quad appearing in exactly one subset. The black points in the right panel show centers estimated from these 30 independent subsets, as described in Section 3.1. In each of the 4 clusters shown in Figure 5, the black points and the red point nearly coincide, implying that even 10 quads are enough to locate the center.
The contours in the right panels are contours of the rms deviation of quads from the FSQ. The minima of these contours indicate the quad-estimated center, \((x_{Q},y_{Q})\), and the rms of deviations from the FSQ at this location is referred to as \(\delta_{\rm FSQ}\). We show these contours in the lens plane to demonstrate that the FSQ-based center finding method has only one minimum, and is therefore robust.
A summary of quad-estimated cluster properties for all 100 clusters is presented in the left panels of Figures 7, 8, and 9. In each one of these figures the 4 clusters of Figure 5 are highlighted as red triangles. The average and rms dispersion in the displacement between the true
Figure 4. Measuring true substructure. The background light blue contours represent the mass density of one of our substructured clusters. We quantify the amount of substructure by measuring the absolute value of the density differences between the two ends of each of the black line segments, \(|\Delta\kappa(r,\theta)|\). Here, \((r,\theta)\) refer to the center of the black line segment. The difference between \(|\Delta\kappa|\)'s on the opposite sides of the cluster (i.e., across the cluster center from each other) is taken, to eliminate inversion symmetric (i.e., not substructure) mass features, such as ellipticity. See Section 4.3 and eq. 2 for details.
Figure 5. Four examples of mock clusters without substructure. Spatial coordinates measured in units of Einstein radii from the cluster center. See Section 2 for the definition of the Einstein radius. _Left:_ Quad images produced by 300 sources, colored by arrival order: blue\(\rightarrow\)red\(\rightarrow\)green\(\rightarrow\)magenta. _Middle:_ linearly spaced contours show the mass distribution. The red line is the true position angle of ellipticity, \(\theta_{T}\); see Section 5.1. The red dot is the true center, \((x_{T},y_{T})\). _Right:_ Visualization of the three estimated properties of the cluster. The contours are those of \(\delta_{\rm FSQ}\), and the location of the minima corresponds to the estimated cluster center \((x_{Q},y_{Q})\), represented by the red dot. The red solid line shows the estimated \(\theta_{Q}\). Black dots show the quad-estimated centers for each of the 30 subsets of 10 quads, which are used to give an estimate of the error on \(\theta_{Q}\), shown by the red dashed lines (note that the black dots are nearly coincident with the red dot). The contours demonstrate that the FSQ-based method is a robust way to locate the center.
Figure 6. Similar to Fig. 5, but clusters have imposed substructure over their smooth counterparts. See Section 5.2 for details.
and quad-estimated center and PA are \(9.0\times 10^{-3}\)\(\pm\)\(5.4\times 10^{-3}\) in units of cluster Einstein radius, and \(2.64\times 10^{-2}\)\(\pm\)\(7.2\times 10^{-3}\) radians, respectively.
The quad-estimated, \(\epsilon_{Q}\), and true, \(\epsilon_{T}\) ellipticities are tightly correlated. The quad-estimated elongation, \(\epsilon_{Q}\) was measured from lensed image angles and is in units of radians, while the true ellipticity, \(\epsilon_{T}\) was measured from the true mass distribution and is in units of length, normalized by \(r_{\rm Ein}\). The actual values and units of \(\epsilon\)'s are not relevant, but what is important for us here is that the two are correlated, implying that quads alone contain the information about the ellipticity. The points in Fig. 8 are colored by their \(\delta_{FSQ}\) values. Less elliptical mass distributions follow the FSQ better than more elliptical ones, though \(\delta_{FSQ}\) is small for all clusters.
Summing up our results from smooth substructure-less clusters, the estimation of all three properties is very good. This establishes that our quad-based method works as intended. We can now apply it to substructured clusters.
### Substructured clusters
Figure 6 is similar to Fig. 5, but for 4 out of our 100 substructured mock clusters, chosen to span the range of cluster morphologies. The right panels show that even for substructured clusters our FSQ-based method finds the global minimum. The 30 black points in the right panels represent the cluster center locations estimated using 10 quads each. Typically, these points, which are calculated without any reference to the true cluster center, sit at a distance of \((1.64\times 10^{-2}\pm 3.07\times 10^{-2})\,r_{\rm Ein}\) from the center found using all 300 quads.
The right panel of Figure 7 shows the comparison between the true and the quad-estimated cluster center locations, using 300 quads per cluster. The color of the points represents the uncertainty on the cluster's center estimation found through the 30 subsets of 10 quads each. Though not as accurate as for the smooth elliptical clusters (left panel), the average and rms dispersions of these are still small, \((3.59\times 10^{-2}\pm 3.53\times 10^{-2})\,r_{\rm Ein}\).
The right panel in Figure 8 presents the quad-estimated elongation vs. true ellipticity. Just as for the smooth clusters (left panel), \(\epsilon_{Q}\) was measured from lensed image angles and is in units of radians, while \(\epsilon_{T}\) was measured from the true mass distribution and is in units of length, normalized by \(r_{\rm Ein}\). As stated in Section 5.1, the actual values of \(\epsilon\)'s are not important; what matters here is that the two are correlated. As expected, the correlation is not as tight as for clusters without substructure, but it is well defined nonetheless. Most of the scatter is in the lower left portion of the panel, which is the region of least elongation. Points in that region of the plot tend to have high values of \(\delta_{\rm FSQ}\), which indicates a high amount of substructure (see the color scale). An example is shown on the third row of Figure 6, where we can see a clear influence of substructure on the mass profile of the cluster. For clusters with low elongation, considerable substructure, or both, elongation is hard to define. Thus it is not surprising that our estimation method struggles to assign these clusters an accurate value.
The right panel in Figure 9 shows the true position angle of 100 clusters vs. the quad-estimated position angle. The average and rms of the deviation between true and estimated PAs are \(4.13\times 10^{-2}\)\(\pm\)\(4.78\times 10^{-2}\) radians, respectively.
We conclude that the center, ellipticity and PA are well estimated just from quads alone, with no model priors. That means these properties of clusters will have similar values when recovered by lens inversions methods with different methodologies. The uncertainties we quote above should be representative of their spread.
Figure 10 shows the relation between \(\delta_{\rm FSQ}\) and \(s_{T}\), the true amount of substructure; both quantities are dimensionless. (We note that we tried using reduced shear in place of \(\kappa\) to measure the true amount of substructure, but the correlation with \(\delta_{FSQ}\) turned out to be worse.) There is a clear positive relation between the two, though with considerable scatter. The width of scatter can be as large as an order of magnitude, indicating that quads alone do not provide good constraints on \(s_{T}\).
Given the large scatter, we conclude that quads alone do not constrain substructure nearly as well as they constrain the center, ellipticity and PA. We continue this discussion in Section 7, after examining three other clusters in the next Section.
## 6. Three example clusters from the literature
We apply our quad-based estimators to a simulated galaxy cluster, Ares, and two observed clusters, Abell 1689 (A1689) and RXJ1347.5-1145 (RXJ1347). All three are reasonably centrally concentrated, so that their lensed sources have enough quads to apply our method. Coincidentally, all three clusters have 13 quads each.
The quad images of Ares (\(z=0.5\), Meneghetti et al., 2017) are shown in the left panel of Figure 11. Even though it is a merging cluster, it has one dominant center located at \((x_{T},y_{T})=(-19.8",-31.5")\); see the middle panel of Figure 11. Ares' quad-estimated center lies \(1.94"\), or \(5.91\times 10^{-2}\,r_{\rm Ein}\), from the dominant center in the mass profile. This corresponds to \(<2\) times the rms uncertainty we found in Section 5.2 and the right panel of Figure 7.
The contours of \(\delta_{FSQ}\) are shown in the right panel. Unlike the right panels of Figures 5 and 6 which were based on 300 quads per cluster, the small number of quads in Ares results in sharp changes in the contours: when the trial center is very near the location of an image, there is a rapid change in the value of two of the angles \(\theta_{12},\theta_{23}\), and \(\theta_{34}\), resulting in a steep change in deviation from the FSQ. However, these sharp features are far from the estimated center, so they do not affect the results.
Ares' \(\delta_{\rm FSQ}=4.08\times 10^{-2}\) radians, which places it roughly in the middle of the range of \(\delta_{\rm FSQ}\)'s of our mock clusters; see Figure 10. Visual inspection shows that the position angle for Ares, indicated by red lines in all three panels of Figure 11, aligns very well with the elongation of its true isodensity contours.
A1689, at \(z_{l}=0.183\), shown in the three panels of Figure 12, could be a line of sight merger, but on the plane of the sky it is very roughly circularly symmetric (e.g., Broadhurst et al., 2005; Coe et al., 2010; Diego et al., 2015; Bina et al., 2016; Ghosh et al., 2022). Its quad-estimated center is about \(2.7"\), or \(5.86\times 10^{-2}\,r_{\rm Ein}\) from the BCG, towards the next brightest galaxy, G1. This offset between true and estimated center is \(<2\) times the (rms) uncertainty estimated using our substructured clusters. The true mass distribution is obviously not known for
Figure 8. Comparison of true ellipticity, \(\epsilon_{T}\), vs. quad-estimated elongation, \(\epsilon_{Q}\), for all 100 smooth mock clusters (_left_), and 100 substructured mock clusters (_right_). True ellipticity is measured using Eq. 1 and normalized by the respective cluster's Einstein radius. Estimated elongation is \(\epsilon_{Q}=\langle\theta_{12}\rangle\). Though these two measures, \(\epsilon_{T}\) and \(\epsilon_{Q}\), are different, the tight relation in the left panel can be used to translate between the two measures. Most importantly, the two measures of ellipticity correlate, showing that quads contain ellipticity information. Color indicates the uncertainty on \(\epsilon_{Q}\) for each cluster, measured in radians. The points marked by red triangles correspond to the clusters shown in Figures 5 and 6: triangle pointing up is the first row, pointing right is the second row, pointing down is the third row, and pointing left is the fourth row. See Sections 5.1 and 5.2 for details.
Figure 7. Comparison of the true, \((x_{T},y_{T})\), vs. estimated, \((x_{Q},y_{Q})\), cluster centers for all 100 smooth purely elliptical mock clusters (_left_), and 100 substructured mock clusters (_right_). (See Sections 4.1 and 3.1.) Color indicates the uncertainty in the quad-estimated center, in units of cluster Einstein radius, obtained using 30 random subsets of 10 quads. The points marked by red triangles correspond to the clusters shown in Figures 5 and 6: triangle pointing up is the first row, pointing right is the second row, pointing down is the third row, and pointing left is the fourth row. To facilitate comparison, the scale of the two panels is the same.
this cluster, and the location of the BCG is not necessarily the mass center of the cluster (Lauer et al., 2014), so we cannot directly compare our estimated values to the true ones. In the middle panel of Figure 12 we plot the mass distribution recovered by free-form grale(Ghosh et al., 2022), but we do not use it to measure true cluster properties. Using the method of minimizing deviations from the FSQ, we obtain a value of \(\delta_{\rm FSQ}=6.74\times 10^{-2}\) radians, which is in the upper half of our mock clusters (see Figure 10), so probably has more substructure than Ares.
RXJ1347, at \(z=0.451\) is a massive, X-ray luminous merging galaxy cluster (e.g., Halkola et al., 2008; Kohlinger and Schmidt, 2014; Ueda et al., 2018; Richard et al., 2021). Our quad-estimated center, \((x_{Q},y_{Q})=(-6.18",-3.83")\), is roughly between the two brightest galaxies of the cluster, located at \((0",0")\) and \((-17.8",-2.1")\)(Richard et al., 2021), while the Einstein radius of the cluster, as obtained from its 13 observed quads is \(35.5"\). The middle panel of Figure 13 shows the mass distribution recovered by simply parametrized Lenstool(Richard et al., 2021), for reference. As with A1689 we do not use it to estimate true cluster properties. The quad-estimated position angle appears to align well with the elongation of the cluster's mass; see Figure 13. Its \(\delta_{\rm FSQ}=9.34\times 10^{-2}\) radians, which places it above the average \(\delta_{\rm FSQ}\) of our mock clusters, and suggests that it is the most substructured of the 3 clusters examined in this Section.
## 7. Conclusions and Discussion
The paper's goal is to better understand the relative roles of multiple image data vs. model assumptions in lensing mass reconstructions, not to carry out such reconstructions. Our analysis, and the methods we use are not meant to complement or substitute for the existing lens inversion methods. We use our analysis tools with a different goal in mind. We wanted to know how much information comes from the lensing images themselves, and how much is determined by the modeling priors. To that end, we asked, how well can we estimate global cluster properties from images alone, without any lens models? To do that we estimated global cluster properties--cluster center, ellipticity, its position angle and amount of substructure--using only the angular distribution of images of quadruply imaged sources around the cluster center, in a model-free way, i.e., without doing a lens-inversion to get a mass model. (While our mock cluster construction is model-based, our estimation of cluster properties is model-free.) Our rationale is that if a given cluster property is robustly estimated based on images alone, without a mass model, at least for the range of mass distributions used in this paper, then model priors play a minor role in their determination, while images play a dominant role, and all lens inversion methods should agree. If a lens property is estimated based on quads only very approximately, then modeling priors will have a stronger influence on it, and various lens inversion methods will yield a range of results.
A caveat to our method is that it does not use all the lensing information: it cannot use sources that do not result in quads, and for quads, it cannot use the image distances from the lens center. We are not aware of any existing method that would use all the lensing observables to estimate global cluster properties in a model-free way.
### Estimation of cluster centers, ellipticity and position angle
For smooth elliptical mock clusters we show that one can estimate the center, ellipticity and its position angle accurately and precisely using our quad-based method (see the left panels of Figures 7, 8, and 9).
For mock clusters with substructure, quad-based estimation works reasonably well (see the right panels of Figures 7, 8, and 9): 92% of estimated centers lie within
Figure 9. Comparison of true position angle, \(\theta_{T}\), vs. estimated position angle, \(\theta_{Q}\), for all 100 smooth mock clusters (_left_), and 100 substructured mock clusters (_right_). The true position angle is calculated using Eq. 1. The estimated PA is the average of angles that bisect \(\theta_{12}\) of each of the 300 quads in a mock cluster. Color indicates the uncertainty associated with the estimated \(\theta_{Q}\) in radians, calculated by taking the standard deviation on \(\theta_{Q}\) using 30 subsets with 10 quads each. The points marked by red triangles correspond to the four clusters shown in Figures 5 and 6: triangle pointing up is the first row, pointing right is the second row, pointing down is the third row, and pointing left is the fourth row. For substructure-less clusters the quad estimation of PA is excellent: the points follow a diagonal in the left panel.
\(0.1r_{\rm Ein}\) of the true center, and 93% of cluster position angles are within 0.1 radians of the true position angle. Estimation of ellipticity depends on the amount of substructure and cluster ellipticity, with less substructured and more elliptical clusters yielding better precision.
In the case of Ares, where the true mass distribution is known, we use its 13 quads to estimate the center, which is \(<2"\), or \(0.059\,r_{\rm Ein}\) (\(<2\) times the rms uncertainty estimated from our synthetic clusters), away from the true center. As Figure 11 shows, our estimated position angle lines up very well with the elongation of the mass density contours.
The true mass distributions of A1689 and RXJ1347 are not known, but quad-based estimation recovers their centers near the central dominant galaxies, and the ellipticity position angle aligns well with the visual appearance as well as the lens inversion reconstructions.
We conclude that cluster centers, ellipticity and position angle are estimated robustly with our quad-based method with no model priors, and should therefore also be recovered well and by all lensing inversion methods, with a good degree of consistency between methods. This is largely in agreement with the findings of (Meneghetti et al., 2017). While they do not discuss how well the reconstructions recover cluster center, they do compare the recovery of ellipticity, position angle and substructure in their Figure 27. Of these 3 properties the best recovered one is the position angle, especially for the more realistic cluster Hera. The next best estimated property is ellipticity, which is recovered by lens reconstructions considerably better than substructure: of the 20 lens reconstructions, 16 estimate ellipticity at the same level or better than substructure. This order of how well properties are recovered agrees with our findings.
### Estimation of amount of substructure, or deviations from ellipticity
The estimation of the amount of substructure yields interesting and instructive conclusions. Fig. 10 shows that for our mock clusters there is a positive relation between the quad-estimated amount of substructure and the true amount of substructure measured from the mass distribution. Before we discuss the scatter in the next paragraph, we note that the correlation also holds for Ares, A1689 and RXJ1347: the quad-estimated amount of substructure increases progressively from Ares (\(\delta_{\rm FSQ}=4.08\times 10^{-2}\)), to A1689 (\(\delta_{\rm FSQ}=6.74\times 10^{-2}\)), to RXJ1347 (\(\delta_{\rm FSQ}=9.34\times 10^{-2}\)). This sequence of increasing \(\delta_{\rm FSQ}\) values is consistent with what we know about these 3 clusters: Ares, though bimodal, has a dominant central mass concentration that hosts most of the images, and a small amount of substructure (see the middle panel of Fig. 11). A1689 is a cluster near equilibrium, but with known substructure (Ghosh et al., 2022), while RXJ1347 is a merging cluster with complex structure (Richard et al., 2021).
The correlation between \(\delta_{\rm FSQ}\) and the true amount of substructure for our mock clusters (Fig. 10) has a lot of scatter. For example, in Fig. 6 the clusters in the 1st and 4th rows have very similar \(s_{T}\) values, within a factor of 1.17 of each other, yet their values of \(\delta_{FSQ}\) differ by a factor of 3.35. We can see even larger disparities in clusters with roughly equal \(\delta_{\rm FSQ}\)--many with differences in \(s_{T}\) spanning more than an order of magnitude. One reason why \(\delta_{\rm FSQ}\) cannot predict \(s_{T}\) well is that we are not able to use all the multiple image information in our analysis: we do not use image distances from the cluster center as input (see Section 3), and we cannot use doubles. The other reason is that lensed images alone cannot uniquely determine the mass distribution.
The latter helps explain why given the same multiple image data set, different reconstruction methodologies--parametric vs. LTM vs. free-form vs. hybrid--can reproduce observed multiple images with qualitatively different lens mass distributions: Figure 14 shows 3 examples of mass distributions of simulated clusters Ares and Hera, reconstructed by different lens inversion methods: free-form grale, simply parametrized Johnson-Sharon, Glafic, Light-Traces-Mass (LTM) Zitrin-NFW, and hybrid Diego-multires (for references, see Meneghetti et al., 2017). The mass distribution of simply-parametrized and LTM models is dominated by smooth dark matter and very localized mass due to individual member galaxies, while that of the free-form (and to a lesser degree hybrid) methods has more diffuse substructure with less very small scale structure.
This visual appearance of mass distribution can also be cast in terms of mass power spectrum. Mohammed et al. (2016) showed that reconstructions by simply-parametrized Lenstool are dominated by power on large and small scales, with somewhat of a deficit at intermediate scales. On the other hand, free-form grale includes intermediate length scales, which result in diffuse substructures, but misses power on the smallest scales.
We conclude that multiple images by themselves, i.e., without model priors, allow for a range of substructure characteristics and hence differently shaped mass distributions, corresponding to different shape degeneracies (Saha and Williams, 2006). Lens model priors 'select' what
Figure 10. Comparison of quad-estimated amount of substructure, \(\delta_{\rm FSQ}\), and the true amount of substructure, \(s_{T}\). (Both quantities are dimensionless.) It is apparent that there is a correlation between these two quantities, but the scatter is large. Color indicates the cluster's ellipticity, \(\epsilon_{T}\). See Sections 5.2 and 7.2 for details.
scale substructure to use to fit the observed images. In the future, if the number of multiple images in clusters increases, for example with data from the _James Webb Space Telescope_, it will be possible to let lensed images, not modeling priors, be the dominant factor in determining the mass distribution on a wide range of scales within galaxy clusters.
The authors would like to thank John Richard (Observatoire de Lyon, France) for helping us with the analysis of the lensing cluster RXJ 1347.
## Data Availability
The data and scripts used for the purposes of this paper are publicly available from the GitHub repository: [https://github.com/kekoalasko/FSQ_Modeling](https://github.com/kekoalasko/FSQ_Modeling). Supplementary data or code will be shared upon reasonable request to the corresponding author.
|
2309.15523 | Improving Facade Parsing with Vision Transformers and Line Integration | Facade parsing stands as a pivotal computer vision task with far-reaching
applications in areas like architecture, urban planning, and energy efficiency.
Despite the recent success of deep learning-based methods in yielding
impressive results on certain open-source datasets, their viability for
real-world applications remains uncertain. Real-world scenarios are
considerably more intricate, demanding greater computational efficiency.
Existing datasets often fall short in representing these settings, and previous
methods frequently rely on extra models to enhance accuracy, which requires
much computation cost. In this paper, we introduce Comprehensive Facade Parsing
(CFP), a dataset meticulously designed to encompass the intricacies of
real-world facade parsing tasks. Comprising a total of 602 high-resolution
street-view images, this dataset captures a diverse array of challenging
scenarios, including sloping angles and densely clustered buildings, with
painstakingly curated annotations for each image. We introduce a new pipeline
known as Revision-based Transformer Facade Parsing (RTFP). This marks the
pioneering utilization of Vision Transformers (ViT) in facade parsing, and our
experimental results definitively substantiate its merit. We also design Line
Acquisition, Filtering, and Revision (LAFR), an efficient yet accurate revision
algorithm that can improve the segment result solely from simple line detection
using prior knowledge of the facade. In ECP 2011, RueMonge 2014, and our CFP,
we evaluate the superiority of our method. | Bowen Wang, Jiaxing Zhang, Ran Zhang, Yunqin Li, Liangzhi Li, Yuta Nakashima | 2023-09-27T09:41:36Z | http://arxiv.org/abs/2309.15523v5 | # Improving Facade Parsing with Vision Transformers and Line Integration
###### Abstract
Facade parsing stands as a pivotal computer vision task with far-reaching applications in areas like architecture, urban planning, and energy efficiency. Despite the recent success of deep learning-based methods in yielding impressive results on certain open-source datasets, their viability for real-world applications remains uncertain. Real-world scenarios are considerably more intricate, demanding greater computational efficiency. Existing datasets often fall short in representing these settings, and previous methods frequently rely on extra detection models to enhance accuracy, which requires much computation cost. In this paper, we first introduce Comprehensive Facade Parsing (CFP), a dataset meticulously designed to encompass the intricacies of real-world facade parsing tasks. Comprising a total of 602 high-resolution street-view images, this dataset captures a diverse array of challenging scenarios, including sloping angles and densely clustered buildings, with painstakingly curated annotations for each image. We then propose a new pipeline known as Revision-based Transformer Facade Parsing (RTFP). This marks the pioneering utilization of Vision Transformers (ViT) in facade parsing, and our experimental results definitively substantiate its merit. We also design Line Acquisition, Filtering, and Revision (LAFR), an efficient yet accurate revision algorithm that can improve the segment result solely from simple line detection using prior knowledge of the facade. In ECP 2011, RueMonge 2014, and our CFP, we evaluate the superiority of our method.
keywords: Facade, Semantic Segmentation, Vision Transformer, Line Detection
## 1 Introduction
With the burgeoning demand for 3D architectural models in digital twin cities, autonomous driving, and urban simulations Bagloee et al. (2016); Huang et al. (2023), facade parsing--particularly detailed parsing of windows and doors in CityGML Level of Detail 3 (LoD3) architectural models Donkers et al. (2016)--has become paramount in urban 3D reconstruction. Prevailing facade parsing approaches predominantly lean on syntactical rules Eilouti (2019) or rudimentary computer vision techniques Liu et al. (2020); Zhang et al. (2021). Yet, these methods grapple with challenges. Syntactical rules, typically mined from architectural design tenets, struggle to encapsulate the broad heterogeneity of architectural styles, leading to potential parsing incompleteness. Additionally, fundamental computer vision techniques like region growing and edge detection, contingent upon local gradients or localized intensity variances, exhibit noise susceptibility, thereby undermining image analyses' stability and accuracy.
Recent advancements in deep learning proffer enhanced insights into image comprehension LeCun et al. (2015). Convolutional Neural Networks (CNNs) excel in discerning intricate hierarchical features within images and consistently attain state-of-the-art (SOTA) results across diverse domains Li et al. (2021); Gu et al. (2018). Antecedent studies Femiani et al. (2018); Ma et al. (2020) employing CNNs for semantic facade parsing have outperformed traditional methods on open-source datasets like eTRIMs Korc and Forstner (2009), ECP2011 Teboul (2010), Graz2012 Riemenschneider et al. (2012), and CMP2013 Tylecek and Sara (2013). Certain investigations Liu et al. (2017); Dai et al. (2021) harness facade priors to augment segmentation quality, postulating rectangularity of facade elements (e.g., windows) and achieving this end through object detection Girshick (2015). Although such revisions can benefit segmentation results, one drawback is that they require high additional computation costs.
Besides, existing methodologies are tailored for extant datasets characterized by front-facing architectural views, controlled illumination, and minimal occlusions Rohlig et al. (2017). Nevertheless, these datasets' volume and architectural style diversity fall short of meeting the intricate demands of contemporary deep learning architectures. The predominance of rectified images in most datasets suggests potential performance bottlenecks in real-world applications. The constraints in image resolutions of datasets also hint at suboptimal generalization capabilities in realistic scenarios.
Furthermore, previous methodologies for facade parsing utilizing CNN-based approaches were limited, primarily due to the inherent inability of rudimentary neural networks to model
long-range pixel interactions, thereby compromising optimal feature representation through contextual information. Recently, the Vision Transformer (ViT) Dosovitskiy et al. (2021), a revolutionary deep learning architecture, has heralded notable advancements within the realm of computer vision Strudel et al. (2021); Khan et al. (2022). Intrinsically designed to discern contextual relationships and adeptly handle high-resolution imagery through segmented patches, it emerges as an exemplar for semantic segmentation tasks Zhao et al. (2017); Minaee et al. (2021). However, in spite of its evident potential, the exploration of ViT in facade segmentation remains embryonic, attributed largely to its voracious data appetites Steiner et al. (2021) and the prevailing paucity of comprehensive facade datasets.
In summary, the primary challenges previously encountered in facade parsing can be distilled into three focal areas: (1) Bridging the discrepancy between existing facade segmentation datasets and real-world scenarios to enhance the robustness of facade segmentation techniques; (2) Harnessing the rich contextual dependencies inherent in local features to achieve superior feature representation; and (3) Constructing a revision algorithm grounded in prior knowledge, synergizing the intricate hierarchical feature extraction capabilities of deep learning, to bolster the precision and efficiency of facade image parsing in authentic scenarios.
In this paper, we first introduce a novel streetscape image dataset, designated Comprehensive Facade Parsing (CFP), to address these disparities, tailored for facade parsing. Diverging from extant datasets, ours encompasses annotated architectural imagery from six cities--Osaka, Tokyo, Toronto, Shanghai, Nanjing, and Nanchang--offering diverse perspectives, intricate lighting conditions, foreground occlusions, and a tapestry of architectural styles. Both semantic segmentation and object detection annotations are rendered for facade components. A sample comparison between RueMonge and CFP is shown in Figure 1(a) and Figure 1(b). Images in our dataset are more challenging. Furthermore, we propose a pioneering methodology, Revision-based Transformer Facade Parsing (RTFP), optimized for streetscape imagery. This paradigm pivots on a semantic segmentation model grounded in the Vision Transformer (ViT) Dosovitskiy et al. (2021), geared towards preliminary segmentation. The ViT demonstrates superior proficiency in discerning global layouts, which is pivotal for precise facade parsing, compared to preceding CNN-centric models. Additionally, we incorporate Masked Autoencoders (MAE) He et al. (2022), a self-supervised pre-training algorithm, to foster enhanced fine-tuning of the ViT Dosovitskiy et al. (2021) on facade-centric data. Conclusively, we unveil a Line Acquisition, Filtering, and Revision (LAFR) algorithm dedicated to refining rules for facade elements like windows (shown in Figure 1(c)). The LAFR emerges as an efficient and precise facade element prediction refinement instrument. Our LAFR relies on rudimentary line detection and artificially set refinement stipulations, eschewing heavyweight detection models Girshick (2015); He et al. (2017).
**Contribution**. In this study, we introduced a novel facade parsing dataset CFP, which serves as a comprehensive testbed for evaluating the performance of contemporary facade segmentation techniques in real-world scenarios. Encompassing a wide array of street-view facade scenarios, CFP presents a challenging benchmark that pushes the boundaries of research in this domain. To facilitate precise and efficient facade parsing, we developed the ViT-based RTFP framework, which represents a significant contribution to the field. Notably, we introduced MAE pre-training specifically tailored for facade segmentation, marking the first application of this approach in the context of facade-related tasks. Furthermore, we introduced a straightforward yet powerful revision method called LAFR, designed to enhance facade predictions. Our experimental results unequivocally establish RTFP's superiority over previous methods across multiple datasets, underscoring its potential as a valuable tool in the arsenal of facade segmentation techniques.
## 2 Related Works
### Datasets for Facade Parsing
Facade parsing has emerged as a pivotal domain in architectural analysis and urban design, gaining significant attention in the recent computational vision literature. Table 1 shows the previous and our proposed facade parsing datasets. However, a comprehensive analysis of existing datasets reveals a range of challenges that remain unaddressed.
Historically, initial datasets often suffered from limited volume, containing mere hundreds to thousands of images, which impedes the training of sophisticated deep learning models Mahajan et al. (2018). The diversity of these datasets has been another recurrent issue. Predominantly centered on specific architectural styles, regions, or types, many datasets exhibit a narrow
Figure 1: Image samples from (a) RueMonge 2014 and (b) our CFP dataset. In (c), we show an inference sample from our LAFR (left is the line detection result from LSD and right shows the integrated lines after LAFR). It has the ability to outline facade elements, e.g., windows and doors, through simple line detection.
scope, potentially compromising the model's adaptability to diverse facade designs Jeong et al. (2019). The intricacies of manual annotating have led to inaccuracies, especially in datasets dealing with multifaceted facade structures Huang et al. (2020). Moreover, a holistic representation is often neglected with primary emphasis on larger structures, overlooking finer elements like windows, doors, or balconies Rohrbach et al. (2016). Real-time contextual information, crucial for practical applications, is often absent. A significant portion of these datasets predominantly captures imagery from fixed angles and specific conditions, neglecting the variability encountered in real-world scenarios Neuhold et al. (2017).
In the pursuit of curating an optimal facade parsing dataset, a few guidelines emerge from the pitfalls of predecessors. Ensuring diversity by capturing images from varied styles, regions, and lighting conditions can significantly bolster model generalizability Hudson and Manning (2019). Given the demands of deep learning, collecting extensive data, potentially in the magnitude of hundreds of thousands of images, becomes indispensable Gupta et al. (2019). A layered, meticulous annotation, capturing both macro and microelements, promises a richer dataset Zhang et al. (2022). Embracing semi-automatic annotation techniques can expedite the process while retaining accuracy Anisetti et al. (2017). Capturing images across scales and angles, and harnessing high-resolution equipment can further enrich the data Kattenborn et al. (2020).
To address the disparity between existing facade segmentation datasets and real-world scenarios, as previously discussed, we introduce the Comprehensive Facade Parsing (CFP) dataset as a pioneering benchmark. As illustrated in Figure 2, the CFP dataset encompasses diverse street views, spanning residential areas to the city center, thereby providing a holistic representation of real-world scenarios.
### Deep Learning Methods for Facade Parsing
Over the years, the landscape of facade segmentation has been profoundly transformed by the remarkable strides made in deep learning techniques, culminating in state-of-the-art (SOTA) achievements and substantial progress in the field. In this section, we delve into critical studies that have harnessed the power of deep learning methods to advance facade segmentation.
Some pioneering endeavors, such as those by Cohen et al. Cohen et al. (2014), Mathias et al. (2016), and Kelly et al. Kelly et al. (2017), embraced the early stages of deep learning methods Long et al. (2015); Ronneberger et al. (2015) for facade segmentation. These studies relied on relatively straightforward network architectures and robust yet heavy backbones. While they successfully achieved facade parsing, their performance left room for improvement.
Recent research has witnessed a shift towards designing specialized structures within deep learning models to enhance facade segmentation. For instance, Femiani et al. Femiani et al. (2018) introduced the compatibility network, which concurrently addresses segmentation across various facade element types. In a similar vein, ALKNet Ma et al. (2020) devised a
| **Dataset** | **Description** | **Number of Images** | **Resolution** | **Other Information** |
| --- | --- | --- | --- | --- |
| ECP 2011 Teboul (2010) | Labeled pictures in Paris - 7 categories: walls, windows, doors, balconies, roofs, stores, sky | 104 | 1024x768 | - |
| Graz 2012 Riemenschneider et al. (2012) | Images from Germany and Austria - 4 categories: door, window, wall, sky | 50 | 640x480 | - |
| CMP 2013 Tylecek and Sara (2013) | Facade images from around the world - 12 categories: facade, molding, cornice, pillar, window, door, sill, blind, balcony, shop, deco, background | 378 basic + 228 extended | 512x512 | - |
| ENPC 2014 Lotte et al. (2018) | Images located in Paris - same categories as ECP 2011 | 79 | 1280x960 | - |
| RueMonge 2014 Riemenschneider et al. (2014) | 428 multi-view images of a street in Paris, annotated with seven semantic categories for 2D images: door, shop, balcony, window, wall, roof, and sky | 428 images (60 buildings) | 800x1067 | Dataset for 3D reconstruction and semantic mesh annotation |
| eTRIMs Korc and Forstner (2009) | Multi-view images based on multiple European cities - 8 categories: building, car, door, pavement, road, sky, vegetation, window | 60 | 960x720 | Includes three sets of annotations for object segmentation, class segmentation, and object boundaries |
| LabelMeFacade Frohlich et al. (2010) | Based on the eTRIMs-extended LabelMe database - 8 categories: building, car, door, pavement, road, sky, vegetation, window | 945 | 256x256 | Only pixel-level masks are provided |
| CFP (_Ours_) | Images from six cities, captured from different angles - 9 classes: building, window, door, roof, tree, sky, people, car, and sign | 602 | 2560x1440 | Provides instance-level mask annotation for all objects |

Table 1: Introduction of previous facade datasets and our CFP dataset.
pyramid structure equipped with ALK modules to capture long-range dependencies among building elements across multiscale feature maps. Notably, these approaches outperformed the earlier pioneer works, showcasing significant advancements in facade segmentation. Nevertheless, it's essential to highlight that they did not incorporate prior knowledge about the construction of building facades into their methodologies.
Another aspect of facade parsing involves leveraging prior knowledge to enhance segmentation results. DeepFacade Liu et al. (2017) introduced object detection for window revision. They implemented Faster-RCNN Ren et al. (2015) to automatically identify window objects and utilized their bounding boxes to refine segmentation. This innovative approach has inspired the development of several subsequent methods. FacMagNet Dai et al. (2021), for instance, employs Mask-RCNN He et al. (2017) to segment individual facade elements, enhancing segmentation accuracy. Additionally, a side branch is incorporated to further improve the results, particularly for smaller objects like windows. Kong et al. Kong and Fan (2020) introduced the use of YOLO Redmon et al. (2016) for facade object detection, prioritizing efficiency. Zhang et al. Zhang et al. (2022) adopted the DETR Carion et al. (2020) model to achieve superior detection and semantic segmentation performance. However, it's crucial to note that a common challenge with these methods lies in their reliance on large object detection models, which may present practical limitations in real-world applications due to computational demands and resource constraints. Moreover, no existing facade parsing architecture is based on ViT, so it is of great interest to explore the use of ViT for facade parsing.
## 3 Method
### Overview
As shown in Figure 3, our facade segmentation pipeline consists of two branches. In the upper branch, the input image is segmented into image patches and we feed them to our ViT based semantic segmentation model, which produces a preliminary prediction of the facade semantics. In the lower branch, we use traditional line detection methods to find lines that describe the outline of the facade elements. We then use each element instance from the preliminary prediction to filter the lines, keeping only those that perfectly frame out each element and integrating a corresponding outline. Finally, we revise the preliminary prediction using integrated outlines and produce the final segmentation map.
### ViT-based Semantic Segmentation
Our ViT-based semantic segmentation is based on the architecture of Segmenter Strudel et al. (2021), which features a fully transformer-based encoder-decoder structure. This architecture maps the embeddings of image patches to pixel-level class predictions and comprises two main components: the encoder, which extracts information from raw image patches, and the decoder, which produces the final output. In the following sections, we will introduce them in detail.
#### 3.2.1 Encoder
We adopted an encoder to extract features from the input image \(x\in\mathbb{R}^{H\times W\times C}\). \(x\) will first be divided into image patches \(x=[x_{1},...,x_{N}]\in\mathbb{R}^{N\times P^{2}\times C}\), where \(P\) is the patch size and \(N\) is the number of patches calculated by \(N=WH/P^{2}\). Following the usual ViT process, to create a sequence of patch embeddings \(x_{E}=[E_{x_{1}},...,E_{x_{N}}]\in\mathbb{R}^{N\times D}\), \(x\) is scanned by a 2D convolution with both kernel size and stride equal to \(P\). This operation projects each patch into an embedding vector \(E\in\mathbb{R}^{D}\). In order to encode positional information, the architecture uses learnable position embeddings \(pos=[pos_{1},...,pos_{N}]\in\mathbb{R}^{N\times D}\). We add these embeddings to the sequence of patches to generate the tokens of the input sequence, which is represented by the equation \(s_{0}=x_{E}+pos\).
The Encoder is a transformer-based Vaswani et al. (2017) architecture that consists of \(L\) layers and is used to produce the contextualized encoding \(s_{L}\in\mathbb{R}^{N\times D}\). This powerful structure has revolutionized natural language processing by enabling models to capture complex relationships between words and has been extended to the computer vision area Dosovitskiy et al. (2021). Each layer of the Encoder applies a multi-headed self-attention (MSA) block followed by a point-wise MLP block to
Figure 2: Some image samples from our CFP dataset.
refine the representation of the input as:
\[a_{i-1} = MSA(LN(s_{i-1}))+s_{i-1}, \tag{1}\] \[s_{i} = MLP(LN(a_{i-1}))+a_{i-1}, \tag{2}\]
where \(i\) is the layer index of \(L\) and \(LN\) is the layer normalization. The self-attention is composed of three point-wise linear layers that map patches \(s_{i}\) to intermediate representations, including queries \(Q\in\mathbb{R}^{N\times D}\), keys \(K\in\mathbb{R}^{N\times D}\), and values \(V\in\mathbb{R}^{N\times D}\). The queries are used to score the relevance of the keys for each patch, and the values are weighted by these scores and integrated to generate the final contextualized representation. This process allows the model to capture complex relationships among patches, resulting in highly informative and expressive encoding. It is formulated as follows:
\[MSA(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d}})V. \tag{3}\]
Input patch sequence \(s_{0}\) is mapped into \(s_{L}\) by the encoder, containing rich image semantic information. \(s_{L}\) will be adopted in a decoder (introduced in the continuing section) for preliminary semantic prediction.
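For illustration, the patch embedding and one encoder layer of Eqs. (1)-(3) can be sketched in PyTorch as below. This is an illustrative reimplementation rather than the code used in this work: the embedding dimension, depth, number of heads, and input size are placeholder values, and `nn.MultiheadAttention` stands in for the MSA block.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One pre-norm transformer layer: a = MSA(LN(s)) + s; s' = MLP(LN(a)) + a."""
    def __init__(self, dim=768, heads=8, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))

    def forward(self, s):
        h = self.norm1(s)
        a = self.attn(h, h, h, need_weights=False)[0] + s   # Eq. (1)
        return self.mlp(self.norm2(a)) + a                  # Eq. (2)

class PatchEncoder(nn.Module):
    """Patch projection, learnable positions, and L encoder layers."""
    def __init__(self, img_size=448, patch=16, in_ch=3, dim=768, depth=12):
        super().__init__()
        n = (img_size // patch) ** 2                         # N = WH / P^2
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n, dim))      # learnable positions
        self.layers = nn.ModuleList([EncoderLayer(dim) for _ in range(depth)])

    def forward(self, x):                                    # x: (B, C, H, W)
        s = self.proj(x).flatten(2).transpose(1, 2)          # patch embeddings x_E
        s = s + self.pos                                     # s_0 = x_E + pos
        for layer in self.layers:
            s = layer(s)
        return s                                             # s_L, shape (B, N, D)

tokens = PatchEncoder(depth=2)(torch.randn(1, 3, 448, 448))
print(tokens.shape)                                          # torch.Size([1, 784, 768])
```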
#### 3.2.2 Decoder
Decoder is also a full transformer-based structure with \(M\) layers. Following previous works Carion et al. (2020); Strudel et al. (2021), the information of \(K\) classes will be encoded as the input for the decoder. Each class \(k\) will be represented by a learnable vector with the same shape as the image patch. The token list for all classes can be denoted as \(z_{0}\in\mathbb{R}^{K\times D}\). As shown in Figure 3, \(z_{0}\) will be concatenated with the patch sequence \(s_{L}\) (denoted as \(s_{L,0}\) in the decoder) to form the input of the decoder as \([s_{L,0},z_{0}]\in\mathbb{R}^{(N+K)\times D}\).
After \(M\) layers of the transformer (same structure as encoder), the output of the patch-class combination is denoted as \([s_{L,M},z_{M}]\). \(z_{M}\) is considered as the weight for each class and will be multiplied with \(s_{L,M}\) for the mask prediction \(\hat{y}\) as follows:
\[\hat{y}=s_{L,M}\cdot z_{M}^{T}, \tag{4}\]
where \(\cdot\) is the matrix multiplication and \(\hat{y}\in\mathbb{R}^{H\times W\times K}\) after reshaping the patch-level predictions and upsampling them to the input resolution. We will further utilize \(\hat{y}\) as a preliminary estimation of the window instances and then modify \(\hat{y}\) using our pre-set prior knowledge.
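In the same spirit, a minimal sketch of the mask decoder of Eq. (4) is given below (again an assumption-laden reimplementation, not the paper's code): \(K\) learnable class tokens are appended to the patch tokens, refined jointly for \(M=2\) layers, and the patch-class product is reshaped and bilinearly upsampled to pixel-level logits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskDecoder(nn.Module):
    """Class tokens z_0 are concatenated to the patch tokens; after M layers the
    mask logits are s_{L,M} @ z_M^T (Eq. 4), then reshaped and upsampled."""
    def __init__(self, dim=768, n_classes=9, n_layers=2, heads=8):
        super().__init__()
        self.cls_tokens = nn.Parameter(torch.randn(1, n_classes, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)

    def forward(self, s, grid_hw, out_hw):
        b, n = s.shape[:2]
        z = self.cls_tokens.expand(b, -1, -1)
        x = self.blocks(torch.cat([s, z], dim=1))            # [s_{L,0}, z_0] -> [s_{L,M}, z_M]
        s_m, z_m = x[:, :n], x[:, n:]
        y = s_m @ z_m.transpose(1, 2)                        # (B, N, K), Eq. (4)
        y = y.transpose(1, 2).reshape(b, -1, *grid_hw)       # (B, K, H/P, W/P)
        return F.interpolate(y, out_hw, mode="bilinear", align_corners=False)

logits = MaskDecoder()(torch.randn(1, 784, 768), (28, 28), (448, 448))
print(logits.shape)                                          # torch.Size([1, 9, 448, 448])
```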
### Line Acquisition, Filtering, and Revision
Different from previous works that adopt heavy and rigid deep learning-based object detection models to improve the facade prediction, in this research we design a simple yet efficient method named LAFR. We take advantage of a facade prior, namely that most facade elements (e.g., windows, doors) are generally quadrilateral. Based on this, we adopt traditional line detection methods to localize these elements and further modify the facade prediction.
#### 3.3.1 Line Acquisition
There are many methods designed for line detection, e.g., the Hough transform and the Line Segment Detector (LSD). We adopted LSD for our target since it is a powerful method for detecting line segments in digital images. It operates by analyzing the image's edge map to identify candidate line segments. LSD employs a series of steps, including the computation of region orientations, pixel grouping, and line validation. By considering both local and global information, the LSD algorithm effectively detects line segments with high accuracy and robustness. Its ability to handle complex scenes, varying lighting conditions, and noisy images makes it a popular choice in computer vision applications such as object recognition, image stitching, and scene analysis.
In our revision pipeline, depicted in Figure 4, we employ a series of steps to enhance the accuracy of our line detection.
Figure 3: The pipeline of our Revision-based Transformer Facade Parsing (RTFP). It is composed of two branches: (a) On the upper part, a ViT-based semantic segmentation model. (b) Our proposed method for line acquisition, filtering, and prediction revision is on the lower part.
Initially, the input image undergoes dual Gaussian blurring processes, effectively eliminating extremely short line segments caused by noise. The sizes of the convolution kernels are \(5\times 5\) and \(3\times 3\), respectively, and the standard deviation of the Gaussian function is 5. Subsequently, the LSD algorithm is employed to detect a set of \(J\) potential line segments, denoted as a collection \(G=\{(\alpha_{1j},\beta_{1j},\alpha_{2j},\beta_{2j})\mid j=1,\ldots,J\}\). These line segments are characterized by their quaternion representations, which record the coordinates of their start and end points.
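With OpenCV, the line acquisition step could be implemented roughly as follows. This is only a sketch: `cv2.createLineSegmentDetector` assumes an OpenCV build in which LSD is available (e.g., 4.5.1 or later), and the synthetic test image is purely illustrative.

```python
import cv2
import numpy as np

def acquire_lines(image_bgr):
    """Two Gaussian blurs (5x5 then 3x3, sigma = 5) followed by LSD.
    Returns line segments as an array of (x1, y1, x2, y2) rows."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 5)
    gray = cv2.GaussianBlur(gray, (3, 3), 5)
    lsd = cv2.createLineSegmentDetector()        # may be unavailable in some builds
    lines = lsd.detect(gray)[0]                  # shape (J, 1, 4) or None
    if lines is None:
        return np.empty((0, 4), dtype=np.float32)
    return lines.reshape(-1, 4)                  # the collection G

# illustrative input: a white image with one dark rectangle ("window")
img = np.full((256, 256, 3), 255, np.uint8)
cv2.rectangle(img, (60, 60), (180, 180), (0, 0, 0), 2)
print(acquire_lines(img).shape)
```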
#### 3.3.2 Instance Acquisition
Our objective is to identify each facade element as the target of the facade revision and incorporate prior knowledge into \(G\) to refine their prediction. Using the ViT model, we can generate a preliminary prediction mask \(\hat{y}\), which provides a preliminary estimate. To isolate each individual facade element, we retain only the pixels within the region predicted as a facade element. Figure 4 illustrates the subsequent steps in our process, where we employ erosion and dilation techniques to address noise and subtle connections between windows in the prediction. Then, we calculate the connected components for each element and split them into \(B\) individual masks denoted as \(F=\{(\hat{\alpha}_{1b},\hat{\beta}_{1b},\hat{\alpha}_{2b},\hat{\beta}_{2b}) \mid b=1,\ldots,B\}\). Each \(b\) is also a quaternion, whose four values give the minimum external rectangle \(R=[r_{top},r_{bottom},r_{left},r_{right}]\) (top, bottom, left, and right edges) of the mask, respectively.
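One possible realization of this step uses OpenCV morphology and connected components, as in the sketch below; the kernel size and the minimum-area filter are illustrative choices that are not specified above.

```python
import cv2
import numpy as np

def acquire_instances(pred_mask, element_id, min_area=50):
    """Keep pixels predicted as the facade element, clean them with erosion and
    dilation, then split into connected components. Returns the minimum external
    rectangle of each instance as (r_top, r_bottom, r_left, r_right)."""
    binary = (pred_mask == element_id).astype(np.uint8)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.erode(binary, kernel, iterations=1)   # remove noise / thin links
    binary = cv2.dilate(binary, kernel, iterations=1)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    rects = []
    for b in range(1, n):                              # label 0 is the background
        x, y, w, h, area = stats[b]
        if area >= min_area:
            rects.append((y, y + h, x, x + w))
    return rects

# toy prediction mask with two "windows" (class id 1)
mask = np.zeros((128, 128), np.uint8)
mask[20:50, 20:50] = 1
mask[70:110, 60:100] = 1
print(acquire_instances(mask, element_id=1))
```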
#### 3.3.3 Filtering and Revision
We manually define some policies to filter line segments \(G\), transforming lines into a quad that can perfectly frame out each facade element. We set the minimum external rectangle from each \(b\) as the anchor, and the filtering is organized as follows:
\[E=\oint_{b}^{B}\Psi(\oint_{j}^{J}\Xi[\Delta(b,j,r);\Theta(b,j,r)]), \tag{5}\]
where \(E\in\mathbb{R}^{B\times 4}\) serves as a record, indicating the line segment assigned to each edge of every anchor. The circular pair calculation between \(G\) and \(F\) is denoted by \(\oint\). The function \(\Xi[\cdot]\) evaluates each \(j\) against the four anchor edges, employing two threshold functions, \(\Delta(\cdot)\) and \(\Theta(\cdot)\), with threshold values \(\delta\) (default 20) and \(\theta\) (default 0.1), respectively. These functions are responsible for distance and angle calculations. Line segments far from the anchor are discarded, while the remaining segments are assigned to their respective edges based on the angle. It is important to note that each edge can only accommodate a single line segment (among all qualified candidates close enough to the edge, we keep only the longest one). In cases where no satisfactory line segment is found for an edge, a blank value is recorded.
The function \(\Psi(\cdot)\) is responsible for integrating the four line segments associated with each edge of the anchor, resulting in the formation of a new external rectangle (shown in Figure 4). However, if an anchor exhibits a blank edge, we discard it for further consideration. This is due to the inability to accurately determine the facade element's shape in such cases. As a result, the record for this particular anchor is deemed invalid and is subsequently skipped in the subsequent stages of the revision process.
For the revision, we directly use \(E\) to replace the window region in \(\hat{y}\) correspondingly. Thus, the final mask prediction for the facade is \(y\).
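The sketch below is one loose reading of Eq. (5) and the revision step, not the authors' implementation: for each anchor edge it keeps the longest roughly-parallel segment within distance `delta` (pixels) and angle tolerance `theta` (radians), and rewrites the predicted region only when all four edges are matched. How the vacated pixels should be relabelled is not specified above, so this sketch simply resets them to the background class before painting the revised element.

```python
import numpy as np

def revise_mask(pred_mask, rects, segments, element_id, delta=20, theta=0.1):
    """Assign to each anchor edge the longest nearby parallel segment; if all
    four edges are matched, replace the region with the integrated rectangle."""
    out = pred_mask.copy()
    for (top, bottom, left, right) in rects:
        edges = {"top": (True, top), "bottom": (True, bottom),
                 "left": (False, left), "right": (False, right)}
        chosen = {}
        for name, (horiz, coord) in edges.items():
            best, best_len = None, 0.0
            for x1, y1, x2, y2 in segments:
                ang = np.arctan2(abs(y2 - y1), abs(x2 - x1))
                is_h, is_v = ang < theta, abs(ang - np.pi / 2) < theta
                if (horiz and not is_h) or (not horiz and not is_v):
                    continue                          # wrong orientation for this edge
                pos = (y1 + y2) / 2 if horiz else (x1 + x2) / 2
                if abs(pos - coord) > delta:
                    continue                          # too far from the anchor edge
                length = np.hypot(x2 - x1, y2 - y1)
                if length > best_len:
                    best, best_len = pos, length
            if best is None:                          # an unmatched edge: skip this anchor
                chosen = None
                break
            chosen[name] = best
        if chosen is not None:
            t, b = sorted((int(round(chosen["top"])), int(round(chosen["bottom"]))))
            l, r = sorted((int(round(chosen["left"])), int(round(chosen["right"]))))
            out[top:bottom, left:right] = 0           # reset the old, untidy region
            out[t:b, l:r] = element_id                # paint the integrated rectangle
    return out

pred = np.zeros((128, 128), np.uint8)
pred[22:48, 23:47] = 1                                # slightly "untidy" window
lines = np.array([[20, 20, 100, 20], [20, 50, 100, 50],
                  [20, 20, 20, 100], [50, 20, 50, 100]], float)
print(np.unique(revise_mask(pred, [(22, 48, 23, 47)], lines, element_id=1)))
```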
## 4 Results
### Datasets and Metrics
We compare our method with other SOTA methods on two open-source datasets and our CFP dataset (statistics of object numbers are shown in Table 2). Four metrics are adopted for quantification.
_Our CFP Dataset._ CFP represents a meticulously curated collection of street view images acquired through state-of-the-art
Figure 4: Our proposed Line Acquisition, Filtering, and Revision (LAFR) algorithm. It acquires lines from the input image and window instance from the predicted mask. After filtering, we use the integrated outline to revise the predicted mask.
equipment, resulting in a total of 602 high-resolution images, each boasting a 2560x1440 pixel resolution. These images were captured across six cities: Osaka, Tokyo, Toronto, Shanghai, Nanjing, and Nanchang, providing a diverse and comprehensive view of urban landscapes from various angles. Within the CFP dataset, we have meticulously categorized images into nine distinct classes: buildings, windows, doors, roofs, trees, sky, people, cars, and signs. This comprehensive categorization enables detailed scene understanding and segmentation, allowing for a wide range of applications. The dataset encompasses a rich variety of street views, encompassing the charm of residential neighborhoods and the dynamic energy of bustling city centers. To ensure the highest quality and accuracy, each image is meticulously annotated at the instance mask level. For this labeling process, we enlisted the expertise of two dedicated annotators, guaranteeing the precision of our dataset. 80% of the data are used for training, 10% for validation, and 10% for testing. As a result, CFP is an invaluable resource for many applications. Researchers and practitioners alike can leverage this dataset to gain deep insights into urban environments, enhance their algorithms, and address real-world challenges related to facade parsing, scene understanding, and more.
_ECP 2011 Teboul (2010)._ The ECP dataset comprises 104 rectified images showcasing facades characterized by Haussmannian-style architecture. Due to the inherent imprecision in the original annotations, we have adopted the annotations established in prior research Ma et al. (2020). These annotations classify elements into eight distinct categories: windows, walls, balconies, doors, roofs, chimneys, skies, and shops.
_RueMonge 2014 Riemenschneider et al. (2014)._ This dataset has 428 high-resolution and multiview images from Paris streets. It was designed to support the development and evaluation of algorithms and models for various urban scene understanding tasks, such as visual localization, 3D reconstruction, and visual odometry. It provides annotations with seven semantic categories for 2D images, including door, shop, balcony, window, wall, roof, and sky. It is worth noting that there are actually only 32 unique facade images in this dataset. Thus, the number of instances in each class is far smaller than the statistics we report in Table 2.
_Metrics._ We evaluate the segmentation results using pixel-level accuracy (Acc), the average accuracy of prediction for each category (Class avg.), F1-score for quantifying imbalanced classes, and mean intersection over union (mIoU). Here, we denote hist as the confusion matrix for pixel-level classification. The definition for \(Acc\) is as follows:
\[Acc=\frac{TP+TN}{TP+TN+FP+FN}, \tag{6}\]
where \(TP\), \(TN\), \(FP\), and \(FN\) are the number of true positive, true negative, false positive, and false negative. The definition of F1-score is as follows:
\[Precision=\frac{TP}{TP+FP}, \tag{7}\] \[Recall=\frac{TP}{TP+FN},\] (8) \[F1-score=2*\frac{Precision*Recall}{Precision+Recall}. \tag{9}\]
It is used to assess the accuracy and robustness of the model's segmentation results, especially when dealing with imbalanced class distributions.
Class avg. is calculated to quantify per-class accuracy as follows:
\[Class\_avg=\frac{1}{K}\sum_{i=1}^{K}\frac{diag(hist)[i]}{\sum(hist[:,i])}, \tag{10}\]
where \(diag(\cdot)\) is diagonal value.
The mean intersection over union (mIoU) is computed as the average of the class-wise IoU:
\[mIoU=\frac{1}{N}\sum_{i=1}^{N}IoU_{i}, \tag{11}\]
where \(N\) is the number of classes and IoU\({}_{i}\) is the IoU value for class \(i\).
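For concreteness, all four metrics can be computed from the confusion matrix as in the sketch below. The matrix orientation (ground truth on rows, predictions on columns) and the macro-averaged F1-score are assumptions on our side, so the exact conventions may differ from the implementation used for the tables.

```python
import numpy as np

def segmentation_metrics(hist):
    """hist[i, j]: number of pixels of ground-truth class i predicted as class j."""
    tp = np.diag(hist).astype(float)
    gt = hist.sum(axis=1).astype(float)            # pixels per ground-truth class
    pred = hist.sum(axis=0).astype(float)          # pixels per predicted class
    acc = tp.sum() / hist.sum()                    # overall pixel accuracy
    class_avg = np.mean(tp / np.maximum(gt, 1))    # mean per-class accuracy
    precision = tp / np.maximum(pred, 1)
    recall = tp / np.maximum(gt, 1)
    f1 = np.mean(2 * precision * recall / np.maximum(precision + recall, 1e-12))
    iou = tp / np.maximum(gt + pred - tp, 1)       # per-class IoU
    return acc, class_avg, f1, iou.mean()          # Acc, Class_avg, F1-score, mIoU

hist = np.array([[50, 2, 1],
                 [3, 40, 2],
                 [1, 1, 30]])
print(segmentation_metrics(hist))
```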
### Experimental Settings
**ViT Model Structure** The structure of our encoder follows Strudel et al. (2021) and we tried the "Tiny", "Small", "Base", and "Large" model sizes (performance is described in Section 4.5). We also let the models use different resolutions of input image corresponding to patch sizes \(8\times 8\), \(16\times 16\), and \(32\times 32\). For the decoder, we adopt only two layers (\(M=2\)) of transformers with a head number of 8.
**Model Pre-training** Previous works adopt ImageNet Russakovsky et al. (2015) initialization for easier training. However, the data distribution between ImageNet and facade datasets is quite different. The parameters pre-trained on ImageNet are unsuitable for the initialization of the ViT model in the facade segmentation task. We thus adopt an MAE He et al. (2022) pre-training to initialize ViT methods. As MAE is a self-supervised method, we use all three datasets as well
| class | ECP2011 | RueMonge 2014 | CFP |
| --- | --- | --- | --- |
| building | 104 | 32 | 1545 |
| window | 2976 | 8416 | 12048 |
| balcony | 1991 | 2620 | - |
| wall | 104 | 428 | - |
| door | 94 | 311 | 896 |
| roof | 104 | 428 | 383 |
| shop | 198 | 642 | - |
| tree | - | - | 611 |
| sky | - | 402 | 720 |
| people | - | - | 446 |
| car | - | - | 531 |
| sign | - | - | 923 |
| total | 5580 | 13279 | 18103 |

Table 2: Number of objects in the three datasets.
as Cityscapes Cordts et al. (2016) as external data for the pre-training process. MAE is implemented based on its official training pipeline. For all the CNNs-based methods, we use ImageNet initialization.
**Optimizer Setting** We used cross entropy as the loss function and AdamW Kingma and Ba (2015) as the optimizer. Following the experimental setting of DeepLab Chen et al. (2017), we adopt the "poly" learning rate decay. The learning rate is reduced gradually over time during training and starts from 0.0001. For all the settings, we implement 60 epochs. Our experiments are implemented on a single A6000 GPU with an Intel(R) Xeon(R) W-2255 CPU.
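A small sketch of the optimization pieces described above (cross-entropy loss, AdamW, and the "poly" decay starting from 0.0001) is given below. The decay power of 0.9 and the iteration budget are assumptions borrowed from common DeepLab-style settings rather than values stated in this paper, and the tiny convolution merely stands in for the segmentation model.

```python
import torch

model = torch.nn.Conv2d(3, 9, 1)                  # stand-in for the segmentation model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
max_iters, power = 60 * 1000, 0.9                 # 60 epochs x (assumed) iterations/epoch
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda it: (1 - it / max_iters) ** power)   # "poly" decay
criterion = torch.nn.CrossEntropyLoss()

for it in range(3):                               # a few dummy steps
    x = torch.randn(2, 3, 64, 64)
    target = torch.randint(0, 9, (2, 64, 64))
    loss = criterion(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
    print(it, loss.item(), scheduler.get_last_lr())
```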
**Augmentation** We adopt mean subtraction, a ratio between 0.5 and 2.0 for the random resizing of the image, and random flipping from left to right. We randomly crop images to a fixed size of \(640\times 640\) for the proposed dataset and \(448\times 448\) for other datasets. During inference, we adopt dense cropping over the whole image Chen et al. (2017).
### Comparison to Previous Works
In this section, we compared our method to SOTA methods in semantic segmentation tasks (PSPNet Zhao et al. (2017), Deeplabv3+ Chen et al. (2017), OCR Yuan et al. (2020), Swin-L UperNet Liu et al. (2021), SETR-L MLA Zheng et al. (2021), and Segmenter Strudel et al. (2021)) and methods specially designed for facade segmentation tasks (DeepFacade Liu et al. (2017), FacMagNet Dai et al. (2021), and ALKNet Ma et al. (2020)). For CFP, our revision method is only applied to windows and doors (other classes are rarely quadrilateral). For ECP 2011 and RueMonge 2014, revision is implemented for all classes besides building and sky. We aim to showcase the effectiveness and accuracy of our method by evaluating its performance against these prominent baselines. The following discussion outlines the evaluation of the three datasets used for the comparison, providing a thorough analysis of the results obtained.
As presented in Table 3, we first showcase the experimental results obtained on our CFP dataset. Notably, our RTFP consistently outperforms all competitive methods, with the best performance emerging when utilizing ViT-Base with a patch size of 16 (for more comprehensive insights, refer to Section 4.5). The comparison also indicates that ViT-based approaches exhibit superior performance compared to their CNN-based counterparts. For instance, the mIoU of Segmenter, a ViT-based method, is approximately 0.71% higher than that of the CNN-based Deeplabv3+. This observation substantiates our hypothesis that the ViT architecture holds promise in the context of facade segmentation tasks. Its holistic perception capability aligns well with the requirements of comprehending complex facade scenes, ultimately contributing to improved segmentation outcomes. We also find that methods with a heavy extra revision module (e.g., DeepFacade and FacMagNet) obtain better results. We further demonstrate the performance comparison over ECP and RueMonge (Table 4 and Table 5), respec
| Device | FacMagNet Dai et al. (2021) | DeepFacade Liu et al. (2017) | RTFP |
| --- | --- | --- | --- |
| CPU | 5.62 s | 6.25 s | **1.97 s** |
| GPU | 1.03 s | 1.44 s | - |

Table 6: Computation cost comparison to previous facade segmentation methods using CPU/GPU.
| Method | Acc | Class_avg | F1-score | mIoU |
| --- | --- | --- | --- | --- |
| PSPNet Zhao et al. (2017) | 93.62 | 90.95 | 91.08 | 83.76 |
| DANet Fu et al. (2019) | 93.70 | 91.04 | 92.11 | 84.02 |
| Deeplabv3+ Chen et al. (2017) | 93.75 | 91.20 | 91.34 | 84.26 |
| Segmenter Strudel et al. (2021) | 93.82 | 91.63 | 91.81 | 84.68 |
| Femiani et al. Femiani et al. (2018) | 82.79 | 79.06 | 79.21 | 72.25 |
| Rahmani et al. Rahmani and Mayer (2018) | 92.20 | 91.00 | - | - |
| DeepFacade Liu et al. (2017) | 93.86 | 91.75 | 91.86 | 84.75 |
| ALKNet Ma et al. (2020) | 93.88 | 91.80 | 91.98 | 84.81 |
| RTFP ViT-B/16 _(Ours)_ | **93.92** | **91.88** | **92.13** | **84.93** |

Table 4: Segmentation results on the ECP 2011 dataset.
| Method | Backbone | Acc | Class_avg | F1-score | mIoU |
| --- | --- | --- | --- | --- | --- |
| PSPNet Zhao et al. (2017) | ResNet-101 | 88.32 | 78.01 | 78.47 | 60.03 |
| Deeplabv3+ Chen et al. (2017) | ResNeS-101 | 88.48 | 78.33 | 78.90 | 60.89 |
| OCR Yuan et al. (2020) | HRNetV2 | 87.85 | 76.95 | 77.42 | 58.02 |
| Swin-L UperNet Liu et al. (2021) | Swin-B/16 | 87.95 | 77.13 | 77.58 | 58.65 |
| SETR-L MLA Zheng et al. (2021) | ViT-B/16 | 88.30 | 77.96 | 78.44 | 60.14 |
| Segmenter Strudel et al. (2021) | ViT-B/16 | 88.72 | 79.35 | 79.78 | 61.60 |
| DeepFacade Liu et al. (2017) | FCN+Faster-RCNN | 88.47 | 78.30 | 78.81 | 60.85 |
| Femiani et al. Femiani et al. (2018) | AlexNet | 86.19 | 70.51 | 71.45 | 50.22 |
| ALKNet Ma et al. (2020) | ResNet-FCN | 88.76 | 79.38 | 79.86 | 61.74 |
| FacMagNet Dai et al. (2021) | FCN+Mask-RCNN | 88.41 | 78.25 | 78.94 | 60.62 |
| RTFP _(Ours)_ | ViT-B/16+LAFR | **88.80** | **79.75** | **80.63** | **61.95** |
| RTFP _(Ours)_ | ViT-L/16+LAFR | 88.78 | 79.47 | 80.06 | 61.87 |

Table 3: Segmentation results on the proposed CFP dataset. We compare the performance of vanilla segmentation models and models designed for facade parsing.
| Method | Acc | Class_avg | F1-score | mIoU |
| --- | --- | --- | --- | --- |
| PSPNet Zhao et al. (2017) | 93.62 | 90.95 | 91.08 | 83.76 |
| DANet Fu et al. (2019) | 93.70 | 91.04 | 92.11 | 84.02 |
| Deeplabv3+ Chen et al. (2017) | 93.75 | 91.20 | 91.34 | 84.26 |
| Segmenter Strudel et al. (2021) | 93.82 | 91.63 | 91.81 | 84.68 |
| Femiani et al. Femiani et al. (2018) | 82.79 | 79.06 | 79.21 | 72.25 |
| Rahmani et al. Rahmani and Mayer (2018) | 92.20 | 91.00 | - | - |
| DeepFacade Liu et al. (2017) | 93.86 | 91.75 | 91.86 | 84.75 |
| ALKNet Ma et al. (2020) | 93.88 | 91.80 | 91.98 | 84.81 |
| RTFP ViT-B/16 _(Ours)_ | **93.92** | **91.88** | **92.13** | **84.93** |

Table 5: Segmentation results on the RueMonge 2014 dataset.
tively. RTFP still shows its superiority over previous works.
Furthermore, we provide an in-depth analysis of the IoU scores for each individual class, as outlined in Table 7. Notably, beyond attaining superior IoU performance compared to prior methods, the distinctive advantage of our LAFR algorithm is also prominently apparent upon scrutinizing these outcomes. Since we employ Segmenter as our ViT structure for preliminary segmentation, compared to the results of the raw Segmenter, the incremental IoU improvements facilitated by the LAFR algorithm are evident in the building, window, and door classes, with enhancements of approximately 0.21%, 1.98%, and 0.28%, respectively. This compelling evidence underscores LAFR's capacity to refine preliminary segmentation outputs precisely. It is pertinent to mention that, since the LAFR algorithm requires no training, it can be applied across various segmentation models. We illustrate this compatibility in Section 4.5, showcasing its ability to integrate with diverse segmentation architectures.
In Figure 5, we systematically compare the qualitative results of our RTFP model and several competing approaches. Notably, the outcome achieved by the ViT-based Segmenter surpasses that of the CNN-based PSPNet in terms of coherence and delineation clarity. Additionally, the ViT-based Segmenter excels in predicting classes with limited instance samples, a prime example being people and cars. Despite these advancements, it's important to highlight an observable limi
| Method | building | window | sky | roof | door | tree | people | car | sign | mIoU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PSPNet Zhao et al. (2017) | 83.44 | 62.47 | 92.45 | 48.81 | 47.53 | 52.55 | 41.47 | 76.71 | 35.02 | 60.03 |
| Segmenter Strudel et al. (2021) | 85.28 | 63.34 | 93.20 | 54.72 | 54.45 | 52.61 | 41.47 | 78.03 | 31.58 | 61.60 |
| Femiani et al. (2018) | 80.93 | 59.81 | 90.66 | 38.52 | 23.37 | 51.50 | 08.17 | 73.74 | 25.26 | 50.22 |
| DeepFacade Liu et al. (2017) | 85.71 | 64.68 | 93.25 | 55.74 | 53.58 | 52.59 | 41.43 | 78.02 | 31.64 | 61.85 |
| FacMagNet Dai et al. (2021) | 84.81 | 65.19 | 92.25 | 52.98 | 54.07 | 53.24 | 34.29 | 77.87 | 30.88 | 60.62 |
| ALKNet Ma et al. (2020) | 83.36 | 62.54 | 92.47 | 48.82 | 46.75 | 54.13 | 28.88 | 76.79 | 36.00 | 58.86 |
| RTFP ViT-B/16 _(Ours)_ | **85.49** | **65.32** | 93.15 | 54.72 | **54.73** | 52.86 | 41.47 | 77.96 | 31.81 | **61.95** |

Table 7: Segmentation results of each class on our CFP dataset. All results are evaluated by mIoU.
Figure 5: Qualitative comparison to previous segmentation methods on our CFP dataset.
tation. The precision of window outlines remains somewhat untidy, thus inevitably affecting the overall accuracy of building predictions. Among the methods designed for facade revision, DeepFacade directly used the bounding box detection results from Faster-RCNN to revise the facade prediction. This may work well for rectified scenes, but most images in our CFP dataset are taken from different angles. Our LAFR algorithm uses the line-segment structure of the facade element itself to generate a conformable outline, which greatly improves the accuracy of the correction. It is obvious that our segmentation results show a straighter shape for windows and doors. However, LAFR does not succeed for all facade instances. We can find a few samples that fail to be revised.
Efficiency stands as a distinct advantage of RTFP over previous facade segmentation methods reliant on object detection. Traditionally, object detection models necessitate substantial GPU memory for computation, leading to time-intensive processes. Consequently, this approach may not align with the cost-effectiveness demanded by real-world applications. In Table 6, we undertake a comprehensive comparison of RTFP's computational efficiency against DeepFacade (utilizing Faster-RCNN) and FacMagNet (employing Mask-RCNN). This analysis specifically focuses on the computational costs attributed to the revision module, omitting the computational load posed by the backbones. The results demonstrate that our proposed LAFR algorithm outpaces the CPU configurations of both DeepFacade and FacMagNet by a noticeable margin (4.28 s and 3.65 s faster, respectively). Remarkably, even when constrained to CPU resources alone, RTFP's computational costs remain competitive. This attests to its potential suitability for real-world applications, unburdened by device limitations.
### Revision Demonstration
In Figure 6, we present a series of samples illustrating the performance of our LAFR pipeline. The images are arranged from left to right, showcasing the following components: the input image, the predicted mask generated by the segmentation model, the facade element mask, all detected lines using the LSD method, transformed lines after undergoing the LAFR process, the revised mask, and the ground truth. A noteworthy observation is the irregular outline of windows in the predicted mask produced by the segmentation model, highlighting a limitation of conventional segmentation models. This irregularity is particularly evident in the window masks displayed in the third column. In the fourth column, we display the results of line detection using the LSD algorithm. Notably, LSD detects a substantial number of line segments, many of which closely adhere to the edges of buildings and windows. This observation substantiates our hypothesis that employing a straightforward line segment detection approach can yield precise window positioning. However, it is in the fifth column, where the line segments have been transformed through the LAFR algorithm, that we witness a marked improvement. These integrated line segments accurately delineate the windows, demonstrating the potential for revising the original prediction mask in subsequent iterations of our pipeline.
LAFR exhibits strong performance across the majority of scenarios within our dataset. However, its effectiveness is notably influenced by the quality of the initial predicted mask. Predicted masks are relatively high quality in the first to third rows. Based on such prior, LAFR excels in providing valuable revision guidance. Conversely, the last two examples present a challenge. These instances involve numerous window elements, and the predictions for windows may be either incorrect or incomplete. In such situations, LAFR faces the possibility of either refraining from revising certain windows or making erroneous revisions. The dependence on segmentation model results represents a limitation of our current LAFR implementation. This phenomenon is also explored in the compatibility experiments in Table 8. We acknowledge this limitation and
Figure 6: Inference samples of our LAFR pipeline. From left to right are the input image, the predicted mask generated by the segmentation model, the facade element mask (for windows), all detected lines using the LSD method, transformed lines after undergoing the LAFR process, the revised mask, and the ground truth.
are committed to addressing it in our future research endeavors, striving to enhance the robustness and reliability of our revision algorithm.
### Ablation Study
**Compatibility** Our LAFR algorithm offers a straightforward approach to revision and is adaptable to various model architectures beyond just the Segmenter Strudel et al. (2021). In Table 8, we extend the application of LAFR to other models and analyze its impact. Notably, we observe a marked enhancement in performance for PSPNet Zhao et al. (2017) and Deeplabv3+ Chen et al. (2017). Conversely, a performance decrement is evident when applying LAFR to UNet Ronneberger et al. (2015) and FCN Long et al. (2015). As discussed in Section 3.3, LAFR relies on the quality of the initial segmentation results. UNet and FCN, being early pioneers in semantic segmentation, may exhibit reduced accuracy on our dataset due to their older design paradigms. In summation, our findings suggest that the efficacy of LAFR is notably bolstered by the utilization of advanced segmentation models, underscoring the importance of employing state-of-the-art architectures for optimal performance.
**Pre-training** A key focus of our research is to investigate the impact of pre-trained models on the efficacy of facade segmentation tasks. To ensure a fair and consistent evaluation, all experiments adopt our RTFP as the default configuration and adhere to an identical training protocol. As illustrated in Table 9, it becomes evident that MAE-based pre-training consistently outperforms other methods across all three datasets. Notably, there is a substantial performance gap when compared to models pre-trained on ImageNet, which lags behind. Conversely, models initialized with random weights yield notably inferior results. These findings provide robust evidence of the effectiveness of MAE-based pre-training for enhancing the performance of models in facade segmentation tasks
**ViT Structure** The configuration of the ViT model plays a pivotal role in influencing the performance of our RTFP. To investigate its impact, we conducted an experiment focusing on two crucial aspects: the model size and patch size. As depicted in Figure 7, our findings reveal a clear trend: both increasing the ViT model's size and reducing the patch size positively influence prediction accuracy. Nevertheless, it's worth noting that the improvement from the "Base" model to the "large" model appears to be relatively marginal. However, substantial computational demands are posed by larger ViT models, especially when dealing with smaller patches. In light of these observations, we recommend the utilization of the "Base" ViT model with a patch size of 16. This configuration strikes a practical balance between prediction performance and computational efficiency, making it an optimal choice for the RTFP system.
## 5 Conclusion
In this paper, we released a new dataset named CFP that serves as the benchmark for facade parsing. The creation of this dataset involved meticulous data collection and annotation, ensuring its high quality and relevance to real-world scenarios. Previous works have primarily focused on offering datasets comprising simplistic single-family building facade images, while our dataset takes a more comprehensive data source. Recognizing the intricate demands of real-world applications, our collection spans a diverse range of building facade images, ranging from straightforward residential contexts to the intricacies of densely populated urban areas. Furthermore, our dataset encompasses images of buildings captured from various angles, providing a richer and more comprehensive representation of architectural diversity in urban environments. We aim to foster collaboration and facilitate fair comparisons between different methods by offering a common dataset for evaluation. This standardized benchmark will accelerate progress and promote transparency and reproducibility in facade parsing research. We believe that CFP will significantly contribute to advancing state-of-the-art facade parsing, providing researchers and practitioners with a valuable resource for evaluating and comparing their algorithms.
We also proposed a new facade parsing pipeline RTFP based on vision transformers and line integration. Our empirical findings underscore the remarkable advantages of employing ViT. Notably, the incorporation of the pre-training method, MAE, amplifies the prowess of ViT even further. These results are indicative of ViT's immense potential for application across a spectrum of facade parsing scenarios. With its inherent capability to capture comprehensive global context within images,
| Pre-training | ECP 2011 Acc | ECP 2011 mIoU | RueMonge Acc | RueMonge mIoU | CFP Acc | CFP mIoU |
| --- | --- | --- | --- | --- | --- | --- |
| Random | 29.27 | 12.11 | 23.55 | 08.17 | 32.50 | 19.24 |
| ImageNet | 93.55 | 83.49 | 87.76 | 72.27 | 88.39 | 59.41 |
| MAE | **93.92** | **84.93** | **88.12** | **73.46** | **88.80** | **61.95** |

Table 9: The segmentation performance using different pre-training. All experiments use our RTFP as the default setting and implement the same training process.
| Segmentation Model | Acc | mIoU |
| --- | --- | --- |
| UNet Ronneberger et al. (2015) | -0.42 | -1.91 |
| FCN Long et al. (2015) | -0.25 | -1.34 |
| PSPNet Zhao et al. (2017) | +0.02 | +0.10 |
| Deeplabv3+ Chen et al. (2017) | +0.07 | +0.27 |
| Segmenter Strudel et al. (2021) | +0.08 | +0.35 |

Table 8: Experiments for compatibility evaluation.
Figure 7: Ablation experiments over ViT model structure and patch size (quantified with mIoU).
we envision ViT as a versatile tool for a wide array of facade parsing applications. ViT exhibits a unique proficiency in discerning intricate relationships among distant elements within a facade, a pivotal factor for achieving accurate parsing of complex architectural structures. This, in turn, facilitates finer-grained segmentation and a deeper understanding of the constituent components comprising facades. In addition, an efficient yet accurate revision method, LAFR, is designed to further improve the segmentation results. Leveraging prior knowledge of building facades, we demonstrate that simple line segment detection and integration can match or exceed methods based on additional object detection models. However, our method currently only works on basic facade elements (e.g., windows and doors in our CFP) and relies on the results of the segmentation model. Our future research will aim to improve these shortcomings.
As we conclude this work, we are poised to continue exploring new frontiers in facade parsing, addressing existing limitations, and extending the applicability of these techniques to a broader array of architectural elements. We remain committed to advancing the field, pushing the boundaries of what is possible, and contributing to the ever-evolving landscape of computer vision and architectural analysis.
## Acknowledgment
This work is partly supported by JSPS KAKENHI Grant Number 19K10662, 20K23343, 21K17764, and 22H03353. We also appreciate Tong Zhao and Yuting Wang for their annotation of our dataset during this research.
|
2309.03427 | On the invariant subspace problem via universal Toeplitz operators on
the Hardy space $H^{2}(\mathbb{D}^{2})$ | The Invariant Subspace Problem (ISP) for Hilbert spaces asks if every bounded
linear operator has a non-trivial closed invariant subspace. Due to the
existence of universal operators (in the sense of Rota) the ISP can be solved
by proving that every minimal invariant subspace of a universal operator is one
dimensional. In this paper, we obtain a nontrivial invariant subspace of
$T^{*}_{\varphi}|_{M}$, where $T_{\varphi}$ is the Toeplitz operator on the
Hardy space over the bidisk $H^{2}(\mathbb{D}^{2})$ induced by the symbol
$\varphi\in H^{\infty}(\mathbb{D})$ and $M$ is a $T_{\varphi}^{*}$-invariant
subspace. We use this fact to get sufficient conditions for the ISP. | JoΓ£o Marcos R. do Carmo, Marcos S. Ferreira | 2023-09-07T01:13:28Z | http://arxiv.org/abs/2309.03427v1 | On the invariant subspace problem via universal Toeplitz operators on the Hardy space \(H^{2}(\mathbb{D}^{2})\)
###### Abstract.
The _Invariant Subspace Problem_ (ISP) for Hilbert spaces asks if every bounded linear operator has a non-trivial closed invariant subspace. Due to the existence of universal operators (in the sense of Rota) the ISP can be solved by proving that every minimal invariant subspace of a universal operator is one dimensional. In this paper, we obtain a nontrivial invariant subspace of \(T_{\varphi}^{*}|_{M}\), where \(T_{\varphi}\) is the Toeplitz operator on the Hardy space over the bidisk \(H^{2}(\mathbb{D}^{2})\) induced by the symbol \(\varphi\in H^{\infty}(\mathbb{D})\) and \(M\) is a \(T_{\varphi}^{*}\)-invariant subspace. We use this fact to get sufficient conditions for the ISP.
Key words and phrases:Invariant subspace problem, Universal operator, Toeplitz operator, Hardy space 2020 Mathematics Subject Classification: Primary 30H10, 47B35; Secondary 47A15
## 1. Introduction
The _Invariant Subspace Problem_ (ISP) is one of the most important open problems in functional analysis; for separable, infinite dimensional Hilbert spaces it asks: given a Hilbert space \(\mathcal{H}\) and a bounded linear operator \(T\) on \(\mathcal{H}\), does \(T\) have a nontrivial closed invariant subspace? In recent years, several operator theorists have been developing approaches in an attempt to solve the ISP. Among these, we highlight research involving Toeplitz operators and composition operators on the Hardy space and related spaces [4, 5, 6, 7, 8], as well as the classical book [12] by Radjavi and Rosenthal and the monograph by Chalendar and Partington [3].
In 1960, Rota [14] introduced the idea of an operator with an invariant subspace structure so rich as to model every Hilbert space operator. The notion of this operator is what we call today the universal operator.
**Definition 1**.: _[_3_, p. 213]_ _Let \(\mathcal{H}\) be a Hilbert space, U a bounded operator on \(\mathcal{H}\) and \(\mathcal{L}(\mathcal{H})\) the algebra of bounded operators on \(\mathcal{H}\). We say U is universal for \(\mathcal{H}\) if for each non zero \(A\in\mathcal{L}(\mathcal{H})\) there is an invariant subspace M for U and a non zero number \(\lambda\) such that \(\lambda A\) is similar to \(U|_{M}\), that is, there is a linear isomorphism X of \(\mathcal{H}\) onto M such that \(UX=\lambda XA\)._
The main tool for obtaining universal operators is the following Caradus criterion.
**Theorem 2**.: _[_2_, p. 527]_ _If \(\mathcal{H}\) is a separable Hilbert space and \(U\in\mathcal{L}(\mathcal{H})\) such that:_
1. _The null space of U is infinite dimensional._
2. _The range of U is_ \(\mathcal{H}\)_,_
_then \(U\) is universal for \(\mathcal{H}\)._
A well-known example of a universal operator on \(\mathcal{H}\) is the adjoint of a unilateral shift of infinite multiplicity, which was introduced by Rota. In fact, considering \(S^{*}\) acting on
\[\ell^{2}(\mathcal{H})=\left\{(f_{n})_{n=0}^{\infty}:\sum_{n=0}^{\infty}||f_{n}|| _{\mathcal{H}}^{2}<\infty\right\}\]
by
\[S^{*}(f_{0},f_{1},f_{2},\cdots)=(f_{1},f_{2},\cdots)\]
we have that \(S^{*}\) satisfies the Caradus criterion and so is a universal operator. Indeed, \(\ker S^{*}=\{(f_{0},0,0,\dots):f_{0}\in\mathcal{H}\}\cong\mathcal{H}\) is infinite dimensional, and \(S^{*}\) is surjective since \(S^{*}(0,g_{0},g_{1},\dots)=(g_{0},g_{1},\dots)\) for every \((g_{0},g_{1},\dots)\in\ell^{2}(\mathcal{H})\). Following this same idea, we obtain other interesting examples of universal operators, namely the adjoints of Toeplitz operators on the Hardy space over the disk induced by analytic symbols. In this direction, Cowen and Gallardo-Gutierrez showed the following.
**Theorem 3**.: _[_4_, Theorem 5]_ _Let \(\varphi\in H^{\infty}(\mathbb{D})\) such that \(1/\varphi\in L^{\infty}(\mathbb{T})\). If the Toeplitz operator \(t_{\varphi}^{*}\) has infinite dimensional kernel, then \(t_{\varphi}^{*}\) is universal for \(H^{2}(\mathbb{D})\)._
For Toeplitz operators on the Hardy space over the polydisk, Ferreira and Noor showed the following.
**Theorem 4**.: _([9, Theorem 1]) Let \(\varphi\in H^{\infty}(\mathbb{D}^{n})\) for \(n>1\). Then \(T_{\varphi}^{*}\) satisfies the Caradus criterion for universality if, and only if, \(\varphi\) is invertible in \(L^{\infty}(\mathbb{T}^{n})\) but non-invertible in \(H^{\infty}(\mathbb{D}^{n})\)._
In particular the backward shift operators \(T_{z_{1}}^{*},\dots,T_{z_{n}}^{*}\) are universal when \(n>1\).
**An equivalent version of the ISP.**_If \(U\) is universal for a separable, infinite dimensional Hilbert space \(\mathcal{H}\), then the ISP is equivalent to the assertion that every minimal invariant subspace for \(U\) is one dimensional._
In this work we will use this statement to obtain sufficient conditions for the ISP. More precisely, we will obtain conditions on \(\varphi\in H^{\infty}(\mathbb{D})\) to show that \(T_{\varphi}^{*}\) on \(H^{2}(\mathbb{D}^{2})\) is an universal operator and that no nontrivial invariant subspace of \(T_{\varphi}^{*}\) is minimal.
The purpose of this paper is the following. In Section 2, we collect some of the preliminaries. In Section 3, we introduce the concept of translation generalized inner function. In Section 4, we provide sufficient conditions for ISP to be true (Theorem 17). For that, we need two result sets (Theorems 13, 14 and 16). In Theorems 13 and 14, we consider \(\varphi\in H^{\infty}(\mathbb{D})\) and \(M\subset H^{2}(\mathbb{D}^{2})\) a \(T_{\varphi}^{*}\)-invariant subspace and we obtain conditions such that \(T_{\varphi}^{*}|_{M}\) has a proper invariant subspace. In Theorem 16, we introduce the operator \(J_{g,\varphi}\) so that \(M\) does not have a nontrivial invariant subspace.
## 2. Preliminaries
### The Hardy space \(H^{2}(\mathbb{D}^{2})\)
Let \(\mathbb{D}\) be the unit disk in the complex plane \(\mathbb{C}\) and \(\mathbb{T}=\partial\mathbb{D}\) its boundary. The bidisk \(\mathbb{D}^{2}\) and \(2\)-torus \(\mathbb{T}^{2}\) are the Cartesian product of \(2\) copies of \(\mathbb{D}\) and \(\mathbb{T}\) respectively. Let \(L^{2}(\mathbb{T}^{2})\) denote the usual Lebesgue space and \(L^{\infty}(\mathbb{T}^{2})\) the essentially bounded functions with respect to the normalized Haar measure \(\sigma\).
The Hardy space on the bidisk \(H^{2}(\mathbb{D}^{2})\) is defined as the class of all holomorphic functions \(f\) on \(\mathbb{D}^{2}\) for which
\[\|f\|^{2}=\sup_{0<r<1}\int_{\mathbb{T}^{2}}|f(r\zeta)|^{2}d\sigma(\zeta)<\infty.\]
It is well known that if \(f\in H^{2}(\mathbb{D}^{2})\), then the radial limit
\[f^{*}(\zeta)=\lim_{r\to 1}f(r\zeta)\]
exists for almost all \(\zeta\in\mathbb{T}^{2}\) and
\[\lim_{r\to 1}\int_{\mathbb{T}^{2}}|f_{r}-f^{*}|^{2}d\sigma=0,\]
where \(f_{r}(\zeta)=f(r\zeta)\) for all \(\zeta\in\mathbb{T}^{2}\). Thus, \(H^{2}(\mathbb{D}^{2})\) can be viewed as a closed subspace of \(L^{2}(\mathbb{T}^{2})\).
Denote by \(H^{\infty}(\mathbb{D}^{2})\) the space of bounded analytic functions on \(\mathbb{D}^{2}\). Also via radial limits, \(H^{\infty}(\mathbb{D}^{2})\) can be seen as a subspace of \(L^{\infty}(\mathbb{T}^{2})\). An inner function in \(\mathbb{D}^{2}\) is a function \(f\in H^{\infty}(\mathbb{D}^{2})\) such that \(|f^{*}|=1\) a.e. on \(\mathbb{T}^{2}\).
Let \(P\) denote the orthogonal projection from \(L^{2}(\mathbb{T}^{2})\) onto \(H^{2}(\mathbb{D}^{2})\). For a function \(\varphi\in L^{\infty}(\mathbb{T}^{2})\), the Toeplitz operator \(T_{\varphi}\) with symbol \(\varphi\) is defined by
\[T_{\varphi}f=P(\varphi f)\]
for \(f\in H^{2}(\mathbb{D}^{2})\). Then \(T_{\varphi}\) is a bounded linear operator on \(H^{2}(\mathbb{D}^{2})\) and its adjoint is given by \(T_{\varphi}^{*}=T_{\overline{\varphi}}\). Similarly, for a given \(\varphi\in L^{\infty}(\mathbb{T})\), the \(1\)-dimensional Toeplitz operator \(t_{\varphi}\) with symbol \(\varphi\) is the bounded linear operator on \(H^{2}(\mathbb{D})\) defined by
\[t_{\varphi}f=Q(\varphi f)\]
for \(f\in H^{2}(\mathbb{D})\), where \(Q\) is the orthogonal projection from \(L^{2}(\mathbb{T})\) onto \(H^{2}(\mathbb{D})\).
In this work, we will often decompose the space \(H^{2}(\mathbb{D}^{2})\) as follows. Let \(H^{2}(z)\) and \(H^{2}(w)\) denote the classical Hardy spaces over \(\mathbb{D}\) in the variables \(z\) and \(w\) respectively. Then \(H^{2}(\mathbb{D}^{2})\) may be defined as the \(H^{2}(z)\)-valued Hardy space
\[H^{2}(\mathbb{D}^{2})=\left\{g(z,w)=\sum_{n=0}^{\infty}g_{n}(z)w^{n}:\sum_{n=0 }^{\infty}\|g_{n}\|_{H^{2}(z)}^{2}<\infty\right\}.\]
Thus, considering \(H_{n}=H^{2}(z)w^{n}\) for each \(n\in\mathbb{N}\), we have
\[H^{2}(\mathbb{D}^{2})=\bigoplus_{n=0}^{\infty}H_{n}.\]
For each \(n\in\mathbb{N}\), we denote by \(P_{n}\) the orthogonal projection of \(H^{2}(\mathbb{D}^{2})\) onto \(H_{n}\).
### Invariant subspaces of \(H^{2}(\mathbb{D}^{2})\)
Let \(\varphi\in H^{\infty}(\mathbb{D}^{2})\). A closed subspace \(M\subset H^{2}(\mathbb{D}^{2})\) is said to be \(T_{\varphi}\)-invariant if \(\varphi M\subset M\). Beurling's theorem states that every nontrivial invariant subspace of \(t_{z}\) is of the form \(M=\varphi H^{2}(\mathbb{D})\), where \(\varphi\) is an inner function in \(\mathbb{D}\). Thus \(M\) is a cyclic subspace, i.e.
\[M=\overline{\operatorname{span}\{(t_{z})^{n}\varphi:n\in\mathbb{N}\}}.\]
Note that Beurling's theorem cannot be naturally extended to multivariable functions. In fact, considering the polynomial ring \(\mathcal{R}=\mathbb{C}[z,w]\), Rudin [13] observed that the invariant subspace
\[[z-w]:=\overline{\{(z-w)p:p\in\mathcal{R}\}}\]
is not of the form \(\varphi H^{2}(\mathbb{D}^{2})\) for any inner function \(\varphi\in H^{\infty}(\mathbb{D}^{2})\).
Another property that cannot be transferred when working with invariant subspaces and universality on the Hardy space over the polydisk is as follows. \(zH^{2}(\mathbb{D})\) is a nontrivial invariant subspace of \(t_{z}\) but \(t_{z}^{*}\) is not a universal operator for \(H^{2}(\mathbb{D})\) since \(\dim\ker t_{\overline{z}}=1\). However, over \(H^{2}(\mathbb{D}^{2})\) we have the following:
**Proposition 5**.: _Let \(\varphi\in H^{\infty}(\mathbb{D}^{2})\). Then \(T_{\varphi}^{*}\) satisfies the Caradus criterion for universality if, and only if, \(\varphi H^{2}(\mathbb{D}^{2})\) is a nontrivial invariant subspace of \(H^{2}(\mathbb{D}^{2})\)._
Proof.: Since \(\varphi\in H^{\infty}(\mathbb{D}^{2})\) and \(T_{\varphi}^{*}\) is surjective, we have \(1/\varphi\in L^{\infty}(\mathbb{T}^{2})\). Hence \(\varphi H^{2}(\mathbb{D}^{2})\) is an invariant subspace of \(H^{2}(\mathbb{D}^{2})\) by [10, Thm. 2]. On the other hand, since \(\operatorname{Ker}(T_{\varphi}^{*})=[\varphi H^{2}(\mathbb{D}^{2})]^{\perp}\) has infinite dimension, we have by Ahern and Clark [1, p. 969] that \(\varphi H^{2}(\mathbb{D}^{2})\neq H^{2}(\mathbb{D}^{2})\). Conversely, if \(\varphi H^{2}(\mathbb{D}^{2})\) is an invariant subspace of \(H^{2}(\mathbb{D}^{2})\) again by [10, Thm. 2] we have that \(1/\varphi\in L^{\infty}(\mathbb{T}^{2})\). Now since
\[1/\varphi\in H^{\infty}(\mathbb{D}^{2})\Rightarrow\varphi H^{2}(\mathbb{D}^{2 })=H^{2}(\mathbb{D}^{2})\]
and \(\varphi H^{2}(\mathbb{D}^{2})\) is a nontrivial invariant subspace, it follows that \(\varphi\) is non-invertible in \(H^{\infty}(\mathbb{D}^{2})\) and so \(T_{\varphi}^{*}\) satisfies the Caradus criterion by Theorem 4.
We end this section with the following notation. Given \(\varphi\in H^{\infty}(\mathbb{D})\) and \(g\in H^{2}(\mathbb{D}^{2})\), the minimal closed invariant subspace for \(T_{\varphi}^{*}\) that contains \(g\) will be denoted by
\[V_{T_{\varphi}^{*},g}:=\overline{\operatorname{span}\{(T_{\varphi}^{*})^{n}g:n \in\mathbb{N}\}}.\]
## 3. Universal translations of \(T_{\varphi}^{*}\)
In general, not every translation of a universal operator is a universal operator. However, Cowen and Gallardo-Gutierrez [4, Thm. 2] showed that if \(U\in\mathcal{L}(\mathcal{H})\) satisfies the Caradus criterion then there is \(\epsilon>0\) so that for \(|\mu|<\epsilon\), the operator \(U+\mu I\) is universal. Particularly for Toeplitz operators over the Hardy space \(H^{2}(\mathbb{D}^{2})\), we introduce the concept of a symbol \(\varphi\) such that \(T_{\varphi}^{*}\) is not universal but \(T_{\varphi}^{*}+\lambda I\) is universal for some \(\lambda\in\mathbb{C}\).
**Definition 6**.: _We say that \(\varphi\in H^{\infty}(\mathbb{D}^{2})\) is a translation generalized inner function when there are \(z_{0}\in\mathbb{D}^{2}\) and \(\delta>0\) such that_
\[|\varphi^{*}(z)-\varphi(z_{0})|>\delta\text{ a.e. }z\in\mathbb{T}^{2}.\]
**Example 7**.: _It follows from [11, Thm. 2.2.10] that every inner function \(\varphi\in H^{\infty}(\mathbb{D})\) is a translation generalized inner function._
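For instance (a quick check added for illustration, not part of the original text), for the inner function \(\varphi(z)=z\) one may take \(z_{0}=0\) and \(\delta=1/2\), since
\[|\varphi^{*}(\zeta)-\varphi(0)|=|\zeta|=1>\tfrac{1}{2}\quad\text{for all }\zeta\in\mathbb{T}.\]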
The following fact will be useful in Theorem 17.
**Proposition 8**.: _If \(\varphi\in H^{2}(z)\) is an inner function, then \(T_{\varphi}^{*}\) has a universal translation for \(H^{2}(\mathbb{D}^{2})\)._
Proof.: Since \(\varphi\) is an inner function on \(\mathbb{D}\), there are \(z_{0}\in\mathbb{D}\) and \(\delta>0\) such that
\[|\varphi^{*}(z)-\varphi(z_{0})|>\delta\text{ a.e. }z\in\mathbb{T}.\]
Thus, considering \(\psi(z)=\varphi(z)-\varphi(z_{0})\), it follows from Theorem 4 that \(T_{\psi}^{*}=T_{\varphi}^{*}-\overline{\varphi(z_{0})}I\) is a universal operator for \(H^{2}(\mathbb{D}^{2})\).
## 4. Sufficient conditions for the ISP
We begin this section by showing that each \(H_{n}\) is a reducing subspace of \(T_{\varphi}\).
**Proposition 9**.: _Let \(\varphi\in H^{\infty}(z)\). Then \(T_{\varphi}(fw^{n})=(t_{\varphi}f)w^{n}\) and \(T_{\varphi}^{*}(fw^{n})=(t_{\varphi}^{*}f)w^{n}\) for all \(f\in H^{2}(z)\) and \(n\in\mathbb{N}\)._
Proof.: Let \(f\in H^{2}(z)\) and \(n\in\mathbb{N}\). Since \(\varphi\in H^{\infty}(z)\), we have
\[T_{\varphi}(fw^{n})=\varphi fw^{n}=(t_{\varphi}f)w^{n}.\]
Now, for \(g(z,w)=\sum_{n=0}^{\infty}g_{n}(z)w^{n}\in H^{2}(\mathbb{D}^{2})\) it follows that
\[\langle T_{\varphi}g,fw^{n}\rangle=\langle\varphi g_{n},f\rangle=\langle g_{ n},t_{\varphi}^{*}f\rangle=\langle g,(t_{\varphi}^{*}f)w^{n}\rangle,\]
as desired.
In fact, a bit more is true:
**Remark 10**.: _If \(\varphi\in H^{\infty}(z)\), then_
\[T_{\varphi}g=\sum_{n=0}^{\infty}(t_{\varphi}g_{n})w^{n}\text{ and }T_{\varphi}^{*}g= \sum_{n=0}^{\infty}(t_{\varphi}^{*}g_{n})w^{n}\]
_for all \(g(z,w)=\sum_{n=0}^{\infty}g_{n}(z)w^{n}\in H^{2}(\mathbb{D}^{2})\)._
Let \(\varphi\in H^{\infty}(\mathbb{D})\) and \(M\subset H^{2}(\mathbb{D}^{2})\) be a \(T_{\varphi}^{*}\)-invariant subspace. Our initial goal is to obtain conditions such that \(T_{\varphi}^{*}|_{M}\) has a nontrivial invariant subspace. This will be provided with Theorems 13, 14 and 16.
**Lemma 11**.: _If \(\varphi\in H^{\infty}(z)\), then_
\[P_{n}T_{\varphi}^{*}=T_{\varphi}^{*}P_{n}\]
_for all \(n\in\mathbb{N}\)._
Proof.: In fact, for \(g(z,w)=\sum_{m=0}^{\infty}g_{m}(z)w^{m}\in H^{2}(\mathbb{D}^{2})\), we have by Remark 10 and Proposition 9, respectively, that
\[P_{n}T_{\varphi}^{*}(g)=P_{n}\left(\sum_{m=0}^{\infty}(t_{\varphi}^{*}g_{m})w^{m}\right)=(t_{\varphi}^{*}g_{n})w^{n}\]
and
\[T_{\varphi}^{*}P_{n}(g)=T_{\varphi}^{*}(g_{n}w^{n})=(t_{\varphi}^{*}g_{n})w^ {n}.\]
**Lemma 12**.: _Let \(\varphi\in H^{\infty}(z)\). If \(M\subset H^{2}(\mathbb{D}^{2})\) is an invariant subspace of \(T_{\varphi}^{*}\), then there is an invariant subspace \(V\subset H^{2}(\mathbb{D})\) of \(t_{\varphi}^{*}\) such that \(P_{n}(M)\) is dense in \(Vw^{n}\)._
Proof.: Consider \(V_{0}=\{g\in H^{2}(\mathbb{D}):gw^{n}\in P_{n}(M)\}\). It is easy to see that \(V_{0}\) is a subspace of \(H^{2}(\mathbb{D})\). If \(g\in V_{0}\), then there is \(f\in M\) such that \(gw^{n}=P_{n}f\). Thus
\[(t_{\varphi}^{*}g)w^{n}=T_{\varphi}^{*}(P_{n}f)=P_{n}(T_{\varphi}^{*}f).\]
Since \(M\) is \(T_{\varphi}^{*}\)-invariant, we have \((t_{\varphi}^{*}g)w^{n}\in P_{n}(M)\) and therefore \(V_{0}\) is \(t_{\varphi}^{*}\)-invariant. Considering \(V=\overline{V_{0}}\) we have that \(V\) is \(t_{\varphi}^{*}\)-invariant and \(P_{n}(M)\) is dense in \(Vw^{n}\).
The first case in which we obtain a nontrivial invariant subspace of \(T_{\varphi}^{*}|_{M}\) is when one of the \(P_{n}(M)\) has finite dimension.
**Theorem 13**.: _Let \(\varphi\in H^{\infty}(\mathbb{D})\) and \(M\subset H^{2}(\mathbb{D}^{2})\) be a \(T_{\varphi}^{*}\)-invariant subspace. If there exists \(n\in\mathbb{N}\) such that \(\dim P_{n}(M)<\infty\), then \(T_{\varphi}^{*}|_{M}\) has a nontrivial invariant subspace._
Proof.: Since \(\dim P_{n}(M)<\infty\), it follows from Lemma 12 that \(P_{n}(M)\) is \(t_{\varphi}^{*}\)-invariant. Moreover there is \(g\in M\) such that \(P_{n}(g)=uw^{n}\), where \(t_{\varphi}^{*}u=\lambda u\). Therefore, by Lemma 11, \(P_{n}(h)=0\) where \(h=T_{\varphi}^{*}g-\lambda g\). So either \(g\) is an eigenvector of \(T_{\varphi}^{*}\) or \(V_{T_{\varphi}^{*},h}\neq M\). In either case, \(T_{\varphi}^{*}|_{M}\) has an invariant subspace.
In light of the previous theorem, we observe that there are many invariant subspaces \(M\) that do not satisfy such assumptions. Indeed, let \(g\in H^{2}(\mathbb{D})\) such that \(\dim V_{t_{\varphi}^{*},g}=\infty\). Consider then \(f\in H^{2}(\mathbb{D}^{2})\) given by \(P_{n}(f)=\lambda^{n}gw^{n}\), where \(|\lambda|<1\). Thus if \(M=V_{T_{\varphi}^{*},f}\) we have that \(\dim P_{n}(M)=\infty\) for all \(n\in\mathbb{N}\) and moreover \(M\neq H^{2}(\mathbb{D}^{2})\).
We must then consider a second version of Theorem 13.
**Theorem 14**.: _Let \(\varphi\in H^{\infty}(\mathbb{D})\) and \(M\subset H^{2}(\mathbb{D}^{2})\) be a \(T_{\varphi}^{*}\)-invariant subspace such that \(\dim P_{n}(M)=\infty\). If \(t_{\varphi}^{*}|_{V}\) has an invariant subspace for every invariant subspace \(V\subset H^{2}(\mathbb{D})\) and \(P_{n}(M)\) is closed for some \(n\in\mathbb{N}\), then \(T_{\varphi}^{*}|_{M}\) has an invariant subspace._
Proof.: Since \(P_{n}(M)\) is closed, it follows from Lemma 12 that \(P_{n}(M)=Vw^{n}\), where \(V\subset H^{2}(\mathbb{D})\) is \(t_{\varphi}^{*}\)-invariant. Let \(V_{0}\subset V\) be an invariant subspace of \(t_{\varphi}^{*}\) and consider
\[U=\overline{\text{span}\{v_{1}+v_{2}:v_{1}\in V_{0}\text{ and }v_{2}\in H_{n}^{ \perp}\}}.\]
Thus, we have that \(U\) is an invariant subspace of \(T_{\varphi}^{*}\) with \(\{0\}\neq M\cap U\neq M\), i.e. \(T_{\varphi}^{*}|_{M}\) has an invariant subspace.
Now, let's consider a more general case than Theorems 13 and 14. First, we need the following lemma.
**Lemma 15**.: _Let \(\varphi\in H^{2}(\mathbb{D})\) be an inner function. If \(M\subset H^{2}(\mathbb{D}^{2})\) is an invariant subspace of \(T_{\varphi}^{*}\), then for all \(g\in M\) we have_
\[\left\{\sum_{i=0}^{\infty}\beta_{i}((t_{\varphi}^{*})^{i}g_{n})w^{n}:\beta=( \beta_{0},\beta_{1},...)\in\ell^{2}(\mathbb{C})\right\}\subset P_{n}(M).\]
Proof.: Since \(\varphi\) is an inner function, we have that \(\|T_{\varphi}^{*}\|=1\) and so \(\|(T_{\varphi}^{*})^{n}g\|\leq\|g\|\). Thus if \(\beta=(\beta_{0},\beta_{1},...)\in\ell^{2}(\mathbb{C})\), then \(\sum_{i=0}^{\infty}\beta_{i}(T_{\varphi}^{*})^{i}g\in M\) and therefore \(\sum_{i=0}^{\infty}\beta_{i}((t_{\varphi}^{*})^{i}g_{n})w^{n}\in P_{n}(M)\).
It follows from the previous lemma that, for each \(g\in M\), the operator \(J_{g,\varphi}:\ell^{2}(\mathbb{C})\to\overline{P_{n}(M)}\) given by
\[J_{g,\varphi}(\beta)=\sum_{i=0}^{\infty}\beta_{i}((t_{\varphi}^{*})^{i}g_{n})w ^{n}\]
is well defined.
Note that if \(T_{\varphi}^{*}\) has a minimal invariant subspace \(M\), then \(M=V_{T_{\varphi}^{*},g}\) for all nonzero \(g\in M\) and \(J_{g,\varphi}(\ell^{2}(\mathbb{C}))\cap U=\{0\}\) for all \(g\in M\) and all \(U\subset\overline{P_{n}(M)}\) invariant under \(t_{\varphi}^{*}\). Now we can consider the following:
**Theorem 16**.: _Let \(\varphi\in H^{\infty}(\mathbb{D})\) be an inner function. If for each \(g\in H^{2}(\mathbb{D})\), there is an invariant subspace \(U\subset V_{t_{\varphi}^{*},g}\) such that \(J_{g,\varphi}(l^{2}(\mathbb{C}))\cap U\neq\{0\}\), then no nontrivial invariant subspace \(M\subset H^{2}(\mathbb{D}^{2})\) of \(T_{\varphi}^{*}\) is minimal._
Proof.: Let \(M\subset H^{2}(\mathbb{D}^{2})\) be a nontrivial invariant subspace of \(T_{\varphi}^{*}\) and let \(n\in\mathbb{N}\) be such that \(P_{n}(M)\neq\{0\}\).
If \(\dim P_{n}(M)<\infty\), then Theorem 13 guarantees that \(T_{\varphi}^{*}|_{M}\) has an invariant subspace and so \(M\) is not minimal.
Suppose then that \(\dim P_{n}(M)=\infty\). Given \(g\in H^{\infty}(\mathbb{D})\), consider the invariant subspace \(U\subset V_{t_{\varphi}^{*},g}\) such that \(J_{g,\varphi}(l^{2}(\mathbb{C}))\cap U\neq\{0\}\). Since \(\dim P_{n}(M)=\infty\), Lemma 15 guarantees that \(U\subset\overline{P_{n}(M)}\) and \(g\in P_{n}(M)\). Thus \(P_{n}(M)\cap U\neq\{0\}\) and so
\[M\cap\overline{\operatorname{span}\{u+v:u\in U\text{ and }v\in H_{n}^{\perp} \}}\neq\{0\}\]
therefore \(M\) is not minimal.
The next result provides us with sufficient conditions for the ISP to be true.
**Theorem 17**.: _If there exists an inner function \(\varphi\in H^{2}(\mathbb{D})\) such that for each \(g\in H^{2}(\mathbb{D})\), there is an invariant subspace \(U\subset V_{t_{\varphi}^{*},g}\) of \(t_{\varphi}^{*}\) so that \(U\cap J_{g,\varphi}(l^{2}(\mathbb{C}))\neq\{0\}\), then the ISP is true._
Proof.: Since \(\varphi\) is an inner function, we have by Proposition 8 that \(T_{\varphi}^{*}\) has a universal translation for \(H^{2}(\mathbb{D}^{2})\). On the other hand, it follows from Theorem 16 that no nontrivial invariant subspace \(M\subset H^{2}(\mathbb{D}^{2})\) of \(T_{\varphi}^{*}\) is minimal. Thus the ISP is true.
|
2309.05056 | Cohen-Macaulay edge-weighted graphs of girth $5$ or greater | Let $G_\omega$ be an edge-weighted graph whose underlying graph is $G$. In
this paper, we enlarge the class of Cohen-Macaulay edge-weighted graphs
$G_\omega$ by classifying completely them when the graph $G$ has girth $5$ or
greater. | Truong Thi Hien | 2023-09-10T15:38:41Z | http://arxiv.org/abs/2309.05056v2 | # Cohen-Macaulay edge-weighted graphs of girth \(5\) or greater
###### Abstract.
Let \(G_{\omega}\) be an edge-weighted graph whose underlying graph is \(G\). In this paper, we enlarge the class of Cohen-Macaulay edge-weighted graphs \(G_{\omega}\) by classifying completely them when the graph \(G\) has girth \(5\) or greater.
Key words and phrases:Edge ideals, Cohen-Macaulay, Well-covered, edge-weighted graphs 2010 Mathematics Subject Classification: 13D02, 05C90, 05E40
## Introduction
Let \(R=K[x_{1},\ldots,x_{d}]\) be a standard graded polynomial ring over a given field \(K\). Let \(G\) be a simple graph with the vertex set \(V(G)=\{x_{1},\ldots,x_{d}\}\) and the edge set \(E(G)\). By abuse of notation, we also use \(x_{i}x_{j}\) to denote an edge \(\{x_{i},x_{j}\}\) of \(G\). An _edge-weighted graph_ \(G_{\omega}\) (whose underlying graph is \(G\)) is the pair \((G,\omega)\), where \(\omega\) is a function \(\omega\colon E(G)\to\mathbb{Z}_{>0}\), which is called a _weight edge_ on \(G\). An edge-weighted graph \(G_{\omega}\) where each edge has the same weight is a trivial edge-weighted graph. The _weighted edge ideal_ of \(G_{\omega}\) was introduced by Paulsen and Sather-Wagstaff [7], given by
\[I(G_{\omega})=((x_{i}x_{j})^{\omega(x_{i}x_{j})}\mid x_{i}x_{j}\in E(G)).\]
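As an illustration (not part of the original paper), the following short Python sketch lists the monomial generators of \(I(G_{\omega})\) for a small edge-weighted graph; the helper name `weighted_edge_ideal` and the string encoding of monomials are ours.

```python
# Minimal sketch (not from the paper): list the monomial generators of the
# weighted edge ideal I(G_w), written as strings x^w*y^w, one for each edge xy.
def weighted_edge_ideal(weighted_edges):
    """weighted_edges: iterable of (u, v, weight) with u, v vertex names."""
    return [f"{u}^{w}*{v}^{w}" for (u, v, w) in weighted_edges]

# Example: the path x - y - z with weights w(xy) = 2 and w(yz) = 3
# gives I(G_w) = (x^2*y^2, y^3*z^3).
print(weighted_edge_ideal([("x", "y", 2), ("y", "z", 3)]))
```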
We say that the edge-weighted graph \(G_{\omega}\) is _Cohen-Macaulay_ if \(R/I(G_{\omega})\) is Cohen-Macaulay. In [7], the authors constructed the irreducible decomposition of \(I(G_{\omega})\) and classified Cohen-Macaulay edge-weighted graphs \(G_{\omega}\) where the underlying graph \(G\) is a tree or a cycle. After that, Fakhari, Shibata, Terai and Yassemi classified Cohen-Macaulay edge-weighted graphs \(G_{\omega}\) when \(G\) is a very well-covered graph (see [11]). It is worth mentioning that the problem of classifying sequentially Cohen-Macaulay edge-weighted graphs is studied in [2], and that of classifying Cohen-Macaulay vertex-weighted oriented graphs is studied in [3, 4, 8, 9]. In this paper, we study Cohen-Macaulay properties for the edge-weighted graphs \(G_{\omega}\). More specifically, we classify Cohen-Macaulay edge-weighted graphs \(G_{\omega}\) when \(G\) has girth at least \(5\). Recall that the _girth_ of a graph \(G\), denoted by \(\operatorname{girth}(G)\), is the length of the shortest cycle contained in it. If a graph contains no cycle, its girth is defined to be infinite.
The main result of the paper is the following theorem.
**Theorem 2.7**.: _Let \(G\) be a graph of girth at least \(5\) and \(\omega\) is a weight edge on \(G\). Then, the following conditions are equivalent:_
1. \(G_{\omega}\) _is Cohen-Macaulay._
2. \(G\) _is Cohen-Macaulay and_ \(G_{\omega}\) _is unmixed._
3. \(G\) _is in the class_ \(\mathcal{PC}\) _and the weight edge_ \(\omega\) _on_ \(G\) _satisfies:_ (a) _The weight of any pendant edge in_ \(G\) _is greater than or equal to the weight of every edge adjacent to it._ (b) _Every basic_ \(5\)_-cycle_ \(C\) _of_ \(G\) _has a balanced vertex adjacent to two vertices on_ \(C\) _of degree_ \(2\)_._ (c) _If a vertex_ \(x\) _is on a basic_ \(5\)_-cycle_ \(C\) _with_ \(\deg_{G}(x)\geqslant 3\) _and_ \(N_{C}(x)=\{y,v\}\)_, then_ \(\min\{\omega(xy),\omega(xv)\}\geqslant\max\{\omega(xw)\mid w\in N_{G}(x)\setminus\{y,v\}\}\)_._
To understand the above theorem clearly, we first recall some definitions and terminology. An edge-weighted graph \(G_{\omega}\) is called _unmixed_ if the quotient ring \(R/I(G_{\omega})\) is unmixed. An edge of \(G\) is called a _pendant edge_ if one of its vertices is a leaf. A _basic \(5\)-cycle_ is a cycle of length \(5\) that contains no two adjacent vertices of degree three or more in \(G\).
For a given graph \(G\), let \(C(G)\) and \(P(G)\) denote the set of all vertices that belong to basic \(5\)-cycles and pendant edges, respectively. \(G\) is said to be _in the class \(\mathcal{PC}\)_ if
1. \(V(G)\) can be partitioned into \(V(G)=P(G)\cup C(G)\); and
2. the pendant edges form a perfect matching of \(G[P(G)]\).
Let \(C\) be an induced \(5\)-cycle of \(G\) with \(E(C)=\{xy,yz,zu,uv,vx\}\). We say that the vertex \(x\) is a _balanced vertex_ on \(C\) (with respect to \(\omega\)) if
1. \(\omega(xy)=\omega(xv)\); and
2. \(\omega(xy)\leqslant\omega(yz)\geqslant\omega(zu)\leqslant\omega(uv)\geqslant \omega(xv)\).
This definition is motivated by [7, Theorem 4.4], which says that \(C_{\omega}\) is Cohen-Macaulay if and only if \(C\) has a balanced vertex. In Figure 1, where the edge weights are indicated on the edges, \(x\) is a balanced vertex on \(C\) if the following inequalities hold: \(m\leqslant p\geqslant q\leqslant r\geqslant m\).
Figure 1. The balanced vertex \(x\) on \(C\).
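As an illustration (not part of the original paper), the following Python sketch tests the balanced-vertex condition above for a \(5\)-cycle whose edge weights are given in the order \(xy,yz,zu,uv,vx\); the helper name `is_balanced_at_x` is ours.

```python
# Minimal sketch (not from the paper): decide whether x is a balanced vertex on the
# 5-cycle x-y-z-u-v-x, given the edge weights (m, p, q, r, n) of (xy, yz, zu, uv, vx).
def is_balanced_at_x(m, p, q, r, n):
    # (1) the two edges at x carry the same weight;
    # (2) m <= p >= q <= r >= n around the cycle.
    return m == n and m <= p and p >= q and q <= r and r >= n

print(is_balanced_at_x(2, 3, 1, 3, 2))  # True:  2 <= 3 >= 1 <= 3 >= 2
print(is_balanced_at_x(2, 1, 1, 3, 2))  # False: m > p
```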
Let us explain the idea of the proof of Theorem 2.7. We will prove this theorem by the following sequence: \((1)\Rightarrow(2)\Rightarrow(3)\Rightarrow(1)\). By [5, Theorem 2.6], if \(G_{\omega}\) is Cohen-Macaulay, then \(I(G)=\sqrt{I(G_{\omega})}\) is also Cohen-Macaulay, thus we get \((1)\Rightarrow(2)\). To prove \((2)\Rightarrow(3)\), we use the result that \(G\) is in the class \(\mathcal{PC}\) when \(G\) is Cohen-Macaulay of girth at least \(5\). In addition, we introduce the notion of weighted vertex cover with minimal support to characterize the associated primes of \(I(G_{\omega})\). Together with the structure of \(G\), we can prove the combinatorial properties \((a)\)-\((c)\). It remains to show that \((3)\Rightarrow(1)\). If \(G_{\omega}\) satisfies the condition (3), we prove that \(G_{\omega}\) is Cohen-Macaulay by induction on the number of basic \(5\)-cycles of \(G\). Indeed, assume \(x\) is a balanced vertex on some basic \(5\)-cycle \(C\) as indicated in the property \((b)\) and \(m=\omega(xy)\) with \(xy\in E(C)\). We show that \((I(G_{\omega}),x^{m})\) and \(I(G_{\omega})\colon x^{m}\) are the weighted edge ideals of some edge-weighted graphs. Furthermore, these edge-weighted graphs also satisfy the condition (3) and have fewer basic \(5\)-cycles than \(G\), so they are Cohen-Macaulay by induction. Therefore, the conclusion follows.
The paper consists of two sections. In Section 1, we set up some basic notations, terminologies from the graph theory, the irreducible decomposition of the weighted edge ideal of an edge-weighted graph, and Cohen-Macaulay monomial ideals and their colon ideals. In Section 2, we classify Cohen-Macaulay edge-weighted graphs of girth at least \(5\) by giving some characteristics of the weight \(\omega\) on pendant edges and basic \(5\)-cycles of \(G\).
## 1. Preliminaries
We begin this section with some observations from graph theory. Let \(G=(V(G),E(G))\) be a simple graph. Note that two vertices of \(G\) are adjacent if they are connected by an edge; two edges of \(G\) are adjacent if they share a common vertex.
**Definition 1.1**.: A set of vertices is called a _vertex cover_ of \(G\) if for every edge \(uv\in E(G)\), either \(u\) or \(v\) (or both) belongs to the set. A _minimal vertex cover_ is a vertex cover such that no proper subset of it is still a vertex cover.
**Definition 1.2**.: A set of pairwise non-adjacent vertices is called an _independent set_. A _maximal independent set_ is an independent set that is not contained properly in any other independent set of \(G\). An independent set is called _maximum_ if it is of the largest cardinality.
**Remark.** Obviously, a vertex cover corresponds to the complement of an independent vertex set.
**Definition 1.3**.: A subset \(P\) of edges of \(G\) is a _matching_ if there are no two edges in \(P\) which are adjacent to each other. A matching \(P\) of \(G\) is _perfect_ if every vertex of \(G\) is incident to some edge in \(P\), i.e. in the case \(|V(G)|=2|P|\).
If \(X\subseteq V(G)\), \(G[X]\) is the induced subgraph of \(G\) on \(X\). By \(G\setminus X\), we mean the induced subgraph \(G[V\setminus X]\). The _neighbors_ of a vertex \(v\) of \(G\) are the vertices that are adjacent to \(v\) in \(G\). The _(open) neighborhood_ of a vertex \(v\) is the set of its neighbors, i.e., \(N_{G}(v)=\{w\mid w\in V(G)\text{ and }vw\in E(G)\}\). The _closed neighborhood_ of \(v\) consists of all the neighbors of \(v\) together with \(v\) itself, i.e., \(N_{G}[v]=N_{G}(v)\cup\{v\}\); if there is no ambiguity on \(G\), we use \(N(v)\) and \(N[v]\), respectively. We also use the symbol \(N_{G}[X]=X\cup\{v\mid vu\in E(G)\text{ for some }u\in X\}\) to denote the closed neighborhood of \(X\) in \(G\). The _degree_ of \(v\) in \(G\) is the number of its neighbors and is denoted by \(\deg_{G}(v)\). In particular, \(\deg_{G}(v)=|N_{G}(v)|\). Note that \(v\) is called a leaf if \(\deg_{G}(v)=1\).
We next introduce the class of vertex decomposable graphs (see e.g. [13]). For a vertex \(v\) of \(G\), denoted \(G\setminus v=G\setminus\{v\}\) and \(G_{v}=G\setminus N_{G}[v]\).
**Definition 1.4**.: A graph \(G\) is called _vertex decomposable_ if it is a totally disconnected graph (i.e. with no edges) or there is a vertex \(v\) in \(G\) such that
1. \(G\setminus v\) and \(G_{v}\) are both vertex decomposable, and
2. for every independent set \(S\) in \(G_{v}\), there is some \(u\in N_{G}(v)\) such that \(S\cup\{u\}\) is independent in \(G\setminus v\).
The vertex \(v\) which satisfies the condition (2) is called a _shedding vertex_ of \(G\). Recall a graph \(G\) is well-covered (see [10]) if every maximal independent set of \(G\) has the same size, namely \(\alpha(G)\). Thus, if \(G\) is well-covered and \(v\) is a shedding vertex of \(G\), then \(G\setminus v\) is also well-covered with \(\alpha(G\setminus v)=\alpha(G)\).
Now, we consider some results of the irreducible decomposition of the weighted edge ideal of an edge-weighted graph, and Cohen-Macaulay monomial ideals and their colon ideals, which we shall need in the proof of the main theorem.
It is widely known that (see e.g. [12, Proposition 6.1.16])
\[\operatorname{Ass}(R/I(G))=\{(v\mid v\in C)\mid C\text{ is a minimal vertex cover of }G\}.\]
Particularly, \(\dim R/I(G)=\alpha(G)\) whenever \(V(G)=\{x_{1},\ldots,x_{d}\}\). The graph \(G\) is called a Cohen-Macaulay graph if the ring \(R/I(G)\) is Cohen-Macaulay. In consequence, \(G\) is well-covered if it is Cohen-Macaulay.
Let \(G_{\omega}\) be an edge-weighted graph. We know that the usual edge ideal of \(G\), denoted by \(I(G)\), is a special case of the weighted edge ideal \(I(G_{\omega})\) when the weight \(\omega\) on \(G\) is the trivial one, i.e., \(\omega(e)=1\) for all \(e\in E(G)\). Since \(I(G)=\sqrt{I(G_{\omega})}\), by [5, Theorem 2.6], \(G\) is Cohen-Macaulay if so is \(G_{\omega}\). Therefore, if we know the structure of the underlying Cohen-Macaulay graph together with the weight edges on it, we can get the picture of the Cohen-Macaulayness of an edge-weighted graph. In this paper, we consider graphs of girth at least \(5\) and so the following result plays a crucial role in the paper (see [1, Theorem 20] or [6, Theorem 2.4]).
**Lemma 1.5**.: _Let \(G\) be a connected graph of girth at least \(5\). Then, the following statements are equivalent:_
1. \(G\) _is well covered and vertex decomposable;_
2. \(G\) _is Cohen-Macaulay;_
3. \(G\) _is either a vertex or in the class_ \(\mathcal{PC}\)
We next describe the associated primes of \(R/I(G_{\omega})\).
**Definition 1.6**.: Let \(G_{\omega}\) be an edge-weighted graph, \(C\) be a vertex cover of \(G\) and a function \(\delta\colon C\to\mathbb{Z}_{>0}\). The pair \((C,\delta)\) is called a _weighted vertex cover_ of \(G_{\omega}\) if for every \(e=uv\in E(G)\) we have either \(u\in C\) and \(\delta(u)\leqslant\omega(e)\) or \(v\in C\) and \(\delta(v)\leqslant\omega(e)\).
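As an illustration (not part of the original paper), the following Python sketch tests Definition 1.6 directly; the helper name `is_weighted_vertex_cover` and the dictionary encodings are ours.

```python
# Minimal sketch (not from the paper): check whether (C, delta) is a weighted
# vertex cover of an edge-weighted graph G_w (Definition 1.6).
def is_weighted_vertex_cover(weighted_edges, delta):
    """weighted_edges: dict {(u, v): weight}; delta: dict {vertex in C: positive integer}."""
    for (u, v), w in weighted_edges.items():
        if not ((u in delta and delta[u] <= w) or (v in delta and delta[v] <= w)):
            return False
    return True

# The 5-cycle x-y-z-u-v-x with weights 2, 3, 1, 3, 2 on xy, yz, zu, uv, vx.
C5 = {("x", "y"): 2, ("y", "z"): 3, ("z", "u"): 1, ("u", "v"): 3, ("v", "x"): 2}
print(is_weighted_vertex_cover(C5, {"x": 2, "z": 1, "u": 1}))  # True
print(is_weighted_vertex_cover(C5, {"x": 3, "z": 1, "u": 1}))  # False: delta(x) > w(xy)
```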
Observe that a pair \((C,\delta)\) where \(C\subseteq V(G)\) and \(\delta\colon C\to\mathbb{Z}_{>0}\) is a weighted vertex cover of \(G_{\omega}\) if and only if \(P(C,\delta)=(v^{\delta(v)}\mid v\in C)\supseteq I(G_{\omega})\). Now, we give a definition of an ordering of weighted vertex covers.
**Definition 1.7**.: Let \(G_{\omega}\) be an edge-weighted graph. For two weighted vertex covers \((C,\delta)\) and \((C^{\prime},\delta^{\prime})\) of \(G_{\omega}\), we say that \((C,\delta)\leqslant(C^{\prime},\delta^{\prime})\) if \(C\subseteq C^{\prime}\) and \(\delta(v)\geqslant\delta^{\prime}(v)\) for every \(v\in C\).
In the usual sense, \((C,\delta)\) is minimal if it is minimal with respect to this order. Then,
**Lemma 1.8**.: _[_7_, Theorem 3.5]_ \(I(G_{\omega})\) can be represented as_
\[I(G_{\omega})=\bigcap_{(C,\delta)\text{ is minimal}}P(C,\delta)\]
_and the intersection is irredundant._
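For instance (a quick illustration added here, not from the paper), if \(G\) is a single edge \(xy\) with \(\omega(xy)=2\), the minimal weighted vertex covers are \((\{x\},\delta(x)=2)\) and \((\{y\},\delta(y)=2)\), and indeed
\[I(G_{\omega})=(x^{2}y^{2})=(x^{2})\cap(y^{2}).\]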
This lemma implies that if \((C,\delta)\) is a minimal weighted vertex cover of \(G_{\omega}\), then \((v\mid v\in C)\in\operatorname{Ass}(R/I(G_{\omega}))\). We say that a weighted vertex cover \((C,\delta)\) of \(G_{\omega}\) is _minimal support_ if there is no proper subset \(C^{\prime}\) of \(C\) such that \((C^{\prime},\delta)\) is a weighted vertex cover of \(G_{\omega}\).
**Lemma 1.9**.: _If a weighted vertex cover \((C,\delta)\) of \(G_{\omega}\) is minimal support, then_
\[(v\mid v\in C)\in\operatorname{Ass}(R/I(G_{\omega})).\]
Proof.: Since \(I(G_{\omega})\subseteq P(C,\delta)\), by Lemma 1.8 we have \(P(C^{\prime},\delta^{\prime})\subseteq P(C,\delta)\) for some minimal weighted vertex cover \((C^{\prime},\delta^{\prime})\). In particular, \((C^{\prime},\delta^{\prime})\leqslant(C,\delta)\). This implies that \((C^{\prime},\delta)\) is a weighted vertex cover of \(G_{\omega}\). Since \((C,\delta)\) is minimal support, we have \(C=C^{\prime}\). Therefore, \((v\mid v\in C)\in\operatorname{Ass}(R/I(G_{\omega}))\), as required.
A monomial ideal \(I\) is _unmixed_ if all of its associated primes have the same height. It is well known that if \(R/I\) is Cohen-Macaulay, then \(I\) is unmixed. Because \(I(G)=\sqrt{I(G_{\omega})}\), \(G\) is well-covered if \(G_{\omega}\) is unmixed. In this case, if \((C,\delta)\) is a weighted minimal vertex cover of \(G_{\omega}\), then \(C\) is a minimal vertex cover of \(G\).
We now recall some techniques to study the Cohen-Macaulayness of monomial ideals as mentioned in [3].
**Lemma 1.10**.: _[_3_, Lemma 1.4]_ _Let \(I\) be a monomial ideal and \(f\) a monomial not in \(I\). We have_
1. _If_ \(I\) _is Cohen-Macaulay, then_ \(I\colon f\) _is Cohen-Macaulay._
2. _If_ \(I\colon f\) _and_ \((I,f)\) _are Cohen-Macaulay with_ \(\dim R/I\colon f=\dim R/(I,f)\)_, then_ \(I\) _is Cohen-Macaulay._
**Lemma 1.11**.: _[_3_, Lemma 1.5]_ _Let \(G\) be a well-covered graph. If \(v\) is a shedding vertex of \(G\), then_
\[\dim R/I(G)=\dim R/(I(G\setminus v),v)=\dim R/I(G):v.\]
In the sequel, we need the following lemma obtained from [3].
**Lemma 1.12**.: _Let \(G\) be a graph in the class \(\mathcal{PC}\). Let \(C\) be a basic \(5\)-cycle and \(x\) a vertex in \(C\) with degree at least \(3\). Assume that \(E(C)=\{xy,yz,zu,uv,vx\}\) and \(N(x)=\{y,v,y_{1},\ldots,y_{k}\}\). Then, there is an independent set of \(G\) with \(k\) vertices, say \(\{z_{1},\ldots,z_{k}\}\), such that_
1. \(G[y_{1},\ldots,y_{k},z_{1},\ldots,z_{k}]\) _consists of_ \(k\) _disjoint edges_ \(y_{1}z_{1},\ldots,y_{k}z_{k}\)_._
2. \(N[z_{1},\ldots,z_{k}]\cap V(C)=\emptyset\)_._
Proof.: Follows from Part (1) of [3, Lemma 2.2].
## 2. Cohen-Macaulay edge-weighted graphs
In this section, we classify the Cohen-Macaulay edge-weighted graphs \(G_{\omega}\) of girth at least \(5\). Since \(G\) is in the class \(\mathcal{PC}\), so that \(V(G)=P(G)\cup C(G)\), it is natural to study the weight \(\omega\) on the pendant edges and the basic \(5\)-cycles of \(G\).
To investigate the weight on pendant edges, we consider the following lemma.
**Lemma 2.1**.: _Let \(G_{\omega}\) be an unmixed edge-weighted graph. Assume that \((xy)^{\omega(xy)}\) and \((xz)^{\omega(xz)}\) are among minimal generators of \(I(G_{\omega})\colon f\) for some monomial \(f\notin I(G_{\omega})\). Assume that \(x^{k}\notin I(G_{\omega})\colon f\) for every \(k\). If \(y\) does not appear in any minimal generator of \(I(G_{\omega})\colon f\) except for \((xy)^{\omega(xy)}\), then \(\omega(xy)\geqslant\omega(xz)\)._
Proof.: Follows from [3, Lemma 2.1].
We now move on to investigate the weight of basic \(5\)-cycles.
**Lemma 2.2**.: _Let \(G_{\omega}\) be an unmixed edge-weighted graph where \(G\) is in the class \(\mathcal{PC}\). Assume that \(C\) is a basic \(5\)-cycle of \(G\) such that \(E(C)=\{xy,yz,zu,uv,vx\}\) and \(\deg(x)>2\). Then,_
1. \(\omega(xw)\leqslant\min\{\omega(xy),\omega(xv)\}\) _for all_ \(w\in N(x)\setminus\{y,v\}\)_._
2. \(\omega(zu)\leqslant\min\{\omega(zy),\omega(uv)\}\)_._
Proof.: Let \(m=\omega(xy),n=\omega(xv),p=\omega(yz),q=\omega(zu),r=\omega(uv)\). Assume that
\[N(x)=\{y,v,y_{1},\ldots,y_{k}\},\text{ where }k\geqslant 1,\]
and \(m_{i}=\omega(xy_{i})\) for \(i=1,\ldots,k\) so that \(m_{1}\geqslant m_{2}\geqslant\cdots\geqslant m_{k}\).
(1) Assume on the contrary that \(m_{1}>\min\{m,n\}\). We may assume that \(m_{1}>m\). By Lemma 1.12, there is an independent set \(\{z_{1},\ldots,z_{k}\}\) of \(G\) such that the graph \(G[y_{1},\ldots,y_{k},z_{1},\ldots,z_{k}]\) consists of disjoint edges \(y_{1}z_{1},\ldots,y_{k}z_{k}\) and \(N[z_{1},\ldots,z_{k}]\cap V(C)=\emptyset\).
Let \(S_{1}=\{z_{2},\ldots,z_{k}\}\). As \(\operatorname{girth}(G)\geqslant 5\), we deduce that \(\{y_{1},y,u\}\cup S_{1}\) is an independent set of \(G\). Now extend this set to a maximal independent set of \(G\), say \(S\). Then, \(C^{*}=V(G)\setminus S\) is a minimal cover of \(G\). In particular, \(\operatorname{ht}(I(G_{\omega}))=|C^{*}|\). Let \(\delta\colon C^{*}\to\mathbb{Z}_{>0}\) be such that \((C^{*},\delta)\) is a weighted vertex cover of \(G_{\omega}\). Note that \(z,v,x,y_{2},\ldots,y_{k}\in C^{*}\) and \(y_{1},y,u\notin C^{*}\).
Let \(C^{\prime}=C^{*}\cup\{y\}\) and \(\delta^{\prime}\colon C^{\prime}\to\mathbb{Z}_{>0}\) defined by
\[\delta^{\prime}(w)=\begin{cases}m_{1}&\text{if }w=x,\\ m&\text{if }w=y,\\ \min\{n,\delta(v)\}&\text{if }w=v,\\ \min\{m_{i},\delta(y_{i})\}&\text{if }w=y_{i},\text{ for }i=2,\ldots,k,\\ \delta(w)&\text{otherwise}.\end{cases}\]
We now prove that \((C^{\prime},\delta^{\prime})\) is a weighted vertex cover of \(G_{\omega}\). Indeed, since \(C^{\prime}=C^{*}\cup\{y\}\), \((C^{*},\delta)\) is a weighted vertex cover of \(G_{\omega}\) and by the definition of the function \(\delta^{\prime}\), it suffices to check the condition of a weighted vertex cover of \(G_{\omega}\) for the set of edges
\[\{xy,xv,xy_{1},xy_{2},\ldots,xy_{k},yz,vu\}.\]
If \(e=xy\), then \(y\in C^{\prime}\) and \(\delta^{\prime}(y)=m=\omega(xy)\).
If \(e=xv\), then \(v\in C^{\prime}\) and \(\delta^{\prime}(v)=\min\{n,\delta(v)\}\leqslant n=\omega(xv)\).
If \(e=xy_{1}\), then \(x\in C^{\prime}\) and \(\delta^{\prime}(x)=m_{1}=\omega(xy_{1})\).
If \(e=xy_{i}\), then \(y_{i}\in C^{\prime}\) and \(\delta^{\prime}(y_{i})=\min\{m_{i},\delta(y_{i})\}\leqslant m_{i}=\omega(xy_{ i})\), for \(i=2,\ldots,k\).
If \(e=yz\). By considering the weighted vertex cover \((C^{*},\delta)\) we have \(\delta(z)\leqslant\omega(yz)\) since \(y\notin C^{*}\) and \(z\in C^{*}\). Thus, for \((C^{\prime},\delta^{\prime})\), we have \(z\in C^{\prime}\) and \(\delta^{\prime}(z)=\delta(z)\leqslant\omega(yz)\).
If \(e=vu\), similarly as the previous case, we have \(v\in C^{\prime}\) and \(\delta^{\prime}(v)=\min\{n,\delta(v)\}\leqslant\delta(v)\leqslant\omega(uv)\).
Therefore, \((C^{\prime},\delta^{\prime})\) is a weighted vertex cover of \(G_{\omega}\), as desired.
Figure 2. The structure of \(G\).
Next, we claim that it is minimal support. Indeed, assume on the contrary that it is not the case, then there is a vertex, say \(w\in C^{\prime}\) such that \((C^{\prime}\setminus\{w\},\delta^{\prime})\) is still a
weighted vertex cover of \(G_{\omega}\). We consider the case \(w\in\{y,x,v,z\}\). Since \(u,y_{1}\notin C^{\prime}\), it follows that \(w\) must be \(y\). But in this case, we have \(\delta^{\prime}(x)=m_{1}\leqslant\omega(xy)=m\) if we look at the edge \(e=xy\), a contradiction. Thus, \(w\notin\{y,x,v,z\}\). Note that \(|C^{\prime}\setminus\{w\}|=|C^{*}|\) and \(C^{*}\) is a minimal vertex cover of \(G\), it follows that \(C^{\prime}\setminus\{w\}\) is a minimal cover of \(G\) since \(G\) is well-covered. Consequently, \(S^{\prime}=V(G)\setminus(C^{\prime}\setminus\{w\})\) is a maximal independent set of \(G\). On the other hand, since \(y,x,v,z\notin S^{\prime}\) and \(N_{G}(y)=\{x,z\}\), it follows that \(\{y\}\cup S^{\prime}\) is an independent set of \(G\), a contradiction. Thus, \((C^{\prime},\delta^{\prime})\) is minimal support, as claimed.
Together with Lemma 1.9, this claim yields \(\operatorname{bight}(I(G_{\omega}))\geqslant|C^{\prime}|=|C^{*}|+1= \operatorname{ht}(I(G_{\omega}))+1\). Thus, \(I(G_{\omega})\) is not unmixed, a contradiction, and thus \(m_{1}\leqslant m\). By the same way, we get \(m_{1}\leqslant n\), and (1) follows.
(2) From Part (1) and our assumption, we have \(m_{k}\leqslant\min\{m_{1},\ldots,m_{k},m,n\}\), and hence
\[I(G_{\omega})\colon y_{k}^{m_{k}}=(y^{p}z^{p},z^{q}u^{q},u^{r}v^{r},x^{m_{k}}, \ldots).\]
Since \(N[y_{k}]\cap V(C)=\{x\}\) and \(\deg_{G}(y)=\deg_{G}(v)=2\), we deduce that the four monomials in the representation above are among minimal generators of \(I(G_{\omega})\colon y_{k}^{m_{k}}\) and the remaining minimal generators of \(I(G_{\omega})\colon y_{k}^{m_{k}}\) involve neither \(y\) nor \(v\). Note that \(y_{k}z,y_{k}u\notin E(G)\), so \(z^{i},u^{i}\notin I(G_{\omega})\colon y_{k}^{m_{k}}\) for every \(i\). By Lemma 2.1, we obtain \(q\leqslant\min\{p,r\}\), as required.
In the following lemmas, we use the setting as illustrated in Figure 2. Let \(G_{\omega}\) be an unmixed edge-weighted graph where \(G\) is in the class \(\mathcal{PC}\) and let \(C\) be a basic \(5\)-cycle of \(G\). Assume that \(E(C)=\{xy,yz,zu,uv,vx\}\) and \(\deg(x)>2\). Set
\[m=\omega(xy),p=\omega(yz),q=\omega(zu),r=\omega(uv),\text{ and }n=\omega(vx).\]
The aim of these lemmas is to show that each basic \(5\)-cycle of \(G\) has a balanced vertex.
**Lemma 2.3**.: _If \(q<r\), then \(n\leqslant\min\{r,m\}\)._
Proof.: Since \(q\leqslant\min\{p,r\}\) by Lemma 2.2, we have
\[I(G_{\omega})\colon u^{q}=(x^{m}y^{m},u^{r-q}v^{r},v^{n}x^{n},\ldots).\]
Since \(q<r\), by using the same argument as in the proof of Part (2) of Lemma 2.2 above, we get \(m\geqslant n\).
We next prove that \(n\leqslant r\). Assume on the contrary that \(n>r\). Let \(S\) be a maximal independent set of \(G\) containing \(x\) and \(u\) and let \(C^{*}=V(G)\setminus S\). Then, \(C^{*}\) is a minimal vertex cover of \(G\). Let \(\delta\colon C^{*}\to\mathbb{Z}_{>0}\) be a function such that \((C^{*},\delta)\) is a minimal weighted vertex cover of \(G_{\omega}\). Note that \(x,u\notin C^{*}\) and \(y,z,v\in C^{*}\).
Let \(C^{\prime}=C^{*}\cup\{u\}\) and \(\delta^{\prime}\colon C^{\prime}\to\mathbb{Z}_{>0}\) given by
\[\delta^{\prime}(w)=\begin{cases}r&\text{ if }w=u,\\ n&\text{ if }w=v,\\ \delta(w)&\text{ otherwise.}\end{cases}\]
Then, \((C^{\prime},\delta^{\prime})\) is a weighted vertex cover of \(G_{\omega}\). In fact, it suffices to check the condition of a weighted vertex cover of \(G_{\omega}\) for the set of edges \(\{zu,uv,xv\}\).
If \(e=zu\), then \(z\in C^{\prime}\) and \(\delta^{\prime}(z)=\delta(z)\leqslant\omega(zu)\) (the last inequality holds by looking at the weighted vertex cover \((C^{*},\delta)\)).
If \(e=uv\), then \(u\in C^{\prime}\) and \(\delta^{\prime}(u)=r=\omega(uv)\).
If \(e=xv\), then \(v\in C^{\prime}\) and \(\delta^{\prime}(v)=n=\omega(xv)\).
Next, we prove \((C^{\prime},\delta^{\prime})\) is a minimal support weighted cover of \(G_{\omega}\). Assume on the contrary that there is a vertex \(w\in C^{\prime}\) such that \((C^{\prime}\setminus\{w\},\delta^{\prime})\) is still a weighted vertex cover of \(G_{\omega}\). Since \(x\notin C^{\prime}\), \(w\) can be neither \(y\) nor \(v\). We consider the following cases:
If \(w=u\), then \(\delta^{\prime}(v)=n>r=\omega(uv)\), a contradiction.
If \(w=z\), then \(\delta^{\prime}(u)=r>q=\omega(zu)\), a contradiction.
In other cases, since \(|C^{\prime}\setminus\{w\}|=|C^{*}|\), \(C^{*}\) is a minimal vertex cover of \(G\), and \(G\) is well-covered, it follows that \(C^{\prime}\setminus\{w\}\) is a minimal vertex cover of \(G\). Thus, \(S^{\prime}=V(G)\setminus(C^{\prime}\setminus\{w\})\) is a maximal independent set of \(G\). On the other hand, since \(y,v,z,u\notin S^{\prime}\) and \(N_{G}(u)=\{v,z\}\), it follows that \(\{u\}\cup S^{\prime}\) is an independent set of \(G\), a contradiction.
Thus, \((C^{\prime},\delta^{\prime})\) is minimal support, as claimed.
Since \(|C^{\prime}|=|C^{*}|+1\), we have \(\operatorname{bight}(I(G_{\omega}))\geqslant|C^{*}|+1=\operatorname{ht}(I(G_{ \omega}))+1\). This contradicts the fact that \(I(G_{\omega})\) is unmixed. Therefore, \(n\leqslant r\), as required.
**Lemma 2.4**.: _If \(p=q<r\) and \(n<m\), then \(p\leqslant m\)._
Proof.: Assume on the contrary that \(m<p\). Let \(S\) be a maximal independent set of \(G\) containing \(x\) and \(z\) and let \(C^{*}=V(G)\setminus S\) so that \(C^{*}\) is a minimal vertex cover of \(G\). Let \(\delta\colon C^{*}\to\mathbb{Z}_{>0}\) be a function such that \((C^{*},\delta)\) is a minimal weighted cover of \(G_{\omega}\).
Let \(C^{\prime}=C^{*}\cup\{x\}\) and \(\delta^{\prime}\colon C^{\prime}\to\mathbb{Z}_{>0}\) given by
\[\delta^{\prime}(w)=\begin{cases}p&\text{ if }w=y,\\ m&\text{ if }w=x,\\ \delta(w)&\text{ otherwise.}\end{cases}\]
By the same argument as in the above lemma, we can verify that \((C^{\prime},\delta^{\prime})\) is a minimal support weighted cover of \(G_{\omega}\). Since \(|C^{\prime}|=|C^{*}|+1\), we have \(\operatorname{bight}(I(G_{\omega}))\geqslant|C^{*}|+1=\operatorname{ht}(I(G_{\omega}))+1\). This contradicts the fact that \(I(G_{\omega})\) is unmixed. Therefore, \(p\leqslant m\), as required.
**Lemma 2.5**.: \(C\) _has a balanced vertex in the set \(\{x,z,u\}\)._
Proof.: By Lemma 2.2 we have \(q\leqslant\min\{p,r\}\). If \(q<\min\{p,r\}\), then \(n=m\), \(n\leqslant r\) and \(m\leqslant p\) by Lemma 2.3. Thus, \(x\) is a balanced vertex, and thus it remains to prove the lemma in the case \(q=\min\{p,r\}\). We may assume that \(p=q\leqslant r\). We now consider two possible cases:
_Case_ 1: \(p=q=r\). By symmetry, we may assume that \(m\leqslant n\). We first claim that \(\min\{m,n\}=m\leqslant p\). Indeed, assume on the contrary that \(m>p\). Let \(S\) be a maximal independent set of \(G\) containing \(x\) and \(u\) and let \(C^{*}=V(G)\setminus S\). Then,
\(C^{*}\) is a minimal vertex cover of \(G\). Let \(\delta\colon C^{*}\to\mathbb{Z}_{>0}\) be a function such that \((C^{*},\delta)\) is a minimal weighted vertex cover of \(G_{\omega}\).
Let \(C^{\prime}=C^{*}\cup\{u\}\) and \(\delta^{\prime}\colon C^{\prime}\to\mathbb{Z}_{>0}\) given by
\[\delta^{\prime}(w)=\begin{cases}m&\text{ if }w=y,\\ \min\{\delta(z),p\}&\text{ if }w=z,\\ p&\text{ if }w=u,\\ n&\text{ if }w=v,\\ \delta(w)&\text{ otherwise.}\end{cases}\]
In the same manner as in the proof of Part (1) of Lemma 2.2, \((C^{\prime},\delta^{\prime})\) is a weighted vertex cover of \(G_{\omega}\). It is straightforward to verify that it is a minimal support weighted cover of \(G_{\omega}\). Since \(|C^{\prime}|=|C^{*}|+1\), we have \(\operatorname{bight}(I(G_{\omega}))\geqslant|C^{*}|+1=\operatorname{ht}(I(G_{\omega}))+1\). This contradicts the fact that \(I(G_{\omega})\) is unmixed. Therefore, \(m\leqslant p\), as claimed.
If \(n\geqslant p\), then \(u\) is a balanced vertex on \(C\).
If \(n<p\), we assume that \(m<n\). Let \(S\) be a maximal independent set of \(G\) containing \(x\) and \(u\) and let \(C^{*}=V(G)\setminus S\). Then, \(C^{*}\) is a minimal vertex cover of \(G\). Let \(\delta\colon C^{*}\to\mathbb{Z}_{>0}\) be a function such that \((C^{*},\delta)\) is a minimal weighted vertex cover of \(G_{\omega}\).
Let \(C^{\prime}=C^{*}\cup\{x\}\) and \(\delta^{\prime}\colon C^{\prime}\to\mathbb{Z}_{>0}\) given by
\[\delta^{\prime}(w)=\begin{cases}n&\text{ if }w=x,\\ p&\text{ if }w=v,\\ \delta(w)&\text{ otherwise.}\end{cases}\]
Again, \((C^{\prime},\delta^{\prime})\) is a weighted vertex cover of \(G_{\omega}\). Moreover, it is a minimal support weighted cover of \(G_{\omega}\). Since \(|C^{\prime}|=|C^{*}|+1\), we have \(\operatorname{bight}(I(G_{\omega}))\geqslant|C^{*}|+1=\operatorname{ht}(I(G_{ \omega}))+1\). This contradicts the fact that \(I(G_{\omega})\) is unmixed. Thus, \(m<n\) is impossible so that \(m=n\). In this case, \(x\) is a balanced vertex on \(C\).
_Case \(2\)_: \(p=q<r\). Then, \(n\leqslant\min\{r,m\}\) by Lemma 2.3. In particular, \(n\leqslant m\). If \(n<m\), then \(p\leqslant m\) by Lemma 2.4, and then \(z\) is a balanced vertex on \(C\). In the case \(n=m\), we have \(z\) is a balanced vertex on \(C\) if \(m\geqslant p\) or \(x\) is a balanced vertex on \(C\) if \(m\leqslant p\). The proof of the lemma is complete.
**Lemma 2.6**.: _Assume further that \(\deg(z)>2\). Then, either \(x\) or \(z\) is a balanced vertex on \(C\)._
Proof.: Since \(\deg(x)>2\) and \(\deg(z)>2\), by Lemma 2.5, \(C\) has a balanced vertex in the set \(\{x,z,u,v\}\). If \(v\) is a balanced vertex, then
\[r=n\ \text{ and }n\leqslant m\geqslant p\leqslant q\geqslant r.\]
On the other hand, since \(\deg(x)>2\) and \(\deg(z)>2\), by Lemma 2.2 we obtain
\[p\geqslant q\leqslant r\text{ and }m\geqslant n\leqslant r.\]
From those inequalities, we get \(n=p=q=r\leqslant m\). Hence, \(z\) is also a balanced vertex on \(C\). In the same way, if \(u\) is a balanced vertex, then \(x\) is a balanced vertex on \(C\) as well. Therefore, we conclude that either \(x\) or \(z\) is a balanced vertex on \(C\).
We are now in a position to prove the main result of the paper.
**Theorem 2.7**.: _Let \(G\) be a graph of girth at least \(5\) and \(\omega\) is a weight edge on \(G\). Then, the following conditions are equivalent:_
1. \(G_{\omega}\) _is Cohen-Macaulay._
2. \(G\) _is Cohen-Macaulay and_ \(G_{\omega}\) _is unmixed._
3. \(G\) _is in the class_ \(\mathcal{PC}\) _and the weight edge_ \(\omega\) _on_ \(G\) _satisfies:_ (a) _The weight of any pendant edge in_ \(G\) _is greater than or equal to the weight of every edge adjacent to it._ (b) _Every basic_ \(5\)_-cycle_ \(C\) _of_ \(G\) _has a balanced vertex adjacent to two vertices on_ \(C\) _of degree_ \(2\)_._ (c) _If a vertex_ \(x\) _is on a basic_ \(5\)_-cycle_ \(C\) _with_ \(\deg_{G}(x)\geqslant 3\) _and_ \(N_{C}(x)=\{y,v\}\)_, then_ \(\min\{\omega(xy),\omega(xv)\}\geqslant\max\{\omega(xw)\mid w\in N_{G}(x)\setminus\{y,v\}\}\)_._
Proof.: \((1)\Longrightarrow(2)\) Since \(G_{\omega}\) is Cohen-Macaulay, then \(G_{\omega}\) is unmixed. On the other hand, \(I(G)=\sqrt{I(G_{\omega})}\), by [5, Theorem 2.6], \(G\) is Cohen-Macaulay.
\((2)\Longrightarrow(3)\) Since \(G\) is a Cohen-Macaulay graph of girth at least \(5\), by Lemma 1.5, \(G\) is in the class \(\mathcal{PC}\). Now, we consider the two following cases. First, in the case \(G\) is just a \(5\)-cycle, we only need to prove the property \((b)\), and it follows immediately from [7, Theorem 4.4]. Second, in the remaining case, i.e. \(G\) is not a \(5\)-cycle, the property \((a)\) is equivalent to the following statement: "For every pendant edge \(xy\) of \(G\), where \(y\) is a leaf, we have \(\omega(xy)\geqslant\omega(xz)\) for any \(xz\in E(G)\)", and it follows from Lemma 2.1. In addition, the property \((b)\) follows from Lemma 2.5, and the property \((c)\) follows immediately from Lemma 2.2(1).
\((3)\Longrightarrow(1)\) We prove by induction on the number of basic \(5\)-cycles of \(G\).
If \(G\) has no basic \(5\)-cycle, then its pendant edges form a perfect matching in \(G\). In this case, combining condition \((a)\) with [7, Lemma 5.3], we get that \(G_{\omega}\) is Cohen-Macaulay.
Assume that \(G\) has some basic \(5\)-cycles. If \(G\) is just a \(5\)-cycle, by [7, Theorem 4.4], \(G_{\omega}\) is Cohen-Macaulay as desired. Otherwise, let \(C_{1},\ldots,C_{r}\) be the basic \(5\)-cycles of \(G\) with \(r\geqslant 1\) and \(P\) be the set of pendant edges of \(G\). Assume that \(E(C_{1})=\{xy,yz,zu,uv,vx\}\) with
\[\omega(xy)=m,\omega(yz)=p,\omega(zu)=q,\omega(uv)=r,\omega(vx)=n.\]
By our assumptions, \(C_{1}\) has a balanced vertex whose two neighbors in \(C_{1}\) are of degree \(2\). We may assume \(x\) is such a vertex so that \(m=n\) and \(m\leqslant p\geqslant q\leqslant r\geqslant m\). Now we consider two possible cases:
_Case_ \(1\): \(\deg_{G}(x)=2\). In this case, \(N_{G}(x)=\{y,v\}\), and hence
\[I(G_{\omega})\colon x^{m}=(y^{m},v^{m},I((G_{x})_{\omega}))\ \ \text{and}\ I(G_{\omega})+(x^{m})=(x^{m})+I((G \setminus x)_{\omega}).\]
Now, we will prove these ideals are Cohen-Macaulay. Observe that \(G\setminus x\) is in the class \(\mathcal{PC}\) with \(r-1\) basic \(5\)-cycles \(C_{2},\ldots,C_{r}\) and pendant edges \(P\cup\{zy,uv\}\), where \(y\) and \(v\) are leaves. We now verify that the graph \((G\setminus x)_{\omega}\) satisfies the condition (3). It suffices to prove the property \((a)\); in particular, we only need to verify this property for the pendant edges \(zy\) and \(uv\). We will prove it for the pendant edge \(zy\); the argument for \(uv\) is similar. Let \(zw\in E((G\setminus x)_{\omega})\) for some \(w\in V(G\setminus x)\setminus\{y\}\). If \(w=u\), then by using the condition that \(x\) is a balanced vertex on \(C_{1}\), we have \(\omega(zu)\leqslant\omega(zy)\). If \(w\neq u\), then \(w\notin C_{1}\). By applying Lemma 2.2 on the basic \(5\)-cycle \(C_{1}\), we get \(\omega(zw)\leqslant\omega(zy)\). Thus, the property holds for the graph \((G\setminus x)_{\omega}\). By the induction hypothesis, \((G\setminus x)_{\omega}\) is Cohen-Macaulay, so that \(I(G_{\omega})+(x^{m})\) is Cohen-Macaulay.
In the same way, we will prove that \(I(G_{\omega})\colon x^{m}\) is Cohen-Macaulay as follows. Since \(C_{1}\) is a basic \(5\)-cycle, at least one of the vertices \(z\) and \(u\) is a leaf in \(G_{x}=G\setminus\{x,y,v\}\). Thus, \(G_{x}\) is in the class \(\mathcal{PC}\) with \(r-1\) basic \(5\)-cycles \(C_{2},\ldots,C_{r}\) and pendant edges \(P\cup\{zu\}\). We now verify that the graph \((G_{x})_{\omega}\) satisfies the condition (3). It suffices to prove the property \((a)\); in particular, it remains to verify this property for the pendant edge \(zu\). If both vertices \(z\) and \(u\) are leaves, then there is nothing to do. Otherwise, assume \(u\) is a leaf and \(z\) is not. For any edge \(zw\) in \(E((G_{x})_{\omega})\), it follows that \(w\) is not in the basic \(5\)-cycle \(C_{1}\). Once again, applying Lemma 2.2 on the basic \(5\)-cycle \(C_{1}\), we get \(\omega(zw)\leqslant\omega(zu)\). Thus, the property holds for the graph \((G_{x})_{\omega}\), and hence \((G_{x})_{\omega}\) is Cohen-Macaulay by the induction hypothesis. Therefore, \(I(G_{\omega})\colon x^{m}\) is Cohen-Macaulay, too.
Since \(\sqrt{I(G_{\omega})+(x^{m})}=(x,I(G\setminus x))\) is Cohen-Macaulay, it forces \(G\setminus x\) to be well-covered. Since \(x\) is not an isolated vertex, it is a shedding vertex. Moreover,
\(\sqrt{I(G_{\omega})\colon x^{m}}=(y,v,I(G_{x}))=I(G)\colon x\). By Lemma 1.11, we have
\[\dim R/I(G_{\omega})=\dim R/I(G_{\omega})\colon x^{m}=\dim R/(I(G_{\omega}),x^ {m}).\]
This implies that \(I(G_{\omega})\) is Cohen-Macaulay by Lemma 1.10.
_Case_ 2: \(\deg_{G}(x)>2\). Let \(N(x)=\{y,v,y_{1},\ldots,y_{k}\}\). Since \(m\geqslant m_{i}\) for all \(i\) by Lemma 2.2, we obtain
\[I(G_{\omega})\colon x^{m}=(y^{m},v^{m})+(y_{1}^{m_{1}},\ldots,y_{k}^{m_{k}},I (G\setminus\{x,y,v\})_{\omega})\]
and
\[I(G_{\omega})+(x^{m})=(x^{m},x^{m_{1}}y_{1}^{m_{1}},\ldots,x^{m_{k}}y_{k}^{m_{ k}},I(G\setminus x)_{\omega}).\]
We now will prove these ideals are Cohen-Macaulay. Let \(w\) be a new vertex and \(H\) be a graph which is obtained from \(G\) by removing two edges \(xy\) and \(xv\) but adding a new edge \(xw\). It means that \(H\) is a graph with \(V(H)=V(G)\cup\{w\}\) and \(E(H)=(E(G)\cup\{xw\})\setminus\{xy,xv\}\). Since \(C_{1}\) is a basic \(5\)-cycle and \(\deg_{G}(x)>2\), then \(\deg_{G}(y)=\deg_{G}(v)=2\). Thus, \(w,y,v\) are leaves in \(H\). Then \(H\) is in the class \(\mathcal{PC}\) with \(r-1\) basic \(5\)-cycles and pendant edges \(P\cup\{xw,uv\}\). Now we define the weight edge on
\(H\) by sending
\[e\mapsto\begin{cases}m&\text{if }e=xw,\\ \omega(e)&\text{otherwise},\end{cases}\]
which is still denoted by \(\omega\).
We now verify that \(H_{\omega}\) satisfies the condition (3). It suffices to prove the property \((a)\). In order to do this, it remains to verify this property for the pendant edges \(xw\) and \(uv\). This follows from Lemma 2.2 (for the pendant edge \(e=uv\)) and from the way we defined the weight on \(H\), namely \(\omega(xw)=m\) (for the pendant edge \(e=xw\)). Thus, by the induction hypothesis, \(H_{\omega}\) is Cohen-Macaulay. Since \(xw\) is a pendant edge of \(H_{\omega}\), so that \(x^{m}w^{m}\in I(H_{\omega})\), by Lemma 1.10 we have that \(I(H_{\omega})\colon w^{m}\) is Cohen-Macaulay. Note that
\[I(H_{\omega})\colon w^{m}=(x^{m},x^{m_{1}}y_{1}^{m_{1}},\dots,x^{m_{k}}y_{k}^{ m_{k}},I(G\setminus x)_{\omega})=I(G_{\omega})+(x^{m}).\]
Hence, \(I(G_{\omega})+(x^{m})\) is Cohen-Macaulay.
In order to prove \(I(G_{\omega})\colon x^{m}\) is Cohen-Macaulay we use the same technique as above. Let \(H^{\prime}\) be a graph with \(V(H^{\prime})=V(G\setminus\{y,v\})\cup\{w\}\) and \(E(H^{\prime})=E(G\setminus\{y,v\})\cup\{xw\}\). Next, define the weight edge on \(H^{\prime}\), by sending
\[e\mapsto\begin{cases}m&\text{if }e=xw,\\ \omega(e)&\text{otherwise},\end{cases}\]
which is still denoted by \(\omega\).
With this setting, \(H^{\prime}_{\omega}\) is Cohen-Macaulay by the same argument as the previous case. Thus,
\[I(H^{\prime}_{\omega})\colon x^{m}=(w^{m},y_{1}^{m_{1}},\dots,y_{k}^{m_{k}},I((G\setminus\{x,y,v\})_{\omega}))\]
is Cohen-Macaulay by Lemma 1.10. In particular, \((y_{1}^{m_{1}},\dots,y_{k}^{m_{k}},I((G\setminus\{x,y,v\})_{\omega}))\) is Cohen-Macaulay, and hence \(I(G_{\omega})\colon x^{m}=(y^{m},v^{m})+(y_{1}^{m_{1}},\dots,y_{k}^{m_{k}},I((G\setminus\{x,y,v\})_{\omega}))\) is Cohen-Macaulay as well.
Finally, since
\[\sqrt{I(G_{\omega})\colon x^{m}}=(y,v,y_{1},\dots,y_{k})+I(G\setminus\{x,y,v \})=I(G)\colon x\]
and
\[\sqrt{I(G_{\omega})+(x^{m})}=(I(G),x),\]
by the same argument as in Case 1, we have
\[\dim R/I(G_{\omega})=\dim R/I(G_{\omega})\colon x^{m}=\dim R/(I(G_{\omega}), x^{m}).\]
Therefore, \(I(G_{\omega})\) is Cohen-Macaulay by Lemma 1.10, and the proof is complete.
**Example 2.8**.: The edge-weighted graph \(G_{\omega}\) as depicted in Figure 3 is Cohen-Macaulay.
Indeed, we see from the figure that the underlying graph \(G\) is in the class \(\mathcal{PC}\) with three pendant edges \(fg,hi,jk\) and two basic \(5\)-cycles \(C_{1}:\ x\to y\to z\to u\to v\to x\) and \(C_{2}:\ a\to b\to c\to d\to e\to a\). Note that \(z\) is a balanced vertex on \(C_{1}\) and \(c\) is the one on \(C_{2}\); they satisfy the condition \((b)\) in Theorem 2.7.
We can easily verify that the conditions \((a)-(c)\) in Theorem 2.7 hold for \(G_{\omega}\), and thus \(G_{\omega}\) is Cohen-Macaulay.
**Acknowledgment**.: This work is partially supported by NAFOSTED (Vietnam) under the grant number 101.04-2023.36.
|
2302.03755 | Particle-level Simulation of Magnetorheological Fluids: A Fully-Resolved
Solver | Magnetorheological fluids (MRFs) are smart materials consisting of
micro-scale magnetizable particles suspended in a carrier fluid. The
rheological properties of a MRF can be changed from a fluid-state to a
solid-state upon the application of an external magnetic field. This study
reports the development of a particle-level simulation code for magnetic solid
spheres moving through an incompressible Newtonian carrier fluid. The numerical
algorithm is implemented within an open-source finite-volume solver coupled
with an immersed boundary method (FVM-IBM) to perform fully-resolved
simulations. The particulate phase of the MRF is modeled using the discrete
element method (DEM). The resultant force acting on the particles due to the
external magnetic field is computed based on the Clausius-Mossotti
relationship. The fixed and mutual dipole magnetic models are then used to
account for the magnetic (MAG) interactions between particles. Several
benchmark flows were simulated using the newly-developed FVM-IBM-DEM-MAG
algorithm to assess the accuracy and robustness of the calculations. | C. Fernandes, Salah A. Faroughi | 2022-12-24T00:56:56Z | http://arxiv.org/abs/2302.03755v1 | # Particle-level Simulation of Magnetorheological Fluids: A Fully-Resolved Solver
###### Abstract
Magnetorheological fluids (MRFs) are smart materials consisting of micro-scale magnetizable particles suspended in a carrier fluid. The rheological properties of a MRF can be changed from a fluid-state to a solid-state upon the application of an external magnetic field. This study reports the development of a particle-level simulation code for magnetic solid spheres moving through an incompressible Newtonian carrier fluid. The numerical algorithm is implemented within an open-source finite-volume solver coupled with an immersed boundary method (FVM-IBM) to perform fully-resolved simulations. The particulate phase of the MRF is modeled using the discrete element method (DEM). The resultant force acting on the particles due to the external magnetic field (i.e., magnetostatic polarization force) is computed based on the Clausius-Mossotti relationship. The fixed and mutual dipole magnetic models are then used to account for the magnetic (MAG) interactions between particles. Several benchmark flows were simulated using the newly-developed FVM-IBM-DEM-MAG algorithm to assess the accuracy and robustness of the calculations. First, the sedimentation of two spheres in a rectangular duct containing a Newtonian fluid is computed without the presence of an external magnetic field, mimicking the so-called drafting-kissing-tumbling (DKT) phenomenon. The numerical results obtained for the DKT case study are verified against published data from the scientific literature. Second, we activate both the magnetostatic polarization and the dipole-dipole forces and resultant torques between the spheres for the DKT case study. Next, we study the robustness of the FVM-IBM-DEM-MAG solver by computing multi-particle chaining (i.e., particle assembly) in a two-dimensional (2D) domain for area volume fractions of 20% (260 particles) and 30% (390 particles) under vertical and horizontal magnetic fields. Finally, the fourth computational experiment describes the multi-particle chaining in a three-dimensional (3D) domain allowing to study fully-resolved MRF simulations of 580 magnetic particles under vertical and horizontal magnetic fields.
keywords: Magnetorheological Fluids, Computational Fluid Dynamics, Discrete Element Method, Immersed Boundary Method, OpenFOAM, LIGGGHTS
## 1 Introduction
Magnetic particle suspensions, also known as magnetorheological fluids (MRFs), appear in a variety of applications [1; 2]. In the traditional fluid engineering field, the magnetorheological effect has been applied
to develop mechanical actuators and dampers [3; 4]. In the newly emerged bio-engineering and drug delivery fields, there have been strong efforts to synthesize magnetic-based multifunctional particles [5; 6]. In addition, in the field of natural resource and environmental engineering [7; 8], precious metals or harmful substances dissolved in sea water are captured and recovered using magnetic particles subjected to an externally applied magnetic field.
When a magnetically polarizable particle is subjected to an externally applied magnetic field, it acquires a dipole moment and becomes magnetized [9; 10]. A magnetized particle starts interacting with neighboring magnetized particles, leading to the formation of chain-like structures or clusters of particles aligned with the magnetic field direction (i.e., particle assembly) [11]. To date, numerous studies have investigated the dynamics of MRFs under magnetic fields. Hayes et al. [12] studied magnetic particles in microchannels and described reversible, self-assembled, regularly spaced structures that form when the particles are exposed to an external magnetic field. From their study, they concluded that magnetic particles can be used in an extensive variety of on-chip applications and unique microfabrication techniques, automating laboratory procedures. Melle and Martin [13] also developed a chain model for magnetorheological fluids in rotating magnetic fields. Through single-chain simulations as well as experimental measurements, they showed that the chain shape and orientation depend strongly on the magnetic permeability of the particles \(\mu_{p}\). Subsequently, Keaveny and Maxey [14] developed a finite-dipole model, where the magnetization of a particle is represented as a distribution of current density. This was proposed to estimate the magnetic forces between magnetic particles accurately and efficiently such that it is applicable to systems with thousands of particles. In their model, the induced magnetization of a particle is represented as a localized Gaussian distribution of current that is added as a source term in the Poisson equation for the vector potential of the magnetic field [11]. The procedure yields very accurate solutions to collinear three-body problems. However, the scheme is not as accurate for other configurations with a large number of particles, because more information from the far field (e.g., quadrupole moments) needs to be included. Han et al. [15] presented a two-stage computational procedure for the numerical modelling of magnetorheological fluids. At the first stage, the particle dynamics is modelled using the discrete element method (DEM), whereas the hydrodynamic forces on the particles are approximated simply using Stokes' law (i.e., the fluid flow is not explicitly resolved) [16]. At the second stage, they deployed a combined approach using the lattice Boltzmann method (LBM) and DEM to fully resolve the fluid fields, particle-particle, and particle-fluid hydrodynamic interactions. However, they raised an issue regarding the accuracy of the magnetic interaction models while retaining computational simplicity and efficiency. Subsequently, Ke et al. [17] developed a fully-resolved scheme based on lattice Boltzmann, immersed boundary, and discrete element methods (LBM-IBM-DEM) to simulate the behavior of magnetic particles moving in a fluid subject to an external magnetic field. The numerical results obtained showed that the LBM-IBM-DEM scheme was able to capture the major physical features of magnetic particles' motion in a fluid. Specifically, they showed that particles first form fragmented chains along the magnetic field direction. These chain-like clusters then continue to grow and align, and eventually approach a near steady-state configuration. Additionally, it was shown that increasing the magnetic field strength leads to faster particle motion and merging between short chains. Recently, Zhang et al.
[18] developed a two-phase
numerical simulation method using the LBM-IBM-DEM approach to investigate the yielding phenomena during the start-up process of an MRF flowing through a microchannel under a transverse uniform magnetic field. The yielding of the MRF flowing through the microchannel was studied as a proxy for the deformation of the chains composed of magnetic particles. They showed that the yielding of a single chain at different inlet velocities was regular. However, for a multi-chain system where chains are entangled, the yielding behavior showed no predictable regularity. Zhou et al. [19] also studied the motion of magnetic particles in a 3D microchannel flow modulated by an alternating gradient magnetic field. They used the LBM-IBM numerical simulation scheme, and showed that magnetic particles initially agglomerate due to their magnetic dipole force and then move together with the carrier fluid. They also showed that, in an alternating gradient magnetic field, magnetic particles oscillate along the flow direction, disturb the flow field, and increase the overall turbulence intensity. Leps and Hartzell [20] modeled the dynamics of MRFs using the DEM alone, leveraging the open-source LIGGGHTS [21] software. Their algorithm is based on the mutual-dipole model to allow for the use of a large number of magnetic particles with several close neighbors while keeping a good trade-off between model accuracy and computational cost. Using accurate particle size distributions, high-heritage contact models, and an uncoupled fluid model, Leps and Hartzell [20] were able to match the experimentally derived yield stress results for MRFs more closely than when using mono-disperse particle size distributions. Lastly, Tajfirooz et al. [22] presented an Eulerian-Lagrangian approach for simulating the magneto-Archimedes separation of neutrally buoyant non-magnetic spherical particles within MRFs. A four-way coupled point-particle method [23, 24] was employed, where all relevant interactions between an external magnetic field, a magnetic fluid and immersed particles were taken into account. First, the motion of rigid spherical particles in a magnetic liquid was studied in single- and two-particle systems. It was shown that the numerical results of single- and two-particle configurations were in good agreement with detailed experimental results on particle position. Subsequently, the magneto-Archimedes separation of particles with different mass densities in many-particle systems interacting with the fluid was also studied. It was concluded that history effects and inter-particle interactions significantly influence the levitation dynamics of particles and have a detrimental impact on the separation performance.
Most of the aforementioned numerical studies of MRFs focus on the formation of magnetorheological structures using the simplified Stokes drag law and the dipole-dipole interaction model, excluding the hydrodynamic interactions between particles and higher-order mutual magnetic interactions. The flow characteristics and chain-formation features induced by coupled hydrodynamic and magnetic interactions are therefore still missing in the literature. This is mainly due to the lack of proper numerical models that can take into account both inter-particle magnetic and hydrodynamic interactions, in addition to other relevant attributes (e.g., particle type, size, etc.), in a fully coupled algorithm.
In this work, we develop a fully-resolved simulation algorithm using a combination of the finite-volume, immersed boundary and discrete element methods to couple both hydrodynamic and magnetic interactions among magnetic particles suspended in Newtonian fluids. The newly-developed algorithm, so-called FVM-IBM-DEM-MAG solver, is able to describe flows with suspended magnetic particles immersed in a fluid subject to an external magnetic field. The magnetic force exerted on the particles is computed using the gradient of
the magnetic field strength, which is obtained from the imposed external magnetic field [17]. The magnetic interactions between the particles are implemented using a mutual dipole model [20], allowing the magnetic fields of other particles to contribute to the magnetization and motion of the particle under consideration. The presented numerical algorithm has several advantages, specifically: (i) it is based on open-source libraries, OpenFOAM and LIGGGHTS, which allows the extension of the algorithm to other applications (e.g., simulation of viscoelastic fluids with suspended magnetic particles); and (ii) it employs a direct particle-level simulation methodology to resolve both hydrodynamic and magnetic interactions, allowing accurate predictions of the flow patterns and particle assembly. We focus on simulations of spherical particles suspended in a Newtonian fluid in order to introduce the numerical algorithm and study its feasibility for extension to more complex flows, involving fluids with non-linear rheological behavior and particles with different shapes.
The remainder of this work is structured as follows. In Section 2, we present the underlying physics and mathematical formulation describing the motion of magnetic particles in a Newtonian fluid. In Section 3, we present the particle-level numerical methodology leading to the FVM-IBM-DEM-MAG solver that couples the continuum and discrete phases in MRFs. In Section 4, we present four case studies with different levels of complexity to test the developed algorithm, namely the motion and interaction of two magnetic spheres settling in an incompressible Newtonian fluid under an external magnetic field, and the 2D and 3D flow behaviors of random arrays of magnetic spheres immersed in an incompressible Newtonian fluid. Finally, in Section 5, we summarize the main conclusions of this work.
## 2 Underlying Physics
The magnetorheological fluids (MRFs) considered in this study contain micro-scale magnetic particles with negligible Brownian motion suspended in a non-magnetic incompressible Newtonian carrier fluid. MRFs deform and self-organize into mesoscopic structures depending on the internal (e.g., particle concentration) and external stimuli (e.g., temperature, flow, and magnetic fields). Among these stimuli, the application of magnetic fields is shown to provide instant action and contactless control of the mesoscopic physical structures, causing a reversible transition from a fluid-like to a solid-like state. When subjected to an external magnetic field, particle assembly occurs, which provides the fluid with the ability to transmit force. In that state, the effective viscosity of the fluid increases to the extent of becoming a viscoelastic solid. The particle assembly promoted by the magnetic field can be controlled, i.e., destroyed, deformed, or delayed. To accurately predict the particle assembly and chain formation in MRFs, the coupled interactions between the magnetic field, the fluid, and the particles must be resolved. The dynamics of MRFs, thus, present a multi-physics problem across different scales. In this section, we present the underlying physics governing the dynamics of MRFs made of rigid micro-scale magnetic spheres suspended in non-magnetic incompressible Newtonian carrier fluids under a static magnetic field.
### Magnetostatic fields
Macroscopic electromagnetic phenomena are described using Maxwell's fundamental equations [1, 9]. In this study, we assume the quantities of interest (e.g., magnetic field strength) do not vary with time, and there is no interaction between electric and magnetic fields. Therefore, we can decouple electrostatic and
magnetostatic fields, and consider the problem of magnetostatic field with no free electric currents. The Maxwell's equations for magnetostatic cases reduce to,
\[\nabla\cdot\mathbf{B}=0, \tag{1}\]
\[\nabla\times\mathbf{H}=0, \tag{2}\]
where \(\mathbf{B}\) is the magnetic flux density, and \(\mathbf{H}\) is the magnetic field strength. Here \(\nabla\) denotes the gradient operator, \(\nabla\cdot\) denotes the divergence operator, and \(\nabla\times\) denotes the curl tensor operations. For a linear isotropic domain (matrix) with a constant magnetic permeability, \(\mu\), the constitutive equation relating the two field quantities, \(\mathbf{B}\) and \(\mathbf{H}\), reads as,
\[\mathbf{B}=\mu\mathbf{H}, \tag{3}\]
where
\[\mu=\begin{cases}\mu_{p}&\text{ in the particle domain},\\ \mu_{f}&\text{ in the fluid domain},\end{cases} \tag{4}\]
with \(\mu_{p}\) and \(\mu_{f}\) denoting the particles and base fluid's magnetic permeability, respectively. Notice that \(\mu\) is discontinuous at fluid-particle interfaces, and, therefore, should be evaluated by following a similar interpolation of material properties as the one used in the level set method [11; 25]. Hereafter, consider that the total computational domain is represented by \(\Omega=\Omega_{s}\cup\Omega_{f}\), where \(\Omega_{s}\) is solid ("solid particles") domain, and \(\Omega_{f}\) is the fluid domain. The total domain, solid and fluid boundaries are represented by \(\partial\Omega\), \(\partial\Omega_{s}\) and \(\partial\Omega_{f}\), respectively.
To solve the first-order differential equations involving the two magnetic field quantities, Eqs. (1) and (2), we first convert them into a second-order differential equation involving only one magnetic field quantity. For that purpose, Eq. (2) admits the existence of a scalar potential, \(\phi\), such that,
\[\mathbf{H}=-\nabla\phi, \tag{5}\]
which can be substituted into Eq (1) with the aid of Eq. (3) to yield the following second-order differential equation,
\[\nabla^{2}\left(\mu\phi\right)=0. \tag{6}\]
#### 2.1.1 Magnetic forces and torques
In order to describe the particle motion and the flow around it influenced by a magnetic field, a relationship between the applied magnetic field and the resultant force acting on the particles is needed. This force, known as magnetostatic polarization force [17], on particle \(i\) can be evaluated as [17],
\[\mathbf{F}_{i}^{me}=\int_{\Omega_{s}}\left(\mu_{f}\chi_{e}H\nabla H\right)\ d \Omega_{s}, \tag{7}\]
where \(\chi_{e}\) stands for the particle's magnetic susceptibility given by the Clausius-Mossotti relationship [26],
\[\chi_{e}=\frac{3\mu_{p}}{3+\mu_{p}}. \tag{8}\]
The torque generated by the magnetostatic polarization force on particle \(i\) is computed as,
\[\mathbf{T}_{i}^{mc}=\int_{\Omega_{s}}\left(\mu_{f}\chi_{e}H\times H\right)\ d \Omega_{s}. \tag{9}\]
Another fundamental interaction in MRFs is the dipole-dipole force between particles, each of which behaves as a magnetic dipole with two opposite point poles [27]. Therefore, in MRFs, particle motion is affected not only by an external magnetic field, but also by other nearby magnetized particles, since each particle has a permanent magnetic moment, \(\mathbf{m}\). The dipole-dipole interactions between particles \(i\) and \(j\) result in a dipole-dipole inter-particle magnetic force (\(\mathbf{F}_{ij}^{d-d}\)) and torque (\(\mathbf{T}_{ij}^{d-d}\)) that are calculated using the dipole-dipole contact model [28; 29] as,
\[\mathbf{F}_{ij}^{d-d}=\frac{3}{r^{5}}\left[\left(\mathbf{m}_{i}\cdot\mathbf{m }_{j}\right)\mathbf{r}-\frac{5}{r^{2}}\left(\mathbf{m}_{i}\cdot\mathbf{r} \right)\left(\mathbf{m}_{j}\cdot\mathbf{r}\right)\mathbf{r}+\left(\mathbf{m}_ {j}\cdot\mathbf{r}\right)\mathbf{m}_{i}+\left(\mathbf{m}_{i}\cdot\mathbf{r} \right)\mathbf{m}_{j}\right], \tag{10}\]
and
\[\mathbf{T}_{ij}^{d-d}=-\frac{1}{r^{3}}\left[\left(\mathbf{m}_{i}\times\mathbf{ m}_{j}\right)-\frac{3}{r^{2}}\left(\mathbf{m}_{j}\cdot\mathbf{r}\right) \left(\mathbf{m}_{i}\times\mathbf{r}\right)\right], \tag{11}\]
where \(\mathbf{m}_{i}\) and \(\mathbf{m}_{j}\) are the magnetic moment vectors of the two particles, \(\mathbf{r}\) is the separation vector between the two particles, and \(r\) is the magnitude of the separation vector \(\mathbf{r}\). For MRFs consisting of \(N\) particles, a direct evaluation of the dipole-dipole interaction alone requires \(O(N^{2})\) operations. This puts a severe computational constraint on the number of particles that can be simulated with a direct computation of the inter-particle dipole-dipole force. To compute the magnetic moment of each particle, \(\mathbf{m}\), the fixed dipole model [15] or the mutual dipole model [20] can be used. In dilute MRFs (i.e., low concentration of magnetic particles), it is often acceptable to use the magnetic moment calculated from the background magnetic field (i.e., fixed dipole model). However, in concentrated MRFs (i.e., high concentration of magnetic particles), the induced magnetic fields from magnetized neighboring particles begin to have a significant effect on the particles' magnetic moment vectors. Therefore, for accuracy in force calculations, a more complex model (i.e., the mutual dipole model) should be leveraged, as sketched below.
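For concreteness, the pairwise evaluation of Eqs. (10) and (11) can be sketched as follows. This is only an illustrative snippet: the function name and the NumPy-based setting are ours and not part of the solver, and the prefactor convention simply follows the equations as written above.

```python
import numpy as np

def dipole_dipole_force_torque(m_i, m_j, r_vec):
    """Pairwise dipole-dipole force and torque on particle i, following Eqs. (10)-(11).

    m_i, m_j : (3,) magnetic moment vectors of particles i and j
    r_vec    : (3,) separation vector between the two particles
    Prefactors follow the paper's convention as printed in Eqs. (10)-(11).
    """
    r = np.linalg.norm(r_vec)
    mi_r = np.dot(m_i, r_vec)
    mj_r = np.dot(m_j, r_vec)

    # Eq. (10): inter-particle dipole-dipole force
    force = (3.0 / r**5) * (np.dot(m_i, m_j) * r_vec
                            - 5.0 / r**2 * mi_r * mj_r * r_vec
                            + mj_r * m_i
                            + mi_r * m_j)
    # Eq. (11): inter-particle dipole-dipole torque
    torque = -(1.0 / r**3) * (np.cross(m_i, m_j)
                              - 3.0 / r**2 * mj_r * np.cross(m_i, r_vec))
    return force, torque
```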
#### 2.1.2 Fixed dipole model
When the effect of the extra magnetic field generated by neighboring magnetized particles on the particle dynamics is negligible, it is safe to assume that each particle is magnetized only by the externally applied magnetic field. Therefore, each particle is considered as a point dipole and the magnetic forces between the particles are pairwise only. In this model, the magnetic moment of particle \(i\) is given by [15],
\[\mathbf{m}_{i}=4\pi r^{3}\frac{\chi-1}{\chi+2}\mathbf{H}_{0}, \tag{12}\]
where \(\chi=\mu_{p}/\mu_{f}\) is the relative susceptibility of the particles over the carrier fluid, and \(\mathbf{H}_{0}\) is the magnetic field strength of the externally applied uniform magnetic field. Notice that as the carrier fluid is assumed to be non-magnetic, its permeability is the same as that of a vacuum, i.e., \(\mu_{f}=\mu_{0}=4\pi\times 10^{-7}\) [Tm/A], where T is Tesla, m is meter, and A is Ampere. This model is accurate when the two particles are far apart, and it loses accuracy when the separation distance of the particles decreases. The accuracy of the model also
depends on the relative susceptibility, \(\chi\). It has been shown by Keaveny and Maxey [14] that, at \(\chi=5\), the fixed dipole model underestimates the maximum attractive force by around 35%, whereas it overestimates the maximum repulsive force by 50% or more, and the errors increase for larger \(\chi\) values.
#### 2.1.3 Mutual dipole model
The mutual dipole model [14] allows for the magnetic fields of the neighboring particles to contribute to the magnetization of the particle under consideration. A particle, thus, is subjected not only to the primary magnetization due to the external magnetic field, but also to a secondary magnetization from the other particles' magnetic fields. Considering the mutual magnetization of \(N\) magnetizable particles with their centres at \(\mathbf{x}_{i}\) (\(i=1,\cdots,N\)) in a uniform magnetic field with strength \(\mathbf{H}_{0}\), the magnetic moment of the particle \(i\), \(\mathbf{m}_{i}\), is given by [20],
\[\mathbf{m}_{i}=4\pi r^{3}\frac{\chi-1}{\chi+2}\left[\mathbf{H}_{0}+\mathbf{H} (\mathbf{x}_{i})\right]\qquad(i=1,\cdots,N), \tag{13}\]
where \(\mathbf{H}(\mathbf{x}_{i})\) represents the total secondary magnetic field strength generated by other magnetized particles. The total secondary magnetic field strength can be expressed as [20],
\[\mathbf{H}(\mathbf{x}_{i})=\sum_{j,j\neq i}^{N}\mathbf{H}_{j}(\mathbf{m}_{j}, \mathbf{r}_{ij})=\sum_{j,j\neq i}^{N}\frac{1}{4\pi}\frac{3\hat{\mathbf{r}}_{ ij}(\mathbf{m}_{j}\cdot\hat{\mathbf{r}}_{ij})-\mathbf{m}_{j}}{r_{ij}^{3}}, \tag{14}\]
with \(\mathbf{r}_{ij}=\mathbf{x}_{i}-\mathbf{x}_{j}\), \(r_{ij}=|\mathbf{r}_{ij}|\), and \(\hat{\mathbf{r}}_{ij}=\mathbf{r}_{ij}/r_{ij}\). Once the \(\mathbf{m}_{i}\) values are computed for all particles, the inter-particle dipole-dipole force and torque between any two pairs are obtained using Eqs. (10) and (11), respectively.
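Because Eq. (13) couples all particle moments through the secondary field of Eq. (14), the moments have to be obtained self-consistently. A minimal sketch is given below; the fixed-point iteration strategy and all names are illustrative assumptions (the coupled linear system could equally be assembled and solved directly), not the implementation used in the solver.

```python
import numpy as np

def mutual_dipole_moments(x, H0, chi, radius, n_iter=50, tol=1e-10):
    """Fixed-point iteration for the mutual dipole model, Eqs. (13)-(14).

    x      : (N, 3) particle centre positions
    H0     : (3,) externally applied magnetic field strength
    chi    : relative susceptibility mu_p / mu_f
    radius : particle radius r in Eq. (13)
    """
    N = x.shape[0]
    coeff = 4.0 * np.pi * radius**3 * (chi - 1.0) / (chi + 2.0)
    m = np.tile(coeff * H0, (N, 1))          # start from the fixed-dipole estimate, Eq. (12)

    for _ in range(n_iter):
        m_old = m.copy()
        H_sec = np.zeros_like(x)             # secondary field of Eq. (14)
        for i in range(N):
            for j in range(N):
                if i == j:
                    continue
                r_ij = x[i] - x[j]
                r = np.linalg.norm(r_ij)
                r_hat = r_ij / r
                H_sec[i] += (3.0 * r_hat * np.dot(m_old[j], r_hat) - m_old[j]) / (4.0 * np.pi * r**3)
        m = coeff * (H0 + H_sec)             # Eq. (13)
        if np.max(np.abs(m - m_old)) < tol:
            break
    return m
```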
### Incompressible fluid flow
In the MRFs considered in this study, the carrier fluid is considered to be a non-magnetic incompressible Newtonian fluid. The governing equations for the flow of these fluids consist of the continuity equation,
\[\nabla\cdot\mathbf{u}=0\quad\text{in}\quad\Omega_{f}, \tag{15}\]
and the Cauchy momentum equation,
\[\rho_{f}\left(\frac{\partial}{\partial t}+\mathbf{u}\cdot\nabla\right) \mathbf{u}=-\nabla p+\eta_{S}\nabla^{2}\mathbf{u}\quad\text{in}\quad\Omega_{ f}. \tag{16}\]
Here \(\rho_{f}\) and \(\mathbf{u}\) are the fluid density and velocity vector, respectively, \(t\) is the time, \(p\) is the pressure, and \(\eta_{S}\) is the viscosity of the Newtonian fluid. To complete the strong mathematical form describing the flow of MRFs, the following initial and boundary conditions are considered,
\[\begin{cases}\mathbf{u}(\mathbf{x},t=0)=\mathbf{u}_{0}(\mathbf{x})\quad\text {in}\quad\Omega_{f},\\ \mathbf{u}(\mathbf{x},t)=\mathbf{u}_{\partial\Omega}\quad\text{on}\quad \partial\Omega_{f},\\ \mathbf{u}(\mathbf{x},t)=\mathbf{u}_{i}\quad\text{on}\quad\partial\Omega_{s}, \\ \left(-p\mathbf{I}+\eta_{S}\left(\nabla\mathbf{u}+\nabla\mathbf{u}^{T}\right) \right)\cdot\hat{\mathbf{n}}=\boldsymbol{\sigma}_{\partial\Omega_{s}}\quad \text{on}\quad\partial\Omega_{s}.\end{cases} \tag{17}\]
In Eq. (17), \(\hat{\mathbf{n}}\) is the outward normal unit vector to \(\partial\Omega_{s}\), \(\boldsymbol{\sigma}_{\partial\Omega_{s}}\) is the stress vector acting from the fluid on the solid body surface, and \(\mathbf{u}_{i}\) is the (unknown) velocity of the solid-fluid interface. The initial velocity
is required to satisfy Eq. (15), and the boundary velocity \(\mathbf{u}_{\partial\Omega}\) should satisfy the compatibility condition (last equation in Eq. 17) at all times.
The motion of magnetic particles is strongly affected by short-range and long-range hydrodynamic forces (drag, lift, etc.), and the resultant torques, when they are dispersed in a viscous incompressible fluid. The hydrodynamic force acting on the surface of particle \(i\) can be obtained using [30; 31],
\[{\bf F}_{i}^{h}=\int_{\Omega_{s}}\left(-\nabla p+\eta_{S}\nabla^{2}{\bf u} \right)\ d\Omega_{s}. \tag{18}\]
The resultant hydrodynamic torques on particle \(i\), denoted by \({\bf T}_{i}^{h}\), can be then calculated by taking the cross product between the position vector \({\bf r}\) (pointing from the fluid cell centroid to the particle centroid) and the total force from Eq. (18) that reads as,
\[{\bf T}_{i}^{h}=\int_{\Omega_{s}}\left\{{\bf r}\times\left(-\nabla p+\eta_{S} \nabla^{2}{\bf u}\right)\,\right\}\,d\Omega_{s}. \tag{19}\]
The force contribution arising from pressure does not give rise to any torque contribution, due to symmetry of spherical magnetic particles. Thus, normal forces acting perpendicular to the particle surface, such as pressure, do not induce any torque. This is not the case if particle shape departs from the spherical shape (e.g., spheroids). In MRFs, particles also experience the buoyancy force, denoted by \({\bf F}_{i}^{g}\), which is given by the weight of the displaced fluid. The buoyancy force can be calculated as,
\[{\bf F}_{i}^{g}=\int_{\Omega_{s}}(\rho_{f}{\bf g})\ d\Omega_{s}, \tag{20}\]
where \({\bf g}\) is the gravitational acceleration vector.
### Particle transient motion
The transient motion of the dispersed magnetic particles (i.e., the solid phase) can be modeled using Newton's second law of motion as,
\[m_{i}\frac{d\mathbf{U}_{i}^{p}}{dt}=\sum_{j=1}^{n_{i}^{c}}\mathbf{F}_{ij}^{c}+\sum_{j=1}^{n_{i}^{cut}}\mathbf{F}_{ij}^{d-d}+\mathbf{F}_{i}^{me}+\mathbf{F}_{i}^{h}+\mathbf{F}_{i}^{g}, \tag{21}\]
and
\[I_{i}\frac{d\boldsymbol{\omega}_{i}^{p}}{dt}=\sum_{j=1}^{n_{i}^{c}}\mathbf{T}_{ij}^{c}+\sum_{j=1}^{n_{i}^{cut}}\mathbf{T}_{ij}^{d-d}+\mathbf{T}_{i}^{me}+\mathbf{T}_{i}^{h}, \tag{22}\]
for the conservation of linear and angular momentum of the particle \(i\) with mass \(m_{i}\) and moment of inertia \(I_{i}\), respectively. Here, \({\bf U}_{i}^{p}\) and \({\mathbf{\omega}}_{i}^{p}\) denote the translational and angular velocities of particle \(i\), respectively, \({\bf F}_{ij}^{c}\) and \({\bf T}_{ij}^{c}\) are the contact force and contact torque resulting from the particle-particle and particle-wall interactions (with the number of total contacts, \(n_{i}^{c}\), for particle \(i\)) that can be calculated using different contact models [32; 33], \({\bf F}_{ij}^{d-d}\) and \({\bf T}_{ij}^{d-d}\) are the dipole-dipole inter-particle magnetic force and torque for a number of \(n_{i}^{cut}\) possible interactions in the admissible cut-off region, respectively, \({\bf F}_{i}^{me}\) and \({\bf T}_{i}^{me}\) are the magnetostatic polarization force and torque due to the external magnetic field, respectively, \({\bf F}_{i}^{h}\) and \({\bf T}_{i}^{h}\) are the hydrodynamic force and torque acting on particle \(i\), respectively, and \({\bf F}_{i}^{g}\) is the buoyancy force.
We leverage DEM, developed by Cundall and Strack [34] and implemented in LIGGGHTS open-source library [21], to model the transient motion of dispersed magnetic particles described by Eqs. (21) and (22). In DEM, multiple search algorithms are employed to identify contacting pairs of discrete particles [35], and
different contact models are developed to integrate various mechanisms and effects such as elasticity, plasticity, viscoelasticity, friction, cohesion, damage, fracture, etc. in the contact points [21]. In this study, we adopted the spring-dashpot contact model that can be extended to other non-linear models depending on the chosen stiffness and damping parameters as function of the particle overlap displacement [33]. In this model, the total contact force between particle \(i\) and particle \(j\) is calculated using [36],
\[\mathbf{F}_{ij}^{c}=(\mathbf{F}_{ij}^{c})_{n}+(\mathbf{F}_{ij}^{c})_{t}, \tag{23}\]
where \((\mathbf{F}_{ij}^{c})_{n}\) is the normal contact force,
\[(\mathbf{F}_{ij}^{c})_{n}=-k_{n}\,\delta_{n}\,\mathbf{n}-\gamma_{n}\,(\mathbf{ U}_{ij}^{p})_{n}, \tag{24}\]
and \((\mathbf{F}_{ij}^{c})_{t}\) is the tangential contact force,
\[(\mathbf{F}_{ij}^{c})_{t}=\min\left(-k_{t}\,\delta_{t}-\gamma_{t}\,(\mathbf{U} _{ij}^{p})_{t},\,\beta_{s}\,|(\mathbf{F}_{ij}^{c})_{n}|\,\frac{\delta_{t}}{| \delta_{t}|}\right), \tag{25}\]
with
\[\delta_{t}{}^{(n)}=\delta_{t}{}^{(n-1)}+(\mathbf{U}_{ij}^{p})_{t}\,\Delta t. \tag{26}\]
In Eqs. (24), (25) and (26), \(\mathbf{n}\) is the unit vector in the normal direction, \(k_{n}\) and \(k_{t}\) are the elastic stiffness for normal and tangential contacts, respectively, \(\gamma_{n}\) and \(\gamma_{t}\) denote the damping coefficients in normal and tangential directions, respectively, \(\delta_{n}\) is normal overlap displacement between two particles, \((\mathbf{U}_{ij}^{p})_{n}\) and \((\mathbf{U}_{ij}^{p})_{t}\) are relative velocities in normal and tangential directions of particle \(i\) relative to particle \(j\), respectively, with the relative velocity defined as \(\mathbf{U}_{ij}^{p}=\mathbf{U}_{i}^{p}-\mathbf{U}_{j}^{p}\), \(\beta_{s}\) is the sliding friction coefficient, \(\delta_{t}^{(n)}\) and \(\delta_{t}^{(n-1)}\) are the tangential overlap at the current and previous step, and \(\Delta t\) is the time step. The resultant contact torque on particle \(i\) due to its contact with particle \(j\), denoted by \(\mathbf{T}_{ij}^{c}\), can be then calculated by taking the cross product between the total contact force from Eq. (23) and the position vector leading to,
\[\mathbf{T}_{ij}^{c}=\mathbf{F}_{ij}^{c}\,\,\times\,\,(\mathbf{x}_{c}-\mathbf{ x}_{i}), \tag{27}\]
where \(\mathbf{x}_{c}\) and \(\mathbf{x}_{i}\) are the position of contact point and particle \(i\) centroid, respectively.
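A minimal sketch of the spring-dashpot contact force of Eqs. (23)-(26) is given below, assuming the Coulomb-type cap in Eq. (25) limits the magnitude of the tangential force; the function and variable names are illustrative only and do not reproduce the LIGGGHTS implementation.

```python
import numpy as np

def spring_dashpot_contact(delta_n, n_hat, u_rel, delta_t_prev, dt,
                           k_n, k_t, gamma_n, gamma_t, beta_s):
    """Linear spring-dashpot contact force of Eqs. (23)-(26) (illustrative sketch).

    delta_n      : normal overlap between the two particles (scalar)
    n_hat        : (3,) unit normal at the contact point
    u_rel        : (3,) relative velocity U_i^p - U_j^p at the contact
    delta_t_prev : (3,) accumulated tangential overlap from the previous step
    """
    u_n = np.dot(u_rel, n_hat) * n_hat            # normal relative velocity
    u_t = u_rel - u_n                             # tangential relative velocity

    f_n = -k_n * delta_n * n_hat - gamma_n * u_n  # normal force, Eq. (24)

    delta_t = delta_t_prev + u_t * dt             # tangential overlap update, Eq. (26)
    f_t = -k_t * delta_t - gamma_t * u_t          # tangential spring-dashpot force
    f_t_max = beta_s * np.linalg.norm(f_n)        # sliding (Coulomb) limit in Eq. (25)
    if np.linalg.norm(f_t) > f_t_max and np.linalg.norm(delta_t) > 0.0:
        f_t = -f_t_max * delta_t / np.linalg.norm(delta_t)

    return f_n + f_t, delta_t                     # total contact force, Eq. (23)
```

The contact torque of Eq. (27) then follows from the cross product of the returned total force with the vector \(\mathbf{x}_{c}-\mathbf{x}_{i}\).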
## 3 Numerical Methodology
This section presents the numerical formulation for an algorithm using the FVM, IBM and DEM that is able to efficiently handle the rigid body motion of magnetic spherical particles surrounded by a Newtonian fluid. The algorithm considers a fictitious domain formulation, which provides a rigorous basis for the immersed boundary (IB) implementation performed in the open source framework code \(CFDEMcoupling\)[31, 37, 38]. The open source IB solver originally developed by Hager et al. [37] is modified and improved for this study to take into account both hydrodynamic and magnetic interactions between the fluid continuum phase and the particulate disperse phase in a fully coupled manner. Algorithm 1 summarizes the so-called FVM-IBM-DEM-MAG solver describing the solution procedure of the fluid phase and magnetic field equations, the DEM approach to handle the particle's motion, and the IBM scheme to fully couple the continuum phase with the particulate phase.
Step 1 (at time \(t=0\)):

(a) Set the initial and boundary conditions.

(b) Send the initial particle positions and velocities from the DEM solver to the CFD solver.

Step 2 (at time \(t=t+\Delta t\)):

(a) Compute the particle volume fraction.

(b) Perform dynamic mesh refinement.

(c) Calculate the loads on the particles (hydrodynamic and magnetic external forces, \(\mathbf{F}_{i}^{h}\) and \(\mathbf{F}_{i}^{me}\), respectively, and torques, \(\mathbf{T}_{i}^{h}\) and \(\mathbf{T}_{i}^{me}\), respectively; particle-particle contact force and torque, \(\mathbf{F}_{ij}^{c}\) and \(\mathbf{T}_{ij}^{c}\), respectively; magnetic dipole-dipole force and torque, \(\mathbf{F}_{ij}^{d-d}\) and \(\mathbf{T}_{ij}^{d-d}\), respectively; and buoyancy force \(\mathbf{F}_{i}^{g}\)), given by

\[\mathbf{F}_{i}^{h}=\sum_{c\in\overline{T}_{h}}(-\nabla p+\eta_{S}\nabla^{2}\mathbf{u})(c)\cdot V(c),\qquad\mathbf{F}_{i}^{me}=\sum_{c\in\overline{T}_{h}}(\mu_{0}\chi_{e}\mathbf{H}\nabla\mathbf{H})(c)\cdot V(c),\]

\[\mathbf{T}_{i}^{h}=\sum_{c\in\overline{T}_{h}}\left[\mathbf{r}(c)\times(-\nabla p+\eta_{S}\nabla^{2}\mathbf{u})(c)\right]\cdot V(c),\qquad\mathbf{T}_{i}^{me}=\sum_{c\in\overline{T}_{h}}(\mu_{0}\chi_{e}\mathbf{H}\times\mathbf{H})(c)\cdot V(c),\]

\[\mathbf{F}_{ij}^{d-d}=\frac{3}{r^{5}}\left[\left(\mathbf{m}_{i}\cdot\mathbf{m}_{j}\right)\mathbf{r}-\frac{5}{r^{2}}\left(\mathbf{m}_{i}\cdot\mathbf{r}\right)\left(\mathbf{m}_{j}\cdot\mathbf{r}\right)\mathbf{r}+\left(\mathbf{m}_{j}\cdot\mathbf{r}\right)\mathbf{m}_{i}+\left(\mathbf{m}_{i}\cdot\mathbf{r}\right)\mathbf{m}_{j}\right],\]

\[\mathbf{T}_{ij}^{d-d}=-\frac{1}{r^{3}}\left[\left(\mathbf{m}_{i}\times\mathbf{m}_{j}\right)-\frac{3}{r^{2}}\left(\mathbf{m}_{j}\cdot\mathbf{r}\right)\left(\mathbf{m}_{i}\times\mathbf{r}\right)\right],\qquad\mathbf{F}_{i}^{g}=\sum_{c\in\overline{T}_{h}}(\rho_{f}\mathbf{g})(c)\cdot V(c),\]

while \(\mathbf{F}_{ij}^{c}\) and \(\mathbf{T}_{ij}^{c}\) are calculated using the non-linear elastic Hertz-Mindlin contact model.

(d) Solve the Newton-Euler equations (Velocity-Verlet integration) to obtain the new particle positions and the linear and angular velocities (in \(\Omega_{s}\)),

\[m_{i}\frac{d\mathbf{U}_{i}^{p}}{dt}=\sum_{j=1}^{n_{i}^{c}}\mathbf{F}_{ij}^{c}+\sum_{j=1}^{n_{i}^{cut}}\mathbf{F}_{ij}^{d-d}+\mathbf{F}_{i}^{me}+\mathbf{F}_{i}^{h}+\mathbf{F}_{i}^{g},\qquad I_{i}\frac{d\boldsymbol{\omega}_{i}^{p}}{dt}=\sum_{j=1}^{n_{i}^{c}}\mathbf{T}_{ij}^{c}+\sum_{j=1}^{n_{i}^{cut}}\mathbf{T}_{ij}^{d-d}+\mathbf{T}_{i}^{me}+\mathbf{T}_{i}^{h}.\]

(e) Solve the fluid governing equations subjected to the external magnetic field (in \(\Omega_{f}\)),

\[\nabla^{2}\left(\mu\phi\right)=0,\qquad\nabla\cdot\mathbf{u}=0,\qquad\rho_{f}\left(\frac{\partial\mathbf{u}}{\partial t}+\mathbf{u}\cdot\nabla\mathbf{u}\right)=-\nabla p+\eta_{S}\nabla^{2}\mathbf{u}.\]

(f) Impose the rigid-body motion of the particles on the fluid velocity field.

(g) Correct the velocity and pressure fields.

**Algorithm 1** Fully-resolved FVM-IBM-DEM-MAG algorithm to model magnetorheological fluids.
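To make the data flow of Algorithm 1 explicit, a schematic driver for one coupling step is sketched below. The `cfd` and `dem` objects and all of their methods are hypothetical placeholders standing in for the corresponding OpenFOAM/LIGGGHTS operations; they are not the actual CFDEMcoupling API.

```python
def fvm_ibm_dem_mag_step(cfd, dem, dt):
    """One coupling step of Algorithm 1 (schematic sketch with hypothetical objects)."""
    cfd.compute_particle_volume_fraction(dem.positions())  # step 2(a)
    cfd.refine_mesh_near_particles()                       # step 2(b)
    loads = cfd.compute_particle_loads()                   # step 2(c): hydrodynamic, magnetic, buoyancy
    dem.apply_loads(loads)                                 # DEM adds contact and dipole-dipole terms
    dem.integrate(dt)                                      # step 2(d): Velocity-Verlet, Eqs. (21)-(22)
    cfd.solve_magnetic_potential()                         # step 2(e): Eq. (6)
    cfd.solve_flow(dt)                                     # step 2(e): Eqs. (15)-(16), PISO loop
    cfd.impose_rigid_body_motion(dem.velocities())         # step 2(f)
    cfd.project_and_correct(dt)                            # step 2(g): Eqs. (36)-(38)
```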
At time \(t=0\), the fluid and particle initial and boundary conditions are read from the case study input files (step 1(a) in Algorithm 1). Additionally, the DEM solver sends the initial particle positions and velocities to the CFD solver (step 1(b) in Algorithm 1). At time \(t=t+\Delta t\), the numerical algorithm starts with the location of the magnetic particles, saving the cell ID of the centre position of each particle. This procedure then allows the computation of the particle volume fraction in each cell (step 2(a) in Algorithm 1). Subsequently, as shown in Fig. 1, the algorithm uses dynamic mesh refinement near the particles' surface (\(\partial\Omega_{s}\)) to accurately capture the fluid (domain \(\Omega_{f}\)) forces developed in those regions (step 2(b) in Algorithm 1).
Using the fluid solution from the last time-step in the regions marked by the particle volume fraction, the hydrodynamic, magnetostatic polarization, and buoyancy forces, \(\mathbf{F}_{i}^{h},\mathbf{T}_{i}^{h},\mathbf{F}_{i}^{me},\mathbf{T}_{i}^{me}, \mathbf{F}_{i}^{g}\), that act on each particle's surface are computed (step 2(c) in Algorithm 1). The hydrodynamic force acting on the surface of particle \(i\), denoted by \(\mathbf{F}_{i}^{h}\) and defined by Eq. (18), can be rewritten as,
\[\int_{\Omega_{s}}\left(-\nabla p+\eta_{S}\nabla^{2}\mathbf{u}\right)\ d\Omega_{s}= \int_{\Omega}\left(-\nabla p+\eta_{S}\nabla^{2}\mathbf{u}\right)\delta_{\Omega }\ d\Omega, \tag{28}\]
where \(\mathbf{x}\) is an arbitrary point within the domain \(\Omega\), and \(\delta_{\Omega}=1\) if \(\mathbf{x}\in\Omega_{s}\), otherwise \(\delta_{\Omega}=0\). Assuming that \(T_{h}\) is a decomposition of \(\Omega\) consisting of computational cells \(c\), we can approximate Eq. (28) as,
\[\int_{\Omega}\left(-\nabla p+\eta_{S}\nabla^{2}\mathbf{u}\right)\delta_{\Omega }\ d\Omega=\sum_{c\in\overline{T}_{h}}\int_{V(c)}\left(-\nabla p+\eta_{S} \nabla^{2}\mathbf{u}\right)\delta_{\Omega}\ dV(c), \tag{29}\]
where \(V(c)\) is the volume of cell \(c\). Notice that for notation purposes we use the parentheses (\(c\)) to evaluate a function on cell \(c\). Numerical integration of Eq. (29) leads to the final form of the hydrodynamic forces acting on the particle,
\[\mathbf{F}_{i}^{h}=\sum_{c\in\overline{T}_{h}}\left(-\nabla p+\eta_{S}\nabla^ {2}\mathbf{u}\right)(c)\cdot V(c), \tag{30}\]
where \(\overline{T}_{h}\) is the set of all cells covered, in full or in part, by a magnetic particle. The resultant hydrodynamic torque on particle \(i\), denoted by \(\mathbf{T}_{i}^{h}\) and defined by Eq. (19), can be then approximated by taking the cross
Figure 1: Typical immersed boundary computational mesh configuration using dynamic refinement of the control-volumes (cells) near the particlesβ surface. \(\Omega_{f}\) and \(\Omega_{s}\) are the fluid and solid domains, respectively, with boundaries denoted by \(\partial\Omega_{f}\) and \(\partial\Omega_{s}\).
product between the position vector \(\mathbf{r}\) and the total force from Eq. (30) that reads as,
\[\mathbf{T}_{i}^{h}=\sum_{c\in\overline{T}_{h}}\left\{\mathbf{r}(c)\times\left(- \nabla p+\eta_{S}\nabla^{2}\mathbf{u}\right)(c)\right\}\cdot V(c). \tag{31}\]
Similarly, the magnetostatic polarization force and torque, defined by Eqs. (7) and (9), are approximated numerically as,
\[\mathbf{F}_{i}^{me}=\sum_{c\in\overline{T}_{h}}(\mu_{0}\chi_{e}\mathbf{H}\nabla\mathbf{H})(c)\cdot V(c), \tag{32}\]
and
\[\mathbf{T}_{i}^{me}=\sum_{c\in\overline{T}_{h}}(\mu_{0}\chi_{e}\mathbf{H} \times\mathbf{H})(c)\cdot V(c). \tag{33}\]
The buoyancy force, defined by Eq. (20), can be also approximated numerically by integrating the fluid density (\(\rho_{f}\)) over the volume of the solid region in the mesh, i.e., \(V(c)\) with \(c\in\overline{T}_{h}\), to obtain the total displaced fluid mass, i.e., \(\rho_{f}V(c)\) with \(c\in\overline{T}_{h}\). Next, by multiplying the fluid mass by the gravitational acceleration vector (\(\mathbf{g}\)), the buoyancy force can be calculated as,
\[\mathbf{F}_{i}^{g}=\sum_{c\in\overline{T}_{h}}(\rho_{f}\mathbf{g})(c)\cdot V (c). \tag{34}\]
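The cell-wise sums of Eqs. (30), (31) and (34) amount to a simple accumulation over the cells covered by a particle. A minimal sketch is given below; the array layout and the function name are assumptions made for illustration, not the finite-volume data structures of the solver.

```python
import numpy as np

def particle_loads_from_cells(grad_p, lap_u, r_cell, vol, eta_s, rho_f, g):
    """Cell-wise accumulation of hydrodynamic and buoyancy loads, Eqs. (30), (31), (34).

    All arrays are restricted to the cells covered (fully or partly) by the particle:
    grad_p : (M, 3) pressure gradient per covered cell
    lap_u  : (M, 3) Laplacian of the velocity per covered cell
    r_cell : (M, 3) position vectors r(c) as defined in Eq. (19)
    vol    : (M,)  cell volumes V(c)
    """
    stress_term = -grad_p + eta_s * lap_u                               # (-grad p + eta_S lap u) per cell
    f_h = np.sum(stress_term * vol[:, None], axis=0)                    # Eq. (30)
    t_h = np.sum(np.cross(r_cell, stress_term) * vol[:, None], axis=0)  # Eq. (31)
    f_g = rho_f * np.sum(vol) * np.asarray(g)                           # Eq. (34)
    return f_h, t_h, f_g
```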
As the next step in the FVM-IBM-DEM-MAG algorithm, the resulting forces and torques for each particle are returned to the DEM solver. Additionally, if collision between particles or particle-wall are detected, the collision force and torque, \(\mathbf{F}_{ij}^{c}\) and \(\mathbf{T}_{ij}^{c}\), are calculated using Eqs. (23)-(27). Finally, the dipole-dipole magnetic force and torque, \(\mathbf{F}_{ij}^{d-d}\) and \(\mathbf{T}_{ij}^{d-d}\), are calculated using Eqs. (10) and (11) with either the fixed dipole model for dilute suspensions or the mutual dipole model for non-dilute suspensions to retrieve the particle dipole moment.
A data exchange model is also used to run a DEM script, which computes the particles' positions, translational and angular velocities (Eqs. (21)-(22)), using Velocity-Verlet integration [39] (step 2(d) in Algorithm 1). The particles' new positions and velocities are then transferred to the CFD solver. The CFD solver proceeds with the PISO (Pressure-Implicit with Splitting of Operators) algorithm [40] (step 2(e) in Algorithm 1), which solves the magneto-static potential equation, Eq. (6), and fluid flow governing equations, Eqs. (15)-(17). An intermediate velocity field \(\mathbf{\widehat{u}}\) is first obtained by solving the momentum balance equations, Eq. (16), and then an intermediate pressure \(\widetilde{p}\) is obtained from the continuity equation, Eq. (15), which results in a Poisson equation for the pressure correction.
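Step 2(d) advances the particles with Velocity-Verlet integration. A minimal sketch of one translational update under the total force of Eq. (21) is given below; the rotational update of Eq. (22) follows the same pattern for the angular velocity. The callable `accel_fn` and the function name are illustrative assumptions, not the CFDEMcoupling interface.

```python
def velocity_verlet_step(x, u, accel_old, accel_fn, dt):
    """One translational Velocity-Verlet update for a particle (step 2(d), Eq. (21)).

    accel_fn maps a position to the total acceleration, i.e., the sum of contact,
    dipole-dipole, magnetostatic, hydrodynamic and buoyancy forces divided by the mass.
    """
    x_new = x + u * dt + 0.5 * accel_old * dt**2      # position update
    accel_new = accel_fn(x_new)                       # forces evaluated at the new position
    u_new = u + 0.5 * (accel_old + accel_new) * dt    # velocity update
    return x_new, u_new, accel_new
```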
The next step is to correct the intermediate velocity field \(\mathbf{\widehat{u}}\) in the particle region by imposing the rigid body velocity provided by the DEM calculation (step 2(f) in Algorithm 1). This correction is equivalent to adding a body force per unit volume defined as,
\[\mathbf{f}=\rho\frac{\partial}{\partial t}(\widetilde{\mathbf{u}}-\widehat{\mathbf{u}}), \tag{35}\]
in the momentum balance equations, Eq. (16), to obtain a corrected velocity field \(\mathbf{\widetilde{u}}\). Here \(\mathbf{\widetilde{u}}=\mathbf{U}_{i}^{p}+\mathbf{\omega}_{i}\times\mathbf{r}\) is defined only for the cells within the solid body. The translational and angular velocities, \(\mathbf{U}_{i}^{p}\) and \(\mathbf{\omega}_{i}\), respectively, were previously computed in step 2(d).
The previous step introduces a discontinuity in the velocity field at the interface, giving rise to a non-zero divergence in that location. Hence, the velocity field \(\mathbf{\widetilde{u}}\) and the pressure field \(\widetilde{p}\) need to be corrected (step 2(g)
in Algorithm 1). For that purpose, \(\widetilde{\mathbf{u}}\) is projected onto a divergence-free velocity space, \(\overline{\mathbf{u}}\), by using a scalar field \(\psi\), as:
\[\overline{\mathbf{u}}=\widetilde{\mathbf{u}}-\nabla\psi, \tag{36}\]
where \(\psi\) is obtained by solving the following Poisson equation,
\[\nabla^{2}\psi=\nabla\cdot\widetilde{\mathbf{u}}. \tag{37}\]
Then \(\overline{\mathbf{u}}\) is calculated by Eq. (36). The last step is equivalent to adding a pressure force \(-\rho\frac{\nabla\psi}{\Delta t}\) in the momentum conservation equations, which requires the pressure field to be corrected by,
\[p=\widetilde{p}+\frac{\psi}{\Delta t}. \tag{38}\]
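A minimal two-dimensional sketch of the projection and pressure correction of Eqs. (36)-(38) is shown below, assuming a uniform periodic grid so that the Poisson equation for \(\psi\) can be solved with FFTs. This is purely illustrative and not the finite-volume implementation used in the solver.

```python
import numpy as np

def project_velocity(u_tilde, v_tilde, p_tilde, dx, dt):
    """Projection step of Eqs. (36)-(38) on a uniform, periodic 2D grid (illustrative)."""
    ny, nx = u_tilde.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)

    # divergence of the intermediate velocity (spectral derivatives)
    u_hat, v_hat = np.fft.fft2(u_tilde), np.fft.fft2(v_tilde)
    div_hat = 1j * KX * u_hat + 1j * KY * v_hat

    # solve lap(psi) = div(u_tilde), Eq. (37)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                      # avoid division by zero for the mean mode
    psi_hat = -div_hat / k2
    psi_hat[0, 0] = 0.0

    # u_bar = u_tilde - grad(psi), Eq. (36)
    u_bar = u_tilde - np.real(np.fft.ifft2(1j * KX * psi_hat))
    v_bar = v_tilde - np.real(np.fft.ifft2(1j * KY * psi_hat))

    psi = np.real(np.fft.ifft2(psi_hat))
    p = p_tilde + psi / dt              # pressure correction, Eq. (38)
    return u_bar, v_bar, p
```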
This new FVM-IBM-DEM-MAG solver is implemented within the \(CFDEMcoupling\)[41] framework.
## 4 Results and discussion
This section presents the validation of the proposed FVM-IBM-DEM-MAG solver against several benchmark case studies. The first case study is devoted to the sedimentation of two spheres in a rectangular duct containing a Newtonian fluid, mimicking the so-called drafting-kissing-tumbling (DKT) phenomenon. We start by turning off the external magnetic field to verify the solver's capabilities to simulate the motion and interaction of the two settling spheres. Subsequently, in the second case study, we activate both the external magnetic field and the dipole-dipole force (and resultant torque) between the spheres for the DKT problem. This case study allows us to test the implementation of the magnetic force acting on the particles induced by the external magnetic field and the nearby magnetized particles. The magnetic potential gradient is applied in the vertical and horizontal directions to verify the ability of the algorithm to predict particle chaining in both directions. The third case study tests the robustness of the FVM-IBM-DEM-MAG solver by computing multi-particle chaining with 260 and 390 spheres whose centers are located in a 2D plane. Finally, the fourth case study describes the multi-particle chaining when particles are randomly distributed in a 3D domain.
### DKT phenomenon under zero magnetic field
The objective of this test case is to simulate the motion and the interaction of two equal rigid spheres settling in a duct as shown in Fig. 2. The spherical particles are initially aligned vertically with a centre-to-centre distance equal to four particle radii. The leading sphere (i.e., the one below) is slightly off-centred to avoid the symmetric solution. In this case, we expect the simulations to reproduce the well-documented DKT phenomenon, which has been observed in laboratory experiments [42] and modeled through numerical simulations using different computational methods [17; 43; 44; 45]. This benchmark is specifically selected to test the accuracy and effectiveness of the FVM-IBM-DEM-MAG algorithm when the magnetic field is set to zero [31; 37].
The computational domain is \(\Omega=[0,1]\times[0,1]\times[0,4]\) cm\({}^{3}\). The diameter of the spheres is \(d=1/6\) cm. The initial positions of the spheres centers are \((0.5,0.5,3.5)\) and \((0.5,0.49,3.16)\), and the fluid and spheres are initially at rest. On the boundary of the channel, no-slip fluid velocity is imposed. The fluid density is \(\rho_{f}=1\) g/cm\({}^{3}\), the sphere's density is \(\rho_{s}=1.14\) g/cm\({}^{3}\), and the fluid kinematic viscosity is \(\nu=0.01\) cm\({}^{2}\)/s [30].
For the potential inter-particle contacts and particle-wall contacts, the coefficient of normal restitution, coefficient of friction, Poisson's ratio and Young's modulus are considered to be 0.97, 0.10, 0.45, and \(2\times 10^{9}\) Pa, respectively.
The numerical experiments were performed using two hexahedral meshes with initial configuration M1: \(40\times 40\times 160\) and M2: \(60\times 60\times 240\) grid cells. In addition, dynamic mesh capability (\(dynamicRefineFvMesh\)) [46] is used to refine the mesh near the solid-fluid interface at each time-step. In this work, the maxRefinement parameter (a property of the dynamic mesh method defining the maximum number of layers of refinement that a cell can experience) is equal to two layers. The simulation time-step is set to \(\Delta t=10^{-4}\) s corresponding to an average Courant number of 0.1. The total computational elapsed time for the simulations was 1h52m and 6h16m for M1 and M2, respectively, executed on a 3.00-GHz 48 cores Intel Xeon Gold 6248R CPU processor with 128 GB of RAM.
Figure 3 shows the \(z\)-component of the spheres' centers, \(z_{i}^{p}\), and the \(z\)-component of the spheres' translational velocities, \((U_{z})_{i}^{p}\), as a function of time for calculations using the M1 and M2 meshes. Additionally, the results obtained by Glowinski et al. [30] using two levels of mesh refinement, \(h_{\Omega}=1/60\) and \(h_{\Omega}=1/80\), are included for comparison purposes. Our results for the M1 and M2 meshes obtained using the newly-developed algorithm (Algorithm 1) are in good agreement with each other, indicating that the results are mesh independent. As can be observed, the particle on top (following particle) is first carried by the wake generated by the particle on the bottom (leading particle), leading to the so-called drafting phenomenon (\(0\leq t<0.14\) s). Then, the following particle's velocity increases, the distance between the two particles' centres decreases, and ultimately a contact forms between them, leading to the so-called kissing phenomenon (\(0.14<t<0.35\) s). Since the vertical configuration is unstable and the particles cannot stay attached [47], the particles start tumbling and are found side by side, which is known as the tumbling phenomenon (\(0.35<t<0.5\) s). Subsequently, the following particle passes ahead of the leading particle, causing the leading particle to deviate from the middle of the channel under the influence of the fluid's back-flows along the wall. Ultimately, the particles stagnate against the wall (\(t\approx 0.65\) s) [48].
When comparing our results with the results computed by Glowinski et al. [30] with \(h_{\Omega}=1/60\) and
Figure 2: Configuration of the drafting-kissing-tumbling (DKT) benchmark case study, where the transient motion of two spheres is considered while settling through an initially quiescent viscous fluid confined in a duct of width \(6d\) and height \(24d\), where \(d=1/6\) cm is the sphere diameter. The schematic diagram illustrates the computational domain including the coordinate system, the boundary walls, the gravitational acceleration \(g\), and the initial positions of the spheres located on \((0.5,0.5,3.5)\) and \((0.5,0.49,3.16)\).
\(h_{\Omega}=1/80\), it can be seen that both predicted similar physical behaviors but with a small discrepancy in timing. It must be noted that the drafting-kissing-tumbling (DKT) benchmark case study is a non-smooth case involving several symmetry-breaking events. Exact agreement between different numerical algorithms after the kissing phenomenon is difficult to achieve, in part due to the difficulty of achieving mesh-independent results, or due to the use of different inter-particle contact models that drastically influence the particles' positions. To show the completeness of our solution, Fig. 4 presents the particle locations and the contour distribution of the longitudinal fluid velocity, \(u_{z}\) (cm/s), obtained at the middle plane \(x=0.5\) cm for times \(t=0.01,\ 0.30,\ 0.35,\ 0.45,\ 0.50\), and \(0.65\) s with mesh M2. One can distinctly observe that the drafting (\(t=0.3\) s), kissing (\(t=0.35\) s), and tumbling (\(t=0.45\) s) phenomena are indeed taking place. The next test cases explore how the DKT benchmark case study changes when the particles are magnetized under a constant magnetic field.
Figure 3: A comparison of the \(z\)-component of the spheresβ (a) center location, and (b) translation velocity as function of time for the drafting-kissing-tumbling (DKT) benchmark case study obtained using Algorithm 1 and those computed by Glowinski et al. [30].
### DKT phenomenon under magnetic field
This computational experiment examines the effect of applying an external magnetic field on the DKT benchmark case study described in Section 4.1. We apply the external magnetic field both vertically (see Fig. 5(a)) and horizontally (see Fig. 5(b)), and explore how the magnetic field affects the sedimentation of the two magnetic spheres, i.e., the DKT phenomenon [17]. In both cases, the applied magnetic potential gradient field, \(\nabla\phi\), is set to 50 A/m. In addition, the fixed dipole model (see Eq. (12)) is employed to magnetize the particles with a relative susceptibility of \(\chi=2000\) [49].
Figure 4: The drafting-kissing-tumbling (DKT) benchmark case study simulated using Algorithm 1 with no magnetic field. The positions of spheres at \(t=0.01,\,0.30,\,0.35,\,0.45,\,0.50\) and \(0.65\) s and the contour of the longitudinal (\(z-\)component) fluid velocity, \(u_{z}\) (cm/s), at the midplane \(x=0.5\) cm are shown.
Figure 6 shows the \(z\)-component of the spheres' centers, \(z_{i}^{p}\), and the \(z\)-component of the spheres' translational velocities, \((U_{z})_{i}^{p}\), as a function of time for the calculations using mesh M2. In addition, the results of the DKT benchmark case study under no magnetic field (obtained in Section 4.1) are also shown for comparison purposes. When the external magnetic field is applied in the vertical direction, the two magnetic particles are attracted together, forming a string (\(t\approx 0.5\) s). The string lasts until the particles contact the bottom wall of the domain. On the other hand, when the external magnetic field is applied in the horizontal direction, the spherical particles do not approach each other, but instead tumble side-by-side. During the rest of the sedimentation process, the wake generated by the leading particle leads to a faster settling of the following particle (\(t>0.5\) s); see also Fig. 8 for an illustrative representation of this phenomenon.
Figure 5: Configuration of the drafting-kissing-tumbling (DKT) benchmark case study with application of an external magnetic field potential in the (a) vertical (\(z\)-axis) and (b) horizontal (\(y\)-axis) directions. The schematic diagram illustrates the computational domain including the coordinate system, the gravitational acceleration \(g\), the potential magnetic field, and the initial positions of the spheres located on \((0.5,0.5,3.5)\) and \((0.5,0.49,3.16)\).
Figure 7 presents the particles settling under a vertical magnetic field. The particle locations and the contour distribution of the longitudinal fluid velocity, \(u_{z}\) (cm/s), obtained at the midplane \(x=0.5\) cm for times \(t=0.01,\ 0.30,\ 0.35,\ 0.45,\ 0.50\), and \(0.65\) s are shown. As can be seen, the particles experience a longer drafting period, and form a tight string that does not separate during the rest of the sedimentation process.
Figure 6: A comparison of the \(z\)-component of the spheresβ (a) center location, and (b) translation velocity as function of time obtained using Algorithm 1 to simulate the drafting-kissing-tumbling (DKT) benchmark case study under no magnetic field, vertical magnetic field, and horizontal magnetic field.
Figure 8 shows the settling of the spherical particles under a horizontal magnetic field. The particle locations and the contour distribution of the longitudinal fluid velocity, \(u_{z}\) (cm/s), obtained at the midplane \(x=0.5\) cm for times \(t=0.01,\ 0.30,\ 0.35,\ 0.45,\ 0.50\), and \(0.65\) s are shown in Fig. 8. In this case, the direction of the particles' sedimentation is transverse to the magnetic field direction, and hence the particles experience a repulsive magnetic force [17]. For that reason, the particles, instead of approaching and contacting each other, just tumble as a non-kissing pair (\(t\approx 0.50\) s). Shortly after the tumble, the particles approach the vertical walls, where the external magnetic field is applied (\(t\approx 0.65\) s).
Figure 7: The change in drafting-kissing-tumbling (DKT) benchmark case study under vertical magnetic field simulated using Algorithm 1. The positions of spheres at \(t=0.01,\ 0.30,\ 0.35,\ 0.45,\ 0.50\) and \(0.65\) s, and the contour of the longitudinal (\(z-\)component) fluid velocity, \(u_{z}\) (cm/s), at the midplane \(x=0.5\) cm are shown.
### Multi-particle chaining under magnetic field: 2D
In this test case, we analyze the motion of a random array of magnetic particles whose centers are located at the midplane of a rectangular box filled with a Newtonian fluid under the influence of external magnetic fields [15; 17; 49]. Two computational domains are employed as \(\Omega_{h}=[0,4]\times[0,1]\times[0,1]\) cm\({}^{3}\) and \(\Omega_{v}=[0,1]\times[0,4]\times[0,1]\) cm\({}^{3}\) (see Fig. 9). The initial positions of the spheres centers, with diameter \(d=1/16\) cm and density of \(\rho_{s}=1.01\) g/cm\({}^{3}\), are randomly generated and constrained such that the minimum distance between particles and between the particles and walls is equal to \(1.5d\). The spheres move under the action of gravity, hydrodynamic forces, mutual dipole-dipole forces, and the applied external magnetic force [17].
Figure 8: The change in drafting-kissing-tumbling (DKT) benchmark case study under horizontal magnetic field simulated using Algorithm 1. The positions of spheres at \(t=0.01,\ 0.30,\ 0.35,\ 0.45,\ 0.50\) and \(0.65\) s, and the contour of the longitudinal (\(z-\)component) fluid velocity, \(u_{z}\) (cm/s), at the midplane \(x=0.5\) cm are shown.
Two area fractions of spheres were tested, 20% and 30%, corresponding to 260 and 390 spheres, under the effect of both vertical and horizontal magnetic fields with a magnetic potential gradient of \(\nabla\phi=50\) A/m. The fluid and the spheres are initially at rest. On the channel walls, the no-slip boundary condition is applied for the fluid velocity. A cyclic boundary condition is applied on the other boundaries. In addition, the fluid density and kinematic viscosity are set to \(\rho_{f}=1\) g/cm\({}^{3}\) and \(\nu=0.01\) cm\({}^{2}\)/s, respectively. The dipole-dipole magnetic forces and torques are calculated using the mutual dipole model, see Eq. (13), with a relative susceptibility of \(\chi=2000\) [49]. For the inter-particle contacts and particle-wall contacts, the coefficient of normal restitution, coefficient of friction, Poisson's ratio and Young's modulus are considered to be 0.90, 0.33, 0.33, and \(7\times 10^{8}\) Pa, respectively [17].
The calculations were performed on a hexahedral mesh with an initial configuration of \(128\times 32\times 32\) grid cells for the horizontal domain (M\({}_{h}\)) and \(32\times 128\times 32\) grid cells for the vertical domain (M\({}_{v}\)). Again, dynamic mesh refinement was employed in the calculations with two levels of refinement. The time-step used in the simulations is \(\Delta t=10^{-4}\) s, corresponding to a maximum Courant number of 0.1. The total computational elapsed time for the simulations was approximately 2h05m and 2h35m for the 20% and 30% particle area fractions, respectively, executed on a 3.00-GHz 48-core Intel Xeon Gold 6248R CPU processor with 128 GB of RAM.
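The random initial positions with the minimum spacing of \(1.5d\) described above can be generated, for instance, by simple rejection sampling. The sketch below is an illustrative assumption (the names and the brute-force strategy are ours), not the seeding procedure used in the solver.

```python
import numpy as np

def seed_particles(n, d, lo, hi, min_gap=1.5, rng=None, max_tries=100000):
    """Rejection sampling of particle centres with a minimum spacing (illustrative).

    n       : number of particles
    d       : particle diameter
    lo, hi  : lower/upper corners of the admissible region (already shrunk by
              the required clearance from the walls)
    min_gap : minimum centre-to-centre distance in units of d (1.5 d in this work)
    """
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    centres = []
    for _ in range(max_tries):
        c = lo + rng.random(lo.shape) * (hi - lo)          # candidate centre
        if all(np.linalg.norm(c - p) >= min_gap * d for p in centres):
            centres.append(c)
            if len(centres) == n:
                return np.array(centres)
    raise RuntimeError("could not place all particles; try a lower area fraction")
```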
Figures 10 and 11 show the snapshots of 260 and 390 particles moving in the rectangular channel under
Figure 9: Configuration of the multi-particle problem where spheres centers are located on a 2D plane and with application of an external magnetic potential field in the (a) vertical (\(y\)-axis) and (b) horizontal (\(x\)-axis) directions. The schematic diagram illustrates the computational domain including the coordinate system, the potential magnetic field, the gravitational acceleration \(g\), and random position of particles.
the effect of gravity and an external magnetic field applied in the vertical direction. At the initial instants of the simulations (\(t\leq 0.2\) s), short fragmented chains or clusters of particles are formed in the \(y-\)direction (the same as the applied external magnetic field direction). Subsequently, at later instants (\(0.4\leq t\leq 0.6\) s), the short chains start to merge together and form long chains, i.e., they form mesoscale structures made of magnetic particles with shapes and orientations comparable to the results presented by Han et al. [15], Ke et al. [17] and Ly et al. [49].
Figures 12 and 13 show the snapshots of 260 and 390 particles moving in the rectangular channel under the action of gravity and of an external magnetic field applied in the horizontal direction. Again, at the initial instants of the simulations (\(t\leq 0.2\) s), short fragmented chains or clusters of particles are formed in
Figure 11: Behavior of a random array of magnetic spheres on a 2D domain with 30% particle area fraction at \(t=0,\ 0.05,\ 0.1,\ 0.2,\ 0.4\) and \(0.6\) s under the action of gravity and an external magnetic field applied in the vertical direction.
Figure 10: Behavior of a random array of magnetic spheres on a 2D domain with 20% particle area fraction at \(t=0,\ 0.05,\ 0.1,\ 0.2,\ 0.4\) and \(0.6\) s under the action of gravity and an external magnetic field applied in the vertical direction.
the \(x-\)direction (the same as the applied external magnetic field direction). Then, at later instants (\(0.4\leq t\leq 0.6\) s), the short chains start to merge together and form long horizontally aligned chains, i.e., they form mesoscale structures made of magnetic particles with distinct shapes and orientations.
Figures 10 to 13 also show the presence of isolated magnetic particles and a number of shorter chains. Predominantly, these chains are linear, formed by head-to-tail aggregation of magnetic dipoles, but, as reported by Ke et al. [17], Mohebi et al. [50] and Fermigier and Gast [51], thick particle clusters are also observed due to the lateral merging of the linear chains.
### Multi-particle chaining under magnetic field: 3D
In this subsection, we analyze the robustness of the proposed FVM-IBM-DEM-MAG solver by studying the chain formation in MRFs within a three-dimensional (3D) domain. A random array of magnetic spheres is placed in a rectangular box filled with a Newtonian fluid under the influence of gravity and external magnetic fields applied in different directions [15]. The computational domain employed was \(\Omega=[0,2]\times[0,2]\times[0,1]\) cm\({}^{3}\) (see Fig. 14). The diameter of the spheres is \(d=1/16\) cm. The initial positions of the spheres' centers are randomly generated with a restriction such that the minimum distance between particles and between the particles and walls is greater than \(1.5d\). The spheres move under the action of gravity, hydrodynamic forces,
Figure 12: Behavior of a random array of magnetic spheres on a 2D domain with 20% particle area fraction at \(t=0,\,0.05,\,0.1,\,0.2,\,0.4\) and \(0.6\) s under the action of gravity and an external magnetic field applied in the horizontal direction.
Figure 13: Behavior of a random array of magnetic spheres on a 2D domain with 30% particle area fraction at \(t=0,\,0.05,\,0.1,\,0.2,\,0.4\) and \(0.6\) s under the action of gravity and an external magnetic field applied in the horizontal direction.
mutual dipole-dipole forces, and the applied external magnetic force. The sphere volume fraction was fixed at 1.85%, corresponding to 580 spheres. We considered an external magnetic field with magnetic gradient potential \(\nabla\phi=50\) A/m applied vertically or horizontally. The fluid and the spheres are initially at rest. On the channel walls, the no-slip boundary condition is imposed for the fluid velocity. A cyclic boundary condition is applied on the other boundaries. The fluid and particle densities are \(\rho_{f}=1\) g/cm\({}^{3}\) and \(\rho_{s}=1.01\) g/cm\({}^{3}\), respectively. The fluid kinematic viscosity is \(\nu=0.01\) cm\({}^{2}\)/s. The dipole-dipole magnetic forces and torques are calculated using the mutual dipole model, see Eq. (13), with a relative susceptibility of \(\chi=2000\)[49]. For the inter-particle contacts and particle-wall contacts, the coefficient of normal restitution, coefficient of friction, Poisson's ratio and Young's modulus are considered to be 0.90, 0.33, 0.33, and \(7\times 10^{8}\) Pa, respectively [17].
The calculations were performed on a hexahedral mesh with an initial configuration of \(64\times 64\times 32\) grid cells. Again, dynamic mesh refinement was employed in the calculations with maxRefinement = 2. The time-step used in the simulations is \(\Delta t=10^{-4}\) s, corresponding to a maximum Courant number of 0.1. The total computational elapsed time for the simulations was approximately 18h12m, executed on a 3.00-GHz 48-core Intel Xeon Gold 6248R CPU with 128 GB of RAM.
Figure 14: Configuration of the multi-particle problem where the sphere centers are located in the 3D spatial domain, with an external magnetic potential field applied in the (a) vertical (\(z\)-axis) and (b) horizontal (\(x\)-axis) directions. The schematic diagram illustrates the computational domain, including the coordinate system, the potential magnetic field, the gravitational acceleration \(g\), and the random positions of the particles.

Figures 15 and 16 depict the evolution of the particles at six time instants for the two directions of the imposed external magnetic potential field. It can be seen that with the application of the magnetic field, the particles become magnetized and acquire a magnetic dipole moment [15], which promotes the particles to aggregate and form short fragmented chains (\(t\leq 1\) s). As time advances, these short chains merge together and form longer chains (i.e., mesoscopic structures) that align in the direction of the applied magnetic field [15].
Figure 15: Behavior of a random array of magnetic spheres in a 3D domain with 1.85% particle volume fraction at \(t=0,\,0.5,\,1,\,1.5,\,2\) and \(3\) s under the action of gravity and an external magnetic field applied in the vertical direction.

Figure 16: Behavior of a random array of magnetic spheres in a 3D domain with 1.85% particle volume fraction at \(t=0,\ 0.5,\ 1,\ 1.5,\ 2\) and 3 s under the action of gravity and an external magnetic field applied in the horizontal direction.
## 5 Conclusions
A numerical formulation for fully-resolved simulation of magnetorheological fluids (MRF) consisting of solid magnetic particles suspended in a Newtonian carrier fluid was presented. The implementation was carried out by extending the open-source \(CFDEMcoupling\) framework with a force calculation at the particle surfaces due to the applied external magnetic field, and with the implementation of the fixed and mutual dipole-dipole magnetic models to account for the magnetic interactions between the particles. The overall algorithm solves a second-order differential equation for the magnetic potential field, followed by the flow equations, including the continuity and momentum balance equations, and an immersed boundary algorithm to model the flow around the discrete magnetic particles present in the flow domain. This approach guarantees a tight coupling between the dynamics of the fluid and the magnetic solid discrete phase. The coupling is provided by the calculation of the net hydrodynamic and magnetic forces and torques exerted by the fluid on the solid particles. The algorithm subsequently uses the discrete element method to model the particle motion, comprising linear and rotational motions, as well as the particles' magnetic moments, which in turn provides new boundary conditions for the fluid domain.
The accuracy and robustness of the proposed FVM-IBM-DEM-MAG algorithm were evaluated using four benchmark studies. First, for the sedimentation of two spheres in a rectangular duct containing a Newtonian fluid without the presence of an external magnetic field (mimicking the drafting-kissing-tumbling, DKT, phenomenon), the particle velocities and positions were compared with numerical data available in the literature and good agreement was obtained. The velocity contour profiles of the particles falling through the Newtonian fluid distinctly showed several symmetry-breaking physical aspects of the non-smooth DKT phenomenon. We also demonstrated the capability of the algorithm to predict the dynamics of two magnetic particles falling under the action of gravity and an external magnetic field, i.e., the DKT benchmark case but with the magnetic force calculations activated. For a vertical magnetic field, the particles experience a longer drafting period and form a tight string which does not separate during the rest of the sedimentation process. For a horizontal magnetic field, the particles just tumble as a non-kissing pair and approach the vertical walls of the domain, where the external magnetic field is applied. The FVM-IBM-DEM-MAG solver was also used to study multi-particle chaining when particles are placed randomly on a 2D plane. Two area fractions of spheres were tested, 20% and 30%, corresponding to 260 and 390 spheres, under the effect of gravity and a vertical or horizontal magnetic field. The snapshots of the particle locations showed that, at the initial instants, short fragmented chains or clusters of particles are formed. As time advances, the short chains merge together and form longer column-like chains, always aligned with the direction of the externally imposed magnetic field. Finally, the robustness of the FVM-IBM-DEM-MAG solver was tested in a 3D domain, where an array of 580 randomly distributed magnetic particles was subjected to gravity and a horizontal or vertical magnetic field. Again, the snapshots of the particle locations demonstrated the formation of long column-like chains in the direction of the applied magnetic field.
In summary, the results presented in this study show that the newly developed code can accurately predict the flow patterns and particle assembly in MRF for a number of benchmark problems.
## 6 Acknowledgements
C. Fernandes acknowledges the support by FEDER funds through the COMPETE 2020 Programme and National Funds through FCT (Portuguese Foundation for Science and Technology) under the projects UID-B/05256/2020 and UID-P/05256/2020.
Salah A. Faroughi would like to acknowledge support by National Science Foundation Partnership for Research and Education in Materials (PREM) (award no. DMR-2122041).
|
2308.16465 | Haplotype frequency inference from pooled genetic data with a latent
multinomial model | In genetic studies, haplotype data provide more refined information than data
about separate genetic markers. However, large-scale studies that genotype
hundreds to thousands of individuals may only provide results of pooled data,
where only the total allele counts of each marker in each pool are reported.
Methods for inferring haplotype frequencies from pooled genetic data that scale
well with pool size rely on a normal approximation, which we observe to produce
unreliable inference when applied to real data. We illustrate cases where the
approximation breaks down, due to the normal covariance matrix being
near-singular. As an alternative to approximate methods, in this paper we
propose exact methods to infer haplotype frequencies from pooled genetic data
based on a latent multinomial model, where the observed allele counts are
considered integer combinations of latent, unobserved haplotype counts. One of
our methods, latent count sampling via Markov bases, achieves approximately
linear runtime with respect to pool size. Our exact methods produce more
accurate inference over existing approximate methods for synthetic data and for
data based on haplotype information from the 1000 Genomes Project. We also
demonstrate how our methods can be applied to time-series of pooled genetic
data, as a proof of concept of how our methods are relevant to more complex
hierarchical settings, such as spatiotemporal models. | Yong See Foo, Jennifer A. Flegg | 2023-08-31T05:17:26Z | http://arxiv.org/abs/2308.16465v1 | # Haplotype frequency inference from pooled genetic data with a latent multinomial model
###### Abstract
In genetic studies, haplotype data provide more refined information than data about separate genetic markers. However, large-scale studies that genotype hundreds to thousands of individuals may only provide results of pooled data, where only the total allele counts of each marker in each pool are reported. Methods for inferring haplotype frequencies from pooled genetic data that scale well with pool size rely on a normal approximation, which we observe to produce unreliable inference when applied to real data. We illustrate cases where the approximation breaks down, due to the normal covariance matrix being near-singular. As an alternative to approximate methods, in this paper we propose exact methods to infer haplotype frequencies from pooled genetic data based on a latent multinomial model, where the observed allele counts are considered integer combinations of latent, unobserved haplotype counts. One of our methods, latent count sampling via Markov bases, achieves approximately linear runtime with respect to pool size. Our exact methods produce more accurate inference over existing approximate methods for synthetic data and for data based on haplotype information from the 1000 Genomes Project. We also demonstrate how our methods can be applied to time-series of pooled genetic data, as a proof of concept of how our methods are relevant to more complex hierarchical settings, such as spatiotemporal models.
_Keywords:_ haplotype frequency estimation; latent multinomial; Markov basis; Markov chain Monte Carlo; pooled DNA
## 1 Introduction
In large-scale genetic studies, individuals are genotyped at multiple genetic markers, often for the purpose of studying genetic association. These markers may exhibit mutational change, the most common being single nucleotide polymorphisms (SNPs), where nucleotide variations of single bases are called alleles (Wright, 2005). In order to reduce genotyping costs, DNA data of up to hundreds of individuals may be pooled into several groups before genotyping, instead of determining the sequence of alleles for each individual separately. As a result, we only retain the allele counts of each SNP for each pool, and lose information about the configuration of alleles over SNPs. Apart
from data that is pooled during genotyping, pooled results can also come from studies where data is partially reported. Even if individual-level genotyping is performed, the results may be summarised such that only pooled data over individual markers is available.
SNPs that are close to each other are often correlated, resulting in limited variation of haplotypes (combinations of SNP alleles in a genetic region) (Wright, 2005). Rather than analysing SNPs separately, haplotypes provide finer information when associating genetic data to phenotypes (observable traits of an organism) (Tam et al., 2019). In this paper, we address the statistical inverse problem of inferring the frequencies of haplotypes given pooled genetic data, i.e. pooled allele counts of each marker. Some previous methods rely on enumerating all possible haplotype assignments (Ito et al., 2003; Kirkpatrick et al., 2007; Iliadis et al., 2012), but they are only applicable to small pool sizes (\(\leq 20\) haplotype samples per pool). As genetic studies can have up to hundreds of samples per pool (Zhang et al., 2008), methods that scale well with pool size are needed. An example of such an approach is sparse optimisation, which seeks haplotype frequency vectors that are compatible with the observed allele frequencies and have only a few nonzero entries (Jajamovich et al., 2013; Zhou et al., 2019). This reflects the reality that given a sequence of markers, only a few out of the exponentially many possible haplotypes are present in a population (Patil et al., 2001). However, it is not straightforward to quantify uncertainties of the inferred frequencies, which impedes downstream statistical inference. There are also statistical methods that avoid enumerating haplotype assignments by using a normal approximation (Zhang et al., 2008; Kuk et al., 2009; Pirinen, 2009), thereby achieving computational runtimes that are fairly insensitive to pool size. The authors claim that the error introduced by the normal approximation is negligible for large pool sizes due to the central limit theorem. In particular, it is the _multivariate_ central limit theorem that applies, which requires the covariance matrix to be non-singular for the probability density to be finite. However, some haplotype frequencies can give rise to singular covariance matrices, which causes the normal approximation to break down. This issue has not been previously acknowledged in the works that use the normal approximation, bringing the reliability of their methods into question.
To address this issue, we develop two exact Bayesian methods to perform haplotype frequency estimation for large pools of genetic data, in order to test whether approximate methods give results that are comparable to exact methods. Our first method enumerates all haplotype assignments using a branch-and-bound algorithm, whereas the second method treats the counts of each haplotype for each pool as latent variables to be inferred. Although the first method does not scale well with pool size, we demonstrate its utility for an example over 8 haplotypes with pools up to 100 samples each. On the other hand, the runtime of our second method scales well; its runtime is approximately linear with respect to pool size. To scale our methods with the number of markers, we incorporate partition ligation (Niu et al., 2002). When dealing with a long sequence of markers, the partition ligation procedure estimates frequencies of partial haplotypes over short segments of markers, and subsequently stitches the segments back in a recursive manner. This avoids having to perform inference on too many haplotypes simultaneously.
We formulate both of our methods under a _latent multinomial_ framework, where the counts of each haplotype for each pool are modelled as latent multinomial counts that are unobserved, and the haplotype frequencies are modelled as multinomial probabilities. The observed allele counts of each marker in each pool are subsequently modelled as integer combinations of the latent counts. For our first exact method, we marginalise out these latent counts exactly, resulting in an enumeration-based approach. In the analysis of mark-recapture data (Link et al., 2010; Schofield and Bonner, 2015), where animals are captured and released multiple times, the latent multinomial model has been used to handle the fact that the capture history of each individual is only partially observable and potentially erroneous. The authors treat the latent counts as discrete parameters, and sample them with a Markov chain Monte Carlo (MCMC) scheme. This is an exact inference method for the latent multinomial model, which we adopt for our second exact method.
We compare the performance of our two exact methods with an approximate counterpart of our first method, along with approximate methods from literature; the different methods are detailed in Section 2. To carry out the comparisons, we apply these methods to a simulation study based on synthetic data, and an example based on data from the 1000 Genomes Project (The 1000 Genomes Project Consortium et al., 2015) in Section 3. We demonstrate that our exact methods produce more reliable inference without resorting to approximation, at the cost of longer computational runtimes. We also illustrate how our proposed methods can be applied in hierarchical settings e.g. time-series modelling or spatiotemporal modelling, which has not been previously done for haplotype frequency estimation on pooled genetic data. Finally, we discuss the implications of our findings in Section 4.
## 2 Methods
We aim to perform inference on population haplotype frequencies over \(M\) biallelic markers (i.e. each marker can take one of two possible alleles). For each marker, we represent the allele that occurs with higher frequency (major allele) as \(0\), and the allele that occurs with lower frequency (minor allele) as \(1\). A haplotype is represented by a string of \(M\) binary digits. Suppose we have a set of \(H\) _input haplotypes_, where \(2\leq H\leq 2^{M}\), such that the haplotypes present in the population form a subset of the input haplotypes. The genetic data is divided into \(N\) pools, where pool \(i\) consists of \(n_{i}\) haplotype samples for \(i=1,\ldots,N\). Let \(\mathbf{z}_{i}\coloneqq(z_{i1},\ldots,z_{iH})\) denote the number of occurrences of each input haplotype in pool \(i\) for \(i=1,\ldots,N\). Assuming that the haplotype samples are unrelated, we have that
\[\mathbf{z}_{i}|\mathbf{p}\sim\mathrm{Mult}(n_{i};\mathbf{p}), \tag{1}\]
where \(\mathbf{p}\coloneqq(p_{1},\ldots,p_{H})\) are the population haplotype frequencies. Since some input haplotypes may be absent from the population, we allow the entries of \(\mathbf{p}\) to be zero.
However, we do not directly observe the haplotype counts \(\mathbf{z}_{i}\). Instead, for each pool \(i\), we observe the numbers of samples belonging to various subsets of haplotypes, and treat \(\mathbf{z}_{i}\) as latent counts. For example, suppose that there are \(M=3\) markers and we have prior knowledge to exclude
haplotype 111 from the input haplotypes. Observing the number of samples with a minor allele at the first marker is then equivalent to observing the number of samples whose haplotypes are in the subset \(\{100,101,110\}\). Suppose for each pool \(i\), we observe \(R_{i}\) counts arranged as a vector \(\mathbf{y}_{i}\coloneqq(y_{i1},\ldots,y_{iR_{i}})\). The observed count vector \(\mathbf{y}_{i}\) is related to the latent count vector \(\mathbf{z}_{i}\) through a \(R_{i}\times H\) binary matrix \(\mathbf{A}_{i}\) by the linear system \(\mathbf{y}_{i}=\mathbf{A}_{i}\mathbf{z}_{i}\). The matrices \(\mathbf{A}_{i}\) are called _configuration matrices_. Each row of a configuration matrix is determined by the haplotypes associated with the corresponding observed count. Continuing the previous example, if for each marker we observe the number of samples with a minor allele, then each column of the configuration matrix matches the binary representation of the corresponding haplotype. In general, the configuration matrix may be different for each pool, depending on the subsets of haplotypes accounted by the observed counts for that pool. This is relevant for meta-analyses, where the genetic markers that each study reports on are not all the same.
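To make the construction concrete, the following sketch (our illustration, using a hypothetical list of input haplotypes) builds the configuration matrix for the common case where the observed counts are per-marker minor-allele counts, so that each column of \(\mathbf{A}_{i}\) is simply the binary representation of the corresponding haplotype:

```python
import numpy as np

# Hypothetical input haplotypes over M = 3 markers (111 excluded by prior knowledge)
haplotypes = ["000", "100", "010", "001", "110", "101", "011"]

# Row m of A indicates which haplotypes carry a minor allele at marker m,
# so column h of A is the binary representation of haplotype h.
A = np.array([[int(h[m]) for h in haplotypes] for m in range(3)])

# Latent haplotype counts z for one pool of n = 10 samples
z = np.array([3, 2, 1, 1, 2, 0, 1])

y = A @ z  # observed minor-allele counts per marker: [4 4 2]
```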
The distribution of \(\mathbf{y}_{i}|\mathbf{p}\) is known as a _latent multinomial distribution_ (Link et al., 2010). A direct calculation of the probability mass function \(p(\mathbf{y}_{i}|\mathbf{p})\) requires finding all latent counts \(\mathbf{z}_{i}\) that are compatible with the observed counts \(\mathbf{y}_{i}\), i.e. solving the system
\[\mathbf{A}_{i}\mathbf{z}_{i} =\mathbf{y}_{i}, \tag{2}\] \[z_{i1}+\cdots+z_{iH} =n_{i},\] (3) \[z_{ih} \geq 0\quad\text{for }h=1,\ldots,H \tag{4}\]
over nonnegative integers \(z_{i1},\ldots,z_{iH}\). Solving the system (2)-(4) is considered computationally intensive for large pool sizes \(n_{i}\)(Zhang et al., 2008; Kuk et al., 2009). We review two methods in the literature that avoid this computation by using a normal approximation, and propose alternative approaches of handling \(p(\mathbf{y}_{i}|\mathbf{p})\) without resorting to approximations.
### Existing approaches
According to the central limit theorem, the observed counts \(\mathbf{y}_{i}\) are asymptotically normally distributed as the pool size \(n_{i}\) increases (Zhang et al., 2008). In particular, the distribution \(\mathbf{y}_{i}|\mathbf{p}\) is approximately multivariate normal:
\[\mathbf{y}_{i}|\mathbf{p}\approx\mathcal{N}\!\left(n_{i}\mathbf{A}_{i} \mathbf{p},n_{i}\mathbf{A}_{i}(\text{diag}(\mathbf{p})-\mathbf{p}\mathbf{p}^ {T})\mathbf{A}_{i}^{T}\right), \tag{5}\]
given that the covariance matrix \(n_{i}\mathbf{A}_{i}(\text{diag}(\mathbf{p})-\mathbf{p}\mathbf{p}^{T})\mathbf{ A}_{i}^{T}\) is non-singular. Kuk et al. (2009) proposed an approximate expectation-maximisation (AEM) algorithm to approximate the maximum likelihood estimate of \(\mathbf{p}\), where the likelihood is approximated based on (5). The authors assume that the observed counts for each pool are the allele counts of each marker in that pool, forcing all configuration matrices \(\mathbf{A}_{i}\) to be identical. They also set the input haplotypes to be all \(2^{M}\) haplotypes. Pirinen (2009) provides an implementation of this frequentist approach that instead allows the user to specify an arbitrary list of \(H\) input haplotypes, known as 'AEM algorithm with List' (AEML).
Pirinen (2009) introduced a Bayesian approach where the list of input haplotypes is treated as random, instead of being specified by the user. This is achieved by specifying a joint prior distribution over the number of input haplotypes, the configuration matrix, and the haplotype frequencies. The program HIPPO (Haplotype estimation under incomplete prior information using pooled observations) implements a reversible-jump MCMC sampler to perform inference on this model. Similar to Kuk et al. (2009), HIPPO also uses a normal approximation, and assumes that the observed counts for each pool are the allele counts of each marker in that pool. Since the list of input haplotypes is random, the configuration matrices \(\mathbf{A}_{i}\) may not include all \(2^{M}\) possible haplotypes, but are still identical across \(i=1,\ldots,N\).
The accuracy of AEML and HIPPO hinges on the quality of the normal approximation. The exact marginal distribution of each observed count is a binomial distribution. Recall that the normal approximation to the binomial distribution \(\mathrm{Bin}(n,p)\) is only accurate for sufficiently large \(np(1-p)\). In the context of haplotype frequency estimation, some input haplotypes may be rare or even absent from the population. This leads to inaccuracies in the normal approximation for the case where some entries of \(\mathbf{p}\) are small. HIPPO may suffer less from this issue as it is able to remove such input haplotypes from the configuration matrix during sampling. Moreover, numerical issues may arise when the covariance matrix in (5) is nearly singular, which can happen if a pair of markers are highly correlated (high linkage disequilibrium), e.g. two markers where major alleles occur primarily together. We attempt to alleviate numerical issues by adding a small stabilising constant to the diagonal, i.e. replacing the covariance matrix in (5) with \(n_{i}[\mathbf{A}_{i}(\mathrm{diag}(\mathbf{p})-\mathbf{p}\mathbf{p}^{T}) \mathbf{A}_{i}^{T}+\epsilon\,\mathbf{I}]\), where \(\epsilon=10^{-9}\) and \(\mathbf{I}\) is the identity matrix. Nevertheless, near-singularity may still degrade the quality of the normal approximation, which we illustrate in Section 3.1.
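For reference, the sketch below (our own illustration, not code from AEML or HIPPO) evaluates the approximate log-likelihood (5), including the diagonal stabilising constant just described:

```python
import numpy as np
from scipy.stats import multivariate_normal

def approx_loglik(y, A, n, p, eps=1e-9):
    """Normal approximation (5) to the latent multinomial likelihood,
    with a small constant added to the diagonal for numerical stability."""
    mean = n * A @ p
    cov = n * (A @ (np.diag(p) - np.outer(p, p)) @ A.T + eps * np.eye(A.shape[0]))
    return multivariate_normal.logpdf(y, mean=mean, cov=cov)
```

Even with the stabiliser, the density can become extremely peaked when the unstabilised covariance is near-singular, which is the failure mode illustrated in Section 3.1.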
### Proposed methods
In this paper, we propose MCMC methods to perform Bayesian inference on the haplotype frequencies \(\mathbf{p}\). We assume a Dirichlet prior with equal concentration \(\alpha\) for the haplotype frequencies, i.e.
\[\mathbf{p}\sim\mathrm{Dir}(\alpha,\ldots,\alpha). \tag{6}\]
Unlike AEML and HIPPO, we relax the assumption that observed counts are allele counts for our methods. A motivating example can be found from genetic studies on sulfadoxine-pyrimethamine (SP) resistance in _Plasmodium falciparum_ parasites. There are primarily 3 SNPs of interest on the _dhps_ gene that are indicative of SP resistance, namely _dhps_437/540/581 (Sibley et al., 2001). However, some studies only report haplotype data over 2 markers, _dhps_437 and _dhps_540. This can be understood as the observed counts being the counts of 4 partial haplotypes (over the 2 markers). Each partial haplotype in turn corresponds to a subset of the full haplotypes (over all 3 markers); in this case each subset consists of 2 full haplotypes, as we consider _dhps_581 to have two possible alleles. Our methods include the flexibility for each observed count to correspond to a different subset of the full haplotypes, which is not implemented in AEML and HIPPO. We assume that the user specifies the \(H\) input haplotypes, and determines the configuration matrices \(\mathbf{A}_{i}\) from the
nature of the observed counts.
All of our methods require some preprocessing of the configuration matrices. For each \(i=1,\ldots,N\), we include the pool size \(n_{i}\) as an entry of the observed count vector \(\mathbf{y}_{i}\), where the corresponding row in \(\mathbf{A}_{i}\) is a row of 1s. This absorbs the equality condition (3) into the linear system (2). If any configuration matrix \(\mathbf{A}_{i}\) is not of full row rank, we use row reduction to obtain a submatrix consisting of a maximal set of linearly independent rows. This removes redundant information from the observed counts of a latent multinomial model; see Zhang et al. (2019) for an explanation of why inference results are not affected by this procedure. Hereafter, we assume that all configuration matrices are of full row rank.
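A minimal sketch of this preprocessing step (the helper name and the greedy rank check are our own choices, not the exact row-reduction routine used in the implementation):

```python
import numpy as np

def preprocess(A, y, n):
    """Append the pool-size constraint (3) to the system (2) and keep a
    maximal set of linearly independent rows of the configuration matrix."""
    A = np.vstack([A, np.ones(A.shape[1], dtype=int)])  # row of 1s encodes sum(z) = n
    y = np.append(y, n)
    keep = []
    for r in range(A.shape[0]):
        if np.linalg.matrix_rank(A[keep + [r]]) == len(keep) + 1:  # row r is not redundant
            keep.append(r)
    return A[keep], y[keep]
```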
#### 2.2.1 Marginalisation
Our first approach is to marginalise out the latent counts \(\mathbf{Z}\coloneqq\{\mathbf{z}_{i}\}_{i=1}^{n}\) from the likelihood
\[p(\mathbf{Y},\mathbf{Z}|\mathbf{p})=\prod_{i=1}^{N}p(\mathbf{y}_{i},\mathbf{z}_ {i}|\mathbf{p})=\prod_{i=1}^{N}\underbrace{\binom{n_{i}}{z_{i1},\ldots,z_{iH}}p _{1}^{z_{i1}}\cdots p_{H}^{z_{iH}}}_{p(\mathbf{z}_{i}|\mathbf{p})}\underbrace{ \mathds{1}(\mathbf{y}_{i}=\mathbf{A}_{i}\mathbf{z}_{i})}_{p(\mathbf{y}_{i}| \mathbf{z}_{i})}, \tag{7}\]
where \(\mathbf{Y}\coloneqq\{\mathbf{y}_{i}\}_{i=1}^{n}\). This amounts to computing \(p(\mathbf{y}_{i}|\mathbf{p})\) by enumerating all possible latent counts \(\mathbf{z}_{i}\) for each \(i=1,\ldots,N\). Although this approach does not scale well with pool size, it is still appropriate for cases where the number of input haplotypes is small enough. For each \(i=1,\ldots,N\), we define the _feasible set_ to be the set of solutions to (2)-(4), i.e.
\[\mathcal{F}(\mathbf{A}_{i},\mathbf{y}_{i})\coloneqq\{\mathbf{z}_{i}\colon \mathbf{A}_{i}\mathbf{z}_{i}=\mathbf{y}_{i},z_{i1}\geq 0,\ldots,z_{iH}\geq 0\},\]
where the equality condition (3) is absorbed into the linear system \(\mathbf{A}_{i}\mathbf{z}_{i}=\mathbf{y}_{i}\). The probability mass function of \(\mathbf{y}_{i}\) is given by
\[p(\mathbf{y}_{i}|\mathbf{p})=\hskip-14.226378pt\sum_{\mathbf{z}_{i}\in \mathcal{F}(\mathbf{A}_{i},\mathbf{y}_{i})}p(\mathbf{z}_{i}|\mathbf{p})=\hskip-14.226378pt \sum_{\mathbf{z}_{i}\in\mathcal{F}(\mathbf{A}_{i},\mathbf{y}_{i})}\binom{n_{ i}}{z_{i1},\ldots,z_{iH}}p_{1}^{z_{i1}}\cdots p_{H}^{z_{iH}}. \tag{8}\]
To perform Bayesian inference, we first enumerate the feasible sets \(\mathcal{F}(\mathbf{A}_{i},\mathbf{y}_{i})\) for each \(i=1,\ldots,N\), then proceed with running MCMC to obtain samples from the posterior distribution \(p(\mathbf{p}|\mathbf{Y})\). We perform MCMC using the No-U-Turn sampler (NUTS) (Hoffman & Gelman, 2014) as implemented in PyMC(Salvatier et al., 2016). NUTS simulates a Markov chain that converges to the posterior distribution by utilising gradient information of the log-posterior, which avoids the inefficient random-walk behaviour exhibited by traditional Metropolis-Hastings proposals.
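As an illustration, the exact log-likelihood (8) can be evaluated as follows once the feasible set has been enumerated; in the actual implementation this term is expressed symbolically in PyMC so that NUTS can exploit its gradient, so the NumPy/SciPy version below is only a sketch:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def exact_loglik(feasible_set, p, n):
    """log p(y|p) from (8); feasible_set is a (K, H) array whose rows are all
    latent count vectors z compatible with the observed counts y."""
    Z = np.asarray(feasible_set)
    log_coef = gammaln(n + 1) - gammaln(Z + 1).sum(axis=1)       # log multinomial coefficients
    log_pows = (Z * np.log(np.maximum(p, 1e-300))).sum(axis=1)   # log p_1^{z_1} ... p_H^{z_H}
    return logsumexp(log_coef + log_pows)                        # sum over the feasible set
```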
In the case where a configuration matrix \(\mathbf{A}_{i}\) consists of arbitrary integer entries, one can enumerate the feasible set with 4ti2 (4ti2 team, n.d.), a software package for 'algebraic, geometric and combinatorial problems on linear spaces'. However, our configuration matrices only have 0s and 1s as entries, which allows for a more efficient branch-and-bound algorithm for finding the feasible set as described in Appendix A. If the number of input haplotypes or pool size is too large, the feasible set may have too many elements to be enumerated within a reasonable amount of time. In
this case, we either resort to a normal approximation (5), or sample the latent counts instead of marginalising them out, as described in the next section.
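For completeness, a naive depth-first enumeration of the feasible set is sketched below; it assumes the pool-size row of 1s has already been appended to \(\mathbf{A}_{i}\) (so every column contains at least one 1) and it lacks the additional pruning of the branch-and-bound algorithm in Appendix A:

```python
import numpy as np

def enumerate_feasible(A, y):
    """All nonnegative integer solutions z of A @ z = y for a 0/1 matrix A."""
    A, y = np.asarray(A), np.asarray(y, dtype=int)
    H = A.shape[1]
    solutions = []

    def recurse(h, residual, z):
        if h == H:
            if not residual.any():       # all constraints met exactly
                solutions.append(z.copy())
            return
        rows = A[:, h] == 1
        upper = residual[rows].min()     # z_h cannot exceed any remaining count it contributes to
        for val in range(int(upper) + 1):
            z[h] = val
            recurse(h + 1, residual - val * A[:, h], z)
        z[h] = 0

    recurse(0, y.copy(), np.zeros(H, dtype=int))
    return solutions
```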
#### 2.2.2 Latent count sampling
In order to avoid using approximations when the feasible set is too large, we treat latent counts \(\mathbf{z}_{i}\) as model parameters to be sampled during MCMC alongside with \(\mathbf{p}\). Sampling the latent counts \(\mathbf{z}_{i}\) is not straightforward as the proposed values must belong to the feasible set. Gasbarra et al. (2011) addresses this constraint by relaxing \(\mathbf{z}_{i}\) to be continuous, and expressing each \(\mathbf{z}_{i}\) as a convex combination of the extremal points of \(\mathcal{F}(\mathbf{A}_{i},\mathbf{y}_{i})\). This approach comes at the cost of approximating the discrete multinomial distribution in (1) with a continuous Dirichlet distribution. In this paper, we instead aim to sample discrete latent counts \(\mathbf{z}_{i}\) using a custom Metropolis-within-Gibbs sampler without resorting to any approximations. Note that despite the connection between our approach and that of Gasbarra et al. (2011), we do not include their approach in our comparison as there is no software publicly available, and HIPPO has been shown to give better performance (Pirinen, 2009).
Before we describe our sampler, we first exploit the Dirichlet-multinomial conjugacy due to (1) and (6). Define \(z_{\cdot h}\coloneqq z_{1h}+\cdots+z_{Nh}\) for each \(h=1,\ldots,H\). The full conditional distribution of \(\mathbf{p}\) is given by
\[p(\mathbf{p}|\mathbf{Y},\mathbf{Z}) \propto p(\mathbf{p},\mathbf{Y},\mathbf{Z})\] \[=\underbrace{\frac{\Gamma(H\alpha)}{\Gamma(\alpha)^{H}}\,p_{1}^{\alpha-1}\cdots p_{H}^{\alpha-1}}_{p(\mathbf{p})}\underbrace{\left[\prod_{i=1}^{N}\binom{n_{i}}{z_{i1},\ldots,z_{iH}}p_{1}^{z_{i1}}\cdots p_{H}^{z_{iH}}\mathds{1}(\mathbf{A}_{i}\mathbf{z}_{i}=\mathbf{y}_{i})\right]}_{p(\mathbf{Y},\mathbf{Z}|\mathbf{p})}\] \[\propto p_{1}^{\alpha+z_{\cdot 1}-1}\cdots p_{H}^{\alpha+z_{\cdot H}-1},\]
i.e.
\[\mathbf{p}|\mathbf{Y},\mathbf{Z}\sim\text{Dir}(\alpha+z_{\cdot 1},\ldots,\alpha+z_{ \cdot H}). \tag{9}\]
Moreover, we can marginalise \(\mathbf{p}\) out from the joint distribution \(p(\mathbf{p},\mathbf{Y},\mathbf{Z})\):
\[p(\mathbf{Y},\mathbf{Z}) =\int p(\mathbf{p},\mathbf{Y},\mathbf{Z})\,d\mathbf{p}\] \[=\frac{\Gamma(H\alpha)}{\Gamma(\alpha)^{H}}\left[\prod_{i=1}^{N} \binom{n_{i}}{z_{i1},\ldots,z_{iH}}\mathds{1}(\mathbf{A}_{i}\mathbf{z}_{i}= \mathbf{y}_{i})\right]\int p_{1}^{\alpha+z_{\cdot 1}-1}\cdots p_{H}^{\alpha+z_{\cdot H}-1}\,d \mathbf{p}\] \[=\frac{\Gamma(H\alpha)}{\Gamma(\alpha)^{H}\Gamma(H\alpha+\sum_{i= 1}^{N}n_{i})}\left[\prod_{i=1}^{N}\binom{n_{i}}{z_{i1},\ldots,z_{iH}}\mathds{1 }(\mathbf{A}_{i}\mathbf{z}_{i}=\mathbf{y}_{i})\right]\prod_{h=1}^{H}\Gamma(z_{ \cdot h}+\alpha), \tag{10}\]
where the integral is the normalising constant of a Dirichlet distribution. This allows us to simulate posterior samples of \((\mathbf{Z},\mathbf{p})\) in two stages. We first obtain \(S\) samples \(\{\mathbf{Z}^{(s)}\}_{s=1}^{S}\) from \(p(\mathbf{Z}|\mathbf{Y})\) using MCMC, which is possible as the unnormalised posterior \(p(\mathbf{Y},\mathbf{Z})\) is available through (10). For each
MCMC sample \(\mathbf{Z}^{(s)}\) where \(s=1,\ldots,S\), we then sample \(\mathbf{p}^{(s)}\) from \(p(\mathbf{p}|\mathbf{Y},\mathbf{Z}=\mathbf{Z}^{(s)})\) using (9). The Markov chain \(\{(\mathbf{Z}^{(s)},\mathbf{p}^{(s)})\}_{s=1}^{S}\) converges to the joint posterior \(p(\mathbf{Z},\mathbf{p}|\mathbf{Y})\) since
\[p(\mathbf{Z},\mathbf{p}|\mathbf{Y})=p(\mathbf{p}|\mathbf{Y},\mathbf{Z})p( \mathbf{Z}|\mathbf{Y}).\]
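The conjugate draw in the second stage is straightforward; a small sketch, assuming the latent counts of one MCMC sample are stacked as an \(N\times H\) array:

```python
import numpy as np

def sample_frequencies(Z_sample, alpha, rng):
    """Draw p | Y, Z from the Dirichlet full conditional (9)."""
    z_dot = np.asarray(Z_sample).sum(axis=0)   # total count z_{.h} of each haplotype across pools
    return rng.dirichlet(alpha + z_dot)

# e.g. sample_frequencies(Z_s, alpha=1.0, rng=np.random.default_rng(0))
```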
For the remainder of this section, we describe a Metropolis-within-Gibbs (MwG) sampler for obtaining the samples \(\{\mathbf{Z}^{(s)}\}_{s=1}^{S}\) from the posterior \(p(\mathbf{Z}|\mathbf{Y})\). Let \(\mathbf{Z}_{-i}\coloneqq\{\mathbf{z}_{1},\ldots,\mathbf{z}_{i-1},\mathbf{z}_{i+1},\ldots,\mathbf{z}_{N}\}\) for each \(i=1,\ldots,N\). To specify the MwG sampler, we need to specify for each \(i=1,\ldots,N\) a Metropolis-Hastings sampler whose target distribution is \(p(\mathbf{z}_{i}|\mathbf{Y},\mathbf{Z}_{-i})\). Let \(\mathbf{z}_{i}^{\prime}\) denote the current value of \(\mathbf{z}_{i}\) at any point of the sampler. In order to satisfy the constraint (2), we consider proposals that add or subtract a vector \(\mathbf{u}\) chosen randomly from a subset \(\mathcal{B}_{i}\) of the kernel of \(\mathbf{A}_{i}\). Given that the current value \(\mathbf{z}_{i}=\mathbf{z}_{i}^{\prime}\) satisfies (2), the resulting proposal \(\mathbf{z}_{i}=\mathbf{z}_{i}^{\prime}\pm\mathbf{u}\) will also satisfy (2). Link et al. (2010) set the subset \(\mathcal{B}_{i}\) to be an arbitrary basis of the kernel of \(\mathbf{A}_{i}\); however, the resulting Markov chain may not be irreducible (Schofield & Bonner, 2015), i.e. some points in \(\mathcal{F}(\mathbf{A}_{i},\mathbf{y}_{i})\) may never be reached. This is because, if the only 'moves' are vectors of an arbitrary basis, there may be points of the feasible set that can only be reached through points with negative entries, which violates (4). An alternative is to generate a proposal by adding linear combinations of the basis vectors to \(\mathbf{z}_{i}^{\prime}\). Diaconis and Sturmfels (1998) found this approach to be inefficient, as it generates proposals with negative entries too often. Instead, the authors proposed to use a larger subset \(\mathcal{B}_{i}\) of the kernel of \(\mathbf{A}_{i}\), such that all points of the feasible set may be reached through points with nonnegative entries only. Such a subset \(\mathcal{B}_{i}\) is known as a _Markov basis_ of \(\mathbf{A}_{i}\), and satisfies the condition that a graph with \(\mathcal{F}(\mathbf{A}_{i},\mathbf{y}_{i})\) as its vertices and
\[\{(\mathbf{v},\mathbf{w})\colon\mathbf{v},\mathbf{w}\in\mathcal{F}(\mathbf{A} _{i},\mathbf{y}_{i}),\mathbf{v}-\mathbf{w}\in\mathcal{B}_{i}\text{ or }\mathbf{w}-\mathbf{v}\in\mathcal{B}_{i}\}\]
as its edges is always a connected graph for any vector \(\mathbf{y}_{i}\) of \(R_{i}\) nonnegative integers. The authors use techniques in commutative algebra to find the Markov basis of a matrix, which is implemented in 4ti2 (4ti2 team, n.d.).
Given a Markov basis \(\mathcal{B}_{i}\) and the current value \(\mathbf{z}_{i}=\mathbf{z}_{i}^{\prime}\), we generate the proposal \(\mathbf{z}_{i}^{*}=\mathbf{z}_{i}^{\prime}+\delta\mathbf{u}\) with probability \(q(\mathbf{z}_{i}^{*}|\mathbf{z}_{i}^{\prime})\) proportional to \(p(\mathbf{z}_{i}=\mathbf{z}_{i}^{*}|\mathbf{Y},\mathbf{Z}_{-i})\), where \(\delta\in\{-1,1\}\) and \(\mathbf{u}\in\mathcal{B}_{i}\). In other words, the proposal distribution is
\[q(\mathbf{z}_{i}^{*}|\mathbf{z}_{i}^{\prime}) =\frac{p(\mathbf{z}_{i}=\mathbf{z}_{i}^{*}|\mathbf{Y},\mathbf{Z}_ {-i})}{\sum_{\delta\in\{-1,1\}}\sum_{\mathbf{u}\in\mathcal{B}_{i}}p(\mathbf{z} _{i}=\mathbf{z}_{i}^{\prime}+\delta\mathbf{u}|\mathbf{Y},\mathbf{Z}_{-i})}\] \[=\frac{p(\mathbf{Y},\mathbf{Z}_{-i},\mathbf{z}_{i}=\mathbf{z}_{i}^ {*})}{\sum_{\delta\in\{-1,1\}}\sum_{\mathbf{u}\in\mathcal{B}_{i}}p(\mathbf{Y},\mathbf{Z}_{-i},\mathbf{z}_{i}=\mathbf{z}_{i}^{\prime}+\delta\mathbf{u})}, \tag{11}\]
where the formula for \(p(\mathbf{Y},\mathbf{Z}_{-i},\mathbf{z}_{i})\) is given in (10). Note that \(p(\mathbf{Y},\mathbf{Z}_{-i},\mathbf{z}_{i})\) is zero whenever \(\mathbf{z}_{i}\) contains negative entries. The last equality in (11) follows from the fact that \(p(\mathbf{z}_{i}|\mathbf{Y},\mathbf{Z}_{-i})\) is proportional to \(p(\mathbf{Y},\mathbf{Z})\) as a function of \(\mathbf{z}_{i}\). This proportionality also allows us to write the
Metropolis-Hastings acceptance ratio as
\[a(\mathbf{z}_{i}^{*};\mathbf{z}_{i}^{\prime}) \coloneqq\min\left\{1,\frac{p(\mathbf{z}_{i}=\mathbf{z}_{i}^{*}| \mathbf{Y},\mathbf{Z}_{-i})}{p(\mathbf{z}_{i}=\mathbf{z}_{i}^{\prime}|\mathbf{Y },\mathbf{Z}_{-i})}\frac{q(\mathbf{z}_{i}^{\prime}|\mathbf{z}_{i}^{*})}{q( \mathbf{z}_{i}^{*}|\mathbf{z}_{i}^{\prime})}\right\}\] \[=\min\left\{1,\frac{\sum_{\delta\in\{-1,1\}}\sum_{\mathbf{u}\in \mathcal{B}_{i}}p(\mathbf{Y},\mathbf{Z}_{-i},\mathbf{z}_{i}=\mathbf{z}_{i}^{ \prime}+\delta\mathbf{u})}{\sum_{\delta\in\{-1,1\}}\sum_{\mathbf{u}\in \mathcal{B}_{i}}p(\mathbf{Y},\mathbf{Z}_{-i},\mathbf{z}_{i}=\mathbf{z}_{i}^{ *}+\delta\mathbf{u})}\right\}. \tag{12}\]
The choice of a proposal distribution (11) that is proportional to the full conditional distribution can be considered a restricted Gibbs proposal, though the entire support of \(\mathbf{z}_{i}\) is unlikely to be covered by one proposal iteration. Nevertheless, the use of a Markov basis guarantees that the chain is irreducible. Note that the proposal distribution (11) is different from that of Schofield and Bonner (2015), who sample the basis vector \(\mathbf{u}\) uniformly. Hazelton et al. (2021) show that a Gibbs-like proposal explores the posterior distribution more efficiently due to a higher acceptance rate.
```
Input: Initial values \(\{\mathbf{z}_{i}^{(0)}\}_{i=1}^{N}\), Markov bases \(\{\mathcal{B}_{i}\}_{i=1}^{N}\)
Output: Posterior samples \(\{\mathbf{p}^{(s)},\mathbf{Z}^{(s)}\}_{s=1}^{S}\)
for \(i\gets 1\) to \(N\) do
    \(\mathbf{z}_{i}^{\prime}\leftarrow\mathbf{z}_{i}^{(0)}\)
for \(t\gets 1\) to \(T+S\) do
    for \(c\gets 1\) to \(C\) do
        randomly select \(i\) from \(\{1,\ldots,N\}\) with probability proportional to \(n_{i}\)
        sample \(\mathbf{z}_{i}^{*}=\mathbf{z}_{i}^{\prime}+\delta\mathbf{u}\) according to \(q(\mathbf{z}_{i}^{*}|\mathbf{z}_{i}^{\prime})\) from (11)
        replace \(\mathbf{z}_{i}^{\prime}\) with \(\mathbf{z}_{i}^{*}\) with probability \(a(\mathbf{z}_{i}^{*};\mathbf{z}_{i}^{\prime})\) from (12)
    if \(t>T\) then
        \(s\gets t-T\)
        \((\mathbf{z}_{1}^{(s)},\ldots,\mathbf{z}_{N}^{(s)})\leftarrow(\mathbf{z}_{1}^{\prime},\ldots,\mathbf{z}_{N}^{\prime})\)
        for \(h\gets 1\) to \(H\) do
            \(z_{\cdot h}^{(s)}\leftarrow z_{1h}^{(s)}+\cdots+z_{Nh}^{(s)}\)
        sample \(\mathbf{p}^{(s)}\sim\text{Dir}\big(\alpha+z_{\cdot 1}^{(s)},\ldots,\alpha+z_{\cdot H}^{(s)}\big)\) according to (9)
return \(\{\mathbf{p}^{(s)},\mathbf{Z}^{(s)}\}_{s=1}^{S}\)
```
**Algorithm 1** Collapsed random-scan Metropolis-within-Gibbs sampler for the latent multinomial model with Dirichlet conjugacy. \(T\) is the number of burn-in iterations, \(S\) is the number of inference iterations, \(C\) is the number of latent count updates per iteration.
Augmenting the MwG sampler for \(p(\mathbf{Z}|\mathbf{Y})\) with sampling \(\mathbf{p}\) according to (9) leads to a collapsed MwG sampler (Liu, 1994), which we describe in Algorithm 1. The sampler starts with \(T\) burn-in iterations, where the samples are discarded as the chain may not have converged to the posterior distribution. We use a random scan order when updating the latent counts, where the probability of choosing \(\mathbf{z}_{i}\) to update is proportional to the pool size \(n_{i}\) as the corresponding feasible set grows in size with \(n_{i}\). We perform \(C\) such updates every iteration, where larger values of \(C\) lead to less autocorrelation in the posterior samples at the cost of longer computational runtime. We set \(C\) to be proportional to the total pool size \(n_{1}+\cdots+n_{N}\). The initial values for \(\mathbf{Z}\) can be found by solving (2)-(4) using integer programming methods.
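To illustrate the core move of Algorithm 1, the sketch below implements one latent count update. It assumes a user-supplied function log_joint that evaluates \(\log p(\mathbf{Y},\mathbf{Z}_{-i},\mathbf{z}_{i}=\mathbf{z})\) via (10) (returning \(-\infty\) for vectors with negative entries) and a pre-computed Markov basis, e.g. obtained with 4ti2; it is a sketch of the proposal (11) and acceptance ratio (12), not the project's actual implementation:

```python
import numpy as np
from scipy.special import softmax

def update_latent_counts(z, basis, log_joint, rng):
    """One Metropolis-within-Gibbs update of a latent count vector z.
    `basis` is a list of Markov basis vectors; `log_joint` evaluates
    log p(Y, Z_{-i}, z_i = z) as in (10), with -inf for infeasible z."""
    moves = [d * u for u in basis for d in (-1, 1)]
    cand = [z + m for m in moves]
    # Gibbs-like proposal (11): weight each candidate by its joint probability
    logw = np.array([log_joint(c) for c in cand])
    if np.all(np.isinf(logw)):
        return z                        # no feasible neighbour, keep current value
    k = rng.choice(len(cand), p=softmax(logw))
    z_new = cand[k]
    # Acceptance ratio (12): ratio of neighbourhood normalising constants
    logw_rev = np.array([log_joint(z_new + m) for m in moves])
    log_accept = min(0.0, np.logaddexp.reduce(logw) - np.logaddexp.reduce(logw_rev))
    return z_new if np.log(rng.uniform()) < log_accept else z
```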
### Partition ligation for determining input haplotypes
For a moderate number of markers (\(M\geq 6\)), the number of haplotypes present in a population is typically much smaller than the number of possible haplotypes, \(2^{M}\). For our methods to be scalable with the number of markers, we need to prevent the number of input haplotypes from growing exponentially with \(M\). This is not a concern if a complete list of the haplotypes present is available. If the list is incomplete, we use _partition ligation_(Niu et al., 2002) to determine input haplotypes, i.e. haplotypes whose frequencies we will infer. We first segment the sequence of \(M\) markers into blocks of 3 or 4 markers. We call the haplotypes implicated over a block of markers _partial haplotypes_. The idea of partition ligation is to construct full input haplotypes by combining from each block the partial haplotypes with the highest estimated frequencies. First, we obtain point estimates of the frequencies of the partial haplotypes from each block using one of the methods from Section 2.1 or 2.2. In this paper, we perform this using MCMC-Approx, and use the posterior mean as the point estimate. Suppose we have \(b\) blocks \(B_{1},\ldots,B_{b}\) of markers. For \(i=1,\ldots,b\), let \(\mathcal{H}_{i}\) be the set of partial haplotypes from block \(B_{i}\) whose point estimates are larger than some threshold \(f\). For each \(j=1,\ldots,\lfloor b/2\rfloor\), we concatenate every partial haplotype in \(\mathcal{H}_{2j-1}\) with every partial haplotype in \(\mathcal{H}_{2j}\) to form the set of haplotypes for the concatenated block \(B_{2j-1}B_{2j}\). This procedure halves the numbers of blocks, and is repeated recursively until all blocks are concatenated together. The final list of concatenated haplotypes are used as the input haplotypes for subsequent inference. Choosing a lower threshold for \(f\) makes it more likely for the constructed input haplotypes to include all haplotypes present in the population, but also introduces more input haplotypes that do not occur in the population, making subsequent inference less efficient. Details of partition ligation are further described in haplotype phasing literature, see for example, Stephens and Donnelly (2003).
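A schematic sketch of one ligation round is given below; the function name and dict-based interface are illustrative only, and in practice the frequencies of each block are first estimated from the pooled data for the corresponding markers (with MCMC-Approx in this paper):

```python
def ligate_once(block_estimates, f=0.01):
    """One round of partition ligation. `block_estimates` is a list (one entry per
    block of markers) of dicts mapping partial haplotypes to estimated frequencies,
    e.g. posterior means. Partial haplotypes with estimated frequency above the
    threshold f are kept, and neighbouring blocks are concatenated pairwise."""
    kept = [[h for h, p in est.items() if p > f] for est in block_estimates]
    merged = []
    for j in range(0, len(kept) - 1, 2):
        merged.append([a + b for a in kept[j] for b in kept[j + 1]])
    if len(kept) % 2 == 1:              # an odd trailing block is carried over
        merged.append(kept[-1])
    return merged                       # candidate haplotypes for each concatenated block
```

The frequencies of the concatenated candidates are then re-estimated and the round repeated until a single block spanning all \(M\) markers remains.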
### Hierarchical extension
In meta-analysis studies, genetic data collected from multiple populations are analysed together, where each population has its own set of haplotype frequencies. We extend the latent multinomial model (1)-(2) to a hierarchical model where each pool of samples is drawn from a different population. To account for the correlation between haplotype frequencies of different populations, we model the haplotype frequencies as a softmax transformation of \(H\) Gaussian processes (GPs):
\[\mathbf{y}_{i} =\mathbf{A}_{i}\mathbf{z}_{i} \text{for }i=1,\ldots,N, \tag{13}\] \[\mathbf{z}_{i}|\mathbf{p}_{i} \sim\text{Mult}(n_{i},\mathbf{p}_{i}) \text{for }i=1,\ldots,N,\] (14) \[p_{ih} =\frac{\exp(f_{h}(\mathbf{x}_{i}))}{\exp(f_{1}(\mathbf{x}_{i}))+ \cdots+\exp(f_{H}(\mathbf{x}_{i}))} \text{for }i=1,\ldots,N,\,h=1,\ldots,H,\] (15) \[f_{h}(\mathbf{x}_{1}),\ldots,f_{h}(\mathbf{x}_{N}) \sim\text{N}(\mathbf{m}_{h}(\mathbf{X}),\mathbf{C}_{h}(\mathbf{X },\mathbf{X})) \text{for }h=1,\ldots,H, \tag{16}\]
where \(\mathbf{p}_{i}\) are the haplotype frequencies of population \(i\), \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{N}\) are the covariates observed for each population, and \(f_{h}\) is the \(h\)-th GP whose mean function and covariance function are
\(\mathbf{m}_{h}\) (vector-valued) and \(\mathbf{C}_{h}\) (matrix-valued) respectively. The mean and covariance functions are further parametrised by GP hyperparameters \(\boldsymbol{\theta}\). A graphical representation of this model is shown in Figure 1.
As an example, we consider time-series modelling of haplotype frequencies, where the only covariate for each population \(i\) is the time of data collection \(t_{i}\). We specify each mean function to be a constant \(\mathbf{m}_{h}(\mathbf{X})=(\mu_{h}\ldots,\mu_{h})^{T}\), and each covariance function to be the sum of a rational quadratic kernel and a white noise kernel, i.e. the \((i,i^{\prime})\)-th entry of \(\mathbf{C}_{h}(\mathbf{X},\mathbf{X})\) is
\[c_{h}(t_{i},t_{i^{\prime}})\coloneqq s_{h}^{2}\bigg{(}1+\frac{(t_{i}-t_{i^{ \prime}})^{2}}{2\tau_{h}^{2}}\bigg{)}^{-1}\!\!+\sigma^{2}\mathds{1}(i=i^{ \prime}), \tag{17}\]
where \(\tau_{h}\) is the timescale, \(s_{h}\) is the temporal standard deviation, \(\sigma\) is the noise standard deviation, and \(\mathds{1}(\cdot)\) is the indicator function. Pools that are observed closer in time have haplotype frequencies that are more strongly correlated since \(c(t_{i},t_{i^{\prime}})\) increases as \(|t_{i}-t_{i^{\prime}}|\) decreases. The noise term \(\sigma^{2}\mathds{1}(i=i^{\prime})\) accounts for overdispersion of the multinomial counts. The GP hyperparameters \(\boldsymbol{\theta}\coloneqq(\{\mu_{h},\tau_{h},s_{h}\}_{h=1}^{H},\sigma)\) are given priors according to domain knowledge.
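A small forward-simulation sketch of this time-series model, (13)-(17), is given below; the pool sizes, hyperparameter values, and configuration matrix are arbitrary illustrative choices, and the GP means are set to zero for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, n = 10, 4, 50                      # pools, input haplotypes, samples per pool (illustrative)
t = np.linspace(0.0, 5.0, N)             # collection time of each pool

def rq_kernel(t, s=1.0, tau=1.0, sigma=0.1):
    """Rational quadratic kernel plus white noise, as in (17)."""
    d2 = (t[:, None] - t[None, :]) ** 2
    return s**2 / (1.0 + d2 / (2.0 * tau**2)) + sigma**2 * np.eye(len(t))

C = rq_kernel(t)
F = np.column_stack([rng.multivariate_normal(np.zeros(N), C) for _ in range(H)])  # GPs (16)
P = np.exp(F) / np.exp(F).sum(axis=1, keepdims=True)                              # softmax (15)

A = np.array([[0, 1, 0, 1],              # hypothetical configuration matrix (minor-allele counts)
              [0, 0, 1, 1]])
Z = np.array([rng.multinomial(n, P[i]) for i in range(N)])                         # latent counts (14)
Y = Z @ A.T                                                                        # observed counts (13)
```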
Given the large number of continuous parameters, we perform MCMC inference with NUTS for the parameters \(\mathbf{P}\coloneqq\{\mathbf{p}_{i}\}_{i=1}^{N}\) and \(\boldsymbol{\theta}\). To deal with the latent counts, we may either use (i) exact marginalisation by enumerating feasible sets, (ii) approximate marginalisation according to (5), or (iii) latent count sampling. It is straightforward to apply NUTS to both of the marginalisation approaches. The latent count sampling approach requires modification as the use of a GP prior implies that we no longer have Dirichlet-multinomial conjugacy. We instead use a MwG sampler with target distributions \(p(\mathbf{P},\boldsymbol{\theta}|\mathbf{Z})\) and \(p(\mathbf{z}_{i}|\mathbf{Y},\mathbf{Z}_{-i},\mathbf{P},\boldsymbol{\theta})\) for each \(i=1,\ldots,N\).
Figure 1: Graphical model for latent multinomial data with multiple populations whose haplotype frequencies \(\{\mathbf{p}_{i}\}_{i=1}^{N}\) are correlated through Gaussian processes. \(f_{h}(\mathbf{X})\) denotes the vector \((f_{h}(\mathbf{x}_{1}),\ldots,f_{h}(\mathbf{x}_{N}))\). Circles and squares correspond to random variables and constants respectively. A shaded node indicates that the variable is observed. A dotted outline indicates that the variable is deterministically calculated from its parent variables. Variables contained within a plate are repeated according to the index at the bottom right.

Given pre-computed Markov bases \(\mathcal{B}_{i}\) and the current value \(\mathbf{z}_{i}=\mathbf{z}_{i}^{\prime}\), we generate the proposal \(\mathbf{z}_{i}^{*}=\mathbf{z}_{i}^{\prime}+\delta\mathbf{u}\) with probability \(q(\mathbf{z}_{i}^{*}|\mathbf{z}_{i}^{\prime})\) proportional to \(p(\mathbf{z}_{i}=\mathbf{z}_{i}^{*}|\mathbf{Y},\mathbf{Z}_{-i},\mathbf{P},\boldsymbol{\theta})\), where \(\delta\in\{-1,1\}\) and \(\mathbf{u}\in\mathcal{B}_{i}\). We note that \(p(\mathbf{z}_{i}|\mathbf{Y},\mathbf{Z}_{-i},\mathbf{P},\boldsymbol{\theta})\) is proportional to \(p(\mathbf{z}_{i}|\mathbf{p}_{i})p(\mathbf{y}_{i}|\mathbf{z}_{i})\), where \(p(\mathbf{z}_{i}|\mathbf{p}_{i})\) is given by (14) and \(p(\mathbf{y}_{i}|\mathbf{z}_{i})=1\) since any proposed value of \(\mathbf{z}_{i}\) satisfies \(\mathbf{A}_{i}\mathbf{z}_{i}=\mathbf{y}_{i}\). This allows us to write the proposal distribution as
\[q(\mathbf{z}_{i}^{*}|\mathbf{z}_{i}^{\prime}) =\frac{p(\mathbf{z}_{i}=\mathbf{z}_{i}^{*}|\mathbf{Y},\mathbf{Z}_{- i},\mathbf{P},\boldsymbol{\theta})}{\sum_{\delta\in\{-1,1\}}\sum_{\mathbf{u}\in \mathcal{B}_{i}}p(\mathbf{z}_{i}=\mathbf{z}_{i}^{\prime}+\delta\mathbf{u}| \mathbf{Y},\mathbf{Z}_{-i},\mathbf{P},\boldsymbol{\theta})}\] \[=\frac{p(\mathbf{z}_{i}=\mathbf{z}_{i}^{*}|\mathbf{p}_{i})}{\sum_ {\delta\in\{-1,1\}}\sum_{\mathbf{u}\in\mathcal{B}_{i}}p(\mathbf{z}_{i}= \mathbf{z}_{i}^{\prime}+\delta\mathbf{u}|\mathbf{p}_{i})}, \tag{18}\]
and the acceptance ratio as
\[a(\mathbf{z}_{i}^{*};\mathbf{z}_{i}^{\prime}) =\min\left\{1,\frac{p(\mathbf{z}_{i}=\mathbf{z}_{i}^{*}|\mathbf{Y},\mathbf{Z}_{-i},\mathbf{P},\boldsymbol{\theta})}{p(\mathbf{z}_{i}=\mathbf{z}_{i}^{\prime}|\mathbf{Y},\mathbf{Z}_{-i},\mathbf{P},\boldsymbol{\theta})}\,\frac{q(\mathbf{z}_{i}^{\prime}|\mathbf{z}_{i}^{*})}{q(\mathbf{z}_{i}^{*}|\mathbf{z}_{i}^{\prime})}\right\}\] \[=\min\left\{1,\frac{\sum_{\delta\in\{-1,1\}}\sum_{\mathbf{u}\in\mathcal{B}_{i}}p(\mathbf{z}_{i}=\mathbf{z}_{i}^{\prime}+\delta\mathbf{u}|\mathbf{p}_{i})}{\sum_{\delta\in\{-1,1\}}\sum_{\mathbf{u}\in\mathcal{B}_{i}}p(\mathbf{z}_{i}=\mathbf{z}_{i}^{*}+\delta\mathbf{u}|\mathbf{p}_{i})}\right\}. \tag{19}\]
As for the target distribution \(p(\mathbf{P},\boldsymbol{\theta}|\mathbf{Z})\), we use NUTS to propose MCMC samples \(\{\mathbf{P}^{(s)},\boldsymbol{\theta}^{(s)}\}_{s=1}^{S}\). The full MCMC scheme is described in Algorithm 2. Since updating \(\mathbf{z}_{i}\) only depends on its current value and \(\mathbf{p}_{i}\), there is no need for a random scan order. The number of updates for \(\mathbf{z}_{i}\) is denoted as \(C_{i}\), which we set to be proportional to the pool size \(n_{i}\).
```
Input: Initial values \(\{\mathbf{z}_{i}^{(0)}\}_{i=1}^{N}\), Markov bases \(\{\mathcal{B}_{i}\}_{i=1}^{N}\)
Output: Posterior samples \(\{\mathbf{P}^{(s)},\boldsymbol{\theta}^{(s)},\mathbf{Z}^{(s)}\}_{s=1}^{S}\)
for \(i\gets 1\) to \(N\) do
    \(\mathbf{z}_{i}^{\prime}\leftarrow\mathbf{z}_{i}^{(0)}\)
for \(t\gets 1\) to \(T+S\) do
    sample \((\mathbf{P}^{\prime},\boldsymbol{\theta}^{\prime})\) from \(p(\mathbf{P},\boldsymbol{\theta}|\mathbf{Z}=(\mathbf{z}_{1}^{\prime},\ldots,\mathbf{z}_{N}^{\prime}))\) using NUTS
    for \(i\gets 1\) to \(N\) do
        for \(c\gets 1\) to \(C_{i}\) do
            sample \(\mathbf{z}_{i}^{*}=\mathbf{z}_{i}^{\prime}+\delta\mathbf{u}\) according to \(q(\mathbf{z}_{i}^{*}|\mathbf{z}_{i}^{\prime})\) from (18)
            replace \(\mathbf{z}_{i}^{\prime}\) with \(\mathbf{z}_{i}^{*}\) with probability \(a(\mathbf{z}_{i}^{*};\mathbf{z}_{i}^{\prime})\) from (19)
    if \(t>T\) then
        \(s\gets t-T\)
        \((\mathbf{P}^{(s)},\boldsymbol{\theta}^{(s)},(\mathbf{z}_{1}^{(s)},\ldots,\mathbf{z}_{N}^{(s)}))\leftarrow(\mathbf{P}^{\prime},\boldsymbol{\theta}^{\prime},(\mathbf{z}_{1}^{\prime},\ldots,\mathbf{z}_{N}^{\prime}))\)
return \(\{\mathbf{P}^{(s)},\boldsymbol{\theta}^{(s)},\mathbf{Z}^{(s)}\}_{s=1}^{S}\)
```
**Algorithm 2** Metropolis-within-Gibbs sampler for the latent multinomial model with a GP hierarchical extension. \(T\) is the number of burn-in iterations, \(S\) is the number of inference iterations, \(C_{i}\) is the number of updates per iteration for \(\mathbf{z}_{i}\).
## 3 Results
We implement three MCMC methods: 'MCMC-Exact' marginalises out \(\mathbf{Z}\) exactly using (8), 'MCMC-Approx' marginalises out \(\mathbf{Z}\) approximately using (5), and 'LC-Sampling' samples \(\mathbf{Z}\) according to Algorithm 1 or Algorithm 2 depending on whether the haplotype frequencies are shared across pools. The code is available at [https://github.com/ysfoo/haplm](https://github.com/ysfoo/haplm). We present four sets of results: (i) a comparison of the exact likelihood (8) and the approximate likelihood (5) based on a
toy example, (ii) a comparison of our methods and existing methods (AEML and HIPPO) based on synthetic data, (iii) a comparison of our methods and existing methods based on real human data, and (iv) a demonstration of our methods applied to time-series data in a hierarchical setting. For all examples, the observed data consists of the allele counts of each marker in each pool.
### Accuracy of normal approximation
In this section, we illustrate cases where the normal approximation (5) is inaccurate, even when applied to a large pool of 100 samples. Consider the simplest example where we have one data point \(\mathbf{y}=(y_{1},y_{2})\) of allele counts across \(M=2\) markers for a pool of \(n\) haplotype samples. We denote the haplotype frequencies as \(\mathbf{p}\coloneqq(p_{00},p_{10},p_{01},p_{11})\), where \(p_{h}\) is the frequency of haplotype \(h\). We set the pool size to be \(n=100\) and the allele count of the first marker to be \(y_{1}=50\), and vary \(y_{2}\) between 1 and 50. We find that the exact likelihood (8) is maximised for two sets of haplotype frequencies: \(\hat{\mathbf{p}}=(0.5,0.5-y_{2}/n,0,y_{2}/n)\) and \(\hat{\mathbf{p}}^{\prime}=(0.5-y_{2}/n,0.5,y_{2}/n,0)\), i.e. these are the exact maximum likelihood estimators (MLEs). In Figure 2, we compare the exact likelihood (8) and the approximate likelihood (5) for values of \(\mathbf{p}\) that are close to the first MLE, \(\hat{\mathbf{p}}\), for various values of \(y_{2}\). Since \(y_{2}\) has no effect on the entries \((p_{00},p_{01})\) of the first MLE, we only vary the values of \((p_{10},p_{11})\) in our comparison. Overall, the values of \((p_{10},p_{11})\) that maximise the exact and approximate likelihoods do not differ by more than \(0.01\). However, we notice that the normal approximation is less accurate when \(y_{2}\) is close to \(0\) or \(50\). In fact, the approximate likelihood increases without bound as \(\mathbf{p}\rightarrow(0.5,0,0,0.5)\) when \(y_{2}=50\), while the exact likelihood remains bounded. This is because the covariance matrix in (5) becomes singular as \(\mathbf{p}\rightarrow(0.5,0,0,0.5)\). In general, the covariance matrix may become singular when certain entries of \(\mathbf{p}\) approach zero. As such, the accuracy of the normal approximation depends on the data observed: if the data observed supports values of \(\mathbf{p}\) such that the covariance matrix becomes near-singular, then the frequency of rare haplotypes may be underestimated.

Figure 2: Exact (solid) and approximate (dashed) log-likelihoods \(p(\mathbf{y}|\mathbf{p})\) evaluated at haplotype frequencies \(\mathbf{p}=(0.5,0.5-p_{11},0,p_{11})\), where \(\mathbf{y}\) consists of allele counts across two markers for one pool of size \(n=100\). The dotted lines indicate where the exact and approximate log-likelihoods are maximised.
### Synthetic data with shared haplotype frequencies
To evaluate our three proposed methods, we first compare their statistical and computational performance with AEML and HIPPO when applied to synthetic datasets where all pools within a dataset share the same haplotype frequencies. We use the default parameters and settings when running AEML and HIPPO according to the programs provided by Pirinen (2009). For all MCMC methods, we run 5 chains for each method. Different MCMC methods require different chain lengths to reach convergence. For this example, having 500 burn-in iterations and 500 inference iterations per chain is sufficient for our proposed methods, as NUTS uses gradient information of the posterior to produce chains with low autocorrelation. On the other hand, Pirinen (2009) recommends \(5\times 10^{5}\) iterations per chain for HIPPO as it produces chains with higher autocorrelation. We report the _effective sample size_ (ESS), which estimates the number of independent samples that would provide the same amount of information as the correlated MCMC samples. In order to compare ESS across methods, we thin each chain to 500 samples per chain, regardless of the MCMC method that produced it. For LC-Sampling, the parameter \(C\) from Algorithm 1 acts as a thinning factor, which we set to \(C=5(n_{1}+\cdots+n_{N})\).

Figure 3: Statistical performance of point estimates \(\hat{\mathbf{p}}\) across 25 synthetic datasets where pools share the same true haplotype frequencies, \(\mathbf{p}^{\text{true}}\). The errors \(\hat{p}_{h}-p_{h}^{\text{true}}\) are plotted against each true haplotype frequency \(p_{h}^{\text{true}}\). The size of each point is scaled by the pool size, \(N\). The average (over 25 datasets) TVD between true haplotype frequencies and point estimates is shown in the bottom right of each plot.
We simulate 5 sets of haplotype frequencies \(\mathbf{p}^{\text{true}}\) over \(M=3\) markers from the distribution \(\text{Dir}(0.4,\ldots,0.4)\), which induces some sparsity in \(\mathbf{p}^{\text{true}}\). For each \(\mathbf{p}^{\text{true}}\), we in turn simulate 5 datasets (each with \(N=20\) pools) where the pool size is set to \(n=20,40,60,80,100\), giving a total of 25 datasets. Latent haplotype counts are sampled according to the frequencies \(\mathbf{p}^{\text{true}}\). The number of distinct haplotypes in each of our simulated datasets range between 6 and 8. All \(H=8\) possible haplotypes are used as our input haplotypes.
We compare the following point estimates: the posterior means under MCMC-Exact, MCMC-Approx, LC-Sampling, HIPPO, and the MLE under AEML. We measure the distance between a point estimate \(\hat{\mathbf{p}}\) and the true frequencies \(\mathbf{p}^{\text{true}}\) by the _total variation distance_ (TVD):
\[\text{TVD}(\hat{\mathbf{p}},\mathbf{p}^{\text{true}})\coloneqq\frac{1}{2} \sum_{h=1}^{2^{M}}\lvert\hat{p}_{h}-p_{h}^{\text{true}}\rvert. \tag{20}\]
TVD can be interpreted as the probability mass redistributed to turn one haplotype distribution into another. In general, the summation in (20) is taken not only over the input haplotypes but all possible haplotypes, as the true distribution may include haplotypes absent from the input haplotypes, e.g. when partition ligation (Section 2.3) is used to determine the input haplotypes.
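As a concrete reading of (20), the distance reduces to a one-line computation once both distributions are indexed over a common set of haplotypes (a sketch, with names of our choosing):

```python
import numpy as np

def tvd(p_hat, p_true):
    """Total variation distance (20) between two haplotype frequency vectors."""
    return 0.5 * np.abs(np.asarray(p_hat) - np.asarray(p_true)).sum()
```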
Figure 4: (a) Computational wall times and (b) boxplots of ESS for haplotype frequencies \(\{p_{h}\}_{1\leq h\leq H}\), across all datasets against the number of samples per pool for Bayesian methods applied to 25 synthetic datasets. Each boxplot corresponds to the haplotype frequencies over 5 datasets with the same pool size.
In Figure 3, we report the TVDs between the true frequencies and each point estimate, and plot the errors \(\hat{p}_{h}-p_{h}^{\text{true}}\) for each haplotype \(h\) against the true haplotype frequencies. The results for our proposed methods (top row) are very similar. There is a diagonal on the left end of all plots, corresponding to \(\hat{p}_{h}\approx 0.02\) for our three proposed methods, and \(\hat{p}_{h}\approx 0\) for AEML and HIPPO. The average TVDs under AEML and HIPPO are larger, indicating less accurate inference. As seen in Section 3.1, the approximate likelihood can become unbounded when some haplotype frequencies are zero, which may explain the diagonal around \(\hat{p}_{h}\approx 0\) for the maximum likelihood method AEML. On the other hand, HIPPO may remove rare haplotypes from the list of input haplotypes during MCMC, which is equivalent to setting their frequencies to zero.
To check whether uncertainty is adequately accounted for by the Bayesian methods, we report the coverage of (equal-tail) credible intervals of the haplotype frequencies for the synthetic datasets in Figure 5(a). The coverage of an \(x\%\) credible interval is the proportion of haplotypes present in the population whose \(x\%\) credible interval contains the corresponding true frequency. Our proposed methods give credible interval coverages that are close to the corresponding credible levels. The close agreement between MCMC-Exact and LC-Sampling is an indication that both methods produce the same posterior. The coverage for HIPPO is lower than expected, which is likely due to the removal of rare haplotypes during MCMC.
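In code, the coverage definition just given reads as follows; in this sketch, `samples` is assumed to be an (S, H) array of posterior draws of \(\mathbf{p}\), and haplotypes absent from the population are excluded, as in Figure 5.

```python
import numpy as np

def coverage(samples, p_true, level=0.95):
    """Proportion of truly present haplotypes whose equal-tail interval covers the truth."""
    p_true = np.asarray(p_true)
    lo = np.quantile(samples, (1 - level) / 2, axis=0)
    hi = np.quantile(samples, (1 + level) / 2, axis=0)
    present = p_true > 0
    covered = (lo <= p_true) & (p_true <= hi)
    return covered[present].mean()
```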
Out of the compared methods, AEML is the fastest, taking less than 1 second for each dataset. We report in Figure 4(a) the runtimes (wall time) for the Bayesian methods, including any preprocessing steps (e.g. enumerating feasible sets for MCMC-Exact). The time taken by MCMC-Exact increases rapidly with pool size as the feasible sets get larger. There is considerable variation in runtime across datasets of the same pool size as the runtime is sensitive to the size of the feasible sets. The computational complexity of LC-Sampling is roughly linear with respect to pool size. The runtimes of MCMC-Approx and HIPPO are fairly insensitive to pool size, with MCMC-Approx being less than an order of magnitude slower than HIPPO. In Figure 4(b), we show boxplots of the ESS of haplotype frequencies, grouped by the pool size of each dataset. The ESS under MCMC
Figure 5: Coverage of credible intervals for haplotype frequencies across (a) 25 synthetic datasets, (b) 100 datasets simulated based on 1KGP data. Input haplotypes that are absent from the population are excluded.
Exact and MCMC-Approx are comparable, whereas the ESS under LC-Sampling decreases as pool size increases. Although HIPPO is the fastest Bayesian method, its ESS has the largest variation. In the worst case, its minimum ESS is close to the number of chains, indicating that chains are stuck in different modes of the posterior.
### Simulated haplotype data from 1000 Genomes Project
We also compare our approach with existing methods based on data simulated with haplotype frequencies extracted from the 1000 Genomes Project (1KGP) (The 1000 Genomes Project Consortium et al., 2015). We use 190 unrelated haplotype samples of the CEU population (Utah residents with ancestry from Northern and Western Europe) for the region ENm010 on chromosome 7. This population and genetic region have been analysed in previous literature on haplotype inference for pooled genetic data (Kirkpatrick et al., 2007; Pirinen et al., 2008; Pirinen, 2009; Gasbarra et al., 2011). Following Gasbarra et al. (2011), we select the first 800 SNPs of the ENm010 region such that adjacent SNPs are separated by at least 100 base pairs. We construct 100 datasets by segmenting this sequence of 800 SNPs into \(M=8\) SNPs (i.e. markers) per dataset. Each dataset consists of \(N=20\) pools, each with 50 haplotypes sampled with replacement from the 190 haplotype samples extracted from 1KGP. We exclude MCMC-Exact as the number of input haplotypes for some datasets is too large for feasible sets to be enumerated within reasonable time.
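The construction of these 100 datasets can be sketched as follows, where `haps` is assumed to be a (190, 800) binary matrix of the selected CEU haplotypes (loading and SNP filtering of the 1KGP data are not shown, and the function name is ours).

```python
import numpy as np

def build_datasets(haps, M=8, N=20, n=50, seed=0):
    """Segment the SNPs into blocks of M markers and form N pools of n haplotypes each."""
    rng = np.random.default_rng(seed)
    n_samples, n_snps = haps.shape
    datasets = []
    for start in range(0, n_snps, M):                    # 100 blocks of 8 SNPs
        block = haps[:, start:start + M]
        pools = []
        for _ in range(N):
            idx = rng.integers(0, n_samples, size=n)     # sample haplotypes with replacement
            pools.append(block[idx].sum(axis=0))         # allele counts of each marker
        datasets.append(np.asarray(pools))
    return datasets
```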
The number of haplotypes present in each dataset ranges from 3 to 12, considerably smaller than \(2^{8}=256\). We apply partition ligation (Section 2.3) to obtain a list of input haplotypes for each dataset, which is used for all inference methods except for HIPPO, as HIPPO samples the list of input haplotypes as part of its MCMC procedure. For each dataset, the number of input haplotypes obtained from partition ligation ranges between 13 and 40. This implies that many input haplotypes have a true frequency of 0. We specify a sparser prior \(\mathbf{p}\sim\text{Dir}(0.1,\ldots,0.1)\) for MCMC-Approx and LC-Sampling. For HIPPO, we keep the default Dirichlet concentration
Figure 6: (a) TVD between point estimates (MLE for AEML, posterior mean for others) and the true haplotype frequencies across 100 datasets simulated based on data from 1KGP. (b) ESS of haplotype frequencies across 100 datasets simulated based on data from 1KGP. Only haplotypes determined by partition ligation are included in the plot.
of \(\alpha=10^{-5}\), which is recommended (Pirinen, 2009) as HIPPO implicitly considers all possible haplotypes. Out of the 100 lists produced by partition ligation, 43 of them included all haplotypes that are truly present. The sum of frequencies of haplotypes missed by partition ligation for each dataset averages to 0.0066, with the maximum frequency of such a haplotype being 0.0368. Since the number of input haplotypes for MCMC-Approx and LC-Sampling is not too large, we keep the same number of MCMC iterations for MCMC-Approx and LC-Sampling from Section 3.2. However, HIPPO implicitly considers all 256 haplotypes, so we increase the number of MCMC iterations per chain from \(5\times 10^{5}\) to \(2.5\times 10^{6}\).
The distributions of TVDs (20) across the 100 datasets between the true haplotype frequencies and point estimates under each method are shown in Figure 6(a). AEML performs poorly on some datasets (TVD close to 1), possibly due to errors introduced by the normal approximation. The TVDs for the Bayesian methods are comparable, with LC-Sampling having a slightly lighter right tail. The average runtime for MCMC-Approx, LC-Sampling, AEML, and HIPPO are 2.2 minutes, 7.1 minutes, 0.2 minutes, and 6.3 minutes respectively. In Figure 6(b), we show for each Bayesian method the ESS distribution of the frequencies of the haplotypes determined by partition ligation. Overall, chains from LC-Sampling exhibit the least autocorrelation. The Markov chains for MCMC-Approx and HIPPO become stuck at different modes for some haplotypes, as indicated by ESS values around 10. We find that for some datasets, there are multiple modes that are associated with comparable probability mass, see Figure 7 for a representative example. We note that the true frequency may or may not coincide with one of the modes. For this example, LC-Sampling and MCMC-Approx identify modes at similar frequencies, but the densities can be significantly different between methods. Posteriors under HIPPO are omitted as the inference model is different. Trace plots (Figure A1) of these haplotype frequencies reveal that LC-Sampling and MCMC-Approx are able to switch efficiently between modes, whereas HIPPO tends to be stuck in one mode for a large number of iterations.
The coverage of credible intervals for all Bayesian methods are less than ideal (Figure 5(b)), indicating that uncertainty is underestimated. For MCMC-Approx and LC-Sampling, the deterioration of coverage relative to Figure 5(a) is attributed to the credible intervals not accounting for
Figure 7: Multimodal posterior distributions of selected haplotype frequencies from dataset 3 (based on 1KGP data). The posteriors under LC-Sampling and MCMC-Approx are shown as solid and dashed curves respectively; the true frequency is indicated by the vertical dotted line.
the uncertainty due to the input haplotype lists obtained via partition ligation. For example, the credible interval for a haplotype that is present in the population but missed by partition ligation is exactly zero, regardless of the credible level. Out of all Bayesian methods, the underestimation of uncertainty is least severe for LC-Sampling.
### Synthetic time-series data
As a demonstration of how our methods extend to a hierarchical setting, we perform inference for a latent multinomial GP model applied to time-series data, as introduced in Section 2.4. To generate data, we simulate time-varying frequencies of \(H=8\) haplotypes over \(M=3\) markers from a differential equation system. We then simulate haplotype count data over \(N=30\) time points with pool sizes of \(n=50\) from a Dirichlet-multinomial distribution, and take the allele counts of each marker as the observed data. A Dirichlet-multinomial distribution is used to simulate overdispersion, whereas the inference model accounts for overdispersion through a white noise kernel (see (17)). The intention behind this mis-specification is to check whether our inference is robust to mis-specification of the overdispersion model. Details of the simulation and the complete inference model are given in Appendix C.
We perform inference using our three proposed methods. Since the hierarchical model introduces correlations between model parameters, we increase the number of MCMC iterations performed (Table S1). LC-Sampling requires more iterations as there is strong dependence between \(\mathbf{z}_{i}\) and \(\mathbf{p}_{i}\). Figure 8(a) shows that despite running LC-Sampling for 20 times more inference iterations, its
Figure 8: (a) Boxplots of ESS for haplotype frequencies \(\{p_{ih}\}_{1\leq i\leq N,1\leq h\leq H}\) under each proposed method for the time-series example. (b) Posterior predictive distribution of haplotype frequencies under MCMC-Exact. The dashed and solid curves correspond to the true frequencies used for data simulation and the posterior mean respectively. Bands show 95% credible intervals.
MCMC output has lower ESS than MCMC-Exact and MCMC-Approx. Nevertheless, we did not encounter any MCMC convergence issues for the time-series data.
In Figure 8(b), we plot the posterior predictive distribution under MCMC-Exact. There is general agreement between the posterior means and the true haplotype frequencies, with the caveat that the posterior accounts for noise, but the true frequencies are not perturbed by noise. We note that the haplotypes in the bottom row of Figure 8(b) have wide credible intervals around \(t=10\). Closer inspection reveals that this is caused by posterior multimodality and parameter non-identifiability due to insufficient signal in the data (Appendix C). We report the posterior predictive distributions under MCMC-Approx and LC-Sampling in Figures A4 and A5, which are highly similar to that of MCMC-Exact.
## 4 Discussion
In this paper, we have developed two exact methods (MCMC-Exact and LC-Sampling) and an approximate method (MCMC-Approx) for Bayesian inference of haplotype frequencies given pooled genotype data under a latent multinomial model. The latent multinomial framework is suitable for handling incomplete reporting of genetic data, as full haplotype information is not always available. Furthermore, we illustrate how our methods can infer haplotype frequencies of multiple related populations with a hierarchical model. Existing statistical methods either have only been applied to small pool sizes (\(n\leq 20\)), or rely on approximations. However, approximate methods may give unreliable inference when applied to real data. We instead recommend the use of MCMC-Exact for problems that are small enough where enumerating feasible sets is practical, and LC-Sampling for larger problems.
Out of our proposed methods, MCMC-Approx is the fastest as its runtime is relatively insensitive to pool size (Figure 4). However, its performance is less consistent than the exact methods -- we find good agreement between the results from MCMC-Approx and LC-Sampling only for our synthetic data examples (Sections 3.2 and 3.4). For datasets simulated from real genetic data (Section 3.3), there are 8 markers per dataset, but only 3 to 12 haplotypes that are truly present in each dataset. Thus, some datasets have markers with highly correlated allele counts, resulting in near-singular covariance matrices where the approximate likelihood has a larger curvature than the exact likelihood (see Figure 2). This explains why for MCMC-Approx, the Markov chains do not converge in some cases, and uncertainty is more severely underestimated compared to the exact method LC-Sampling. This is also a likely reason why AEML, a maximum likelihood method, fails on some of these datasets. We speculate that normal approximation methods may be reliable if one is confident that all haplotype frequencies are nonzero, but further investigation into this is needed.
Turning to exact methods, we find that the enumeration method MCMC-Exact does not scale well with pool size, as the size of the feasible set grows rapidly. LC-Sampling addresses this issue by sampling Markov chains over the feasible set, without resorting to approximations. The
computational savings come from exploring only a subset of the feasible set that is likely to produce the observed data. The parameter \(C\) (Algorithm 1) gives us control over how the runtime of LC-Sampling scales. However, LC-Sampling produces Markov chains that exhibit more autocorrelation, especially as pool size increases (Figure 4(b)). The reason for this is twofold: the number of latent count values for MCMC to explore becomes greater, and the conditional posterior (9) from which the frequencies are sampled becomes more influenced by the likelihood than the prior. The posterior samples of the frequencies become more dependent on the latent counts, thereby increasing autocorrelation. For the time-series example, LC-Sampling also gave the lowest ESS, as the alternating updates of strongly dependent variables \(\mathbf{z}_{i}\) and \(\mathbf{p}_{i}\) (\(i=1,\ldots,N\)) give rise to greater autocorrelation (Hills & Smith, 1992).
Interestingly, MCMC-Approx gives lower ESS than LC-Sampling for datasets simulated based on real genetic data. One explanation is that MCMC-Approx overestimates the density at some posterior modes (see bottom right plot of Figure 2), which makes it more difficult for a chain to switch between modes. Multimodal posteriors are notoriously difficult for MCMC methods to sample. When faced with a multimodal posterior, a single chain produced by HIPPO may not switch between modes even after millions of iterations (Figure A1). To address this, Pirinen (2009) proposed to only keep the chain whose posterior mean has the highest posterior density. This is sensible if most of the posterior mass is concentrated around one sharp mode. Unfortunately, this is not the case, as multimodal posteriors often have modes with comparable posterior mass, e.g. Figure 7 and Figures A6-A8. Keeping only one chain that is stuck at the global mode does not properly account for uncertainty. Moreover, it is possible that the true frequencies may not even occur near the global mode. We also note that maximum likelihood methods that optimise towards a single mode, such as AEML, would fail to account for uncertainty across multiple modes. In contrast, exact Bayesian methods are able to produce inference that is robust against multimodality.
In comparison to HIPPO, our proposed methods give more reliable estimates of uncertainty (Figure 5), and give smaller errors in the case where all input haplotypes are known (Figure 3). However, our proposed methods may miss some haplotypes if the input list is determined via partition ligation, which occurred for 57 out of the 100 1KGP datasets. Nevertheless, our posterior means still achieve TVDs that are no worse than HIPPO. A potential alternative is to replace the MCMC-Approx subroutine in partition ligation with sparse optimisation methods for frequency estimation (Jajamovich et al., 2013; Zhou et al., 2019).
Other inference methods for latent multinomial models have been proposed in the literature outside of haplotype inference. An alternative to the Markov basis we use in LC-Sampling is the dynamic Markov basis (Bonner et al., 2016; Hazelton et al., 2021), which determines proposal directions on-the-fly during MCMC. For large configuration matrices, a Markov basis may be too large to be practically computed, whereas a dynamic Markov basis uses a relatively small number of proposal directions that depend on the current value of the latent counts during MCMC. The method guarantees that the resulting Markov chain over latent counts is irreducible, but requires expert implementation (Zhang et al., 2019). We are also aware of the saddlepoint approximation as an
alternative to the normal approximation for the latent multinomial model (Zhang et al., 2019). However, we suspect that this approximation suffers from similar issues as MCMC-Approx, as it uses a Hessian matrix that shares similar structure with the covariance matrix used in the normal approximation (5).
Compared to existing approaches, the methods that we propose in this paper for haplotype inference from pooled genetic data are more widely applicable. The implementations of the existing methods AEML and HIPPO assume that the data consists of allele counts of each genetic marker. Our methods only require each count to correspond to a subset of the full haplotypes, and these subsets can vary across pools. For example, a study may report complete haplotype information on a subset of the genetic markers. Moreover, we have implemented our methods using the probabilistic programming library PyMC (Salvatier et al., 2016), such that the methods can be easily extended to hierarchical settings, as demonstrated in Section 3.4. In future work, we will apply our methods to spatiotemporal modelling of antimalarial drug resistance. In particular, we are interested in resistance against the antimalarial sulfadoxine-pyrimethamine (SP) for the parasite _Plasmodium falciparum_. This resistance is characterised by specific mutations on the _dhfr_ and _dhps_ genes (Sibley et al., 2001), and inconsistencies in reporting between genetic studies have been previously noted (Ebel et al., 2021). Our methods developed in this paper applied to a hierarchical model can readily handle such inconsistencies to produce predictive spatiotemporal maps for the prevalences of SP-resistant haplotypes.
## 5 Acknowledgements
J.A. Flegg's research is supported by the Australian Research Council (DP200100747, FT210100034) and the National Health and Medical Research Council (APP2019093).
## References
* 4ti2 team (n.d.) 4ti2--a software package for algebraic, geometric and combinatorial problems on linear spaces.
* Bonner et al. (2016) Bonner, S. J., Schofield, M. R., Noren, P., & Price, S. J. (2016). Extending the latent multinomial model with complex error processes and dynamic Markov bases. _The Annals of Applied Statistics_, _10_(1).
* Diaconis & Sturmfels (1998) Diaconis, P., & Sturmfels, B. (1998). Algebraic algorithms for sampling from conditional distributions. _The Annals of Statistics_, _26_(1).
* Ebel et al. (2021) Ebel, E. R., Reis, F., Petrov, D. A., & Beleza, S. (2021). Historical trends and new surveillance of Plasmodium falciparum drug resistance markers in Angola. _Malaria Journal_, _20_(1), 175.
* Gasbarra et al. (2011) Gasbarra, D., Kulathinal, S., Pirinen, M., & Sillanpaa, M. J. (2011). Estimating haplotype frequencies by combining data from large DNA pools with database information. _IEEE/ACM Transactions on Computational Biology and Bioinformatics_, _8_(1), 36-44.
* Hazelton et al. (2021) Hazelton, M. L., Mcveagh, M. R., & van Brunt, B. (2021). Geometrically aware dynamic Markov bases for statistical linear inverse problems. _Biometrika_, _108_(3), 609-626.
* Hills & Smith (1992) Hills, S. E., & Smith, A. F. M. (1992). Parameterization issues in Bayesian inference (with discussion). In J. M. Bernardo, J. O. Berger, A. P. Dawid, & A. F. M. Smith (Eds.), _Bayesian Statistics 4_ (pp. 227-246). Oxford University Press.
* Hoffman & Gelman (2014) Hoffman, M. D., & Gelman, A. (2014). The no-u-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. _Journal of Machine Learning Research_, _15_(47), 1593-1623.
* Iliadis et al. (2012) Iliadis, A., Anastassiou, D., & Wang, X. (2012). Fast and accurate haplotype frequency estimation for large haplotype vectors from pooled DNA data. _BMC Genetics_, _13_(1), 94.
* Ito et al. (2003) Ito, T., Chiku, S., Inoue, E., Tomita, M., Morisaki, T.,... Kamatani, N. (2003). Estimation of haplotype frequencies, linkage-disequilibrium measures, and combination of haplotype copies in each pool by use of pooled DNA data. _The American Journal of Human Genetics_, _72_(2), 384-398.
* Jajamovich et al. (2013) Jajamovich, G. H., Iliadis, A., Anastassiou, D., & Wang, X. (2013). Maximum-parsimony haplotype frequencies inference based on a joint constrained sparse representation of pooled DNA. _BMC Bioinformatics_, _14_(1), 270.
* Kirkpatrick et al. (2007) Kirkpatrick, B., Armendariz, C. S., Karp, R. M., & Halperin, E. (2007). HAPLOPOOL: Improving haplotype frequency estimation through DNA pools and phylogenetic modeling. _Bioinformatics (Oxford, England)_, _23_(22), 3048-3055.
* Kuk et al. (2009) Kuk, A. Y. C., Zhang, H., & Yang, Y. (2009). Computationally feasible estimation of haplotype frequencies from pooled DNA with and without Hardy-Weinberg equilibrium. _Bioinformatics_, _25_(3), 379-386.
* Link et al. (2010) Link, W. A., Yoshizaki, J., Bailey, L. L., & Pollock, K. H. (2010). Uncovering a latent multinomial: Analysis of mark-recapture data with misidentification. _Biometrics_, _66_(1), 178-185.
* Liu (1994) Liu, J. S. (1994). The collapsed Gibbs sampler in Bayesian computations with applications to a gene regulation problem. _Journal of the American Statistical Association_, _89_(427), 958-966.
* Niu et al. (2002) Niu, T., Qin, Z. S., Xu, X., & Liu, J. S. (2002). Bayesian haplotype inference for multiple linked single-nucleotide polymorphisms. _The American Journal of Human Genetics_, _70_(1), 157-169.
* Patil et al. (2001) Patil, N., Berno, A. J., Hinds, D. A., Barrett, W. A., Doshi, J. M., Hacker, C. R., et al. (2001). Blocks of limited haplotype diversity revealed by high-resolution scanning of human chromosome 21. _Science (New York, N.Y.)_, _294_(5547), 1719-1723.
* Pirinen (2009) Pirinen, M. (2009). Estimating population haplotype frequencies from pooled SNP data using incomplete database information. _Bioinformatics_, _25_(24), 3296-3302.
* Pirinen et al. (2008) Pirinen, M., Kulathinal, S., Gasbarra, D., & Sillanpaa, M. J. (2008). Estimating population haplotype frequencies from pooled DNA samples using PHASE algorithm. _Genetics Research_, _90_(6), 509-524.
* Salvatier et al. (2016) Salvatier, J., Wiecki, T. V., & Fonnesbeck, C. (2016). Probabilistic programming in python using PyMC3. _PeerJ Computer Science_, \(2\), e55.
* Schofield & Bonner (2015) Schofield, M. R., & Bonner, S. J. (2015). Connecting the latent multinomial. _Biometrics_, _71_(4), 1070-1080.
* Sibley et al. (2001) Sibley, C. H., Hyde, J. E., Sims, P. F., Plowe, C. V., Kublin, J. G., et al. (2001). Pyrimethamine-sulfadoxine resistance in Plasmodium falciparum: What next? _Trends in Parasitology_, _17_(12), 582-588.
* Stephens & Donnelly (2003) Stephens, M., & Donnelly, P. (2003). A comparison of Bayesian methods for haplotype reconstruction from population genotype data. _The American Journal of Human Genetics_, _73_(5), 1162-1169.
* Tam et al. (2019) Tam, V., Patel, N., Turcotte, M., Bosse, Y., Pare, G., & Meyre, D. (2019). Benefits and limitations of genome-wide association studies. _Nature Reviews Genetics_, _20_(8), 467-484.
* The 1000 Genomes Project Consortium, Auton, A., Abecasis, G. R., Altshuler, D. M., Durbin, R. M., Abecasis, G. R., et al. (2015). A global reference for human genetic variation. _Nature_, _526_(7571), 68-74.
* Wright (2005) Wright, A. F. (2005). Genetic variation: Polymorphisms and mutations. In _Encyclopedia of Life Sciences_. John Wiley & Sons, Ltd.
* Zhang et al. (2008) Zhang, H., Yang, H.-C., & Yang, Y. (2008). PoooL: An efficient method for estimating haplotype frequencies from large DNA pools. _Bioinformatics_, _24_(17), 1942-1948.
* Zhang et al. (2019) Zhang, W., Bravington, M. V., & Fewster, R. M. (2019). Fast likelihood-based inference for latent count models using the saddlepoint approximation. _Biometrics_, _75_(3), 723-733.
* Zhou et al. (2019) Zhou, Y., Zhang, H., & Yang, Y. (2019). Cshap: Efficient haplotype frequency estimation based on sparse representation. _Bioinformatics_, _35_(16), 2827-2833.
## Appendix A Algorithm for finding the feasible set
In this section, we describe a branch-and-bound algorithm for solving \(\mathbf{Az}=\mathbf{y}\) over nonnegative integers \(z_{1},\ldots,z_{H}\) given a binary matrix \(\mathbf{A}\in\{0,1\}^{R\times H}\) and nonnegative integers \(y_{1},\ldots,y_{R}\). Note that the index \(i\) from the main text is dropped for conciseness here. We assume that the condition \(z_{1}+\cdots+z_{H}=n\) is encoded in the linear system \(\mathbf{Az}=\mathbf{y}\), and that the configuration matrix \(\mathbf{A}\) is of full row rank (see Section 2.2 of main text). Since \(\mathbf{A}\) is of full row rank, we can find \(R\) columns of \(\mathbf{A}\) that are linearly independent. Without loss of generality, we rearrange the columns of \(\mathbf{A}\) such that these \(R\) linearly independent columns are the last \(R\) columns, denoted as \(\mathbf{A}_{H-R+1:H}\). Since \(\mathbf{y}=\mathbf{A}_{1:H-R}\mathbf{z}_{1:H-R}+\mathbf{A}_{H-R+1:H}\mathbf{z }_{H-R+1:H}\), it follows that
\[\mathbf{z}_{H-R+1:H}=\mathbf{A}_{H-R+1:H}^{-1}(\mathbf{y}-\mathbf{A}_{1:H-R} \mathbf{z}_{1:H-R}), \tag{21}\]
where \(\mathbf{A}_{1:H-R},\mathbf{z}_{1:H-R},\mathbf{z}_{H-R+1:H}\) denote the first \(H-R\) columns of \(\mathbf{A}\), the first \(H-R\) entries of \(\mathbf{z}\), and the last \(R\) entries of \(\mathbf{z}\), respectively. To find all solutions to the system, we perform a branch-and-bound search to find all possible values of \(z_{1},\ldots,z_{H-R}\). Starting from \(h=1\), the algorithm branches on an interval of possible values for \(z_{h}\) and increments \(h\) whenever a branch is travelled down. If this succeeds until \(h=H-R\), we then find the last \(R\) entries of \(\mathbf{z}\) by using (21). If the result consists of nonnegative integers, we accept \(\mathbf{z}\) as a solution to \(\mathbf{Az}=\mathbf{y}\). We then backtrack the search path (decrementing \(h\)), and explore all other branches to find other solutions. The search is made efficient by finding lower and upper bounds for \(z_{h}\) based on the values of \(z_{1},\ldots,z_{h-1}\) when branching on the value of \(z_{h}\) for \(h=1,\ldots,H\).
Before the search procedure, we first determine preliminary lower bounds \(l_{h}\) and upper bounds \(u_{h}\) for each entry \(z_{h}\) that are satisfied by all nonnegative integer solutions to \(\mathbf{Az}=\mathbf{y}\). A simple choice is to set
\[l_{h}=0,\qquad u_{h}=\min_{r=1,\ldots,R}\{a_{r,h}y_{r}+(1-a_{r,h})(n-y_{r})\}. \tag{22}\]
The lower bound is trivial, whereas the upper bound is true because the \(r\)-th equation in the system implies that \(z_{h}\leq y_{r}\) if \(a_{r,h}=1\), or \(z_{h}\leq n-y_{r}\) if \(a_{r,h}=0\). For each \(h=1,\ldots,H\), we now seek to derive bounds for \(z_{h}\) using the values of \(z_{1},\ldots,z_{h-1}\) along the current search path. For any fixed \(r\) and \(h\), we have
\[z_{h} =l_{h}+(z_{h}-l_{h})\] (23) \[\leq l_{h}+\sum_{h^{\prime}=h}^{H}\mathds{1}(a_{r,h^{\prime}}=a_{r,h})(z_{h^{\prime}}-l_{h^{\prime}})\] \[=\begin{cases}y_{r}-\sum_{h^{\prime}=1}^{h-1}a_{r,h^{\prime}}z_{h^{\prime}}-\sum_{h^{\prime}=h+1}^{H}a_{r,h^{\prime}}l_{h^{\prime}}&\text{if }a_{r,h}=1,\\ n-y_{r}-\sum_{h^{\prime}=1}^{h-1}(1-a_{r,h^{\prime}})z_{h^{\prime}}-\sum_{h^{\prime}=h+1}^{H}(1-a_{r,h^{\prime}})l_{h^{\prime}}&\text{if }a_{r,h}=0.\end{cases}\]
The inequality holds since \(z_{h^{\prime}}\geq l_{h^{\prime}}\), while the last equality holds because of \(y_{r}=\sum_{h^{\prime}=1}^{H}a_{r,h^{\prime}}z_{h^{\prime}}\) and \(n-y_{r}=\sum_{h^{\prime}=1}^{H}(1-a_{r,h^{\prime}})z_{h^{\prime}}\). We define
\[U_{1}(r;h,z_{1},\ldots z_{h-1}) =y_{r}-\sum_{h^{\prime}=1}^{h-1}a_{r,h^{\prime}}z_{h^{\prime}}- \sum_{h^{\prime}=h+1}^{H}a_{r,h^{\prime}}l_{h^{\prime}},\] \[U_{0}(r;h,z_{1},\ldots z_{h-1}) =n-y_{r}-\sum_{h^{\prime}=1}^{h-1}(1-a_{r,h^{\prime}})z_{h^{ \prime}}-\sum_{h^{\prime}=h+1}^{H}(1-a_{r,h^{\prime}})l_{h^{\prime}},\]
to write the inequality in (23) more concisely as \(z_{h}\leq U_{a_{r,h}}(r;h,z_{1},\ldots z_{h-1})\). We similarly define
\[L_{1}(r;h,z_{1},\ldots z_{h-1}) =y_{r}-\sum_{h^{\prime}=1}^{h-1}a_{r,h^{\prime}}z_{h^{\prime}}- \sum_{h^{\prime}=h+1}^{H}\!\!a_{r,h^{\prime}}u_{h^{\prime}},\] \[L_{0}(r;h,z_{1},\ldots z_{h-1}) =n-y_{r}-\sum_{h^{\prime}=1}^{h-1}(1-a_{r,h^{\prime}})z_{h^{ \prime}}-\sum_{h^{\prime}=h+1}^{H}(1-a_{r,h^{\prime}})u_{h^{\prime}},\]
to obtain the inequality \(z_{h}\geq L_{a_{r,h}}(r;h,z_{1},\ldots z_{h-1})\).
The branch-and-bound algorithm is given in Algorithm A1. The values for \(U_{1},U_{0},L_{1},L_{0}\) are initialised in lines 4-8, where \(h=1\) and \(r=1,\ldots,R\). Given the values of \(z_{1},\ldots,z_{h-1}\) on the current search path, the algorithm finds lower and upper bounds for \(z_{h}\) in lines 11-12 using the inequality \(L_{a_{r,h}}(r;h,z_{1},\ldots z_{h-1})\leq z_{h}\leq U_{a_{r,h}}(r;h,z_{1}, \ldots z_{h-1})\) over \(r=1,\ldots,R\). The branching occurs in lines 19-24, where \(U_{1},U_{0},L_{1},L_{0}\) are updated based on the chosen value of \(z_{h}\).
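To make the structure of Algorithm A1 concrete, the following Python sketch enumerates the feasible set using the preliminary bounds (22), the running bounds derived above, and the back-substitution (21). It is our own simplified rendering (assuming, in particular, that the last \(R\) columns of \(\mathbf{A}\) are already linearly independent), not the implementation used for the experiments.

```python
import numpy as np

def preliminary_bounds(A, y, n):
    """Entry-wise bounds (22): l_h = 0 and u_h = min_r [a_rh*y_r + (1-a_rh)*(n-y_r)]."""
    u = np.min(A * y[:, None] + (1 - A) * (n - y)[:, None], axis=0)
    return np.zeros(A.shape[1], dtype=int), u.astype(int)

def feasible_set(A, y, n):
    """All nonnegative integer solutions of A z = y (last R columns of A assumed invertible)."""
    R, H = A.shape
    l, u = preliminary_bounds(A, y, n)
    A_free, A_dep = A[:, :H - R], A[:, H - R:]
    solutions = []

    def bounds_for(h, prefix):
        """Running bounds L_{a_rh} <= z_h <= U_{a_rh} given z_1, ..., z_{h-1}."""
        z_pre = np.asarray(prefix, dtype=int)
        lo, hi = l[h], u[h]
        for r in range(R):
            a = A[r] if A[r, h] == 1 else 1 - A[r]       # a_{r,h'} or (1 - a_{r,h'})
            total = y[r] if A[r, h] == 1 else n - y[r]
            used = a[:h] @ z_pre
            hi = min(hi, total - used - a[h + 1:] @ l[h + 1:])
            lo = max(lo, total - used - a[h + 1:] @ u[h + 1:])
        return int(max(lo, 0)), int(hi)

    def search(prefix):
        h = len(prefix)
        if h == H - R:                                   # back-substitute the last R entries
            rhs = y - A_free @ np.asarray(prefix, dtype=int)
            z_dep = np.rint(np.linalg.solve(A_dep, rhs)).astype(int)   # eq. (21)
            if np.all(z_dep >= 0) and np.array_equal(A_dep @ z_dep, rhs):
                solutions.append(np.concatenate([np.asarray(prefix, dtype=int), z_dep]))
            return
        lo, hi = bounds_for(h, prefix)
        for v in range(lo, hi + 1):                      # branch on the value of z_h
            search(prefix + [v])

    search([])
    return solutions
```

In this form, calling `feasible_set(A_i, y_i, n_i)` for each pool produces the enumeration required by MCMC-Exact.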
If the actual range of values that \(z_{h}\) can take is much narrower than the interval \([l_{h},u_{h}]\) as defined in (22), it may be computationally more efficient to find the actual minimum and maximum values that \(z_{h}\) can take, i.e. setting
\[l_{h} =\min\{z_{h}:\mathbf{Az}=\mathbf{y},z_{1}\geq 0,\ldots,z_{H}\geq 0\}, \tag{24}\] \[u_{h} =\max\{z_{h}:\mathbf{Az}=\mathbf{y},z_{1}\geq 0,\ldots,z_{H}\geq 0\}.\]
for each \(h=1,\ldots,H\). These optimisation problems can be solved using integer linear programming. This introduces a computational overhead before the branch-and-bound search, but prunes the search space as \(\mathbf{z}\) would have tighter bounds.
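These per-entry optimisation problems can be posed directly with an off-the-shelf mixed-integer solver; the sketch below uses scipy.optimize.milp (assumed available in the SciPy version at hand) and takes feasibility of \(\mathbf{Az}=\mathbf{y}\) for granted, since \(\mathbf{y}\) was observed.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def ilp_bounds(A, y):
    """Tighter bounds (24): minimise/maximise each z_h subject to A z = y, z >= 0 integer."""
    R, H = A.shape
    eq = LinearConstraint(A, y, y)                       # A z = y
    l, u = np.zeros(H, dtype=int), np.zeros(H, dtype=int)
    for h in range(H):
        c = np.zeros(H)
        c[h] = 1.0
        lo = milp(c, constraints=eq, integrality=np.ones(H), bounds=Bounds(0, np.inf))
        hi = milp(-c, constraints=eq, integrality=np.ones(H), bounds=Bounds(0, np.inf))
        l[h], u[h] = round(lo.fun), round(-hi.fun)
    return l, u
```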
## Appendix B Multimodality example from 1000 Genomes Project
In Figure 7 of the main text, we give an example of posterior multimodality when fitting the latent multinomial model to a dataset simulated based on genetic data from the 1000 Genomes Project. The trace plots of the corresponding haplotype frequencies are given in Figure A1. Note that a thinning factor of 4500 is applied for HIPPO. For this example, LC-Sampling exhibits the best MCMC mixing, followed by MCMC-Approx. HIPPO produces Markov chains that are stuck at different local modes for a long duration. In row 4, one of the chains neglects a haplotype with true frequency 0.13. In rows 2 and 5, the support of each chain consists of a short interval close to zero and a longer interval away from zero, yet the true value is barely covered by the longer interval. The poor mixing of HIPPO chains may lead to inaccurate estimation. These conclusions drawn from our visual inspection of the trace plots are consistent with the lowest ESS of the haplotype frequencies under each method: 371 for MCMC-Approx, 515 for LC-Sampling, and 10 for HIPPO.
## Appendix C Additional details for time-series modelling
We use a custom system of differential equations to simulate time-series of haplotype frequencies. The system is analogous to the continuous-time model of haploid selection expounded by Hartl (2020), but extended for multiple haplotypes instead of two genotypes. Consider a population of malaria parasites each with one of \(H=8\) possible haplotypes over 3 markers. For each \(h=1,\ldots,H\), the number of parasites with haplotype \(h\) at time \(t\) is \(N_{h}(t)\). We define the frequency of haplotype \(h\) to be \(\tilde{p}_{h}(t)\coloneqq N_{h}(t)/\sum_{h^{\prime}=1}^{H}N_{h^{\prime}}(t)\). Assuming exponential growth, we have \(N_{h}^{\prime}(t)=r_{h}(t)N_{h}(t)\), where \(r_{h}(t)\) is a time-varying intrinsic growth rate for haplotype \(h\), which we interpret as a measure of relative fitness, e.g. a drug-resistant haplotype has a higher fitness relative to a drug-sensitive haplotype after exposure to the drug. We set each \(r_{h}(t)\) to be a sum of \(D=4\) sigmoid functions:
\[r_{h}(t)=\alpha_{h,0}+\sum_{d=1}^{D}\frac{\alpha_{h,d}-\alpha_{h,d-1}}{1+\exp( -(t-c_{h,d})/\gamma_{h,d})}. \tag{25}\]
The \(d\)-th sigmoid (\(d=1,\ldots,D\)) for haplotype \(h\) suggests some change in its relative fitness due to epidemiology or drug usage, with the changepoint occurring at \(t=c_{h,d}\). We also impose the constraint \(c_{h,1}<\cdots<c_{h,D}\). The coefficient \(\gamma_{h,d}\) (\(d=1,\ldots,D\)) controls how quickly the change at \(c_{h,d}\) occurs, whereas the coefficient \(\alpha_{h,d}\) (\(d=0,\ldots,D\)) is the steady-state relative fitness between changepoints \(c_{h,d}\) and \(c_{h,d+1}\), where we define \(c_{h,0}=0\) as the start point and \(c_{h,D+1}=20\) as the end
point. The coefficients of the sigmoid functions are sampled as follows:
\[c_{h,d} \sim\text{Uniform}(0,20) \text{for }h=1,\ldots,H,\,d=1,2,3,4 \tag{26}\] \[\gamma_{h,d} \sim\text{Uniform}(0.2,2.0) \text{for }h=1,\ldots,H,\,d=1,2,3,4\] (27) \[\alpha_{h,d} \sim\text{N}(0,1/(c_{h,d+1}-c_{h,d})^{2}) \text{for }h=1,\ldots,H,\,d=0,1,2,3,4 \tag{28}\]
For each fixed \(h=1,\ldots,H\), we reorder \(\{c_{h,d}\}_{d=1}^{D}\) such that the sequence is in increasing order. The normal standard deviation in (28) is inversely proportional to the distance between changepoints to discourage dramatic growth in \(N_{h}(t)\) between two changepoints that are far apart. We choose the starting values \(N_{h}(0)\) such that the median values of \(N_{h}(t)\) over \(t\in[0,20]\) are equal across \(h=1,\ldots,H\).
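A sketch of this generative step (our own illustration, with coefficient arrays drawn according to (26)-(28)) is given below; for simplicity the starting counts are simply set equal here rather than matched on their medians.

```python
import numpy as np
from scipy.integrate import solve_ivp

H, D, T = 8, 4, 20.0
rng = np.random.default_rng(1)
c = np.sort(rng.uniform(0, T, size=(H, D)), axis=1)            # ordered changepoints, (26)
gamma = rng.uniform(0.2, 2.0, size=(H, D))                     # (27)
edges = np.concatenate([np.zeros((H, 1)), c, np.full((H, 1), T)], axis=1)
alpha = rng.normal(0.0, 1.0 / np.diff(edges, axis=1))          # alpha_{h,0},...,alpha_{h,D}, (28)

def r(t):
    """Relative fitness r_h(t) as a sum of sigmoids, eq. (25)."""
    sig = 1.0 / (1.0 + np.exp(-(t - c) / gamma))               # shape (H, D)
    return alpha[:, 0] + ((alpha[:, 1:] - alpha[:, :-1]) * sig).sum(axis=1)

sol = solve_ivp(lambda t, N_: r(t) * N_, (0.0, T), np.ones(H), dense_output=True)
p_tilde = sol.y / sol.y.sum(axis=0)                            # haplotype frequencies over time
```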
We find that the resulting trends of \(\tilde{p}_{h}(t)\) following the simulation above may be uninteresting depending on the random generation. For example, a haplotype may completely dominate the population, or too many haplotypes exhibit very little variation over time. To counter this, we carry out 100 simulations where \(|\frac{d}{dt}\tilde{p}_{h}(t)|<1\) for all \(t\in[0,20]\) (avoid domination), and select the simulation with the most temporal variation for generating the synthetic time-series count data. We quantify temporal variation using the heuristic
\[\sum_{h=1}^{H}\sum_{t^{\prime}=5}^{14}\lvert\tilde{p}_{h}(t^{\prime}+1)-\tilde {p}_{h}(t^{\prime})\rvert. \tag{29}\]
The selected simulation is shown in Figure A2, along with the \(N=30\) latent counts \(\{\mathbf{z}_{i}\}_{i=1}^{N}\) divided by the pool size \(n=50\). The latent counts are overdispersed counts following the Dirichlet
multinomial distribution
\[\mathbf{z}_{i}\sim\text{DirMult}(50,(200p_{1}(t_{i}),\ldots,200p_{H}(t_{i}))), \text{for }i=1,\ldots,N, \tag{30}\]
where \(t_{i}=0.66i-0.23\) (\(i=1,\ldots,30\)) are equally spaced time points. The Dirichlet-multinomial distribution chosen has the same mean as \(\text{Mult}(50,(p_{1}(t_{i}),\ldots,p_{H}(t_{i})))\), but with \(24\%\) larger variance. Finally, the observed data are the allele counts of each marker across the time points \(t_{1},\ldots,t_{N}\), which is shown in Figure A3.
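The overdispersed counts (30) can then be generated by compounding a Dirichlet draw with a multinomial draw; in the sketch below, `p_of_t` is assumed to return the simulated (strictly positive) frequency vector at time \(t\), for example an interpolation of the frequencies produced above.

```python
import numpy as np

def simulate_counts(p_of_t, n=50, N=30, M=3, seed=2):
    rng = np.random.default_rng(seed)
    t = 0.66 * np.arange(1, N + 1) - 0.23                      # equally spaced time points
    bits = (np.arange(2 ** M)[:, None] >> np.arange(M)) & 1    # haplotype -> allele map
    z, y = [], []
    for ti in t:
        w = rng.dirichlet(200.0 * p_of_t(ti))                  # Dirichlet(200 p(t_i))
        zi = rng.multinomial(n, w)                             # compounding gives a DirMult draw, (30)
        z.append(zi)
        y.append(zi @ bits)                                    # allele counts of each marker
    return np.asarray(z), np.asarray(y)
```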
For the latent multinomial GP model, we first define the haplotype frequencies \(\mathbf{p}_{1},\ldots,\mathbf{p}_{N}\) as a softmax transformation of Gaussian processes \(f_{1},\ldots,f_{H}\) observed at time points \(\mathbf{t}\coloneqq(t_{1},\ldots,t_{N})\):
\[p_{ih}=\frac{\exp(f_{h}(t_{i}))}{\exp(f_{1}(t_{i}))+\cdots+\exp(f_{H}(t_{i}))} \text{for }i=1,\ldots,N,\,h=1,\ldots,H. \tag{31}\]
Following Section 2.4 of the main text, we choose the mean function of the \(h\)-th GP to be a constant \(\mu_{h}\), and the covariance function of the \(h\)-th GP to be the sum of a rational quadratic kernel and a white noise kernel,
\[c_{h}(t_{i},t_{i^{\prime}})=s_{h}^{2}\bigg{(}1+\frac{(t_{i}-t_{i^{\prime}})^{2 }}{2\tau_{h}^{2}}\bigg{)}^{-1}\!\!+\sigma^{2}\mathds{1}(i=i^{\prime}), \tag{32}\]
where \(c_{h}(t_{i},t_{i^{\prime}})\) is the \((i,i^{\prime})\)-th entry of the covariance matrix \(\mathbf{C}_{h}(\mathbf{t},\mathbf{t})\) for \(f_{h}(\mathbf{t})\coloneqq(f_{h}(t_{1}),\ldots,f_{h}(t_{N}))\), \(\tau_{h}\) is the timescale, \(s_{h}\) is the temporal standard deviation, \(\sigma\) is the noise standard deviation, and \(\mathds{1}(\cdot)\) is the indicator function. The full inference model is as follows:
\[\mathbf{y}_{i} =\mathbf{A}_{i}\mathbf{z}_{i} \text{for }i=1,\ldots N, \tag{33}\] \[\mathbf{z}_{i}\mid\mathbf{p}_{i} \sim\text{Mult}(n_{i},\mathbf{p}_{i}) \text{for }i=1,\ldots,N,\] (34) \[f_{h}(\mathbf{t})\mid\mu_{h},s_{h},\tau_{h},\sigma \sim\text{N}(\mu_{h}\mathbf{1}_{N},\mathbf{C}_{h}(\mathbf{t}, \mathbf{t})) i,i^{\prime}=1,\ldots,N,\,h=1,\ldots,H,\] (35) \[\boldsymbol{\mu} \sim\text{N}\bigg{(}\mathbf{0}_{H},2^{2}\left(\mathbf{I}_{H}- \frac{1}{H}\mathbf{J}_{H}\right)\bigg{)}\,,\] (36) \[s_{h} \sim\text{InverseGamma}(3,3) \text{for }h=1,\ldots,H,\] (37) \[\tau_{h} \sim\text{InverseGamma}(3,5) \text{for }h=1,\ldots,H,\] (38) \[\sigma \sim\text{InverseGamma}(3,1), \tag{39}\]
Figure A3: Synthetic time-series data in the form of allele counts for 3 markers.
where \(\mathbf{\mu}=(\mu_{1},\ldots,\mu_{H})\), \(\mathbf{1}_{N}\) is a vector of \(N\) ones, \(\mathbf{0}_{H}\) is a vector of \(H\) zeros, \(\mathbf{I}_{H}\) is the \(H\times H\) identity matrix, and \(\mathbf{J}_{H}\) is an \(H\times H\) matrix of ones. Note that if all of the \(\mu_{h}\), \(h=1,\ldots,H\), are incremented by the same value, this keeps the values of \(\mathbf{p}_{1},\ldots,\mathbf{p}_{N}\) unchanged. To improve identifiability of \(\mathbf{\mu}\), we impose a sum-to-zero constraint \(\mu_{1}+\cdots+\mu_{H}=0\) through the covariance matrix in (36). For the nonnegative hyperparameters, we choose inverse gamma priors (37)-(39) as they suppress zero and infinity. The choice of parameters for the hyperpriors (36)-(39) is informed by the range of probable values for each hyperparameter. Specifically, the following events each have a \(0.99\) prior probability of occurring:
\[-5.15 <\mu_{h}<5.15 \text{for }h=1,\ldots,H,\] \[0.32 <s_{h}<8.85 \text{for }h=1,\ldots,H,\] \[0.54 <\tau_{h}<14.52 \text{for }h=1,\ldots,H,\] \[0.11 <\sigma<2.90.\]
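Since the methods are implemented in PyMC, the prior structure (31)-(39) can be written down compactly. The sketch below is our own minimal rendering, not the code used for the experiments: it builds only the softmax-GP prior over the frequencies (the latent multinomial observation model of Section 2 is omitted), and it uses a ZeroSumNormal as a stand-in for the sum-to-zero covariance in (36).

```python
# Minimal PyMC sketch of the prior (31)-(39); assumptions: ZeroSumNormal replaces
# the explicit covariance in (36), and no likelihood term is attached.
import numpy as np
import pymc as pm
import pytensor.tensor as pt

N, H = 30, 8
t = 0.66 * np.arange(1, N + 1) - 0.23

with pm.Model() as gp_prior:
    mu = pm.ZeroSumNormal("mu", sigma=2.0, shape=H)            # (36), sum-to-zero
    s = pm.InverseGamma("s", alpha=3, beta=3, shape=H)         # (37)
    tau = pm.InverseGamma("tau", alpha=3, beta=5, shape=H)     # (38)
    sigma = pm.InverseGamma("sigma", alpha=3, beta=1)          # (39)

    fs = []
    for h in range(H):
        # rational quadratic kernel (alpha = 1) plus white noise, as in (32)
        cov = s[h] ** 2 * pm.gp.cov.RatQuad(1, alpha=1.0, ls=tau[h]) \
              + pm.gp.cov.WhiteNoise(sigma)
        fs.append(pm.MvNormal(f"f_{h}", mu=mu[h] * pt.ones(N), cov=cov(t[:, None])))  # (35)
    F = pt.stack(fs)                                           # shape (H, N)

    P = pm.Deterministic("P", pt.exp(F) / pt.sum(pt.exp(F), axis=0))   # softmax (31)
    prior_draws = pm.sample_prior_predictive()
```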
We perform inference using NUTS for MCMC-Exact and MCMC-Approx, and Algorithm 2 (main text) for LC-Sampling. We report the number of MCMC iterations used and the computational wall time for each method in Table A1. Since the hierarchical model introduces correlations between model parameters, we increase the number of MCMC iterations performed. LC-Sampling requires more iterations as there is strong dependence between \(\mathbf{z}_{i}\) and \(\mathbf{p}_{i}\). We set the value of \(C_{i}\) from Algorithm 2 to \(C_{i}=10n_{i}\). We thin the number of LC-Sampling inference samples to \(1000\) per chain for the ESS comparison to be fair.
To sample from the posterior predictive distribution of the haplotype frequency at any time \(t\), we first sample the conditional normal distributions \(f_{h}(t)\mid f_{h}(\mathbf{t}),\mu_{h},s_{h},\tau_{h},\sigma\) for each posterior sample of \(\{f_{h}(\mathbf{t}),\mu_{h},s_{h},\tau_{h},\sigma\}\) over \(h=1,\ldots,H\), then apply the softmax transformation to obtain
\[p_{h}^{\text{pred}}(t)=\frac{\exp(f_{h}(t))}{\exp(f_{1}(t))+\cdots+\exp(f_{H}( t))} \text{for }h=1,\ldots,H. \tag{40}\]
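In numpy terms, one such conditional draw looks as follows (a sketch with our own variable names; whether the white-noise variance is added to the predictive variance depends on whether a noise-free or noisy prediction is wanted).

```python
import numpy as np

def ratquad(t1, t2, s, tau):
    return s**2 / (1.0 + (t1[:, None] - t2[None, :]) ** 2 / (2.0 * tau**2))

def draw_frequencies(t_new, t_obs, f_obs, mu, s, tau, sigma, rng):
    """One predictive draw of p_h(t_new) given a posterior sample of (f, mu, s, tau, sigma)."""
    H = f_obs.shape[0]
    f_new = np.empty(H)
    for h in range(H):
        K = ratquad(t_obs, t_obs, s[h], tau[h]) + sigma**2 * np.eye(len(t_obs))
        k = ratquad(np.atleast_1d(t_new), t_obs, s[h], tau[h])[0]
        mean = mu[h] + k @ np.linalg.solve(K, f_obs[h] - mu[h])
        var = s[h]**2 + sigma**2 - k @ np.linalg.solve(K, k)
        f_new[h] = rng.normal(mean, np.sqrt(max(var, 0.0)))
    w = np.exp(f_new - f_new.max())
    return w / w.sum()                                        # softmax, eq. (40)
```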
The summaries of the univariate posterior predictive distributions for MCMC-Exact, MCMC-Approx, and LC-Sampling are shown in Figure 8 (main text), Figure A4, and Figure A5 respectively. For the haplotypes \(001\), \(101\), \(011\), \(111\), there is multimodality in the posterior. As an example, we show the joint posterior distributions for these haplotypes at \(t=10\) in Figures A6-A8. For the joint distributions of haplotypes \(101/011\) and haplotypes \(001/111\), we observe a sharp mode near
the origin (sparse frequencies), and a second mode with lower density and wider spread where the frequencies are away from zero. However, these two modes have comparable posterior mass as the posterior mean is located between the two modes. The other four joint distributions are characterised by a diagonal ridge. This suggests that we are able to infer the frequency of partial haplotypes where one of the first two markers does not have a specified allele (e.g. the partial haplotype ?01, which is 001 and 101 combined). However, there is a non-identifiability issue as there is insufficient signal in the data to infer the frequencies of the full haplotypes.
Figure A4: Posterior predictive summary of haplotype frequencies under MCMC-Approx. The dashed and solid curves correspond to the true frequencies used for data simulation and the posterior mean respectively. Bands show 95% credible intervals.
Figure A5: Posterior predictive summary of haplotype frequencies under LC-Sampling. The dashed and solid curves correspond to the true frequencies used for data simulation and the posterior mean respectively. Bands show 95% credible intervals.
Figure A7: Joint posterior distributions under MCMC-Approx of selected haplotype frequencies from the time-series example that show multimodality. The red cross and the black dot correspond to the posterior mean and the true frequencies respectively. |
2309.08572 | Simulating Neutral Atom Quantum Systems with Tensor Network States | In this paper, we describe a tensor network simulation of a neutral atom
quantum system under the presence of noise, while introducing a new
purity-preserving truncation technique that compromises between the simplicity
of the matrix product state and the positivity of the matrix product density
operator. We apply this simulation to a near-optimized iteration of the quantum
approximate optimization algorithm on a transverse field Ising model in order
to investigate the influence of large system sizes on the performance of the
algorithm. We find that while circuits with a large number of qubits fail more
often under noise that depletes the qubit population, their outputs on a
successful measurement are just as robust under Rydberg atom dissipation or
qubit dephasing as smaller systems. However, such circuits might not perform as
well under coherent multi-qubit errors such as Rydberg atom crosstalk. We also
find that the optimized parameters are especially robust to noise, suggesting
that a noisier quantum system can be used to find the optimal parameters before
switching to a cleaner system for measurements of observables. | James Allen, Matthew Otten, Stephen Gray, Bryan K. Clark | 2023-09-15T17:38:37Z | http://arxiv.org/abs/2309.08572v1 | # Simulating Neutral Atom Quantum Systems with Tensor Network States
###### Abstract
While abstract models of quantum computation assume a closed system of two-level states, practical quantum devices inevitably couple to the environment in some way, creating sources of noise. Understanding the tolerance to noise of specific quantum algorithms run on specific devices is important for determining the feasibility of quantum computing in the current noisy intermediate scale quantum era. Of particular interest is understanding the noise sensitivity of these devices as more qubits are added to the system. Classical simulations are a useful tool to understand the effects of this noise, but direct classical simulations of open quantum systems are burdened by an exponentially growing cost in the number of qubits and a large local Hilbert space dimension. For one dimensional, shallow circuits, using tensor networks can replace this exponential cost with a linear one and simulate far wider systems than what would normally be available. In this paper, we describe a tensor network simulation of a neutral atom quantum system under the presence of noise, while introducing a new purity-preserving truncation technique that compromises between the simplicity of the matrix product state and the positivity of the matrix product density operator. We apply this simulation to a near-optimized iteration of the quantum approximate optimization algorithm on a transverse field Ising model in order to investigate the influence of large system sizes on the performance of the algorithm. We find that while circuits with a large number of qubits fail more often under noise that depletes the qubit population, their outputs on a successful measurement are just as robust under Rydberg atom dissipation or qubit dephasing as smaller systems. However, such circuits might not perform as well under coherent multi-qubit errors such as Rydberg atom crosstalk. We also find that the optimized parameters are especially robust to noise, suggesting that a noisier quantum system can be used to find the optimal parameters before switching to a cleaner system for measurements of observables.
## I Introduction
An ideal quantum computer is decoupled from the environment so as to minimize the effects of noise coming from sources such as dephasing and dissipation. Unfortunately, in practice such separation is difficult because quantum circuit operations necessarily couple the system with an outside source. Realistic quantum devices are best thought of as open quantum systems that interact with various sources of environmental noise. This limits the extent to which quantum circuits can be operated, both in terms of circuit size and depth, before breaking down.
Given the current generation of noisy quantum computers, a wide array of quantum algorithms have been developed, such as the variational quantum eigensolver (VQE) and the quantum approximate optimization algorithm (QAOA),[1; 2; 3; 4] which need low circuit depth and hopefully limited coherence. It is still unclear, though, how even in this shallow depth circuit regime, the effect of realistic noise influences the output, such as variational energies and optimized parameters, of these algorithms, especially as we scale circuits to larger system sizes. Therefore, to facilitate our understanding and characterization of quantum devices, we would like to simulate them as best we can using classical algorithms.
Simulations can play an important role in determining the effect of noise on quantum devices. Unfortunately, simulating quantum devices classically is exponentially difficult in general, a feature which is essential to the algorithmic strength of quantum computing. These simulations are even more difficult in the case of open quantum systems where a single wavefunction cannot represent the full state. A traditional approach which directly represents the entire density matrix of the system becomes inefficient very quickly for state-of-the-art sizes such as Sycamore's 53 qubit system,[5] and stochastic approaches are limited by a poorly scaling signal-to-noise cost.[6] For a quantum system based on neutral atom arrays, the qubit count can reach even higher, with systems up to 100 qubits being implemented.[7]
Due to their ability to simulate circuits with a very large number of qubits (albeit at low depth), tensor network states (TNS) are particularly well equipped to study the scaling of noise-based errors with system size.[8] Tensor networks are most frequently used to represent a wavefunction. However, with an open quantum system we need to create a TNS that represents the density matrix. Besides the increased computational burden from the extra physical dimensions, this introduces another problem: enforcing the positivity of the density matrix with a TNS is nontrivial.
In this work, our focus is two-fold. First, we develop and validate a new tensor network approach to approximately enforce positivity of the density matrix when simulating open quantum systems. The most naive
representation of the density matrix - a vectorized Matrix Product Operator (MPO) - can be modified to enforce its positivity, following the Matrix Product Density Operator (MPDO) scheme.[9] While this type of tensor network is well-behaved for simple channels, it struggles to implement circuit operations that combine the channels of multiple time steps together. This limits the ability of the truncation algorithm to find the most accurate approximate forms - in fact, we found that it did not perform as well as the naive MPO in our simulations. Instead, we have devised an efficient compromise, the Purity-Preserving Truncation (PPT) algorithm, which keeps the purity of the density matrix constant after each truncation, limiting the maximum negativity of the system.
Second, we have used the PPT, combined with an efficient massively parallel code (see Appendix C) to determine the effect of noise on a neutral atom simulation of the QAOA. We find that even in the face of non-trivial dissipation and dephasing noise, both the energy and optimized parameters found in a QAOA simulation of a transverse field Ising model (TFIM) are quite accurate even as the system size grows to 80 qubits. However, this is conditioned on a successful measurement, i.e. one without a qubit in a dark state. While the measured observable values when the system returns a result seem largely insensitive to noise, the probability of an unsuccessful measurement increases with both noise and system size resulting in the need for many more shots to achieve a similar result. We also find that certain coherent effects, such as crosstalk between qubits, can influence the circuit in a way that creates compounding errors over system size, making it difficult to operate large, accurate circuits under these errors.
The rest of the paper is as follows. In Section II, we introduce the dynamics of the neutral atom array, the specific quantum system that we will be simulating.[10] In Section III we outline our MPO simulation approach, explaining the PPT algorithm and showing that it performs better than the bare MPO and MPDO in a heavily truncated Random Circuit Sampling (RCS) algorithm. In Section IV, we apply our new machinery on a near-optimized iteration of a QAOA circuit, where the circuit parameters have been optimized to create a ground state wavefunction of the TFIM. In Section IVa, we demonstrate that under most sources of error, the VQE iteration's final evaluated energy depends only on error strength and not on system size, although coherent errors caused by Rydberg atom crosstalk might create a system size dependence. In Section IVb, we also consider the possibility of the algorithm selecting a spurious value due to errors, and find that this is also mostly independent of system size in the same way as the energy measurement. Moreover, we find that the parameter optimization tends to be more robust to the noise than the energy measurement. Our work opens a new approach for the simulation of large, open quantum systems and validates the efficacy of QAOA algorithms on neutral atom devices at the noisy intermediate scale.
## II Lindblad master equation for neutral atom arrays
In this work, we focus on modeling a neutral atom array. The neutral atom array is a system for implementing quantum circuits which is well suited for a large number of qubits, with some current systems composed of hundreds of qubits.[11] We focus in this work on a one-dimensional geometry, where each atom contains a two-level computational subspace \(|0\rangle,|1\rangle\) and a high energy Rydberg state \(|r\rangle\) as well as an additional set of dark states \(|d\rangle\).
In a neutral atom array, entanglement between nearest-neighbor sites can be created via the Rydberg blockade (Fig. 1a),[1] where two neighboring qubits are temporarily promoted to Rydberg states that repulsively interact with each other. In one such scheme, the two active qubits experience simultaneous pulses under a Hamiltonian
\[H_{p}(t)=H_{1}(t)\otimes I_{2}+I_{1}\otimes H_{2}(t)+B|rr\rangle\langle rr| \tag{1}\]
where \(B\) is the Rydberg blockade strength, and \(H_{1},H_{2}\) are the single-qubit components of the Hamiltonian. These components include a Rabi frequency \(\Omega_{i}(t)\) which promotes a \(|1\rangle\) qubit to the Rydberg state \(|r\rangle\) and a Rydberg detuning \(\Delta_{i}(t)\),
\[H_{i}=\frac{\Omega_{i}(t)}{2}\big{[}|r\rangle\langle 1|+\text{h.c.}\big{]}+ \Delta_{i}(t)|r\rangle\langle r|. \tag{2}\]
For all the systems considered in this paper, each pulse is identical, so \(\Omega_{1}(t)=\Omega_{2}(t)\) and \(\Delta_{1}(t)=\Delta_{2}(t)\).
In any quantum system, coupling between the system and environment introduces noise degrees of freedom that must be accounted for. Provided the coupling is weak enough and the environment is memoryless, we can model the evolution of the reduced density matrix of the system \(\rho\) with the Lindblad Master Equation (LME),
\[\frac{\text{d}\rho(t)}{\text{d}t} \equiv-i\mathcal{L}[\rho]\] \[=-i[H_{p}(t),\rho(t)]+\sum_{i}L_{i}\rho(t)L_{i}^{\dagger}-\frac{1 }{2}\{L_{i}^{\dagger}L_{i},\rho(t)\}. \tag{3}\]
Here \(L_{i}\) are jump operators representing different noise sources in the neutral atom array. One such source is Rydberg atom dissipation. The Rydberg states have a finite lifetime and can decay to either the qubit states or arbitrary dark states \(|d\rangle\) that represent any reachable atomic levels. These dark states cease to interact with the rest of the system and we assume that measuring an atom in a dark state counts as a failure of the entire circuit. The jump operator for this mechanism is
\[L_{j}=\sqrt{\gamma_{diss}b_{j}}|j\rangle\langle r| \tag{4}\]
for branching ratios \(b_{0},b_{1},b_{d}\), and overall dissipation strength \(\gamma_{diss}\). If the Zeeman shift between qubit energy levels fluctuates, there is another jump operator to represent qubit dephasing,[12; 13]
\[L_{deph}=\frac{\gamma_{deph}}{\sqrt{2}}\big{(}|0\rangle\langle 0|-|1\rangle \langle 1|\big{)}. \tag{5}\]
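For orientation, a hedged QuTiP sketch of this two-atom Lindblad model is given below (our own construction, not the paper's simulator): a constant \(\Omega\) and \(\Delta\) stand in for the actual pulse shapes, the branching ratios and rates are illustrative values, and each atom carries the four levels \(|0\rangle,|1\rangle,|r\rangle,|d\rangle\).

```python
import numpy as np
from qutip import basis, qeye, tensor, mesolve

d = 4
ket = [basis(d, i) for i in range(d)]                   # |0>, |1>, |r>, |d>
I = qeye(d)

Omega, Delta, B = 2 * np.pi * 5.0, 0.0, 2 * np.pi * 500.0   # rad/us, illustrative values
gamma_diss, gamma_deph = 0.01, 0.01                     # illustrative rates
branching = {0: 1 / 16, 1: 1 / 16, 3: 7 / 8}            # b_0, b_1, b_d (assumed values)

# single-atom Hamiltonian, eq. (2), and the blockade term of eq. (1)
H1 = 0.5 * Omega * (ket[2] * ket[1].dag() + ket[1] * ket[2].dag()) + Delta * ket[2] * ket[2].dag()
H = tensor(H1, I) + tensor(I, H1) + B * tensor(ket[2] * ket[2].dag(), ket[2] * ket[2].dag())

# single-atom jump operators, eqs. (4)-(5), embedded on both sites
jumps = [np.sqrt(gamma_diss * b) * ket[j] * ket[2].dag() for j, b in branching.items()]
jumps.append((gamma_deph / np.sqrt(2)) * (ket[0] * ket[0].dag() - ket[1] * ket[1].dag()))
c_ops = [tensor(op, I) for op in jumps] + [tensor(I, op) for op in jumps]

rho0 = tensor(ket[1] * ket[1].dag(), ket[1] * ket[1].dag())     # both atoms start in |1>
times = np.linspace(0.0, 0.54, 200)                             # roughly the pulse duration in us
result = mesolve(H, rho0, times, c_ops=c_ops,
                 e_ops=[tensor(ket[2] * ket[2].dag(), I)])      # Rydberg population of atom 1
```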
The key operation in a quantum circuit is a universal two qubit gate - we will focus on simulating a CZ gate in this paper. To measure the quality of these gates, we use a metric based on a combined arithmetic-geometric mean fidelity, as introduced in.[14] We investigate two types of pulses. The first type of pulse we consider is an Adiabatic Rapid Passage (ARP) pulse (Fig. 1c inset).[1] This pulse attempts to use the Rydberg blockade to prevent simultaneous excitations of both sites into the Rydberg states. Due to a \(\pi\) phase shift acquired by the site after it enters and exits the Rydberg states, this pulse will create a CZ gate in the limit of infinite blockade strength and no noise.
With a realistic[10] blockade strength \(B_{0}=2\pi\times 500\)MHz, time period \(T=0.54\mu s\) and dissipation \(\gamma_{diss}=0.001T^{-1}\), we calculated a Bell fidelity of 0.989, with most of the inaccuracy coming from phase errors in the unitary operator caused by finite blockade strength (Fig. 1b).
The second type of pulse uses a Gaussian profile for \(\Omega(t)\) and a \(\Delta(t)\) that is constant in time (Fig. 1e inset).[13] Unlike the ARP pulse, the Gaussian pulse applied to a single qubit does not complete a full \(|1\rangle\rightarrow|r\rangle\rightarrow|1\rangle\) oscillation. Therefore, at infinite blockade strength a \(\pi\) phase shift does not occur; instead one must select a specific finite blockade strength which, in combination with the single-qubit rotation, generates a CZ. The Gaussian unitary varies more rapidly with blockade strength than the ARP pulse, so fluctuations in the blockade strength create more significant errors. However, we no longer need large blockade strengths to create an accurate unitary operator. For blockade strengths as low as \(2\pi\times 60\)MHz (Fig. 1d) we can create a unitary with a combined infidelity of \(6\times 10^{-8}\), provided there are no additional sources of noise.
## III Matrix product operator representation
In this section we will describe how to efficiently represent a large number of neutral atom qubits using tensor networks. We will also introduce a new way of maintaining the physicality of an approximate representation of the system under the time evolution of a quantum circuit.
The wavefunction of a one-dimensional chain of \(N\) qubits can be represented by a matrix product state (MPS), a string of rank 3 tensors (except for the edges which are rank 2), with each tensor representing an individual site entangled with its neighbors through virtual/bond dimensions (Fig. 2a),
\[|\psi\rangle_{\sigma_{1}...\sigma_{N}}=A^{1}_{\sigma_{1}i_{1}}A^{2}_{\sigma_{ 2}i_{1}i_{2}}...A^{N}_{\sigma_{N}i_{N-1}} \tag{6}\]
where \(\sigma_{j}\) is a physical dimension and \(i_{j}\) is the bond dimension. Likewise, a density matrix can be represented as a matrix product operator (MPO), where each tensor contains two physical dimensions (Fig. 2b),
\[\rho_{\sigma_{1}...\sigma_{N},\sigma^{\prime}_{1}...\sigma^{\prime}_{N}}=B^{ 1}_{\sigma_{1}\sigma^{\prime}_{1}i_{1}}B^{2}_{\sigma_{2}\sigma^{\prime}_{2}i _{1}i_{2}}...B^{N}_{\sigma_{N}\sigma^{\prime}_{N}i_{N-1}}. \tag{7}\]
The MPS representation of the wavefunction requires \(O(Nd^{2}D^{2})\) values, where \(d\) is the physical dimension and \(D\) is the bond dimension. Unlike the statevector representation, which requires \(O(d^{N})\) values, the MPS representation only grows linearly in \(N\), so it becomes the more tractable representation for any quantum state where the required bond dimension is not expected to be too high. This difference is even more important for the density matrix, where each site requires two physical indices, giving a \(O(d^{2N})\) cost in statevectors and \(O(Nd^{4}D^{2})\) in MPOs. This harsher scaling makes open quantum system density matrix simulations difficult for systems beyond 12 sites:[15] for example, a 15 site system with the same three levels as the neutral atom array requires approximately 4PB of RAM using a naive implementation.
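To make the scaling argument concrete, the short snippet below compares the naive density-matrix storage cost \(d^{2N}\) with the MPO cost \(O(Nd^{4}D^{2})\). The 16-byte complex entries and the example bond dimension are assumptions for illustration only.

```python
def dense_density_matrix_bytes(n_sites, d=3, bytes_per_entry=16):
    """Memory for storing a full density matrix with d levels per site."""
    return d ** (2 * n_sites) * bytes_per_entry

def mpo_bytes(n_sites, d=3, bond_dim=64, bytes_per_entry=16):
    """Rough memory for an MPO with two physical indices of dimension d per site."""
    return n_sites * d ** 4 * bond_dim ** 2 * bytes_per_entry

for n in (12, 15, 20):
    print(f"N={n}: dense {dense_density_matrix_bytes(n) / 1e15:.2f} PB, "
          f"MPO {mpo_bytes(n) / 1e6:.1f} MB")
```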
### MPO form of the Noisy CZ Gate
We can convert the density matrix into an MPS by vectorizing[8] the forward and backward physical indices at each site, \(\sigma_{i}\sigma^{\prime}_{i}\rightarrow\eta_{i}\). After that, we want to evaluate the time evolution channel of a pulse on the neutral atom array,
\[\rho(t+T)=\mathcal{C}_{T}(\Omega,\Delta,B)[\rho(t)] \tag{8}\]
with duration \(T\), Rabi frequency \(\Omega(t)\), detuning \(\Delta(t)\) and Rydberg blockade \(B\) in terms of a vectorized MPO. We split the time evolution into small steps of duration \(\tau\ll T\) and attempt to find an approximate form for the small time step channel
\[\mathcal{C}_{\tau}(t)=e^{-i\int_{t}^{t+\tau}\mathcal{L}(t)\mathrm{d}t}. \tag{9}\]
With \(O\left(\tau^{2}\right)\) errors we can decompose the Lindbladian (3) into each individual component and apply the time-evolved form of the components separately. If \(\rho\) is in a vectorized form, we can interpret channel actions \(A\rho B\) as an operator \((A\otimes B)(\rho)\) acting on the forward and backward physical indices of \(\rho\). In this notation,
\[\mathcal{C}_{\tau}(t)[\rho(t)]= e^{-i\tau H_{p}(t)\otimes I}\,e^{i\tau I\otimes H_{p}(t)}\] \[\bigg{(}\prod_{i}e^{\tau\left[L_{i}\otimes L_{i}^{\dagger}-\frac{1}{2}(L_{i}^{\dagger}L_{i}\otimes I+I\otimes L_{i}^{\dagger}L_{i})\right]}\bigg{)}(\rho(t)). \tag{10}\]
Note that we wish to evaluate the exponential of each component analytically, instead of taking a Taylor series
approximation like \(e^{\tau\hat{O}}\approx I+\tau\hat{O}\). This is to make sure the approximate channel remains CPTP. The pulse Hamiltonian is further broken down into its respective components
\[e^{-i\tau H_{p}(t)}= \left(e^{-i\tau\left[\frac{\Omega_{1}(t)}{2}(|r\rangle\langle 1|+\text{h.c.})+\Delta_{1}(t)|r\rangle\langle r|\right]}\otimes I_{2}\right)\] \[\left(\text{site }1\leftrightarrow\text{site }2\right)\left(e^{-i\tau B|rr\rangle\langle rr|}\right) \tag{11}\]
where (\(\text{site }1\leftrightarrow\text{site }2\)) refers to the first term on the RHS with sites \(1\) and \(2\) exchanged. Each single site operator \(\hat{O}_{\sigma,\sigma^{\prime}}\) can be represented as a rank \(2\) tensor acting on that site's physical index \(\sigma_{i}\). The two-site operator \(e^{-i\tau B|rr\rangle\langle rr|}\) becomes a rank \(4\) tensor acting on both \(\sigma_{1}\) and \(\sigma_{2}\). This is the only component of the time evolution channel that entangles the sites.
None of the jump operators we cover in this paper operate on multiple sites at once. However, they do operate on the forward and backward physical indices of the density matrix at the same time. We can write these channels as an operator that acts on the combined vectorized index \(\eta_{i}\) at a particular site. With all of these channel components combined, the time evolution channel \(\mathcal{C}_{\tau}(t)\) in the vectorized picture is a rank \(4\) tensor that acts on the vectorized index of two neighboring sites.
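The sketch below shows one way to build such a small-time-step channel as a dense superoperator. It uses the standard column-stacking vectorization (\(A\rho B\rightarrow(B^{T}\otimes A)\,\mathrm{vec}(\rho)\)), which is a slightly different bookkeeping from the forward/backward index notation used above, and it exponentiates the full two-site Lindbladian in one go rather than splitting it into the components of Eq. (10); the function names are ours.

```python
import numpy as np
from scipy.linalg import expm

def lindblad_superoperator(h, jumps):
    """Matrix M such that d vec(rho)/dt = M vec(rho), for Hamiltonian h and
    jump operators `jumps`, in the column-stacking vectorization convention."""
    dim = h.shape[0]
    eye = np.eye(dim)
    m = -1j * (np.kron(eye, h) - np.kron(h.T, eye))        # -i[H, rho]
    for L in jumps:
        LdL = L.conj().T @ L
        m += (np.kron(L.conj(), L)                          # L rho L^dagger
              - 0.5 * np.kron(eye, LdL)                     # -1/2 L^dagger L rho
              - 0.5 * np.kron(LdL.T, eye))                  # -1/2 rho L^dagger L
    return m

def step_channel(h_two_site, jumps_two_site, tau):
    """Small-time-step channel exp(tau * Lindbladian) acting on the
    vectorized two-site density matrix (a rank-4 tensor after reshaping)."""
    return expm(tau * lindblad_superoperator(h_two_site, jumps_two_site))
```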
At this point, we can time evolve a vectorized density matrix by applying this channel for each time step. This is accomplished by first multiplying together every tensor involved (Fig. 2c, first step). The resulting tensor now contains the physical indices for two neighboring qubits. We use singular value decomposition (SVD, Fig. 2c second step) to split the two qubit degrees of freedom into separate tensors. This process also creates a new bond index between the tensors and a diagonal matrix \(\Lambda\) on the bond. Generally, the dimension of this bond index will be larger than the previous bond dimension, but we can truncate it back to the original dimension by removing the least significant diagonal elements in \(\Lambda\). This process is most efficient when the MPS has been canonized, with its center at one of the active sites of the gate.[16] Canonization is a gauge transformation that creates a specific center \(c\) in the MPS such that the contraction of all sites around that center reduces to the identity,
\[\sum_{\eta_{l},\,l<c}\sum_{i_{k},\,k<c-1}\sum_{i^{\prime}_{k},\,k<c-1}B^{1}_{\eta_{1}i_{1}}\ldots B^{c-1}_{\eta_{c-1}i_{c-2}i_{c-1}}\,B^{1\dagger}_{\eta_{1}i^{\prime}_{1}}\ldots B^{c-1\dagger}_{\eta_{c-1}i^{\prime}_{c-2}i^{\prime}_{c-1}}\] \[\ \ \ \ =\delta_{i_{c-1}i^{\prime}_{c-1}} \tag{12}\] \[\sum_{\eta_{l},\,l>c}\sum_{i_{k},\,k>c}\sum_{i^{\prime}_{k},\,k>c}B^{N}_{\eta_{N}i_{N-1}}...B^{c+1}_{\eta_{c+1}i_{c}i_{c+1}}\,B^{N\dagger}_{\eta_{N}i^{\prime}_{N-1}}...B^{c+1\dagger}_{\eta_{c+1}i^{\prime}_{c}i^{\prime}_{c+1}}\] \[\ \ \ =\delta_{i_{c}i^{\prime}_{c}}. \tag{13}\]
The argument in [16] is based on the L2-norm of the MPS being well-behaved, so for a density matrix MPS, as long as the purity is relatively close to \(1\) (which is the case for any sufficiently weak source of noise), canonization will yield a similar increase in truncation efficiency.
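A minimal numpy sketch of this two-site update is shown below: the two active tensors are combined with the gate, split again with an SVD, and the new bond is truncated. The tensor layout \((D_{\mathrm{left}},\ \mathrm{physical},\ D_{\mathrm{right}})\), the decision to absorb \(\Lambda\) into the right tensor, and the function names are illustrative choices; the PPT procedure described below would adjust the kept singular values before they are absorbed.

```python
import numpy as np

def apply_two_site_gate(a1, a2, gate, max_bond):
    """Apply a two-site gate to neighboring tensors and truncate the new bond.

    a1: (D_left, d, D_mid), a2: (D_mid, d, D_right) MPS / vectorized-MPO tensors
    gate: (d, d, d, d) ordered as (out1, out2, in1, in2)
    Returns the updated tensors and the kept singular values (diagonal of Lambda)."""
    d_left, phys, _ = a1.shape
    d_right = a2.shape[2]
    theta = np.einsum('ldm,mer->lder', a1, a2)           # merge the two sites
    theta = np.einsum('abde,lder->labr', gate, theta)    # act with the gate
    theta = theta.reshape(d_left * phys, phys * d_right)
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    keep = min(max_bond, s.size)                          # truncate the bond
    u, s, vh = u[:, :keep], s[:keep], vh[:keep, :]
    a1_new = u.reshape(d_left, phys, keep)
    a2_new = (np.diag(s) @ vh).reshape(keep, phys, d_right)
    return a1_new, a2_new, s
```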
Each application costs a time \(O(d^{2}D^{3})\) where \(d\) is the physical dimension and \(D\) the bond dimension of the vectorized density matrix MPS. As many of the frequencies used in the pulse are very large (particularly
Figure 1: (a) Rydberg blockade mechanism. A pulse \(\Omega(t)\) promotes a \(|1\rangle\) qubit state to a Rydberg state \(|r\rangle\) with detuning \(\Delta(t)\); the two neighboring Rydberg states experience an extra blockade interaction \(B\). (b,c) ARP gate fidelity as a function of (b) blockade strength with dissipation \(\gamma_{diss}=0\), and (c) dissipation with blockade \(B=2\pi\times 10000\)MHz. \(\Omega(t)\) and \(\Delta(t)\) follow the blue and green curves of the inset in (c) respectively: \(\Omega(t)=\Omega_{max}\big{[}e^{-(t-t_{0})^{4}/v^{4}}-a\big{]}/(1-a)\) with pulse width parameter \(v=0.175T\), \(t_{0}=T/4\) and \(a=e^{-(t_{0}/v)^{4}}\), while \(\Delta(t)\) follows a split cosine. We use, following Saffman et al,[10]\(T=0.54\mu\)s, \(\Omega_{max}=17\)MHz and \(\Delta_{max}=23\)MHz. (d,e) Gaussian gate fidelity as a function of (d) blockade strength with no dissipation, and (e) dissipation with blockade \(B=2\pi\times 60\)MHz. \(\Omega(t)\) and \(\Delta(t)\) follow the blue and green curves of the inset in (e) respectively: \(\Omega(t)=\Omega_{max}\big{[}e^{-(t-t_{0})^{2}/v^{2}}-a\big{]}/(1-a)\) with pulse width parameter \(v=0.1T\), \(t_{0}=T/2\) and \(a=e^{-(t_{0}/v)^{2}}\), while \(\Delta(t)\) is a constant. We use, following Robicheaux et al,[13]\(T=2.165\mu\)s, \(\Omega_{max}=17\)MHz and \(\Delta=-14.7\)MHz. In both pulses, the branching ratios for dissipation are taken as \(b_{0}=b_{1}=1/16,b_{d}=7/8\).
the Rydberg blockade), the time step required for accurate simulation is very small, requiring the application of thousands of time-step channels for a single pulse. Applying each channel one-by-one to the MPS would be unnecessarily expensive. Instead, we integrate the time evolution channel for the entire pulse before applying it to the MPS. In a vectorized picture, this is simply a matter of multiplying all the time-step channels together. Once the full time evolution channel of a CZ gate has been assembled, it does not have to be re-evaluated unless one of the dissipation or Hamiltonian parameters changes, and it can be copied onto any CZ gate that appears in the quantum circuit we want to simulate.
The advantage of this method is that we significantly reduce the number of times we have to apply an operator directly to the density matrix; the disadvantage is that the time evolution channel becomes more complicated to evaluate if it acts on more than two sites. This is the case for a circuit with significant global Rydberg atom crosstalk error.
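A sketch of this pre-integration step is shown below. Here `step_channel_at` is a hypothetical callback that returns the dense matrix of the vectorized two-site channel for one small time step (for instance, built as in the previous sketch with the time-dependent \(\Omega(t)\) and \(\Delta(t)\)); the assembled product is then reused for every CZ gate with the same parameters.

```python
import numpy as np

def integrate_pulse_channel(step_channel_at, t_final, n_steps):
    """Multiply the small-time-step channels of a whole pulse into one matrix.

    step_channel_at(t, tau) -> dense matrix of the vectorized two-site channel
    for the interval [t, t + tau]."""
    tau = t_final / n_steps
    full = np.eye(step_channel_at(0.0, tau).shape[0], dtype=complex)
    for k in range(n_steps):
        full = step_channel_at(k * tau, tau) @ full   # later steps act on the left
    return full
```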
### MPO form of the Noisy CZ Gate with crosstalk
In this section we will cover a specific source of coherent error that will be introduced to some (but not all) of the circuit simulations in the rest of this paper. When a pulse excites a qubit to the Rydberg states, it leaves a residual population in that state. If the residual population does not fully decay before the next pulse, there can be unwanted crosstalk between the Rydberg population of the target sites of the pulse and residual populations in neighboring sites. In order to add Rydberg atom crosstalk to our time evolution channel, we need to include the global blockade term
\[H_{b}=\sum_{i=1}^{N-1}B|rr\rangle\langle rr|_{i,i+1}\bigotimes_{ \begin{subarray}{c}j=1\\ j\neq\{i,i+1\}\end{subarray}}^{N}I_{j} \tag{14}\]
to our time evolution channel. In all our simulations of unwanted crosstalk, we assume that there is no noise to combine with the crosstalk, so we time evolve under the Hamiltonian instead of the Lindbladian. All blockade terms commute with each other and the only blockade terms that do not commute with the rest of the Hamiltonian are those that have some overlap with the active sites, i.e. the active site blockade and the nearest neighbor blockade terms. All other terms can be applied as their own local operator onto the MPS, independent of the rest of the MPO.
The blockade between an active site and its nearest non-active neighbor can be interpreted as a shift in the effective detuning of that site, \(\Delta\rightarrow\Delta+B\), conditioned on whether the non-active neighbor is in a Rydberg state or not. Thus, we can write the gate with nearest-neighbor blockade \(\mathcal{C}^{NN}_{i,i+1}\) as a combination of unmodified two-site gates \(\mathcal{C}_{i,i+1}(\Delta_{i},\Delta_{i+1})\),
\[\begin{split}\mathcal{C}^{NN}_{i,i+1}=&(I-|r \rangle\langle r|)_{i-1}\otimes(I-|r\rangle\langle r|)_{i+2}\otimes\mathcal{C }_{i,i+1}(\Delta_{i},\Delta_{i+1})\\ +&(|r\rangle\langle r|)_{i-1}\otimes(I-|r\rangle \langle r|)_{i+2}\otimes\mathcal{C}_{i,i+1}(\Delta_{i}+B,\Delta_{i+1})\\ +&(I-|r\rangle\langle r|)_{i-1}\otimes(|r\rangle \langle r|)_{i+2}\otimes\mathcal{C}_{i,i+1}(\Delta_{i},\Delta_{i+1}+B)\\ +&(|r\rangle\langle r|)_{i-1}\otimes(|r\rangle \langle r|)_{i+2}\otimes\mathcal{C}_{i,i+1}(\Delta_{i}+B,\Delta_{i+1}+B).\end{split} \tag{15}\]
This tensor gives us the form of the time evolution operator for the active sites and their nearest neighbors, which combined with the two-site blockade operators on the other sites gives the full time evolution of the system as a string of tensors. This assumes that each gate is being applied sequentially, which is not necessarily the case in a real circuit. We could also apply each gate in the same layer simultaneously. However, this would result in a far less tractable MPO, as we would no longer have commutativity of the blockade terms.
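The sketch below assembles the conditioned gate of Eq. (15) as a four-site operator at the unitary level (consistent with the assumption above that the crosstalk simulations are noiseless). `gate_with_detunings` is a hypothetical callback returning the noiseless two-site gate built with shifted detunings; the basis ordering and the index of the Rydberg level are assumptions of this example.

```python
import numpy as np

def crosstalk_conditioned_gate(gate_with_detunings, blockade, d=4, r_index=2):
    """Eq. (15): gate on sites (i, i+1) conditioned on the Rydberg population
    of the neighbors (i-1, i+2), returned as a d^4 x d^4 operator.

    gate_with_detunings(shift_left, shift_right) -> the two-site gate
    C_{i,i+1}(Delta_i + shift_left, Delta_{i+1} + shift_right)."""
    proj_r = np.zeros((d, d), dtype=complex)
    proj_r[r_index, r_index] = 1.0                  # |r><r| on a neighbor
    proj_not_r = np.eye(d) - proj_r                 # I - |r><r|
    total = np.zeros((d ** 4, d ** 4), dtype=complex)
    for left_proj, shift_left in ((proj_not_r, 0.0), (proj_r, blockade)):
        for right_proj, shift_right in ((proj_not_r, 0.0), (proj_r, blockade)):
            core = gate_with_detunings(shift_left, shift_right)
            total += np.kron(left_proj, np.kron(core, right_proj))
    return total
```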
### Matrix Product Density Operators
While the LME by itself preserves the positivity of the density matrix, positivity is not an inherent quality of the MPS. Therefore, if the density matrix is truncated, there is a possibility of introducing negative eigenvalues to the system. In the following sections we will describe two ways to alleviate this negativity problem.
The traditional method, described in this section, is a tensor network representation of the density matrix that enforces its positivity at all times.[9] This representation comes from the diagonalized form of the density matrix as a sum over wavefunction projectors,
\[\rho=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|. \tag{16}\]
Each term in this sum can be represented as the outer product of an MPS and its complex conjugate. Instead of accumulating a potentially exponential amount of
Figure 2: a) Wavefunction/MPS. b) Operators or Density Matrix/MPO. c) A potential scheme for truncating an MPS while applying a two-site operator. Arrows on the MPS denote a canonization direction. All operator and active-site tensors are first combined, then split back into two sites with an SVD. This also produces a diagonal tensor \(\Lambda\) between the sites containing the singular values of the squared density matrix. The truncated density matrix is then obtained by removing the lowest singular values of \(\Lambda\).
terms, we can distribute the summation over an extra internal dimension for every site, creating the Matrix Product Density Operator (Fig. 3a). The drawback of this representation is the higher computational cost due to the extra index, and the increased difficulty of applying new gates to the density matrix and truncating it. Unlike in many other works,[9; 17] the gate and noise operators are simulated at the pulse level, and occur simultaneously. While the individual time steps can be separated into entangling and noise components, the need to integrate each time step into a single channel, which was caused by the LME parameters requiring large frequencies and short time steps, prevents the gate from being separated this way. Instead, the tensor representing the integrated time evolution channel must be split along its bond and Kraus direction at the same time. In Appendix B we have described such a scheme in the spirit of the Moses move used in isometric tensor networks,[18; 19] where the split along the Kraus direction is attempted first and then adjusted to optimize its site-wise separation of information. However, such a splitting of information is inefficient when the operators responsible for entanglement and noise do not commute with the other terms of the LME, as is the case for all the noise models we consider in this paper. In addition, applying the time evolution channel to an MPDO affects both its bond and inner dimension at the same time. The fact that four indices must be truncated simultaneously, as opposed to a conventional MPS truncation which only addresses one bond at a time, makes it difficult to concentrate information onto any individual site through a canonization-like scheme.
### Purity-Preserving Truncation
There is a compromise between the simplicity of the vectorized MPO and the representational faithfulness of the MPDO - we keep the density matrix as a vectorized MPO but ensure that all truncations of the density matrix do not change its purity \(\text{tr}(\rho^{2})\). Specifically, we define
\[\xi=\frac{\text{tr}(\rho^{2})}{\text{tr}(\rho)^{2}}. \tag{17}\]
In Purity-Preserving Truncation (PPT), after the truncation of the bond's singular values \(\Lambda_{i}\) to a smaller set \(\Lambda_{i}^{\prime}\equiv P_{T}\Lambda_{i}\) (where \(P_{T}\) is the partial projection that truncates the least significant values), we modify the truncated singular values to \(\tilde{\Lambda}_{i}\), the closest set of values that keeps \(\xi\) constant. This is not significantly harder than regular truncation because all terms in the fraction can be represented as polynomial functions of \(\Lambda_{i}\) and the environment tensors \(T_{i}\), \(P_{ij}\) (Fig. 3b),
\[\text{tr}(\rho)=\sum_{i}T_{i}\Lambda_{i}, \tag{18}\] \[\text{tr}(\rho^{2})=\sum_{i,j}P_{ij}\Lambda_{i}\Lambda_{j}. \tag{19}\]
Once we have determined \(T_{i}\) and \(P_{ij}\), finding the vector \(\tilde{\Lambda}_{i}\) which is closest to the vector of original singular values \(\Lambda_{i}\) becomes a constrained optimization problem. If the density matrix is canonized, this problem becomes even simpler, because \(P_{ij}\) is the identity matrix. Then \(\tilde{\Lambda}_{i}\) must satisfy
\[\xi=\frac{|\Lambda|^{2}}{\left(\sum_{i}T_{i}\Lambda_{i}\right)^{2}}\overset{!} {=}\frac{|\tilde{\Lambda}|^{2}}{\left(\sum_{i}T_{i}^{\prime}\tilde{\Lambda}_{ i}\right)^{2}} \tag{20}\]
where \(T^{\prime}\equiv P_{T}T\) is the truncated trace environment and \(|\tilde{\Lambda}|^{2}\) is the squared norm of \(\tilde{\Lambda}\) treated as a vector. If we define \(\phi\) as the angle between \(\Lambda\) and \(T\), and \(\theta\) as the angle between \(\tilde{\Lambda}\) and \(T^{\prime}\), then we must have
\[\frac{\sec^{2}\phi}{|T|^{2}}=\frac{\sec^{2}\theta}{|T^{\prime}|^{ 2}}, \tag{21}\] \[\theta=\cos^{-1}\left(\frac{|T|}{|T^{\prime}|}\cos\phi\right). \tag{22}\]
To satisfy this bound while maximizing the overlap with both the original singular values \(\Lambda_{i}\) and its truncation \(\Lambda_{i}^{\prime}\), the new singular values \(\tilde{\Lambda}_{i}\) should become the orthogonal projection of \(\Lambda_{i}^{\prime}\) onto the cone of constant angle \(\theta\) around \(T_{i}^{\prime}\). If \(\sigma\) is the angle between the original truncated values \(\Lambda_{i}^{\prime}\) and \(T_{i}^{\prime}\), we have
\[\tilde{\Lambda}_{i}=\Lambda_{i}^{\prime}+\bigg{(}\frac{\tan\theta}{\tan\sigma }-1\bigg{)}\bigg{(}\Lambda_{i}^{\prime}-\frac{|\Lambda^{\prime}|}{|T^{\prime} |}T_{i}^{\prime}\cos\sigma\bigg{)}. \tag{23}\]
One caveat is that this bound is not necessarily always achievable. We see from Eq. (22) that if \(|T^{\prime}|<|T|\cos\phi\), then there is no angle \(\theta\) to satisfy the condition. This is equivalent to the possibility that the original trace environment \(T\), when projected onto the span of the single vector \(\Lambda\), is longer than its projection \(T^{\prime}\) onto the \(D\) environment components corresponding to the largest singular values, where \(D\) is the maximum bond dimension. As \(D\) increases, this becomes more unlikely. In practice, we only find this occurring under heavy truncation. In these cases, setting \(\theta=0\) is the best that we can achieve.
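The full PPT update, Eqs. (20)-(23), reduces to a few vector operations once the trace environment is known. The sketch below implements the canonized case (\(P_{ij}\) equal to the identity); the fallback to \(\theta=0\) handles the unreachable case discussed above, and the function and variable names are ours.

```python
import numpy as np

def purity_preserving_truncate(lam, trace_env, max_bond):
    """Truncate singular values `lam` while keeping xi = tr(rho^2)/tr(rho)^2 fixed.

    lam: singular values in decreasing order; trace_env: the environment T_i of
    Eq. (18). Returns the adjusted, truncated singular values (Eq. (23))."""
    lam = np.asarray(lam, dtype=float)
    env = np.asarray(trace_env, dtype=float)
    lam_t, env_t = lam[:max_bond], env[:max_bond]            # plain truncation

    cos_phi = lam @ env / (np.linalg.norm(lam) * np.linalg.norm(env))
    ratio = np.linalg.norm(env) / np.linalg.norm(env_t) * cos_phi
    theta = 0.0 if abs(ratio) >= 1.0 else np.arccos(ratio)   # Eq. (22)

    cos_sigma = lam_t @ env_t / (np.linalg.norm(lam_t) * np.linalg.norm(env_t))
    sigma = np.arccos(np.clip(cos_sigma, -1.0, 1.0))
    if sigma < 1e-12:      # lam_t already parallel to env_t; nothing to rotate
        return lam_t
    parallel = np.linalg.norm(lam_t) / np.linalg.norm(env_t) * env_t * cos_sigma
    return lam_t + (np.tan(theta) / np.tan(sigma) - 1.0) * (lam_t - parallel)  # Eq. (23)
```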
Figure 3: (a) MPDO Ansatz. The \(k_{i}\) indices are the inner dimensions of the tensor network, playing a similar role to the Kraus indices found in completely positive quantum channels. (b) Setup of the Purity-Preserving Truncation algorithm. Since the \(\Lambda\) tensor is diagonal, the trace can be treated as the inner product of two vectors: the \(\Lambda\) tensor's diagonal entries and its environment.
### Random Circuit Benchmarking of Density Matrix Ansatzes
We use a random circuit architecture to compare the performance of each ansatz. Each layer consists of CZ gates surrounded by Haar-random 1-site unitaries applied to every site in an even-odd pattern (Fig. 4a). Given sufficiently small noise, this circuit should eventually become intractable for any classical simulation, with a complexity growing exponentially in depth.
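The Haar-random single-site unitaries can be sampled with the standard QR trick; the sketch below is the version assumed here (the circuit then embeds each \(2\times 2\) unitary in the qubit subspace of the four-level atoms).

```python
import numpy as np

def haar_random_unitary(n, rng=None):
    """Sample an n x n unitary from the Haar measure via a QR decomposition."""
    rng = rng or np.random.default_rng()
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * phases    # fix the phases so the distribution is exactly Haar

# e.g. the 1-site gates of one random-circuit layer on 12 sites
layer = [haar_random_unitary(2) for _ in range(12)]
```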
We first test the positivity of the system with and without PPT under a heavily truncated RCS iteration. We see that PPT does not force the density matrix to be positive, but it does make the negative eigenvalues smaller in magnitude, at a minor computational cost (Fig. 4b-c). In particular, for the 12-site random circuit of Fig. 4, the minimum eigenvalue of the tensor network with PPT appears bounded at \(\lambda_{min}\approx-0.2\). Given a mostly pure density matrix with maximum eigenvalue \(\lambda_{max}\), we have the general bound \(\lambda_{\min}\geq-\sqrt{\text{tr}\rho^{2}-\lambda_{max}^{2}}\); however, this does not appear to be enough to explain the plateau behavior of the minimum eigenvalue.
We then compare the random circuit fidelity between the simulated output of the circuit under noise and the ideal output without noise. This should take the form of an exponential decay in the circuit depth with a coefficient dependent on the noise factors.[5; 20] Deviations in this exponential decay represent the breakdown of the classical ansatz as too much information gets truncated. Fig. 4d demonstrates the differences between the performance of the non-positive MPO, with and without PPT, as well as the MPDO ansatz. The MPO with PPT appears to be the most stable ansatz under RCS fidelity. The MPDO fidelity decreases much more rapidly than the expected exponential decay once its bond dimension becomes saturated, with the size of the inner dimension having little effect on this deviation (Appendix A). On the other hand, the MPO without PPT drifts above the expected exponential once its bond dimension saturates. This is because it is no longer reporting a fidelity - the negativity increases the value of \(\text{tr}(\rho)^{2}\), which in turn increases the overlap of \(\rho\) with the ideal wavefunction. This can even cause the reported fidelity to exceed 1 for deeper circuits.
## IV A Candidate Circuit: QAOA Iteration
We study the ability of the quantum system to use QAOA to generate the ground state of a transverse field Ising model,
\[H_{TFIM}=-J\underbrace{(\sum_{\langle ij\rangle}S_{i}^{z}S_{j}^{z}+b\sum_{i}S_{ i}^{z})}_{H_{z}}-g\underbrace{\sum_{i}S_{i}^{x}}_{H_{t}} \tag{24}\]
as one needs to do in a variational quantum eigensolver.
The QAOA ansatz[21] alternates \(K\) times between time evolution under the Hamiltonian terms \(H_{z}\) and \(H_{t}\), with weights \(\alpha_{k}\) and \(\beta_{k}\) respectively. With exact noiseless gates the state \(|\psi\rangle\) would evolve under the
Figure 4: Simulated random circuit fidelity over layers of a 12 site quantum circuit over 20 samples, with a maximum bond dimension of 64 and a maximum inner dimension of 4, if applicable. (a): Diagram of the random circuit. Each gate \(U\) is an independent random 2x2 unitary chosen from the Haar measure, acting on the qubit states. (b): Negativity comparisons of the non-positive MPO with and without PPT. The negativity \(\lambda_{min}\) is the most negative eigenvalue of the density matrix after each layer of the random circuit, determined through DMRG. Without PPT, these eigenvalues increase rapidly, while with PPT they are bounded in absolute value from above. (c): Average wall time required to compute each layer of the random circuit with and without PPT and MPDOs on a single node personal computer. Using PPT introduces a small overhead to the time cost that becomes insignificant for MPS bond dimensions of 64 or above. Given that MPSβs typically only become difficult to run at bond dimensions in the thousands, this is a minor cost for most circuits. (d): Random circuit fidelity between systems with and without PPT, including MPDO results. The fidelity is expected to maintain a consistent exponential decay, which is obeyed most closely by the highest bond dimension MPO with PPT β in other circuits we see a sharp deviation from the initial exponential once the bond dimension saturates.
quantum circuit as
\[|\psi\rangle\rightarrow\prod_{k=1}^{K}e^{i\alpha_{k}H_{z}}e^{i\beta_{k}H_{t}}| \psi_{0}\rangle. \tag{25}\]
The final energy of \(|\psi\rangle\) is measured with the Hamiltonian in Equation (24). We use TFIM parameters \(J=1,b=0.2\) and \(g=1.2\) for all our circuits.
A QAOA iteration therefore consists of single-site \(e^{i\beta_{k}H_{t}}\) gates and two-site \(e^{i\alpha_{k}H_{z}}\) gates. We assume the single-site gates are comparatively easy to perform in a noiseless manner and focus on simulating the two-site gates. We can create an arbitrary \(e^{i\alpha_{k}H_{z}}\) operation from pairs of CZ gates as follows:
\[\mathrm{CNOT}_{ij}=I_{i}\otimes H_{j}\cdot\mathrm{CZ}_{ij}\cdot I_{i}\otimes H_{j} \tag{26}\] \[e^{-iJ\alpha_{k}S_{i}^{z}S_{j}^{z}}=\mathrm{CNOT}_{ij}\cdot I_{i}\otimes e^{-iJ\alpha_{k}S_{j}^{z}}\cdot\mathrm{CNOT}_{ij}\] (27) \[e^{-iJ\alpha_{k}H_{z}}=\bigg{(}\prod_{\langle ij\rangle}e^{-iJ\alpha_{k}S_{i}^{z}S_{j}^{z}}\bigg{)}\prod_{l}e^{-ibJ\alpha_{k}S_{l}^{z}}. \tag{28}\]
Each CZ gate is a copy of the one constructed by the LME, Equation (3). Since all terms in the final product (28) commute with each other, our simulation of the QAOA iteration is as follows (Fig. 5). We first assemble each two-site term in Equation (28) using Equation (27). We then apply each term to the circuit, as well as the single site terms. Finally we apply the layer of transverse field gates \(e^{i\beta_{k}H_{t}}\) for each site \(l\).
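The decomposition in Eqs. (26)-(27) can be checked numerically with a few lines of numpy. In the sketch below \(S^{z}\) is taken to be the Pauli \(Z\) matrix (eigenvalues \(\pm 1\)); with spin-1/2 operators an extra factor of two appears in the single-site rotation, so this convention is an assumption made purely for the check.

```python
import numpy as np
from scipy.linalg import expm

HAD = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard (H_j in Eq. (26))
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def cnot_from_cz():
    """Eq. (26): CNOT_ij = (I_i x H_j) CZ_ij (I_i x H_j)."""
    return np.kron(I2, HAD) @ CZ @ np.kron(I2, HAD)

def zz_rotation_from_cz(theta):
    """Eq. (27): exp(-i theta Z_i Z_j) from two CNOTs and one single-site rotation."""
    cnot = cnot_from_cz()
    return cnot @ np.kron(I2, expm(-1j * theta * Z)) @ cnot

theta = 0.37
assert np.allclose(zz_rotation_from_cz(theta), expm(-1j * theta * np.kron(Z, Z)))
```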
We simulate the QAOA iteration with realistic noise sources and the Rydberg blockade, using a vectorized MPO with Purity-Preserving Truncation and a maximum bond dimension of 768, over different system sizes with \(K=8\) layers. The initial state is \(|\psi_{0}\rangle=\bigotimes_{i=1}^{N}|+x\rangle\), a product of positive eigenstates of the Pauli X operator. The most expensive calculations were run at the Laboratory Computing Resource Center (LCRC) at Argonne National Laboratory using the distributed-memory Cyclops Tensor Framework (CTF, Appendix C).
For each system size, we use the same parameters \(\alpha_{j},\beta_{j}\) optimized classically on a 10-site system (Table 1). On a larger number of sites, these parameters will produce an energy that is close to, but not exactly, the ground state energy.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(k\) & \(\alpha_{k}\) & \(\beta_{k}\) \\ \hline \hline
1 & 0.11076513 & 0.75428624 \\ \hline
2 & 0.2771272 & 0.73016842 \\ \hline
3 & 0.36282021 & 0.7096901 \\ \hline
4 & 0.40618171 & 0.68739375 \\ \hline
5 & 0.43256044 & 0.65733871 \\ \hline
6 & 0.44492256 & 0.60978220 \\ \hline
7 & 0.42887337 & 0.51570246 \\ \hline
8 & 0.3225842 & 0.19145101 \\ \hline \end{tabular}
\end{table}
Table 1: Classically optimized time evolution parameters from Eq. (25) for a 10-site, 8-layer TFIM.
Figure 5: (a): A circuit diagram of the first two layers of a QAOA iteration for a 4-qubit system. In each layer, blocks of the entangling component \(H_{z}\) of the TFIM Hamiltonian are applied to the circuit for each nearest-neighbor pair, followed by single site operators representing the transverse field component \(H_{t}\). (b): Each two-site block is composed of two CZ gates simulated on the pulse level, as well as multiple single site operators, following Equation (27).
Figure 6: Final energy of a QAOA iteration with optimized parameters under noise. (a): Absolute energy per site over Rydberg atom dissipation. The energy has very little dependence on dissipation. (b): Relative energy per site \((E-E_{0})/E_{0}\), where \(E_{0}\) is the energy per site of the noiseless circuit. The relative energy decreases with dissipation but has very little dependence on the system size. (c,d): Absolute (c) and relative (d) energy per site over qubit dephasing; the system size dependence is very small. (e): Trace of the qubit component of the density matrix over system size and dissipation.
### QAOA Energy in the Presence of Noise
We investigate the performance of the QAOA iteration over two types of incoherent noise: dissipation of the Rydberg population and dephasing of the individual qubit states (Fig. 6). We first fix the Rydberg blockade at \(2\pi\times 60\)MHz and adjust the dissipation from 0 to 0.1. We note that the accuracy of the energy gets worse as we increase both the dissipation and the dephasing, although it is more strongly affected by dephasing. Interestingly, the energy per site has almost no system size dependence, suggesting that the errors due to both noise sources do not accumulate as the system gets larger. This suggests that a neutral atom experiment could successfully measure the QAOA energy for large systems even in the face of significant dissipation. Unfortunately, while the energy eventually measured by the circuit is independent of system size, the number of circuit iterations required to measure the energy will increase for larger systems, due to the decrease in the qubit population of the density matrix caused by Rydberg atom dissipation (dephasing is irrelevant here as it does not affect the dark states). This creates a larger chance of errant population in the Rydberg atom and dark states, which would make the energy measurement invalid. For large enough system sizes, this makes the energy difficult to evaluate, even if the energy would theoretically be accurate if one were lucky enough to measure it. We can see this effect by looking at the trace of the Rydberg/dark state components of the density matrix and its deviation from 0, as seen in Fig. 6e. The accumulation of non-qubit population is a site-wise independent process: the overall qubit trace of the density matrix is an exponential in dissipation and system size,
\[\mathrm{tr}_{q}(\rho)=e^{-0.1556\gamma_{diss}N}. \tag{29}\]
We also simulated the influence of a possible coherent error within the system, that of unwanted Rydberg atom crosstalk. The Rydberg blockade can be problematic if there is a residual Rydberg atom population on sites neighboring those where a gate is being applied, as they can interfere with the dynamics of the gate. These residual populations are normally too small to have an observable effect during the normal execution of a circuit, so we introduce a post-promotion term
\[\hat{A}_{PP}=e^{-i\delta_{PP}(|1\rangle\langle r|+|r\rangle\langle 1|)} \tag{30}\]
to increase the Rydberg atom population after a gate is applied. Fig. 7 shows the effects of this post-promotion on the relative energy error, with and without crosstalk between sites. Post-promotion introduces an error in the relative energy per site that increases with both the post-promotion strength \(\delta_{PP}\) and the system size. However, the effects of introducing crosstalk are minor; the crosstalk even appears to cause a slight improvement in the energy (Fig. 7b).
### QAOA Parameter Optimization over Errors
In addition to measuring the robustness of the noisy neutral atom device with respect to the energy, we would also like to understand whether the parameters \(\alpha_{k},\beta_{k}\) one would find during optimization of the QAOA process are robust to noise. To study this, we start with a circuit that is near-optimized, fixing all \(\alpha_{i}\) and \(\beta_{i}\) parameters at the pre-optimized values except for \(\alpha_{3}\), which we vary by a multiplicative phase factor \(p\), and then measure the circuit energy over that phase factor (Fig. 8a-b). As the noise factor \(\gamma\) is increased, the optimal phase factor should vary according to some function \(p(\gamma)\). We can measure the overall degree to which this phase factor changes, \(\frac{\mathrm{d}p}{\mathrm{d}\gamma}\).
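In practice the minimizing phase factor and its drift with noise can be extracted from the sampled energy curves with simple fits; the helper below is a hypothetical stand-in for the fits behind Fig. 8 and Table 2, assuming the energies are sampled on a grid of phase factors near the minimum.

```python
import numpy as np

def optimal_phase_factor(phase_factors, energies):
    """Energy-minimizing phase factor p from a quadratic fit near the minimum."""
    c2, c1, _ = np.polyfit(phase_factors, energies, 2)
    return -c1 / (2.0 * c2)

def slope_dp_dgamma(noise_values, optimal_ps):
    """dp/dgamma estimated from a linear fit of the optimal p over the noise value."""
    slope, _ = np.polyfit(noise_values, optimal_ps, 1)
    return slope
```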
The dissipation of the system changes the optimal value of \(\alpha_{3}\) to a much lesser degree than it changes the final energy. For example, at a dephasing of 0.01 with 60 qubits, the optimal \(\alpha_{3}\) parameter decreases by a factor of \(8.6\times 10^{-3}\); this error in \(\alpha_{3}\) would affect the final energy per site by \(9.8\times 10^{-5}\). The effect on the energy from the changed parameter is much smaller than the energy error measured due to dephasing which, at this level of dephasing, is at 0.18. Therefore, the energy errors induced by selecting an improper optimization parameter are not the main source of inaccuracy.
This suggests that if we only want to find the optimal parameters of this model, we do not need a very clean system. Therefore, we can consider a protocol where we use a cheap, noisy system to determine the optimal parameters of the circuit, then use a cleaner, more expensive circuit to measure the expectation value of operators over the optimized wavefunction. Provided
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(N\) & \(\frac{\mathrm{d}p}{\mathrm{d}\gamma_{diss}}\) & \(\frac{\mathrm{d}p}{\mathrm{d}\gamma_{deph}}\) & \(\frac{\mathrm{d}p}{\mathrm{d}\delta_{PP}}\) (no x-talk) & \(\frac{\mathrm{d}p}{\mathrm{d}\delta_{PP}}\) (with x-talk) \\ \hline
20 & 4.963\(\times 10^{-3}\) & -0.8133 & 0.4152 & 0.4569 \\ \hline
40 & 4.910\(\times 10^{-3}\) & -0.7955 & 0.6536 & 0.7768 \\ \hline
60 & 6.594\(\times 10^{-3}\) & -0.8008 & 0.8487 & 1.022 \\ \hline
80 & 6.926\(\times 10^{-3}\) & -0.7959 & 0.9976 & 1.241 \\ \hline \end{tabular}
\end{table}
Table 2: Error in the \(\alpha_{3}\) parameter that minimizes the energy over noise for different system sizes.
Figure 7: Relative energy per site over Rydberg atom post-promotion, both without (a) and with (b) crosstalk. Introducing crosstalk in a sequential pattern of gates creates an error that is dependent on both post-promotion level and system size.
that the higher noise does not reduce the qubit population of the density matrix to the extent that most circuits are immediately rejected, this can save time on the QAOA optimization process.
There is no discernible dependence of the optimization error \(\frac{\mathrm{d}p}{\mathrm{d}\gamma}\) on system size for the incoherent noise types or environmental effects encoded by the LME (Fig. 8c-f, Table 2). For Rydberg post-promotion, however, the slope increases with system size without crosstalk and, unlike the relative energy measurements, becomes even worse with crosstalk. Therefore, larger systems will be proportionally more difficult to optimize parameters over in the presence of this error.
## V Outlook
We have constructed a tensor-network based pulse-level simulation of large-scale, one-dimensional neutral atom arrays, using vectorized MPOs to represent the density matrix and a two-site gate created by integrating a Lindblad Master Equation with the Rydberg blockade. We have developed a new algorithmic approach, the PPT, to help maintain the physicality of the density matrix as a quantum circuit acts on it. We have benchmarked the PPT and found that it is more efficient than the MPDO while producing only a minimal number of negative eigenvalues in the resulting density matrix. In practice, because we can go to larger bond dimensions than an MPDO at similar computational cost, we are closest to the ground truth using PPT. We then used this machinery to simulate the pulse-level dynamics of open quantum circuits on a neutral atom array for QAOA on a transverse field Ising model. We find that at fixed depth there is little to no dependence of the accuracy of the circuit on the system size under incoherent errors, although there is a non-trivial increase in the failure rate under such errors. Under coherent errors, there is a possible decrease in accuracy (specifically the final energy and the correct optimization parameters) as the number of qubits is increased.
If we extrapolate our findings to systems of arbitrary size, a QAOA iteration under a dissipation of \(0.001T^{-1}\), such as in [10], will yield a qubit trace of \(\mathrm{tr}_{q}(\rho)=e^{-1.556\times 10^{-4}N}\). At \(N=200\) this would result in a 3% trace error. The expected error in relative energy is extremely minor at only \(-3.3\times 10^{-5}\), and a near-optimized system would find a parameter error of order \(10^{-5}\). Dephasing is a much more significant error: under a dephasing of \(0.001T^{-1}\), the relative energy error becomes \(0.011\), and the parameter error becomes of order \(10^{-3}\). This is possibly due to the limited amount of time in which Rydberg atom populations are large enough for dissipation to act. This suggests that it would be possible to run QAOA iterations under this noise model with relatively low error and failure rates.
Given the harsher scaling of coherent errors, an interesting open question is whether coherent errors are generically the dominant error source in open quantum devices. In fact, even for noise sources such as dissipation there is a coherent and incoherent piece and it is plausible that even in dissipation, the coherent piece is driving errors (see Appendix D for a discussion of this).
While we mainly focused on parameters that were close to the optimal value, it remains to determine how
Figure 8: Final absolute energy (a) and relative energy error (b) of a QAOA iteration at 60 sites and various levels of dephasing, where the circuit gates differ from the pre-optimized minimum parameters by a phase factor applied to \(\alpha_{3}\). The vertical lines in the graph are located at the phase factor that gives the minimal energy for each circuit (the difference in the optimal phase factor between circuits is not visible at this scale of the graph). (c-f): Errors \(p-1\) in the energy-minimizing phase factor \(p\) of \(\alpha_{3}\) estimated by noisy gates, over the relevant noise parameter and system size. The noise parameters are dissipation (c), dephasing (d), and Rydberg post-promotion without crosstalk (e) and with crosstalk (f).
the QAOA behaves under noise at any stage of optimization, including completely random starting parameters and semi-optimized parameters. This would mainly be useful for problems where even the roughest optimization is classically unfeasible, which does not apply to the current TFIM.
The ability to determine the effect of errors on realistic quantum devices at scale, with respect to interesting algorithms, is important for making progress in the field of quantum computing. We have demonstrated a particular example of this in neutral atom systems and believe our new techniques are an important step forward for future applications.
## VI Acknowledgements
We acknowledge useful discussions with Mark Saffman, Martin Suchara and Xiaoyu Jiang. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers. We gratefully acknowledge the computing resources provided on Bebop, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory.
|
2309.06577 | Efficient Finite Initialization for Tensorized Neural Networks | We present a novel method for initializing layers of tensorized neural
networks in a way that avoids the explosion of the parameters of the matrix it
emulates. The method is intended for layers with a high number of nodes in
which there is a connection to the input or output of all or most of the nodes,
we cannot or do not want to store/calculate all the elements of the represented
layer and they follow a smooth distribution. This method is equally applicable
to normalize general tensor networks in which we want to avoid overflows.
The core of this method is the use of the Frobenius norm and the partial
lineal entrywise norm of reduced forms of the layer in an iterative partial
form, so that it has to be finite and within a certain range. These norms are
efficient to compute, fully or partially for most cases of interest. In
addition, the method benefits from the reuse of intermediate calculations. We
apply the method to different layers and check its performance. We create a
Python function to run it on an arbitrary layer, available in a Jupyter
Notebook in the i3BQuantum repository:
https://github.com/i3BQuantumTeam/Q4Real/blob/e07c827651ef16bcf74590ab965ea3985143f891/Quantum-Inspired%20Variational%20Methods/TN_Normalizer.ipynb | Alejandro Mata Ali, IΓ±igo Perez Delgado, Marina Ristol Roura, Aitor Moreno Fdez. de Leceta | 2023-09-11T08:05:09Z | http://arxiv.org/abs/2309.06577v3 | # Efficient Finite Initialization for Tensorized Neural Networks
###### Abstract
We present a novel method for initializing layers of tensorized neural networks in a way that avoids the explosion of the parameters of the matrix it emulates. The method is intended for layers with a high number of nodes in which there is a connection to the input or output of all or most of the nodes.
The core of this method is the use of the Frobenius norm of this layer in an iterative partial form, so that it has to be finite and within a certain range. This norm is efficient to compute, fully or partially, for most cases of interest. We apply the method to different layers and check its performance. We provide a Python function to run it on an arbitrary layer, available in a Jupyter Notebook in the i3BQuantum repository: github.com/i3BQuantumTeam/Q4Real
## 1 Introduction
Deep neural networks are widely used in machine learning to obtain good results for use cases in industry, research and other fields. However, highly complex cases require a large number of parameters, with very large layers of neurons, which can be seen as having to apply very large matrices. Reducing the number of parameters has been studied extensively in the literature, for example by decomposing matrices into tensor networks [1] or by directly training the tensor network itself [2][3][4] (Fig. 1).
Our focus will be on methods where a tensor network is generated to model the layer tensor and trained directly, rather than using a full matrix; for example, when we use tensorized physics-informed neural networks to solve differential equations for large industrial cases, such as the heat equation of an engine or fluid flow in a turbine. In this setting the initialization problem is often encountered, as we will see in the next section. If we initialize the elements of each tensor with a certain distribution, when we contract the tensor network to obtain the tensor it represents, some of its elements may be too large (infinite) or too small (null) for the computer.
We want to eliminate precisely these problems. A first proposal could be to contract the tensor network and eliminate these elements. However, in certain very large layers we cannot store all the tensor elements in memory, so we need another way.
One way is to re-initialize the tensor network with a distribution with better hyperparameters, changing the mean and standard deviation. Nevertheless, many of these methodologies are not easy to apply or are not efficient at all.
Our method consists of iteratively calculating the Frobenius norm for different sections of the tensor network until a condition is met, when we
Figure 1: Arbitrary tensor network layer.
divide all the parameters of the tensor network by the calculated factor in a particular way. This allows us to gradually make the Frobenius norm of the layer tend to the number we want, without having to repeatedly re-initialize.
This method is particularly interesting for hierarchical tree-shaped layers, especially Tensor Train (TT), Tensor Train Matrix (TT-M) and Projected Entangled Pair States (PEPS) layers. It can also be used in other tensor network methods, such as combinatorial optimization, to determine hyperparameters, and it can be combined with other initialization methods.
## 2 Description of the problem
In a tensor network of \(N\) nodes, each element of the tensor represented by the network is given by a sum of values, each of which is a product of \(N\) elements drawn from the different nodes. If we look at the case of a TT layer, as in Fig. 2.a, the elements of the layer are given by
\[T_{ijklm}=\sum_{nopq}T_{in}^{0}T_{njo}^{1}T_{okp}^{2}T_{plq}^{3}T_{qm}^{4}. \tag{1}\]
We see that for 5 indices in the tensor we have to multiply 5 tensor elements, but in the general case with \(N\) indices we have
\[T_{i_{0}i_{1}\ldots i_{N-1}}=\sum_{j_{0}j_{1}\cdots j_{N-2}}T_{i_{0}j_{0}}^{0}T_{j_{0}i_{1}j_{1}}^{1}\ldots T_{j_{N-2}i_{N-1}}^{N-1}, \tag{2}\]
multiplying \(N\) elements of the tensors \(T^{i}\) to obtain the element of the tensor \(T\).
To exemplify the problem we want to solve, consider a general case with bond dimension \(b\) (the dimension of the index contracted between each pair of nodes), \(N\) nodes, and all node elements equal to a constant \(a\). We would then have
\[T_{i_{0}i_{1}\ldots i_{N-1}}=a^{N}b^{N-1}. \tag{3}\]
We can see that with 20 nodes, an element value of 1.5 and a link dimension of 10, the final tensor elements would be \(3.3\times 10^{22}\). This is far too large an element for a good initialization. However, if we were to divide the values of these tensors by that number, we could arrive at a case where \(a^{N}\) is a number too small for our computer to store, and we would get a 0 in all the elements.
This problem is exacerbated by the number of nodes in the layer, since each node contributes a factor to the product. Moreover, we cannot simply calculate these tensor elements for cases with many physical (output) indices, because the number of values to be held in memory increases exponentially with the number of indices.
## 3 Tensor network initialization protocol
Our protocol is based on the use of partial Frobenius norms in order to normalize the total Frobenius norm of the resulting tensor.
The Frobenius norm of a matrix is given by the equation
\[||A||_{F}=\sqrt{\sum_{ij}|a_{ij}|^{2}}=\sqrt{\mathrm{Tr}(A^{\dagger}A)}. \tag{4}\]
In a tensor network, this would be to contract the layer with a copy of itself, so that each physical index is connected to the equivalent of its copy. We can see some examples in Fig. 3.
Figure 3: Square of the Frobenius norm calculated to a) Tensor Train layer. b) Tensor Train Matrix layer. c) PEPS layer.
Figure 2: a) Tensor Train layer with 5 indices. b) Tensor Train Matrix layer with 10 indices. c) PEPS layer with 9 indices.
The contraction of this tensor network is equivalent to the square of the Frobenius norm of the matrix it represents. In addition, it can be computed without the need to calculate the elements of the represented matrix, using only the elements of the nodes.
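For a TT layer this contraction is a simple left-to-right transfer-matrix sweep. The sketch below computes the squared Frobenius norm from the node tensors alone; the tensor layout \((D_{\mathrm{left}},\ \mathrm{physical},\ D_{\mathrm{right}})\) with dummy edge bonds of dimension 1 is an assumption of this example.

```python
import numpy as np

def tt_frobenius_norm_squared(nodes):
    """||A||_F^2 of a TT layer, contracting it with its copy (Fig. 3a)
    without ever forming the full tensor it represents."""
    env = np.ones((1, 1))                         # left boundary environment
    for node in nodes:
        # contract the next node and its conjugate copy over the physical index
        env = np.einsum('ij,ipk,jpl->kl', env, node, node.conj())
    return float(env.real.squeeze())

rng = np.random.default_rng(0)
d, D, N = 3, 4, 6
shapes = [(1, d, D)] + [(D, d, D)] * (N - 2) + [(D, d, 1)]
nodes = [rng.normal(1.0, 0.5, s) for s in shapes]
print(tt_frobenius_norm_squared(nodes))
```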
The Frobenius norm is an indicator that serves to regularize layers of a model [5] and gives an estimate of the order of magnitude of the matrix elements. This can be seen by noting that, for a more or less homogeneous distribution of elements, the norm in Eq. (4) will be of the order of \(\sqrt{nm}\ a_{00}\) for an \(n\times m\) matrix.
To prevent the elements of the layer, and therefore the outputs at initialization, from being too large or too small, we normalize these elements so that the norm of the tensor is a number that we choose, for example 1. This prevents our largest element from exceeding this value, while taking advantage of the localized distribution of values to ensure that the smallest value is not too small.
Still, for an \(n\times m\) matrix we will be summing \(nm\) values, so we should adjust the norm to be proportional to \(nm\), the size of the problem, and thus not decrease the magnitude of the values with it.
For this purpose we define what we will call the partial square norm of the tensor network.
### Partial square norm of the tensor network
Throughout this section we will assume that we can consistently sort the nodes of a tensor network so that they form a single growing chain.
\({}^{p}||\mathcal{A}||_{n,N}\), the partial square norm at \(n\) nodes of a tensor network \(\mathcal{A}\) with \(N\) nodes, is defined as the squared norm of the tensor network \(\mathcal{A}_{n}\) formed by the first \(n\) nodes of \(\mathcal{A}\).
To get an idea of what this partial square norm is, we will exemplify it with a simple case, a tensor train layer. We will consider the tensor network in Fig. 4, whose nodes are sorted.
As we can see, in this case we would only have to do the same process as when calculating the total norm of the total tensor network, but stopping at step \(n\) and contracting the bond index of the two final tensors of the chain.
Figs. 5 and 6 show what the partial square norm looks like for a TT-Matrix layer and for a PEPS layer.
This calculation can be extended to general tensor networks easily, as long as we have a consistent ordering of the nodes.
### Initialization protocol
If we have a tensor network \(\mathcal{A}\), representing an \(n_{A}\times m_{A}\) matrix, whose Frobenius norm \(||\mathcal{A}||_{F}\) is infinite, zero or outside a certain range of values, we want to normalize the elements of our tensor network so that the norm \(||\mathcal{B}||_{F}\) of the new tensor network \(\mathcal{B}\) is equal to a certain number. From here on we will assume it is \(F=n_{A}m_{A}\), but another number can easily be chosen. To normalize the norm of a tensor network \(\mathcal{A}\) with \(N\) nodes to 1, we only have to divide the elements of each of its nodes by \(||\mathcal{A}||_{F}^{1/N}\).
Since we cannot divide the elements by 0 or infinity, we use the following logic. If the total norm is infinite (zero), there will exist a partial square norm of \(n\) nodes whose value is finite and non-zero such that the partial square norm of \(n+1\) nodes is infinite (zero). This is because each time we add a new node to the partial square norm we multiply by a new factor, so
Figure 4: a) Tensor Train layer with 5 nodes. b) Partial square norm at 1 node. c) Partial square norm at 2 nodes. d) Partial square norm at 3 nodes.
Figure 5: a) Tensor Train Matrix layer with 5 nodes. b) Partial square norm at 1 node. c) Partial square norm at 2 nodes. d) Partial square norm at 3 nodes.
infinity (zero) will appear after a certain number of nodes, and the partial square norm with one node fewer is then a valid number to divide by.
The idea is to iteratively normalize the norm little by little so that we eventually achieve full normalization.
We want a tensor network \(\mathcal{B}\) with \(N\) nodes and Frobenius norm \(F\), and we set a tolerance range \((a,b)\). The protocol is as follows:
1. We initialize the node tensors with some initialization method. We recommend random initialization with a Gaussian distribution of a constant standard deviation (not greater than 0.5) and a constant mean neither too high nor too low and positive.
2. We calculate the norm \(||\mathcal{A}||_{F}\). If it is finite and non-zero, we divide each element of each node by \(\left(\frac{||\mathcal{A}||_{F}}{F}\right)^{1/N}\) and we have the \(\mathcal{B}\) we want. Otherwise, we continue.
3. We calculate \({}^{p}||\mathcal{A}||_{1,N}\), the partial square norm for 1 node of \(\mathcal{A}\). (a) If it is infinite, we divide each element of the nodes of \(\mathcal{A}\) by \((10(1+\xi))^{1/2N}\), where \(\xi\) is a random number between 0 and 1, and we return to step 2. (b) If it is zero, we divide each element of the nodes of \(\mathcal{A}\) by \((0.1/(1+\xi))^{1/2N}\) and we return to step 2. (c) Otherwise, we save this value as \({}^{p}||\mathcal{A}||_{1,N}\) and continue.
4. For \(n\in[2,N-1]\) we calculate \({}^{p}||\mathcal{A}||_{n,N}\), the partial square norm for \(n\) nodes of \(\mathcal{A}\). (a) If it is infinite or zero, we divide each element of the nodes of \(\mathcal{A}\) by \(({}^{p}||\mathcal{A}||_{n-1,N})^{\frac{1}{2N}}\) and we return to step 2. (b) If it is finite but larger than \(b\) or smaller than \(a\), we divide each element of the nodes of \(\mathcal{A}\) by \(({}^{p}||\mathcal{A}||_{n,N})^{\frac{1}{2N}}\) and we return to step 2. (c) Otherwise, we continue.
5. If no partial square norm is out of range, infinite or zero, we divide each element of nodes of \(\mathcal{A}\) by \((^{p}||\mathcal{A}||_{N-1,N})^{\frac{1}{2N}}\) and we return to step 2.
We repeat the cycle until we reach a stop condition, which will be to have repeated a certain maximum number of iterations. If we reach that point, the protocol will have failed and we will have two options. The first is to change the order of the nodes, so that other structures are checked. The second is to reinitialize with other hyperparameters in the nodes.
The purpose of using a random factor in case of divergence in the partial norm with 1 node is that, not knowing the real value by which we should divide, we rescale by an order of magnitude. However, to avoid possible infinite rescaling loops, we add a variability factor so that we cannot get stuck.
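The sketch below condenses the protocol for a TT layer. It is a simplified reading of steps 1-5 (the sub-cases are merged into a single inner loop) rather than the exact function released by the authors; the helper, parameter names and stopping behavior are our own assumptions.

```python
import numpy as np

def tt_partial_square_norm(nodes, n):
    """Partial square norm of the first n nodes: contract them with their copy
    and trace over the final open bond (Fig. 4)."""
    env = np.ones((1, 1))
    for node in nodes[:n]:
        env = np.einsum('ij,ipk,jpl->kl', env, node, node.conj())
    return float(np.trace(env).real)

def normalize_tt_layer(nodes, target_sq, lo, hi, max_iters=200, rng=None):
    """Iteratively rescale the nodes until the squared Frobenius norm equals
    target_sq (= F^2); (lo, hi) is the tolerance range for the partial norms."""
    rng = rng or np.random.default_rng()
    n_nodes = len(nodes)
    rescale = lambda factor: [a / factor ** (1.0 / (2 * n_nodes)) for a in nodes]
    for _ in range(max_iters):
        full = tt_partial_square_norm(nodes, n_nodes)
        if np.isfinite(full) and full > 0.0:          # step 2: finite, just rescale
            return rescale(full / target_sq)
        p_prev = None
        for n in range(1, n_nodes):
            p = tt_partial_square_norm(nodes, n)
            if not np.isfinite(p) or p == 0.0:        # steps 3a/3b and 4a
                if p_prev is None:                    # failure already at one node
                    p_prev = 10.0 * (1 + rng.random()) if not np.isfinite(p) \
                             else 0.1 / (1 + rng.random())
                nodes = rescale(p_prev)
                break
            if not lo < p < hi:                       # step 4b: finite but out of range
                nodes = rescale(p)
                break
            p_prev = p
        else:
            nodes = rescale(p_prev)                   # step 5: all partial norms fine
    raise RuntimeError("normalization did not converge; reorder nodes or re-initialize")
```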
## 4 Results
We ran the initialization with TT and TT-Matrix layers of different sizes and physical dimensions \(p\), and checked how many steps were needed for normalization. We use a value of 1 for the mean and a value of 0.5 for the standard deviation. For the TT layer we choose \(F=p^{2N}\) and for the TT-Matrix layer we choose \(F=p^{N}\). For the TT layer this is the number of elements we have and for the TT-Matrix layer it is its root, a number we take for convergence purposes. Our tolerance range is \((F\ 10^{-3},F\ 10^{3})\). We can see in Figs. 7, 8 and 9 the result for the TT layer and for the TT-Matrix layer.
We can see in Fig. 7 that the scaling with \(N\) is linear for different \(p\). Fig. 8 shows that the scaling is logarithmic with \(p\), similar to the scaling with \(b\) in Fig. 9. In all cases, the TT-Matrix layer normalization requires more steps than the TT layer normalization.
Figure 6: a) PEPS layer with 9 nodes. b) Partial square norm at 1 node. c) Partial square norm at 2 nodes. d) Partial square norm at 5 nodes.
## 5 Other applications
So far we have seen the application to tensorized neural networks, but this method can also be useful in other settings.
We can apply this method whenever we have to contract a tensor network whose non-zero tensor elements are of the same order of magnitude. This can be helpful in cases where we do not care about the absolute scale of the final tensor elements, but only about the relative scale between them.
An example would be the simulation of imaginary time evolution processes in which we want to identify the minimum-energy state rather than its energy. However, this energy can be recovered if, while performing the method, we save the scale factor by which we multiply the elements of the tensor network, and afterwards multiply the values of the resulting tensor network by this factor. This is convenient because the different factors can be multiplied while keeping track of their orders of magnitude separately, so that we do not have overflows.
## 6 Conclusions
We have developed a method to successfully initialize a layer of tensorized neural networks by using the partial computations of their Frobenius norms. We have also applied it to different layers and seen its scaling.
A possible future line of research could be to investigate how to reduce the number of steps to be performed. Another could be to study the scaling of complexity with increasing size for each of the different types of existing layers. We could also apply it to the methods mentioned in Sec. 5, for example in combinatorial optimization[6], to determine the appropriate decay factor.
## Acknowledgement
The research leading to this paper has received funding from the Q4Real project (Quantum Computing for Real Industries), HAZITEK 2022, no. ZE-2022/00033.
|
2309.00106 | In-situ Thermophysical Measurement of Flowing Molten Chloride Salt Using
Modulated Photothermal Radiometry | Molten salts are a leading candidate for high-temperature heat transfer
fluids (HTFs) for thermal energy storage and conversion systems in concentrated
solar power (CSP) and nuclear energy power plants. The ability to probe molten
salt thermal transport properties in both stationary and flowing status is
important for the evaluation of their heat transfer performance under realistic
operational conditions, including the temperature range and potential
degradation due to corrosion and contamination. However, accurate thermal
transport properties are usually challenging to obtain even for stagnant molten
salts due to different sources of errors from convection, radiation, and
corrosion, let alone flowing ones. To the best of authors' knowledge, there is
no available in-situ technique for measuring flowing molten salt thermal
conductivity. Here, we report the first in-situ flowing molten salt thermal
conductivity measurement using modulated photothermal radiometry (MPR). We
could successfully perform the first in-situ thermal conductivity measurement
of flowing molten $NaCl-KCl-MgCl_2$ in the typical operating temperature (520
and 580 $^{\circ}C$) with flow velocities ranging from around 0.3 to 1.0 $m\,s^{-1}$.
The relative change of the molten salt thermal conductivity was measured.
Gnielinski's correlation was also used to estimate the heat transfer
coefficient h of the flowing $NaCl-KCl-MgCl_2$ in the given experimental
condition. The work showed the potential of the MPR technique serving as an
in-situ diagnostics tool to evaluate the heat transfer performance of flowing
molten salts and other high-temperature HTFs. | Ka Man Chung, Ye Zhang, Jian Zeng, Fouad Haddad, Sarath Reddy Adapa, Tianshi Feng, Peiwen Li, Renkun Chen | 2023-08-31T19:54:59Z | http://arxiv.org/abs/2309.00106v1 | _In-situ_ Thermophysical Measurement of Flowing Molten Chloride Salt Using Modulated Photothermal Radiometry
###### Abstract
Molten salts are a leading candidate for high-temperature heat transfer fluids (HTFs) for thermal energy storage and conversion systems in concentrated solar power (CSP) and nuclear energy power plants. The ability to probe molten salt thermal transport properties in both stationary and flowing status is important for the evaluation of their heat transfer performance under realistic operational conditions, including the temperature range and potential degradation due to corrosion and contamination. However, accurate thermal transport properties are usually challenging to obtain even for stagnant molten salts due to different sources of errors from convection, radiation, |
2309.10315 | On (co-)morphisms of $n$-Lie-Rinehart algebras with applications to
Nambu-Poisson manifolds | In this paper, we give a unified description of morphisms and comorphisms of
$n$-Lie-Rinehart algebras. We show that these morphisms and comorphisms can be
regarded as two subalgebras of the $\psi$-sum of $n$-Lie-Rinehart algebras. We
also provide similar descriptions for morphisms and comorphisms of $n$-Lie
algebroids. It is proved that the category of vector bundles with Nambu-Poisson
structures of rank $n$ and the category of their dual bundles with $n$-Lie
algebroid structures of rank $n$ are equivalent to each other. | Yanhui Bi, Zhixiong Chen, Tao Zhang | 2023-09-19T04:53:42Z | http://arxiv.org/abs/2309.10315v1 | # On (co-)morphisms of \(n\)-Lie-Rinehart algebras with applications to Nambu-Poisson manifolds
###### Abstract
In this paper, we give a unified description of morphisms and comorphisms of \(n\)-Lie-Rinehart algebras. We show that these morphisms and comorphisms can be regarded as two subalgebras of the \(\psi\)-sum of \(n\)-Lie-Rinehart algebras. We also provide similar descriptions for morphisms and comorphisms of \(n\)-Lie algebroids. It is proved that the category of vector bundles with Nambu-Poisson structures of rank \(n\) and the category of their dual bundles with \(n\)-Lie algebroid structures of rank \(n\) are equivalent to each other.
Footnote 0: The research is supported by the National Natural Science Foundation of China (NSFC) grants 11961049(Bi and Zhang), and 11601219(Bi and Zhang), and by the Key Project of Jiangxi Natural Science Foundation grant 20232ACB201004(Bi, Chen and Zhang).
Keywords: \(n\)-Lie-Rinehart algebras; Leibniz-Rinehart algebras; morphisms; comorphisms; Nambu-Poisson structures; \(n\)-Lie algebroids.
## 1 Introduction
The \(n\)-Lie-Rinehart algebra is a generalized structure of Lie-Rinehart algebras. It consists of a quadruple \((\mathbf{E},[\,\cdot\,,\cdots\,,\cdot\,],\rho,\mathcal{A})\), where \(\mathcal{A}\) is a commutative algebra, \(\mathbf{E}\) is an \(\mathcal{A}\)-module and \((\mathbf{E},[\,\cdot\,,\cdots\,,\cdot\,])\) is an \(n\)-Lie algebra, \(\rho:\wedge^{n-1}\mathbf{E}\rightarrow\mathrm{Der}(\mathcal{A})\) (called the anchor of \(\mathbf{E}\)), satisfying some compatibility conditions, see Definition 3.1. The concept of an \(n\)-Lie algebroid is a generalization of Lie algebroids, which carries an \(n\)-Lie-Rinehart algebra structure in the differential geometric context. In [22], the author gives a category equivalence between vector bundles with Poisson structures and their dual bundles with Lie algebroid structures. In general, there is no one-to-one correspondence between a linear Nambu-Poisson structure on \(E\) and a Filippov \(n\)-algebroid structure on \(E^{*}\), see [3]. However, it is pointed out in [4] that there exists a one-to-one correspondence between a linear Nambu-Poisson structure of rank \(n\) on \(E\) and a Filippov \(n\)-algebroid structure of rank \(n\) on \(E^{*}\). It is natural to ask whether there is a category equivalence between the category of vector bundles with Nambu-Poisson structures of rank \(n\) and the category of their dual bundles with \(n\)-Lie algebroid structures of rank \(n\).
In order to obtain the equivalence between vector bundles with Nambu-Poisson structures of rank \(n\) and their dual bundles with \(n\)-Lie algebroid structures of rank \(n\), we define morphisms and comorphisms of \(n\)-Lie algebroids. Using these morphisms and comorphisms, we prove
the category equivalence between them. Thanks to the morphisms and comorphisms of Lie-Rinehart algebras introduced by Z. Chen and Z. J. Liu [6], we obtain the analogous equivalence in the \(n\)-Lie-Rinehart algebra setting.
Lie-Rinehart algebras can be found in [6, 18, 19]. The \(n\)-Lie algebra was first introduced by Filippov [10]; it differs from Lie algebras by its multilinear \(n\)-bracket. The \(n\)-Lie algebra [2, 10], the Nambu-Poisson structure [1, 3, 9, 11, 21, 23, 26] and the \(n\)-Lie algebroid play important roles in mathematics and physics [11]. The \(n\)-Lie algebroid was first defined by Grabowski and Marmo [11]. Recently, Hassine et al. studied the representation, cohomology and abelian extension theory of \(n\)-Lie-Rinehart algebras in [12]. In particular, in that paper the homomorphism \(\Psi:(\mathbf{E},\mathcal{A})\rightarrow(\mathbf{F},\mathcal{A})\) of \(n\)-Lie-Rinehart algebras is over the same base algebra \(\mathcal{A}\). In this paper, we introduce the concepts of morphisms and comorphisms from \((\mathbf{E},\mathcal{A})\) to \((\mathbf{F},\mathcal{B})\) over different base algebras, which is not given in [12].
The first main aim of this paper is to show that morphisms and comorphisms of \(n\)-Lie-Rinehart algebras can be unified via restriction theory. The conclusion is that both of them are subalgebras of an \(n\)-Lie-Rinehart algebra called the \(\psi\)-sum of \(n\)-Lie-Rinehart algebras. The second main aim of this paper is to show that there is a category equivalence between vector bundles with Nambu-Poisson structures of rank \(n\) and their dual bundles with \(n\)-Lie algebroid structures of rank \(n\). We obtain the result that morphisms (comorphisms) of \(n\)-Lie-Rinehart algebras can be seen as comorphisms (morphisms) of \(n\)-Lie algebroids (see Examples 5.4 and 5.12). As applications, we also study Nambu-Poisson submanifolds using Nambu-Poisson relations and the results obtained above.
The paper is organized as follows. In section 2, we recall the notions of \(n\)-Lie algebras, \(n\)-Lie algebroids and Nambu-Poisson manifolds. In section 3, we recall the notions of \(n\)-Lie-Rinehart algebras and Leibniz-Rinehart algebras. We introduce the restricting \(n\)-Lie-Rinehart algebras and Leibniz algebras. Applying this process, we obtain the \(\psi\)-sum of \(n\)-Lie-Rinehart algebras and Leibniz-Rinehart algebras. In section 4, we give two kinds of morphisms of \(n\)-Lie-Rinehart algebras. For the comorphism of \(n\)-Lie-Rinehart algebras, we give Proposition 4.8. It provides a relationship between comorphisms of \(n\)-Lie-Rinehart algebras and differential operators of degree \(n-1\) on \(\wedge_{\mathcal{A}}^{n-1}\mathbf{E}_{\mathcal{A}}^{*}\). The principal objective in this section is the proof of our main result (Theorem 4.9) in this paper. It provides a picture of the relationship of the two different morphisms, where their graphs turn out to be two subalgebras of the \(\psi\)-sum with respect to a given algebra morphism \(\psi\). In section 5, we recall the notion of comorphisms of vector bundles, and then we give the definition of comorphisms of \(n\)-Lie algebroids. We give a category equivalence between \(\mathcal{VB}_{Nambu}\) and \(\mathcal{LA}^{\vee}\), see Theorem 5.6. In Section 6, we introduce the notion of Nambu-Poisson submanifolds. We recall the coisotropic submanifold of Nambu-Poisson manifolds. As an application of coisotropic submanifolds, we give the notion of Nambu-Poisson relations. We use the Nambu-Poisson relation to prove the fact that there is a category equivalence between \(\mathcal{VB}_{Nambu}^{\vee}\) and \(\mathcal{LA}\), see Theorem 6.10.
## 2 Preliminaries
In this section, we recall the definitions and some notations of \(n\)-Lie algebras, Leibniz-Rinehart algebras, \(n\)-Lie algebroids and Nambu-Poisson manifolds. Let \(\mathcal{A}\) be a commutative associative algebra over \(K\), where \(K\) is the field \(\mathbb{R}\) or \(\mathbb{C}\).
**Definition 2.1** ([10]).: _An \(n\)-Lie algebra is a vector space \(V\) equipped with an \(n\)-ary totally skew-symmetric multilinear map (called the \(n\)-bracket) \([\cdot,\cdots,\cdot]:\wedge^{n}V\to V\) such that for all \(X_{1},\cdots,X_{n-1}\), \(Y_{1},\cdots,Y_{n}\in V\),_
\[[X_{1},\cdots,X_{n-1},[Y_{1},\cdots,Y_{n}]]=\sum_{i=1}^{n}[Y_{1},\cdots,Y_{i-1 },[X_{1},\cdots,X_{n-1},Y_{i}],Y_{i+1},\cdots,Y_{n}]. \tag{1}\]
The elements in \(\wedge^{n-1}V\) are called fundamental elements. On \(\wedge^{n-1}V\), there is a new non-symmetric
bracket \([\cdot\,,\cdot\,]_{\wedge^{n-1}V}\) given by
\[[x,y]_{\wedge^{n-1}V}=\sum_{i=1}^{n-1}Y_{1}\wedge\cdots\wedge Y_{i-1}\wedge[X_{1},\cdots,X_{n-1},Y_{i}]\wedge Y_{i+1}\wedge\cdots\wedge Y_{n-1}, \tag{2}\]
for all \(x=X_{1}\wedge\cdots\wedge X_{n-1}\) and \(y=Y_{1}\wedge\cdots\wedge Y_{n-1}\). This \((\wedge^{n-1}V,[\cdot\,,\cdot\,]_{\wedge^{n-1}V})\) is a Leibniz algebra, see [26].
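As an illustration only (this classical example goes back to [10] and is not one of the constructions of this paper), consider \(V=\mathbb{R}^{n+1}\) with standard basis \(e_{1},\cdots,e_{n+1}\) and the Euclidean inner product \(\langle\cdot\,,\cdot\rangle\). The \(n\)-bracket determined by
\[\langle[x_{1},\cdots,x_{n}],y\rangle=\det(x_{1},\cdots,x_{n},y),\qquad\text{so that}\qquad[e_{1},\cdots,\widehat{e_{i}},\cdots,e_{n+1}]=(-1)^{n+1-i}e_{i},\]
satisfies the identity (1), which can be checked directly on basis vectors, so \((\mathbb{R}^{n+1},[\cdot,\cdots,\cdot])\) is an \(n\)-Lie algebra.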
**Definition 2.2**.: _A representation of an \(n\)-Lie algebra \((V,[\cdot\,,\cdots,\cdot\,])\) on a vector space \(W\) is a multilinear map \(\rho:\wedge^{n-1}V\to\mathrm{gl}(W)\) such that for all \(X_{1},\cdots,X_{n-1},Y_{1},\cdots,Y_{n}\in V\), we have_
\[[\rho(X_{1},\cdots,X_{n-1}),\rho(Y_{1},\cdots,Y_{n-1})]=\sum_{i=1}^{n-1}\rho(Y_{1},\cdots,Y_{i-1},[X_{1},\cdots,X_{n-1},Y_{i}],Y_{i+1},\cdots,Y_{n-1}), \tag{3}\]
\[\rho(X_{1},\cdots,X_{n-2},[Y_{1},\cdots,Y_{n}])=\sum_{i=1}^{n}(-1)^{n-i}\rho(Y_{1},\cdots,\widehat{Y_{i}},\cdots,Y_{n})\circ\rho(X_{1},\cdots,X_{n-2},Y_{i}). \tag{4}\]
**Definition 2.3** ([12]).: _A Leibniz-Rinehart algebra over \(\mathcal{A}\) is a tuple \((E,[\cdot\,,\cdot\,],\rho,\mathcal{A})\), where \(\mathcal{A}\) is a commutative algebra, \(E\) is an \(\mathcal{A}\)-module, \([\cdot\,,\cdot\,]:E\times E\to E\) is a bilinear map and the anchor map \(\rho:E\to Der(\mathcal{A})\) satisfying the following conditions:_
1. _The pair_ \((E,[\cdot\,,\cdot\,])\) _is a Leibniz algebra;_
2. \(\rho([X,Y])=\rho(X)\rho(Y)-\rho(Y)\rho(X)\)_;_
3. \(\rho(aX)=a\rho(X)\);
4. \([X,aY]=a[X,Y]+\rho(X)(a)Y\),
_for all \(a\in\mathcal{A},X,Y\in E\)._
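For orientation (a standard example, not specific to the present paper): the derivations of \(\mathcal{A}\) with the commutator bracket and the identity anchor form a Leibniz-Rinehart (indeed Lie-Rinehart) algebra,
\[\big(\mathrm{Der}(\mathcal{A}),\,[X,Y]=X\circ Y-Y\circ X,\,\rho=\mathrm{id},\,\mathcal{A}\big),\qquad[X,aY]=a[X,Y]+X(a)Y,\]
where the displayed Leibniz rule is verified by applying both sides to an element of \(\mathcal{A}\); for \(\mathcal{A}=C^{\infty}(M)\) this is the algebra of vector fields on \(M\).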
**Definition 2.4** ([11]).: _An \(n\)-Lie algebroid is a vector bundle \(A\to M\) equipped with an \(n\)-bracket \([\cdot\,,\cdots\,,\cdot]\) on the section spaces \(\Gamma(A)\) of \(A\) and a vector bundle map \(\rho:\wedge^{n-1}A\to TM\) over a manifold \(M\), called the anchor of the \(n\)-Lie algebroid, such that_
1. \((\Gamma(A),[\cdot\,,\cdots\,,\cdot])\) _is an_ \(n\)_-Lie algebra;_
2. _The anchor map_ \(\rho:\wedge^{n-1}A\to TM\) _satisfies the following relations:_ 1. \[[\rho(X_{1},\cdots,X_{n-1}),\rho(Y_{1},\cdots,Y_{n-1})]=\sum_{i}\rho(Y_{1}, \cdots,[X_{1},\cdots,X_{n-1},Y_{i}],\cdots,Y_{n-1})\] (5) 2. \[[X_{1},\cdots,X_{n-1},fX_{n}]=f[X_{1},\cdots,X_{n-1},X_{n}]+\rho(X_{1}, \cdots,X_{n-1})(f)X_{n}\] (6) _for all_ \(X_{i},Y_{i}\in\Gamma(A)\) _and_ \(f\in C^{\infty}(M)\)_._
Obviously, an \(n\)-Lie algebroid over a point is an \(n\)-Lie algebra.
**Example 2.5** ([11]).: _The tangent bundle \(T\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) has a structure of \(n\)-Lie algebroid uniquely determined by_
\[\left[\frac{\partial}{\partial x_{i_{1}}},\cdots,\frac{\partial}{\partial x_{i_{ n}}}\right]=0\]
_and an anchor map \(\rho:\wedge^{n-1}T\mathbb{R}^{m}\to T\mathbb{R}^{m}\) is given by \(dx_{1}\wedge\cdots\wedge dx_{n-1}\otimes\frac{\partial}{\partial x_{1}}\), where \(x_{1},\cdots,x_{n},\cdots,x_{m}\) are coordinates of \(\mathbb{R}^{m}\)._
**Remark 2.6**.: _For \(n>2\), the cotangent bundle \(T^{*}M\) is not an \(n\)-Lie algebroid, since the Jacobi identity is satisfied only for closed 1-forms._
**Definition 2.7** ([23]).: _A Nambu-Poisson manifold is a smooth manifold \(M\) equipped with a Nambu-bracket \(\{\cdot\,,\cdots,\cdot\}\) of order \(n\) over \(M\) such that it is an \(n\)-multilinear mapping_
\[\{\cdot\,,\cdots,\cdot\}:C^{\infty}(M)\times\cdots\times C^{\infty}(M)\to C ^{\infty}(M)\]
_and satisfies the following relations:_
1. _Skew-symmetric,_ \(\{f_{1},\cdots,f_{n}\}=sign(\sigma)\{f_{\sigma_{1}},\cdots,f_{\sigma_{n}}\}, \ \forall\sigma\in\sum_{n}\)_;_
2. _Leibniz rule,_ \(\{f_{1},\cdots,gf_{n}\}=\{f_{1},\cdots,f_{n}\}g+\{f_{1},\cdots,g\}f_{n}\)_;_
3. _Fundamental identity,_ \[\{f_{1},\cdots,f_{n-1},\{g_{1},\cdots,g_{n}\}\}=\sum_{i=1}^{n}\{g_{1},\cdots,g _{i-1},\{f_{1},\cdots,f_{n-1},g_{i}\},\cdots,g_{n}\}.\]
_Here \(f_{i},g_{j},g\in C^{\infty}(M)\) and \(\sum_{n}\) is a permutation group of \(\{1,\cdots,n\}\)._
Note that Poisson manifolds are Nambu-Poisson manifolds of order 2.
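The basic example, recalled here only for illustration (it goes back to Nambu; see [23]): on \(M=\mathbb{R}^{n}\) with coordinates \(x_{1},\cdots,x_{n}\), the Jacobian determinant
\[\{f_{1},\cdots,f_{n}\}:=\det\Big(\frac{\partial f_{i}}{\partial x_{j}}\Big)_{1\leq i,j\leq n}=\frac{\partial(f_{1},\cdots,f_{n})}{\partial(x_{1},\cdots,x_{n})}\]
is skew-symmetric, satisfies the Leibniz rule in each argument and the fundamental identity, and therefore defines a Nambu-Poisson structure of order \(n\).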
## 3 The \(\psi\)-sum of \(n\)-Lie-Rinehart algebras
### \(n\)-Lie-Rinehart algebras
In the paper, we adopt the following notations from [6, 12].
**Definition 3.1** ([12]).: \((\mathbf{E},[\cdot\,,\cdots,\cdot],\rho,\mathcal{A})\) _is called an \(n\)-Lie-Rinehart algebra over \(\mathcal{A}\), where \(\mathbf{E}\) is an \(\mathcal{A}\)-module and an \(n\)-Lie algebra over \(K\) with a map \(\rho:\wedge^{n-1}\mathbf{E}\rightarrow\mathrm{Der}(\mathcal{A})\) (called the anchor of \(\mathbf{E}\)) such that_
1. _The map_ \(\rho\) _is a representation of_ \((\mathbf{E},[\cdot\,,\cdots,\cdot])\) _on_ \(\mathcal{A}\)_._
2. _The following conditions are satisfied:_ \[\rho(aX_{1},X_{2},\cdots,X_{n-1})=a\rho(X_{1},X_{2},\cdots,X_{n-1}), \tag{7}\] \[[X_{1},\cdots,X_{n-1},aX_{n}]=a[X_{1},\cdots,X_{n-1},X_{n}]+\rho(X_{1},\cdots,X_{n-1})(a)X_{n}, \tag{8}\]
_for all \(X_{i}\in\mathbf{E},a\in\mathcal{A}\)._
From the first equation (7), we get \(\rho(X_{1},\cdots,aX_{i},\cdots,X_{n-1})=(-1)^{i-1}\rho(aX_{i},X_{1},\cdots,X_ {n-1})=(-1)^{i-1}a\rho(X_{i},X_{1},\cdots,X_{n-1})=a\rho(X_{1},\cdots,X_{i}, \cdots,X_{n-1}),\ \ \forall 1\leq i\leq n-1\).
We prefer to write
\[\rho(X_{1},\cdots,X_{n-1})(a)=[X_{1},\cdots,X_{n-1},a]:=(-1)^{n-i}[X_{1}, \cdots,X_{i-1},a,X_{i},X_{i+1},\cdots,X_{n-1},],\]
then the second Equation (8) is equivalent to
\[[X_{1},\cdots,X_{n-1},aX_{n}]=a[X_{1},\cdots,X_{n-1},X_{n}]+[X_{1},\cdots,X_{n-1},a]X_{n}.\]
Let \((\mathbf{E},[\,\cdot\,,\cdots,\cdot\,],\rho)\) be an \(n\)-Lie-Rinehart algebra and denote \(\wedge_{\mathcal{A}}^{n-1}\mathbf{E}\) by \(\mathcal{E}\). Define a linear map \(\widehat{\rho}:\mathcal{E}\rightarrow\mathrm{Der}(\mathcal{A})\) by
\[\widehat{\rho}(x)=\rho(X_{1},\cdots,X_{n-1}),\]
for \(x=X_{1}\wedge\cdots\wedge X_{n-1}\in\mathcal{E}\). Obviously, \(\mathcal{E}\) is also an \(\mathcal{A}\)-module.
**Proposition 3.2** ([12]).: _With the above notations, \((\mathcal{E},[\,\cdot\,,\cdot\,]_{\mathcal{E}},\widehat{\rho})\) is a Leibniz-Rinehart algebra over \(\mathcal{A}\), where the bracket is defined as in Equation (2)._
Given an \(n\)-Lie-Rinehart algebra \((\mathbf{E},[\,\cdot\,,\cdots,\cdot\,],\rho,\mathcal{A})\) and let \(\mathcal{I}\subsetneq\mathcal{A}\) be an ideal of \(\mathcal{A}\). Then, we define \(\mathbf{E}^{\mathcal{I}}\subseteq\mathbf{E}\) such that
\[\mathbf{E}^{\mathcal{I}}=\{X_{i},i=1,\cdots,n-1|[X_{1},\cdots,X_{n-1}, \mathcal{I}]\subseteq\mathcal{I}\},\]
which is exactly a submodule of \(\mathbf{E}\), and \((\mathbf{E}^{\mathcal{I}},[\,\cdot\,,\cdots,\cdot\,],\mathcal{A})\) is an \(n\)-Lie-Rinehart subalgebra of \(\mathbf{E}\):
\[[\mathbf{E}^{\mathcal{I}},\cdots,\mathbf{E}^{\mathcal{I}}]\subset\mathbf{E}^ {\mathcal{I}},\]
**Remark 3.3**.: _Obviously, when \(n=2\), \((\mathbf{E}^{\mathcal{I}},[\,\cdot\,,\cdot\,],\mathcal{A})\) is a Lie-Rinehart algebra._
We define \(\mathcal{E}^{\mathcal{I}}=\{x\in\mathcal{E}|[x,\mathcal{I}]_{\mathcal{E}} \subset\mathcal{I}\}\). Obviously, the subset \(\mathcal{E}^{\mathcal{I}}\) is a submodule of \(\mathcal{E}\), and \((\mathcal{E}^{\mathcal{I}},[\,\cdot\,,\cdot\,])\) is a Leibniz-Rinehart subalgebra.
Now, let
\[\mathcal{I}\mathbf{E}:=\{\sum_{i}a_{i}X_{i}|a_{i}\in\mathcal{I},X_{i}\in E\} \subset\mathbf{E}^{\mathcal{I}},\]
then we have \(\mathcal{I}\mathbf{E}\) is an ideal of \(\mathbf{E}^{\mathcal{I}}\). Using the terminology of quotient module allows us to extend the results of Z. Chen and Z. J. Liu [6] from Lie-Rinehart algebras to \(n\)-Lie-Rinehart algebras.
**Lemma 3.4**.: _The quotient \((\mathcal{A}/\mathcal{I})\)-module \(\mathbf{E}^{\mathcal{I}}/(\mathcal{I}\mathbf{E})\) is an \(n\)-Lie-Rinehart algebra._
Proof.: Since \(\mathbf{E}^{\mathcal{I}}\) is an \(\mathcal{A}\)-module and \(\mathcal{A}(\mathcal{I}\mathbf{E})\subset\mathcal{I}\mathbf{E}\), \(\mathcal{I}\mathbf{E}^{\mathcal{I}}\subset\mathcal{I}\mathbf{E}\), we have that \(\mathbf{E}^{\mathcal{I}}/(\mathcal{I}\mathbf{E})\) is an \((\mathcal{A}/\mathcal{I})\)-module. Now we define the induced bracket by the following equations
\[[\overline{X_{1}},\cdots,\overline{X_{n}}]:=\overline{[X_{1},\cdots,X_{n}]}, \quad[\overline{X_{1}},\cdots,\overline{a}]:=\overline{[X_{1},\cdots,a]},\]
where \(\overline{X}=X+\mathcal{I}\mathbf{E}\) and \(\overline{a}=a+\mathcal{I}\). Then, we need only to prove that they are well-defined. In fact, we have
\[[\mathcal{I}\mathbf{E},\cdots,\mathcal{I}\mathbf{E},\mathcal{A}] \subset\mathcal{I},\] \[[\mathbf{E}^{\mathcal{I}},\cdots,\mathbf{E}^{\mathcal{I}}, \mathcal{I}] \subset\mathcal{I},\] \[[\mathbf{E}^{\mathcal{I}},\cdots,\mathbf{E}^{\mathcal{I}}, \mathcal{I}\mathbf{E}] \subset\mathcal{I}\mathbf{E}.\]
The first two formulas are obvious. For the last formula, notice that if \(X_{1},\cdots,X_{n-1}\in\mathbf{E}^{\mathcal{I}},aX_{n}\in\mathcal{I}\mathbf{E}\), where \(a\in\mathcal{I},X_{n}\in\mathbf{E}\), we have
\[[X_{1},\cdots,X_{n-1},aX_{n}]=a[X_{1},\cdots,X_{n-1},X_{n}]+[X_{1},\cdots,X_{n-1 },a]X_{n},\]
where the two terms on the right-hand side of the above equation are in \(\mathcal{I}\mathbf{E}\) since \([X_{1},\cdots,X_{n-1},a]\in\mathcal{I}\). Therefore, we obtain that \((\mathbf{E}^{\mathcal{I}}/(\mathcal{I}\mathbf{E}),\mathcal{A}/\mathcal{I})\) inherits all structures of \((\mathbf{E},\mathcal{A})\).
Let \(\mathbf{E}\) be an \(\mathcal{A}\)-module and \(\mathcal{I}\) be an ideal of \(\mathcal{A}\). Then, \(\mathbf{E}/(\mathcal{I}\mathbf{E})\cong\mathbf{E}\otimes_{\mathcal{A}}( \mathcal{A}/\mathcal{I})\) as \(\mathcal{A}/\mathcal{I}\)-module, under the isomorphism \(\sigma:\overline{X}\to X\otimes_{\mathcal{A}}\overline{1}\). For the proof, see [6].
By the above results, we have the following Lemma.
**Lemma 3.5**.: _Given an \(n\)-Lie-Rinehart algebra \(({\bf E},{\cal A})\) and an ideal \({\cal I}\subset{\cal A}\). Then under the isomorphism \(\sigma\) defined above, we have that \({\bf E}^{\cal I}/({\cal I}{\bf E})\cong{\bf E}^{\cal I}\otimes_{\cal A}({\cal A }/{\cal I})\) and the latter has the induced \(n\)-Lie-Rinehart algebra structure (over \({\cal A}/{\cal I}\)) defined by_
\[[X_{1}\otimes_{\cal A}\overline{a_{1}},\cdots,X_{n-1}\otimes_{ \cal A}\overline{a_{n-1}},\overline{a_{n}}]=\overline{a_{1}\cdots a_{n-1}}[X_{ 1},\cdots,X_{n-1},a_{n}]\] \[[X_{1}\otimes_{\cal A}\overline{a_{1}},\cdots,X_{n-1}\otimes_{ \cal A}\overline{a_{n-1}},X_{n}\otimes_{\cal A}\overline{a_{n}}]=[X_{1},\cdots,X_{n}]\otimes_{\cal A}\overline{a_{1}\cdots a_{n}}\] \[+\sum_{i=1}^{n}(-1)^{n-i}X_{i}\otimes_{\cal A}\overline{a_{1} \cdots\widehat{a_{i}}\cdots a_{n}}[X_{1},\cdots\widehat{X_{i}},\cdots,X_{n},a _{i}],\]
_for all \(X_{i}\in{\bf E}^{\cal I},a_{i}\in{\cal A}\)._
_Proof_. By the isomorphism map \(\sigma\), we have that the image of \({\bf E}^{\cal I}/({\cal I}{\bf E})\) is exactly \({\bf E}^{\cal I}\otimes_{\cal A}({\cal A}/{\cal I})\). Therefore, we define the bracket on \({\bf E}^{\cal I}\otimes_{\cal A}({\cal A}/{\cal I})\), simply given by
\[[X_{1}\otimes_{\cal A}\overline{1},\cdots,X_{n-1}\otimes_{\cal A }\overline{1},\overline{a_{n}}]=\overline{[X_{1},\cdots,X_{n-1},a_{n}]}\] \[[X_{1}\otimes_{\cal A}\overline{1},\cdots,X_{n-1}\otimes_{\cal A }\overline{1},X_{n}\otimes_{\cal A}\overline{1}]=[X_{1},\cdots,X_{n}]\otimes_ {\cal A}\overline{1}.\]
\(\blacksquare\)
**Definition 3.6**.: _We denote_
\[{\bf E}_{\cal I}={\bf E}^{\cal I}\otimes_{\cal A}({\cal A}/{\cal I})={\bf E}^ {\cal I}/({\cal I}{\bf E})\]
_and call \(({\bf E}_{\cal I},[\cdot,\cdots,\cdot]_{{\bf E}_{\cal I}},{\cal A}/{\cal I})\) the \({\cal I}\)-restriction of an \(n\)-Lie-Rinehart algebra \(({\bf E},[\cdot\,,\cdots,\cdot]_{{\bf E}},{\cal A})\) with respect to the ideal \({\cal I}\subset{\cal A}\)._
### The \(\psi\)-sum
Just as two Lie-Rinehart algebras \(({\bf E},[\cdot\,,\cdot]_{E},{\cal A})\) and \(({\bf F},[\cdot\,,\cdot]_{F},{\cal B})\) have a direct sum given by the \({\cal A}\otimes{\cal B}\)-module \(({\bf E}\otimes{\cal B})\oplus({\cal A}\otimes{\bf F})\), we define the direct sum of two \(n\)-Lie-Rinehart algebras \(({\bf E},[\cdot\,,\cdots,\cdot]_{E},{\cal A})\) and \(({\bf F},[\cdot\,,\cdots,\cdot]_{F},{\cal B})\) to be the \({\cal A}\otimes{\cal B}\)-module \(({\bf E}\otimes{\cal B})\oplus({\cal A}\otimes{\bf F})\). Then this direct sum carries an \(n\)-Lie-Rinehart algebra structure.
**Proposition 3.7**.: \(({\bf E}\otimes{\cal B})\oplus({\cal A}\otimes{\bf F})\) _is an \(n\)-Lie-Rinehart algebra, where the bracket and anchor map \(\rho\) are given by the following rules:_
\[\rho((X_{1}\otimes b_{1}+a_{1}\otimes Y_{1})\wedge\cdots\wedge(X_ {n-1}\otimes b_{n-1}+a_{n-1}\otimes Y_{n-1}))(a_{n}\otimes b_{n})\] \[= [X_{1}\otimes b_{1}+a_{1}\otimes Y_{1},\cdots,X_{n-1}\otimes b_{n -1}+a_{n-1}\otimes Y_{n-1},a_{n}\otimes b_{n}]\] \[= [X_{1},\cdots,X_{n-1},a_{n}]_{\bf E}\otimes b_{1}\cdots b_{n}+a_{ 1}\cdots a_{n}\otimes[Y_{1},\cdots,Y_{n-1},b_{n}]_{\bf F};\]
\[[X_{1}\otimes b_{1}+a_{1}\otimes Y_{1},\cdots,X_{n}\otimes b_{n }+a_{n}\otimes Y_{n}]\] \[= [X_{1},\cdots,X_{n}]_{\bf E}\otimes b_{1}\cdots b_{n}+a_{1}\cdots a _{n}\otimes[Y_{1},\cdots,Y_{n}]_{\bf F}\] \[+\sum_{i=1}^{n}(-1)^{n-i}[X_{1},\cdots\widehat{X_{i}},\cdots,X_{n },a_{i}]_{\bf E}\otimes b_{1}\cdots\widehat{b_{i}}\cdots b_{n}Y_{i}\] \[+\sum_{i=1}^{n}(-1)^{n-i}a_{1}\cdots\widehat{a_{i}}\cdots a_{n}X_ {i}\otimes[Y_{1},\cdots\widehat{Y_{i}},\cdots,Y_{n},b_{i}]_{\bf F},\]
_where \(a_{i}\in{\cal A},b_{i}\in{\cal B},X_{i}\in{\bf E},Y_{i}\in{\bf F}\)._
The proof of the above Proposition is by direct computations, so we omit the detail.
**Remark 3.8**.: _All the lemmas in subsection 3.1 hold for Leibniz-Rinehart algebras \(({\cal E},[\cdot\,,\cdot\,]_{\cal E},{\cal A})\). Therefore, given two Leibniz-Rinehart algebras \(({\cal E}=\wedge^{n-1}{\bf E},[\cdot\,,\cdot\,]_{\cal E},\widehat{\rho_{\cal E}})\) and \(({\cal F}=\wedge^{n-1}{\bf F},[\cdot\,,\cdot\,]_{\cal F},\widehat{\rho_{\cal F}})\), we define their direct sum to be the \({\cal A}\otimes{\cal B}\)-module \(({\cal E}\otimes{\cal B})\oplus({\cal A}\otimes{\cal F})\). Then, we have the direct sum with a new Leibniz-Rinehart algebra structure._
**Proposition 3.9**.: \(({\cal E}\otimes{\cal B})\oplus({\cal A}\otimes{\cal F})\) _is a Leibniz-Rinehart algebra, where the bracket and anchor map \(\rho\) are given by the following rules:_
\[[x_{1}\otimes b_{1}+a_{1}\otimes y_{1},a_{2}\otimes b_{2}]\] \[= [x_{1},a_{2}]_{\cal E}\otimes b_{1}b_{2}+a_{1}a_{2}\otimes[y_{1}, b_{2}]_{\cal F};\] \[[x_{1}\otimes b_{1}+a_{1}\otimes y_{1},x_{2}\otimes b_{2}+a_{2} \otimes y_{2}]\] \[= [x_{1},x_{2}]_{\cal E}\otimes b_{1}b_{2}+a_{1}a_{2}\otimes[y_{1}, y_{2}]_{\cal F}\] \[+[x_{1},a_{2}]_{\cal E}\otimes b_{1}y_{2}+a_{1}x_{2}\otimes[y_{1}, b_{2}]_{\cal F}\] \[+(-1)^{n-1}a_{2}x_{1}\otimes[y_{2},b_{1}]_{\cal F}+(-1)^{n-1}[x_{ 2},a_{1}]_{\cal E}\otimes b_{2}y_{1},\]
_for all \(a_{i}\in{\cal A},b_{i}\in{\cal B},x_{i}\in{\cal E},y_{i}\in{\cal F}\)._
There is a known fact that \({\cal B}\)-modules can be regarded as \({\cal A}\)-modules via an algebraic homomorphism \(\psi:{\cal A}\to{\cal B}\). Then we have the following map
\[\widetilde{\psi}:{\cal A}\otimes{\cal B}\to{\cal B},\quad a\otimes b\mapsto \psi(a)b.\]
Assume that \(\psi\) is surjective and \({\cal B}\cong({\cal A}\otimes{\cal B})/\ker\widetilde{\psi}\)
**Definition 3.10**.: _Let \({\bf H}=({\bf E}\otimes{\cal B})\oplus({\cal A}\otimes{\bf F})\) be a direct sum of two \(n\)-Lie-Rinehart algebras \(({\bf E},[\,\cdot\,,\cdots,\cdot\rfloor_{E},{\cal A})\) and \(({\bf F},[\,\cdot\,,\cdots,\cdot\rfloor_{F},{\cal B})\). Let \({\cal I}=\ker\widetilde{\psi}\) be an ideal. We denote \(({\bf H}_{\cal I},[\,\cdot\,,\cdots,\cdot\rfloor_{{\bf H}_{\cal I}},({\cal A} \otimes{\cal B})/\ker\widetilde{\psi}\cong{\cal B})\) by \(({\bf E}\oplus_{\psi}{\bf F},[\,\cdot\,,\cdots,\cdot\rfloor_{{\bf E}\oplus_{ \psi}{\bf F}},{\cal B})\) or simply \({\bf E}\oplus_{\psi}{\bf F}\) and call it the \(\psi\)-sum of \(({\bf E},[\,\cdot\,,\cdots,\cdot\rfloor_{{\bf E}},{\cal A})\) and \(({\bf F},[\,\cdot\,,\cdots,\cdot\rfloor_{{\bf F}},{\cal B})\) with respect to the morphism \(\psi\)._
The \(\psi\)-sum \({\bf E}\oplus_{\psi}{\bf F}\) is a \({\cal B}\)-submodule of \(({\bf E}\otimes_{\cal A}{\cal B})\oplus{\bf F}\). For the proof one can see [6].
**Remark 3.11**.: _For the Leibniz-Rinehart algebra \(({\cal E}\otimes{\cal B})\oplus({\cal A}\otimes{\cal F})\), we denote \(({\cal H}_{\cal I},[\,\cdot\,,\cdot\rfloor_{{\cal H}_{\cal I}},\widehat{ \rho_{\cal E}\ominus_{\psi}{\cal F}},{\cal A}\otimes{\cal B}/\ker\widetilde{ \psi}\cong{\cal B})\) by \({\cal E}\oplus_{\psi}{\cal F}\) and call it the \(\psi\)-sum of \(({\cal E},[\,\cdot\,,\cdot\rfloor_{\cal E},\widehat{\rho_{\cal E}},{\cal A})\) and \(({\cal F},[\,\cdot\,,\cdot\rfloor_{\cal F},\widehat{\rho_{\cal F}},{\cal B})\) with respect to the morphism \(\psi\), where \({\cal H}=({\cal E}\otimes{\cal B})\oplus({\cal A}\otimes{\cal F})\)._
By convention, we denote \({\mathfrak{X}}_{j}:=X_{(i_{j})}\otimes b_{(i_{j})}:=\sum_{i_{j}}X_{i_{j}} \otimes_{\cal A}b_{i_{j}}\in{\bf E}\otimes_{\cal A}{\cal B},\;\;\mbox{and}\;\;{ \mathfrak{Y}}_{j}:=a_{(i_{j})}\otimes Y_{(i_{j})}:=\sum_{i_{j}}a_{i_{j}}\otimes Y _{i_{j}}\in{\cal A}\otimes{\bf F},\;\;\forall j=1,\cdots,n.\)
**Theorem 3.12**.: _An element \({\mathfrak{X}}_{1}+Y_{1}\in({\bf E}\otimes_{\cal A}{\cal B})\oplus{\bf F}\) belongs to \({\bf E}\oplus_{\psi}{\bf F}\) if and only if_
\[\psi([X_{(i_{1})},\cdots,X_{(i_{n-1})},a])b_{(i_{1})}\cdots b_{(i_{n-1})}=[Y_{ 1},\cdots,Y_{n-1},\psi(a)], \tag{9}\]
_for all \(a\in{\cal A}\) and \({\mathfrak{X}}_{2}+{\mathfrak{Y}}_{2},\cdots,{\mathfrak{X}}_{n-1}+{\mathfrak{Y}} _{n-1}\in H^{\cal I}\)._
Proof.: If \({\mathfrak{X}}_{1}+{\mathfrak{Y}}_{1}\in{\bf H}^{\cal I}\), for all \({\mathfrak{X}}_{2}+{\mathfrak{Y}}_{2},\cdots,{\mathfrak{X}}_{n-1}+{\mathfrak{Y}} _{n-1}\in{\bf H}^{\cal I}\), we have \(\widetilde{\psi}[{\mathfrak{X}}_{1}+{\mathfrak{Y}}_{1},\cdots,{\mathfrak{X}}_{n- 1}+{\mathfrak{Y}}_{n-1},{\cal I}]=0\). An element \({\mathfrak{X}}_{1}+{\mathfrak{Y}}_{1}\in{\bf H}^{\cal I}\) if and only if
\[[{\mathfrak{X}}_{1}+{\mathfrak{Y}}_{1},\cdots,{\mathfrak{X}}_{n-1}+{ \mathfrak{Y}}_{n-1},a\otimes b-1\otimes\psi(a)b]\] \[= \psi([X_{(i_{1})},\cdots,X_{(i_{n-1})},a])b_{(i_{1})}\cdots b_{(i_{ n-1})}b+\psi(a_{(i_{1})}\cdots a_{(i_{n-1})}a)[Y_{(i_{1})},\cdots,Y_{(i_{n-1})},b]\] \[-\psi(a_{(i_{1})}\cdots a_{(i_{n-1})}1)[Y_{(i_{1})},\cdots,Y_{(i_{n-1 })},\psi(a)b]\] \[= \psi([X_{(i_{1})},\cdots,X_{(i_{n-1})},a])b_{(i_{1})}\cdots b_{(i_{ n-1})}b-\psi(a_{(i_{1})}\cdots a_{(i_{n-1})}1)[Y_{(i_{1})},\cdots,Y_{(i_{n-1})},\psi(a)]b\] \[= \psi([X_{(i_{1})},\cdots,X_{(i_{n-1})},a])b_{(i_{1})}\cdots b_{(i_{ n-1})}-[\psi(a_{(i_{1})})Y_{(i_{1})},\cdots,\psi(a_{(i_{n-1})})Y_{(i_{n-1})},\psi(a)])b\] \[= 0\]
holds for all \(a\in{\cal A},b\in{\cal B}\). The above equation shows that
\[\psi([X_{(i_{1})},\cdots,X_{(i_{n-1})},a])b_{(i_{1})}\cdots b_{(i_{n-1})}-[\psi(a_{( i_{1})})Y_{(i_{1})},\cdots,\psi(a_{(i_{n-1})})Y_{(i_{n-1})},\psi(a)]=0,\]
for all \(a\in{\cal A}\) and \({\mathfrak{X}}_{1}+{\mathfrak{Y}}_{1},\cdots,{\mathfrak{X}}_{n-1}+{\mathfrak{Y}}_{ n-1}\in{\bf H}^{\cal I}\).
On the other hand, by the definition of \(\psi\)-sum, we have
\[{\bf E}\oplus_{\psi}{\bf F}={\bf H}^{\cal I}\otimes_{{\cal A}\otimes{\cal B}}{ \cal B}\subset{\bf H}\otimes_{{\cal A}\otimes{\cal B}}{\cal B}\cong({\bf E} \otimes_{\cal A}{\cal B})\oplus{\bf F}.\]
Moreover, by Lemma 3.5 and Definition 3.6, an element in \({\bf E}\oplus_{\psi}{\bf F}\) can be written in the following form:
\[X_{(i_{j})}\otimes b_{(i_{j})}+\psi(a_{(i_{j})})Y_{(i_{j})}\]
where \(X_{(i_{j})}\otimes b_{(i_{j})}+a_{(i_{j})}\otimes Y_{(i_{j})}\in{\bf H}^{\cal I}\). The proof is finished.
By the above Lemma 3.5, it is easy to see that the expressions of \(n\)-bracket of \({\bf E}\oplus_{\psi}{\bf F}\) can be given in the following proposition.
**Proposition 3.13**.: _The structure maps of the \(n\)-Lie-Rinehart algebra \(({\bf E}\oplus_{\psi}{\bf F},[\cdot\,,\cdots,\cdot]_{{\bf E}\oplus_{\psi}{\bf F}},{\cal B})\) are given by_
\[[{\mathfrak{X}}_{1}+Y_{1},\cdots,{\mathfrak{X}}_{n-1}+Y_{n-1},b]\] \[= [Y_{1},\cdots,Y_{n-1},b]_{{\bf F}};\] \[[{\mathfrak{X}}_{1}+Y_{1},\cdots,{\mathfrak{X}}_{n}+Y_{n}]\] \[= [X_{(i_{1})},\cdots,X_{(i_{n})}]_{{\bf E}}\otimes_{{\cal A}}b_{(i_{1})}\cdots b_{(i_{n})}+[Y_{1},\cdots,Y_{n}]_{{\bf F}}\] \[+\sum_{j=1}^{n}(-1)^{n+j}X_{(i_{j})}\otimes_{{\cal A}}[Y_{1},\cdots,\widehat{Y}_{j},\cdots,Y_{n},b_{(i_{j})}]_{{\bf F}},\]
_for all \({\mathfrak{X}}_{1}+Y_{1},\cdots,{\mathfrak{X}}_{n}+Y_{n}\in{\bf E}\oplus_{\psi }{\bf F},b\in{\cal B}\)._
For Leibniz-Rinehart algebras, we have the following proposition:
**Proposition 3.14**.: _The structure maps of the Leibniz-Rinehart algebra \(({\cal E}\oplus_{\psi}{\cal F},[\cdot\,,\cdot\,],{\cal B})\) are given by_
\[[\sum_{i}x_{i}\otimes_{{\cal A}}b_{i}+y_{1},b]=[y_{1},b]_{{\cal F}};\] \[[\sum_{i}x_{i}\otimes_{{\cal A}}b_{i}+y_{1},\sum_{j}x_{j}\otimes_{{\cal A}}b_{j}+y_{2}]=\sum_{i}\sum_{j}[x_{i},x_{j}]_{{\cal E}}\otimes_{{\cal A}}b_{i}b_{j}+[y_{1},y_{2}]_{{\cal F}}\] \[+\sum_{j}x_{j}\otimes_{{\cal A}}[y_{1},b_{j}]_{{\cal F}}+(-1)^{n-1}\sum_{i}x_{i}\otimes_{{\cal A}}[y_{2},b_{i}]_{{\cal F}},\]
_for all \(\sum_{i}x_{i}\otimes_{{\cal A}}b_{i}+y_{1},\cdots,\sum_{j}x_{j}\otimes_{{\cal A }}b_{j}+y_{2}\in{\cal E}\oplus_{\psi}{\cal F},b\in{\cal B}\)._
Let \((E_{1},M_{1},\rho_{1},[\cdot\,,\cdots,\cdot]_{1})\) be an \(n\)-Lie algebroid with the anchor map \(\rho_{1}:\wedge^{n-1}E_{1}\to TM_{1}\). We will denote the tangent vector \(\rho_{1}(X_{1}\wedge\cdots\wedge X_{n-1})\), \(X_{i}\in E_{1}\) by \([X_{1},\cdots,X_{n-1},\cdot]\).
**Example 3.15**.: _Given an \(n\)-Lie algebroid \((E_{1},M_{1},\rho_{1},[\cdot\,,\cdots,\cdot]_{1})\) and an embedded submanifold \(M_{2}\). Let \(i:M_{2}\to M_{1}\) be the inclusion. Consider the algebra \({\cal A}=C^{\infty}(M_{1})\) and its ideal \({\cal I}=\ker i^{*}=\{f\in{\cal A}|f|_{M_{2}}=0\}\). Denoting \({\bf E}=\Gamma(E_{1})\), we obtain an \(n\)-Lie-Rinehart algebra \(({\bf E},[\cdot\,,\cdots,\cdot]_{{\bf E}},\rho_{1})\) over \({\cal A}\). Then, we want to find the restriction of the \(n\)-Lie-Rinehart algebra \(({\bf E},[\cdot\,,\cdots,\cdot]_{{\bf E}},\rho_{1})\) with respect to \({\cal I}\). Obviously, for all \(\sigma_{1},\cdots,\sigma_{n-1}\in\Gamma(E_{1})\cap{\bf E}^{\cal I}\), we have_
\[\rho_{1}(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})|_{M_{2}}\in TM_{2}.\]
_By using the definition of \({\cal I}{\bf E}\) and \({\cal I}\), we have_
\[{\cal I}{\bf E}=\{\sigma\in\Gamma(E_{1})|\sigma|_{M_{2}}=0\}.\]
_Therefore, the quotient algebra \({\bf E}_{\cal I}\) can be regarded as a subspace of the spaces of sections \(E_{1,M_{2}}:=\{e\in E_{1,p}|p\in M_{2}\}\) such that_
\[\rho_{1}(e_{1}\wedge\cdots\wedge e_{n-1})\in T_{p}M_{2},\ \ \forall e_{1}, \cdots,e_{n-1}\in{\bf E}_{\cal I}\]
_as an \({\cal A}/{\cal I}\cong C^{\infty}(M_{2})\)-module._
**Example 3.16**.: _Given two \(n\)-Lie algebras \(\mathbf{E}\) and \(\mathbf{F}\) over \(K\). The only \(K\)-algebraic homomorphism from \(K\) to \(K\) is the identity map \(id\), and the \(id\)-sum of \((\mathbf{E},[\cdot\,,\cdots,\cdot]_{\mathbf{E}},K)\) and \((\mathbf{F},[\cdot\,,\cdots,\cdot]_{\mathbf{F}},K)\) is exactly \((\mathbf{E}\oplus\mathbf{F},[\cdot\,,\cdots,\cdot]_{\mathbf{E}\oplus\mathbf{F }},K)\)._
**Example 3.17**.: _Given an \(n\)-Lie-Rinehart algebra \((\mathbf{E},[\cdot\,,\cdots,\cdot]_{\mathbf{E}},\rho_{\mathbf{E}})\) over \(\mathcal{A}\) and an \(n\)-Lie algebra \(\mathbf{F}\) over \(K\). Let \(\psi:\mathcal{A}\to K\) be an algebraic homomorphism. The \(\psi\)-sum of \(\mathbf{E}\) and \(\mathbf{F}\) is the set \(\mathbf{E}\oplus_{\psi}\mathbf{F}=\{X\otimes_{\mathcal{A}}1+Y|X\in\mathbf{E},Y\in\mathbf{F}\}\) such that_
\[[X_{1},\cdots,X_{n-1},a]\in\ker\psi,\ \ \forall X_{1}\otimes_{\mathcal{A}}1+Y_{1},\cdots,X_{n-1}\otimes_{\mathcal{A}}1+Y_{n-1}\in\mathbf{E}\oplus_{\psi} \mathbf{F},a\in\mathcal{A}.\]
**Example 3.18**.: _Let \((E_{1},M,\rho_{1},[\cdot\,,\cdots,\cdot]_{1})\) and \((E_{2},N,\rho_{2},[\cdot\,,\cdots,\cdot]_{2})\) be two \(n\)-Lie algebroids. The direct sum is a bundle \(E_{1}\times E_{2}\) over \(M\times N\), with the bundle map \((\sigma_{p},\tau_{q})\to(p,q)\), where \(\sigma_{p}\in E_{1,p}\) and \(\tau_{q}\in E_{2,q}\). We want to find the \(n\)-Lie algebroid structure of the direct sum \(E_{1}\times E_{2}\). The anchor map is_
\[((\sigma_{1})_{p}\wedge\cdots\wedge(\sigma_{n-1})_{p},(\tau_{1})_{q}\wedge \cdots\wedge(\tau_{n-1})_{q})\to(\rho_{1}((\sigma_{1})_{p}\wedge\cdots\wedge( \sigma_{n-1})_{p}),\rho_{2}((\tau_{1})_{q}\wedge\cdots\wedge(\tau_{n-1})_{q})).\]
_For some sections on \(E_{1}\times E_{2}\), we define the \(n\)-Lie bracket to be_
\[[(f_{1}\sigma_{1},g_{1}\tau_{1}),\cdots,(f_{n}\sigma_{n},g_{n}\tau_{n})]\] \[= (f_{1}\cdots f_{n}[\sigma_{1},\cdots,\sigma_{n}]_{1}+\sum_{i=1}^{n}(-1)^{n-i}g_{1}\cdots\widehat{g_{i}}\cdots g_{n}[\tau_{1},\cdots,\widehat{\tau_{i}},\cdots,\tau_{n},f_{i}]_{2}\,\sigma_{i},\] \[g_{1}\cdots g_{n}[\tau_{1},\cdots,\tau_{n}]_{2}+\sum_{i=1}^{n}(-1)^{n-i}f_{1}\cdots\widehat{f_{i}}\cdots f_{n}[\sigma_{1},\cdots,\widehat{\sigma_{i}},\cdots,\sigma_{n},g_{i}]_{1}\,\tau_{i}).\]
_Here \(f_{i}\in C^{\infty}(N),g_{i}\in C^{\infty}(M),\sigma_{i}\in\Gamma(E_{1}),\tau _{i}\in\Gamma(E_{2})\). We regard \(C^{\infty}(M\times N)\cong C^{\infty}(M)\otimes C^{\infty}(N)\) and \(\Gamma(E_{1}\times E_{2})\) as the \(C^{\infty}(M\times N)\)-module \(\Gamma(E_{1})\otimes C^{\infty}(N)\oplus\Gamma(E_{2})\otimes C^{\infty}(M)\)._
Consider a smooth map \(\phi:M\to N\), one has its graph
\[Gr(\phi)=\{(p,\phi(p))|p\in M\}\subset M\times N. \tag{10}\]
Therefore, the \(\phi\)-sum of \(E_{1}\) and \(E_{2}\) is the set
\[E_{1}\oplus_{\phi}E_{2}=\{((\sigma)_{p},(\tau)_{\phi(p)})\in E_{1}\oplus\phi^{!}E_{2}|\forall p\in M,(\sigma)_{p}\in(E_{1})_{p},(\tau)_{\phi(p)}\in(E_{2})_{\phi(p)}\} \tag{11}\]
such that
\[\phi_{*}\circ\rho_{1}((\sigma_{1})_{p}\wedge\cdots(\sigma_{n-1})_{p})=\rho_{2 }((\tau_{1})_{\phi(p)}\wedge\cdots(\tau_{n-1})_{\phi(p)}), \tag{12}\]
for all \(((\sigma_{i})_{p},(\tau_{i})_{\phi(p)})\in E_{1}\oplus_{\phi}E_{2},p\in M.\)
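To make the construction concrete in the simplest case (an illustration that is not contained in the paper): for \(n=2\), take \(E_{1}=TM\) and \(E_{2}=TN\) with the identity anchors. Condition (12) then reads \(\mathrm{d}\phi_{p}(v_{p})=w_{\phi(p)}\), so
\[TM\oplus_{\phi}TN=\{(v_{p},\mathrm{d}\phi_{p}(v_{p}))\mid p\in M,\ v_{p}\in T_{p}M\}\cong TM,\]
i.e. the \(\phi\)-sum is the graph of the tangent map \(\mathrm{d}\phi\).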
## 4 Morphisms and Comorphisms of \(n\)-Lie-Rinehart algebras
In this section, we introduce the concepts of morphisms and comorphisms of \(n\)-Lie-Rinehart algebras. The first one is a generalization of the homomorphism of \(n\)-Lie-Rinehart algebras defined in [12].
**Definition 4.1**.: _Given two \(n\)-Lie-Rinehart algebras \((\mathbf{E},[\cdot\,,\cdots,\cdot]_{\mathbf{E}},\mathcal{A})\) and \((\mathbf{F},[\cdot\,,\cdots,\cdot]_{\mathbf{F}},\mathcal{B})\). A morphism of \(n\)-Lie-Rinehart algebras from \(\mathbf{E}\) to \(\mathbf{F}\) is a pair \((\mathbf{E},[\cdot\,,\cdots,\cdot]_{\mathbf{E}},\mathcal{A})\stackrel{{ (\Psi,\psi)}}{{\rightarrow}}(\mathbf{F},[\cdot\,,\cdots,\cdot]_{\mathbf{F}}, \mathcal{B})\) such that_
1. \[\psi([X_{1},\cdots,X_{n-1},a]_{\mathbf{E}})=[\Psi(X_{1}),\cdots,\Psi(X_{n-1}), \psi(a)]_{\mathbf{F}},\ \ \forall X_{1},\cdots,X_{n-1}\in\mathbf{E},a\in\mathcal{A};\] (13)
2. \[\Psi([X_{1},\cdots,X_{n}]_{\mathbf{E}})=[\Psi(X_{1}),\cdots,\Psi(X_{n})]_{ \mathbf{F}},\ \ \forall X_{1},\cdots,X_{n}\in\mathbf{E}.\] (14)
_Here \(\psi:{\cal A}\to{\cal B}\) is an algebraic homomorphism and \(\Psi:{\bf E}\to{\bf F}\) is a map of \({\cal A}\)-modules (considering \({\cal B}\)-module as \({\cal A}\)-modules through \(\psi\)). In particular, if both \(\Psi\) and \(\psi\) are injective, we call \(({\bf E},[\cdot\,,\cdots\,,\cdot]_{\bf E},{\cal A})\) a subalgebra of \(({\bf F},[\cdot\,,\cdots\,,\cdot]_{\bf F},{\cal B})\)._
**Definition 4.2**.: _Given two \(n\)-Lie-Rinehart algebras \(({\bf E},[\cdot\,,\cdots\,,\cdot]_{\bf E},{\cal A})\) and \(({\bf F},[\cdot\,,\cdots\,,\cdot]_{\bf F},{\cal B})\). A comorphism of \(n\)-Lie-Rinehart algebras from \({\bf F}\) to \({\bf E}\) is a pair \(({\bf F},[\cdot\,,\cdots\,,\cdot]_{\bf F},{\cal B})\stackrel{{( \Psi,\psi)}}{{\rightleftarrows}}({\bf E},[\cdot\,,\cdots\,,\cdot]_{\bf E},{ \cal A})\) such that_
1. \[[Y_{1},\cdots\,,Y_{n-1},\psi(a)]_{\bf F}=\sum_{k_{1}}\cdots\sum_{k_{n-1}}b_{k_ {1}}\cdots b_{k_{n-1}}\psi[X_{k_{1}},\cdots\,,X_{k_{n-1}},a]_{\bf E},\ \forall a\in{\cal A},\] (15)
2. \[\Psi([Y_{1},\cdots\,,Y_{n}]_{\bf F})=[\Psi(Y_{1}),\cdots\,,\Psi(Y_{n})]_{\bf E},\] (16)
3. _for all_ \(Y_{1},\cdots,Y_{n}\in{\bf F}\) _and_ \(\Psi(Y_{i})=\sum_{k_{i}}X_{k_{i}}\otimes_{\cal A}b_{k_{i}}\)_, where_ \(1\leq i\leq n,X_{k_{i}}\in{\bf E},b_{k_{i}}\in{\cal B}\)_._
_Here \(\psi:{\cal A}\to{\cal B}\) is an algebraic homomorphism and \(\Psi:{\bf F}\to{\bf E}\otimes_{\cal A}{\cal B}\) is a map of \({\cal B}\)-modules. In particular, if \(\Psi\) is injective and \(\psi\) is surjective, we call \(({\bf F},[\cdot\,,\cdots\,,\cdot]_{\bf F},{\cal B})\) a co-subalgebra of \(({\bf E},[\cdot\,,\cdots\,,\cdot]_{\bf E},{\cal A})\)._
For Equation (16), we obtain the local expression
\[\Psi([Y_{1},\cdots\,,Y_{n}]_{\bf F}) = \sum_{k_{1}}\cdots\sum_{k_{n}}[X_{k_{1}},\cdots\,,X_{k_{n}}]_{ \bf E}\otimes_{\cal A}b_{k_{1}}\cdots b_{k_{n}}\] \[+ \sum_{i=1}^{n}\sum_{k_{i}}(-1)^{n-i}X_{k_{i}}\otimes_{\cal A}[Y_{ 1},\cdots\,,\widehat{Y}_{i},\cdots\,,Y_{n},b_{k_{i}}]_{\bf F}.\]
The following definitions illustrate morphism and comorphism of Leibniz-Rinehart algebras of this form \(({\cal F},[\cdot\,,\cdot]_{\cal F},{\cal B})\).
**Definition 4.3**.: _Given two Leibniz-Rinehart algebras \(({\cal E},[\cdot\,,\cdot]_{\cal E},{\cal A})\) and \(({\cal F},[\cdot\,,\cdot]_{\cal F},{\cal B})\). A morphism of Leibniz-Rinehart algebras from \({\cal E}\) to \({\cal F}\) is a pair \(({\cal E},[\cdot\,,\cdot]_{\cal E},{\cal A})\stackrel{{(\Psi, \psi)}}{{\rightleftarrows}}({\cal F},[\cdot\,,\cdot]_{\cal F},{\cal B})\) such that_
1. \[\psi([x,a]_{\cal E})=[\Psi(x),\psi(a)]_{\cal F},\ \forall x\in{\cal E},a\in{ \cal A};\] (17)
2. \[\Psi([x_{1},x_{2}]_{\cal E})=[\Psi(x_{1}),\Psi(x_{2})]_{\cal F},\ \forall x_{1},x_{2}\in{ \cal E}.\] (18)
_Here \(\psi:{\cal A}\to{\cal B}\) is an algebraic homomorphism and \(\Psi:{\cal E}\to{\cal F}\) is a map of \({\cal A}\)-modules (considering \({\cal B}\)-module as \({\cal A}\)-modules through \(\psi\))._
**Definition 4.4**.: _Given two Leibniz-Rinehart algebras \(({\cal E},[\cdot\,,\cdot]_{\cal E},{\cal A})\) and \(({\cal F},[\cdot\,,\cdot]_{\cal F},{\cal B})\). A comorphism of Leibniz-Rinehart algebras from \({\cal F}\) to \({\cal E}\) is a pair \(({\cal F},[\cdot\,,\cdot]_{\cal F},{\cal B})\stackrel{{(\Psi, \psi)}}{{\rightleftarrows}}({\cal E},[\cdot\,,\cdot]_{\cal E},{\cal A})\) such that_
1. \[[y,\psi(a)]_{\cal F}=\sum_{i}b_{i}\psi[x_{i},a]_{\cal E},\ \forall a\in{ \cal A},\] (19)
2. \[\Psi([y_{1},y_{2}]_{\cal F})=[\Psi(y_{1}),\Psi(y_{2})]_{\cal E},\] (20) _for all_ \(y_{1},y_{2}\in{\cal F}\) _and_ \(\Psi(y)=\sum_{i}x_{i}\otimes_{\cal A}b_{i}\)_, where_ \(1\leq i\leq n,x_{i}\in{\cal E},b_{i}\in{\cal B}\)_._
_Here \(\psi:{\cal A}\to{\cal B}\) is an algebraic homomorphism and \(\Psi:{\cal F}\to{\cal E}\otimes_{\cal A}{\cal B}\) is a map of \({\cal B}\)-modules._
**Remark 4.5**.: _Given a comorphism of \(n\)-Lie-Rinehart algebras \(({\bf F},[\,\cdot\,,\cdots,\cdot]_{\bf F},{\cal B})\stackrel{{(\Psi, \psi)}}{{\rightleftarrows}}({\bf E},[\cdot\,,\cdots,\cdot]_{\bf E},{\cal A})\), then we obtain a comorphism of Leibniz-Rinehart algebra \(({\cal F},[\,\cdot\,,\cdot]_{\cal F},{\cal B})\stackrel{{(\Psi, \psi)}}{{\rightleftarrows}}({\cal E},[\,\cdot\,,\cdot\,]_{\cal E},{\cal A})\). Conversely, let \(({\cal F},[\,\cdot\,,\cdot\,]_{\cal F},{\cal B})\stackrel{{(\Psi, \psi)}}{{\rightleftarrows}}({\cal E},[\,\cdot\,,\cdot\,]_{\cal E},{\cal A})\) be a comorphism of Leibniz-Rinehart algebra, then there exists a comorphism of \(n\)-Lie-Rinehart algebras \(({\bf F},[\,\cdot\,,\cdots,\cdot]_{\bf F},{\cal B})\stackrel{{( \Psi,\psi)}}{{\rightleftarrows}}({\bf E},[\cdot\,,\cdots,\cdot]_{\bf E},{ \cal A})\)._
**Proposition 4.6** ([6]).: _Let \({\bf E},{\bf F}\) be finitely generated projective \({\cal A}\), \({\cal B}\)-modules respectively, and \(\psi:{\cal A}\to{\cal B}\) be an algebraic homomorphism._
1. \({\bf E}\otimes_{\cal A}{\cal B}\) _is a finitely generated projective_ \({\cal B}\)_-module, and_ \[\begin{array}{rcl}({\bf E}\otimes_{\cal A}{\cal B})\wedge_{\cal A}({\bf E} \otimes_{\cal A}{\cal B})&=&(\wedge_{\cal A}^{2}{\bf E})\otimes_{\cal A}{\cal B }\\ &\cdots&&\\ \underbrace{({\bf E}\otimes_{\cal A}{\cal B})\wedge_{\cal A}\cdots\wedge_{\cal A }({\bf E}\otimes_{\cal A}{\cal B})}_{\mbox{$k$-copies}}&=&(\wedge_{\cal A}^{k} {\bf E})\otimes_{\cal A}{\cal B}.\end{array}\]
2. _The map_ \(I:{\bf E}\otimes_{\cal A}{\cal B}\to Hom_{\cal A}({\bf E}^{*}_{\cal A},{\cal B})\)_, sending each_ \(X\otimes_{\cal A}b\) _to_ \[I(X\otimes_{\cal A}b):\xi\to\psi(<\xi,X>)b,\ \ \forall\xi\in{\bf E}^{*}_{\cal A},b\in{\cal B},\] _is an isomorphism of_ \({\cal B}\)_-modules. Similarly, we have_ \[\begin{array}{rcl}(\wedge_{\cal A}^{2}{\bf E})\otimes_{\cal A}{\cal B }&\cong&Hom_{\cal A}(\wedge_{\cal A}^{2}{\bf E}^{*}_{\cal A},{\cal B})\\ &\cdots&&\\ (\wedge_{\cal A}^{k}{\bf E})\otimes_{\cal A}{\cal B}&\cong&Hom_{\cal A}( \wedge_{\cal A}^{k}{\bf E}^{*}_{\cal A},{\cal B}).\end{array}\]
3. _Let_ \(\Psi:{\bf F}\to{\bf E}\otimes_{\cal A}{\cal B}\) _be a_ \({\cal B}\)_-map. There is an induced_ \({\cal A}\)_-map_ \(\Psi^{*}:{\bf E}^{*}_{\cal A}\to{\bf F}^{*}_{\cal B}\)_, called the dual map of_ \(\Psi\)_, such that_ \[<\Psi^{*}(\xi),Y>=<I\circ\Psi(Y),\xi>,\ \ \forall\xi\in{\bf E}^{*}_{\cal A},Y\in{\bf F}.\]
4. _Let_ \(\overline{\Psi}:{\bf E}^{*}_{\cal A}\to{\bf F}^{*}_{\cal B}\) _be a map of_ \({\cal A}\)_-module. There is a unique_ \({\cal B}\)_-map_ \(\Psi:{\bf F}\to{\bf E}\otimes_{\cal A}{\cal B}\) _such that_ \(\Psi^{*}=\overline{\Psi}\)_,_ \((\Psi(Y)=I^{-1}\circ<\overline{\Psi}(\cdot\,),Y>\)_, for each_ \(Y\in{\bf F}\)_)._
For the proof of Proposition 4.6, see [6].
**Definition 4.7**.: _Let \({\bf E}\) be an \(n\)-Lie-Rinehart algebra, which is a finitely generated projective \({\cal A}\)-module. We define a differential operator \(d^{n-1}_{\bf E}:\wedge_{\cal A}^{k(n-1)}{\bf E}^{*}_{\cal A}\to\wedge_{\cal A }^{(k+1)(n-1)}{\bf E}^{*}_{\cal A}\) as follows:_
1. \(<d^{n-1}_{\bf E}a,x>=\widehat{\rho}(x)(a)=[x,a]_{\wedge_{\cal A}^{n-1}{\bf E}}\)_;_
2. \(<d^{n-1}_{\bf E}\xi,x\wedge_{\cal A}y>=\widehat{\rho}(x)<\xi,y>-\widehat{\rho} (y)<\xi,x>-<\xi,[x,y]_{\wedge_{\cal A}^{n-1}{\bf E}}>\)_;_
3. \[\begin{array}{rcl}&d^{n-1}_{\bf E}(\xi_{1}\wedge_{\cal A}\cdots\wedge_{ \cal A}\xi_{m})(x_{1}\wedge_{\cal A}\cdots\wedge_{\cal A}x_{m+1})\\ =&\sum_{i=1}^{m+1}(-1)^{i-1}\widehat{\rho}(x_{i})<\xi_{1}\wedge_{ \cal A}\cdots\wedge_{\cal A}\xi_{m},x_{1}\wedge_{\cal A}\cdots\wedge_{\cal A} \widehat{x_{i}}\wedge_{\cal A}\cdots\wedge_{\cal A}x_{m+1}>\\ &+\sum_{1\leq i<j\leq m+1}^{m+1}(-1)^{i+j}<\xi_{1}\wedge_{\cal A}\cdots \wedge_{\cal A}\xi_{m},\\ &[x_{i},x_{j}]_{\wedge_{\cal A}^{n-1}{\bf E}}\wedge_{\cal A}\cdots\wedge_{ \cal A}\widehat{x_{i}}\wedge_{\cal A}\cdots\wedge_{\cal A}\widehat{x_{j}} \wedge_{\cal A}\cdots\wedge_{\cal A}x_{m+1}>,\end{array}\]
_where \(a\in{\cal A},x_{1},\cdots,x_{m+1},y\in\wedge_{\cal A}^{n-1}{\bf E}_{\cal A}\) and \(\xi,\xi_{1},\cdots,\xi_{m}\in\wedge_{\cal A}^{n-1}{\bf E}_{\cal A}^{*}\)._
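As a consistency check, not stated in the paper: for \(n=2\), \(\mathcal{A}=C^{\infty}(M)\), \(\mathbf{E}=\Gamma(TM)\) and \(\rho=\mathrm{id}\), the first two items reduce to
\[<d_{\mathbf{E}}^{1}f,X>=X(f),\qquad<d_{\mathbf{E}}^{1}\xi,X\wedge Y>=X<\xi,Y>-Y<\xi,X>-<\xi,[X,Y]>,\]
which are the usual Cartan formulas for the de Rham differential of a function \(f\) and of a \(1\)-form \(\xi\) evaluated on vector fields \(X,Y\).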
**Proposition 4.8**.: _Given two finitely generated projective \(n\)-Lie-Rinehart algebras \(({\bf E},{\cal A})\) and \(({\bf F},{\cal B})\) over the algebras \({\cal A}\) and \({\cal B}\) respectively. Then the following statements are equivalent_
* \(({\cal F},[\cdot\,,\cdot\,]_{\wedge_{\cal B}^{n-1}{\bf F}_{\cal B}:={\cal F}},{\cal B})\stackrel{{(\Psi,\psi)}}{{\rightleftarrows}}({\cal E},[\cdot\,,\cdot\,]_{\wedge_{\cal A}^{n-1}{\bf E}_{\cal A}:={\cal E}},{\cal A})\) _is a comorphism of Leibniz-Rinehart algebras;_
* _the dual map_ \(\Psi^{*}:{\bf E}_{\cal A}^{*}\rightarrow{\bf F}_{\cal B}^{*}\) _satisfies_ \[d_{\bf F}^{n-1}\circ\Psi^{*}=\Psi^{*}\circ d_{\bf E}^{n-1},\ \mbox{as a map }\wedge_{\cal A}^{k(n-1)}{\bf E}_{\cal A}^{*}\rightarrow\wedge_{\cal B}^{(k+1)(n-1)}{\bf F}_{\cal B}^{*}\ \ (k\geq 0). \tag{21}\] _Here we regard_ \(\Psi^{*}=\psi:\wedge_{\cal A}^{0}{\bf E}_{\cal A}^{*}={\cal A}\rightarrow\wedge_{\cal B}^{0}{\bf F}_{\cal B}^{*}={\cal B}\)_, and_ \(\Psi^{*}\) _naturally lifts to an_ \({\cal A}\)_-map_ \(\wedge_{\cal A}^{k(n-1)}{\bf E}_{\cal A}^{*}\rightarrow\wedge_{\cal B}^{k(n-1)}{\bf F}_{\cal B}^{*}\)_._
_Here the comorphism of Leibniz-Rinehart algebras is defined by Remark 4.5._
Proof.: By the equation \(<d_{\bf E}^{n-1}a,x>=\widehat{\rho}(x)(a)=[x,a]_{\wedge_{\cal A}^{n-1}{\bf E}}\), the second statement is equivalent to the following two conditions:
* \(d_{\bf F}^{n-1}(\psi(a))=\Psi^{*}(d_{\bf E}^{n-1}a),\ \forall a\in{\cal A}\);
* \(d_{\bf F}^{n-1}(\Psi^{*}(\xi))=\Psi^{*}(d_{\bf E}^{n-1}(\xi)),\ \forall\xi\in\wedge_{\cal A}^{n-1}{\bf E}_{\cal A}^{*}\).
We now prove that these two conditions are equivalent to the first statement.
Let \(y\in\wedge_{\cal B}^{n-1}{\bf F}\) and \(\Psi(y)=\sum_{k}x_{k}\otimes_{\cal A}b_{k}\), for some \(x_{k}\in\wedge_{\cal A}^{n-1}{\bf E},b_{k}\in{\cal B}\). By the condition \(d_{\bf F}^{n-1}(\psi(a))=\Psi^{*}(d_{\bf E}^{n-1}a),\ \forall a\in{\cal A}\), we have
\[[y,\psi(a)]_{\wedge_{\cal B}^{n-1}{\bf F}} \tag{22}\] \[= <d_{\bf F}^{n-1}(\psi(a)),y>\] \[= <\Psi^{*}(d_{\bf E}^{n-1}(a)),y>\] \[= <d_{\bf E}^{n-1}(a),I(\Psi(y))>\] \[= \sum_{k}\psi<d_{\bf E}^{n-1}(a),x_{k}>b_{k}\] \[= \sum_{k}b_{k}\psi([x_{k},a]_{\wedge_{\cal A}^{n-1}{\bf E}}).\]
Let \(y_{1},y_{2}\in\wedge_{\cal B}^{n-1}{\bf F}\) and \(\Psi(y_{i})=\sum_{k_{i}}x_{k_{i}}\otimes_{\cal A}b_{k_{i}}\), for some \(x_{k_{i}}\in\wedge_{\cal A}^{n-1}{\bf E},b_{k_{i}}\in{\cal B},i=1,2\). Then we have
\[<\Psi^{*}(d_{\bf E}^{n-1}(\xi)),y_{1}\wedge_{\cal B}y_{2}> \tag{23}\] \[= <d_{\bf E}^{n-1}(\xi),I(\Psi(y_{1})\wedge_{\cal B}\Psi(y_{2}))>\] \[= <d_{\bf E}^{n-1}(\xi),I(\sum_{k_{1}k_{2}}(x_{k_{1}}\wedge_{\cal A }x_{k_{2}})\otimes_{\cal A}b_{k_{1}}b_{k_{2}})>\] \[= \sum_{k_{1}k_{2}}\psi(<d_{\bf E}^{n-1}(\xi),x_{k_{1}}\wedge_{\cal A }x_{k_{2}}>)b_{k_{1}}b_{k_{2}}>\] \[= \sum_{k_{1}k_{2}}\psi([x_{k_{1}},<\xi,x_{k_{2}}>]_{\wedge_{\cal A }^{n-1}{\bf E}}-[x_{k_{2}},<\xi,x_{k_{1}}>]_{\wedge_{\cal A}^{n-1}{\bf E}}-<\xi, [x_{k_{1}},x_{k_{2}}]_{\wedge_{\cal A}^{n-1}{\bf E}}>)b_{k_{1}}b_{k_{2}}\] \[= \sum_{k_{2}}<d_{\bf F}^{n-1}\circ\psi(<\xi,x_{k_{2}}>),y_{1}>b_{k_ {2}}-\sum_{k_{1}}<d_{\bf F}^{n-1}\circ\psi(<\xi,x_{k_{1}}>),y_{2}>b_{k_{1}}\] \[-\sum_{k_{1}k_{2}}\psi(<\xi,[x_{k_{1}},x_{k_{2}}]_{\wedge_{\cal A }^{n-1}{\bf E}}>)b_{k_{1}}b_{k_{2}}.\]
On the other hand, we have
\[<d_{\bf E}^{n-1}(\Psi^{*}(\xi)),y_{1}\wedge_{\cal B}y_{2}> \tag{24}\] \[= [y_{1},<\Psi^{*}\xi,y_{2}>]_{\wedge_{\cal B}^{n-1}{\bf F}}-[y_{2},< \Psi^{*}\xi,y_{1}>]_{\wedge_{\cal B}^{n-1}{\bf F}}-<\Psi^{*}\xi,[y_{1},y_{2}]_{ \wedge_{\cal B}^{n-1}{\bf F}}>\] \[= [y_{1},\sum_{k_{2}}\psi<\xi,x_{k_{2}}>b_{k_{2}}]_{\wedge_{\cal B}^ {n-1}{\bf F}}-[y_{2},\sum_{k_{1}}\psi<\xi,x_{k_{1}}>b_{k_{1}}]_{\wedge_{\cal B }^{n-1}{\bf F}}-<\xi,I\circ\Psi[y_{1},y_{2}]_{\wedge_{\cal B}^{n-1}{\bf F}}>\] \[= \sum_{k_{2}}<d_{\bf F}^{n-1}\psi(<\xi,x_{k_{2}}>),y_{1}>b_{k_{2}} +\sum_{k_{2}}\psi(<\xi,x_{k_{2}}>)<d_{\bf F}^{n-1}b_{k_{2}},y_{1}>\] \[-\sum_{k_{1}}<d_{\bf F}^{n-1}\psi(<\xi,x_{k_{1}}>),y_{2}>b_{k_{1} }-\sum_{k_{1}}\psi(<\xi,x_{k_{1}}>)<d_{\bf F}^{n-1}b_{k_{1}},y_{2}>\] \[-<\xi,I\circ\Psi[y_{1},y_{2}]_{\wedge_{\cal B}^{n-1}{\bf F}}>.\]
Thus, by condition \(d_{\bf F}^{n-1}(\Psi^{*}(\xi))=\Psi^{*}(d_{\bf E}^{n-1}(\xi))\), we get
\[<\xi,I\circ\Psi[y_{1},y_{2}]_{\wedge_{\cal B}^{n-1}{\bf F}}> \tag{25}\] \[= \sum_{k_{1}k_{2}}\psi(<\xi,[x_{k_{1}},x_{k_{2}}]_{\wedge_{\cal B} ^{n-1}{\bf E}}>)b_{k_{1}}b_{k_{2}}+\sum_{k_{2}}\psi(<\xi,x_{k_{2}}>)<d_{\bf F} ^{n-1}b_{k_{2}},y_{1}>\] \[-\sum_{k_{1}}\psi(<\xi,x_{k_{1}}>)<d_{\bf F}^{n-1}b_{k_{1}},y_{2}>.\]
Let
\[z=\sum_{k_{1}k_{2}}[x_{k_{1}},x_{k_{2}}]_{\wedge_{\cal A}^{n-1}{\bf E}}\otimes _{\cal A}b_{k_{1}}b_{k_{2}}+\sum_{k_{2}}x_{k_{2}}\otimes_{\cal A}[y_{1},b_{k_{ 2}}]_{\wedge_{\cal B}^{n-1}{\bf F}}-\sum_{k_{1}}x_{k_{1}}\otimes_{\cal A}[y_{2 },b_{k_{1}}]_{\wedge_{\cal B}^{n-1}{\bf F}}.\]
Then we have
\[<\xi,I(\Psi[y_{1},y_{2}]_{\wedge_{\cal B}^{n-1}{\bf F}}-z)>=0,\ \ \forall\xi\in\wedge_{\cal A}^{n-1}{\bf E}_{\cal A}^{*}.\]
Since the map \(I\) is an isomorphism, we obtain that \(\Psi[y_{1},y_{2}]_{\wedge_{\cal B}^{n-1}{\bf F}}-z=0\), i.e., \(\Psi[y_{1},y_{2}]_{\wedge_{\cal B}^{n-1}{\bf F}}=z\). A similar method shows that 2) implies 1).
The graph of a pair \(({\bf E},[\cdot,\cdots,\cdot]_{\bf E},{\cal A})\stackrel{{(\Psi,\psi)}}{{\rightleftarrows}}({\bf F},[\cdot,\cdots,\cdot]_{\bf F},{\cal B})\) is defined by
\[Gr_{(\Psi,\psi)}:=\{x+\widehat{\Psi}(x)|x\in{\bf E}\otimes_{\cal A}{\cal B}\} \subset({\bf E}\otimes_{\cal A}{\cal B})\oplus{\bf F}.\]
Here \(\widehat{\Psi}\) is the \({\cal B}\)-map \({\bf E}\otimes_{\cal A}{\cal B}\to{\bf F}\) defined by \(X\otimes_{\cal A}{\cal B}\to\Psi(X)b,\ X\in{\bf E},b\in{\cal B}.\)
The graph of a pair \(({\bf F},[\cdot,\cdots,\cdot]_{\bf F},{\cal B})\stackrel{{(\Psi,\psi)}}{{\rightleftarrows}}({\bf E},[\cdot,\cdots,\cdot]_{\bf E},{\cal A})\) is given by
\[Gr_{(\Psi,\psi)}:=\{\Psi(Y)+Y|Y\in{\bf F}\}\subset({\bf E}\otimes_{\cal A}{ \cal B})\oplus{\bf F}.\]
Thus, we have the following theorem:
**Theorem 4.9**.: _Given two \(n\)-Lie-Rinehart algebras \(({\bf E},[\cdot,\cdots,\cdot]_{\bf E},{\cal A})\) and \(({\bf F},[\cdot,\cdots,\cdot]_{\bf F},{\cal B})\) and an algebraic homomorphism \(\psi:{\cal A}\to{\cal B}.\)_
1. \(({\bf E},[\cdot,\cdots,\cdot]_{\bf E},{\cal A})\stackrel{{(\Psi,\psi)}}{{\rightleftarrows}}({\bf F},[\cdot,\cdots,\cdot]_{\bf F},{\cal B})\) _is a morphism of_ \(n\)_-Lie-Rinehart algebras if and only if its graph_ \(Gr_{(\Psi,\psi)}\) _is an_ \(n\)_-Lie-Rinehart subalgebra (over_ \({\cal B}\)_) of the_ \(\psi\)_-sum_ \({\bf E}\oplus_{\psi}{\bf F}\)_._
2. \(({\bf F},[\cdot,\cdots,\cdot]_{\bf F},{\cal B})\stackrel{{(\Psi,\psi)}}{{\rightleftarrows}}({\bf E},[\cdot,\cdots,\cdot]_{\bf E},{\cal A})\) _is a comorphism of_ \(n\)_-Lie-Rinehart algebras if and only if its graph_ \(Gr_{(\Psi,\psi)}\) _is an_ \(n\)_-Lie-Rinehart subalgebra (over_ \({\cal B}\)_) of the_ \(\psi\)_-sum_ \({\bf E}\oplus_{\psi}{\bf F}\)_._
Proof.: 1) Consider \(x=X\otimes_{\mathcal{A}}b\). Then \(x+\widehat{\Psi}(x)=X\otimes_{\mathcal{A}}b+b\Psi(X)\in Gr_{(\Psi,\psi)}\). By (9) in Theorem 3.12, \(x+\widehat{\Psi}(x)\) belongs to \(\mathbf{E}\oplus_{\psi}\mathbf{F}\) if and only if
\[\psi([X_{1},\cdots,X_{n-1},a]_{\mathbf{E}})b_{1}\cdots b_{n-1}\] \[= [b_{1}\Psi(X_{1}),\cdots,b_{n-1}\Psi(X_{n-1}),\psi(a)]_{\mathbf{F}}\] \[= b_{1}\cdots b_{n-1}[\Psi(X_{1}),\cdots,\Psi(X_{n-1}),\psi(a)]_{\mathbf{F}}\]
holds, for all \(X_{i}\otimes_{\mathcal{A}}b_{i}+\widehat{\Psi}(X_{i}\otimes_{\mathcal{A}}b_{i })\in Gr_{(\Psi,\psi)}\), \(\forall 2\leq i\leq n-1\). By (13) in Definition 4.1 and the arbitrariness of \(b_{i}\), the above condition holds if and only if the pair \((\mathbf{E},[\cdot,\cdots,\cdot]_{\mathbf{E}},\mathcal{A})\stackrel{{ (\Psi,\psi)}}{{\rightarrow}}(\mathbf{F},[\cdot,\cdots,\cdot]_{\mathbf{F}}, \mathcal{B})\) satisfies Equation (13). Next, we need only to prove that the graph \(Gr_{(\Psi,\psi)}\) is closed under the \(n\)-bracket if and only if Equation (14) holds for the pair \((\mathbf{E},[\cdot,\cdots,\cdot]_{\mathbf{E}},\mathcal{A})\stackrel{{ (\Psi,\psi)}}{{\rightarrow}}(\mathbf{F},[\cdot,\cdots,\cdot]_{\mathbf{F}}, \mathcal{B})\). In fact, by the Proposition 3.13, the condition is true.
2) Similar to 1), the graph \(Gr_{(\Psi,\psi)}\) is also a subalgebra of \(\mathbf{E}\oplus_{\psi}\mathbf{F}\).
The proof is finished.
## 5 Morphisms and Comorphisms of \(n\)-Lie algebroids
In this section, we give the definitions of morphisms and comorphisms of \(n\)-Lie algebroids. It is proved that morphisms and comorphisms of \(n\)-Lie algebroids are equivalent to comorphisms and morphisms of \(n\)-Lie-Rinehart algebras respectively.
Given a smooth map \(\phi:M\to N\). Let \(E_{2}\) be a vector bundle over \(N\). We have the pull-back bundle \(\phi^{!}E_{2}\) (over \(M\)). Thus, there exist an algebraic homomorphism \(\psi=\phi^{*}:C^{\infty}(N)\to C^{\infty}(M)\). Let \(E_{1}\) be a vector bundle over \(M\) and \(\Phi_{E}:\phi^{!}E_{2}\to E_{1}\) be a bundle map. We have the dual bundle map
(26)
Obviously, it induces a map
\[\Phi_{E}^{*}:\Gamma(E_{2})\rightarrow\Gamma(E_{1}). \tag{27}\]
The vector bundle comorphism has a different definition from [22].
**Definition 5.1**.: _A vector bundle comorphism, depicted by a diagram_
(28)
_is given by a base map \(\phi:M\to N\) together with a family of linear maps (going in the 'opposite' direction)_
\[\Phi_{E}:(E_{2})_{\phi(x)}\rightarrow(E_{1})_{x}\]
_depending smoothly on \(x\), in the sense that the resulting map \(\phi^{*}E_{2}\to E_{1}\) is smooth._
The above vector bundle comorphism is equivalent to a vector bundle map \(\Phi_{E}:\phi^{!}E_{2}\to E_{1}\).
**Definition 5.2**.: _Let \((E_{1},M,\rho_{1},[\cdot,\cdots,\cdot]_{1})\) and \((E_{2},N,\rho_{2},[\cdot,\cdots,\cdot]_{2})\) be two \(n\)-Lie algebroids over bases \(M\) and \(N\) respectively. Given a smooth map \(\phi:M\to N\), and a vector bundle morphism \(\Phi_{E}:\phi^{!}E_{2}\to E_{1}\), if it satisfies_
1. \(\phi_{*}\circ\rho_{1}\circ\Phi_{E}^{*}=\rho_{2}\)_,_
2. _the pullback map_ (27) _preserves_ \(n\)_-brackets,_
_then we call \(\Phi_{E}:\phi^{!}E_{2}\to E_{1}\) a **comorphism of \(n\)-Lie algebroids**, written as_
\[(E_{2},N,[\cdot,\cdots,\cdot]_{2})\stackrel{{(\Phi_{E},\phi)}}{{ \rightleftarrows}}(E_{1},M,[\cdot,\cdots,\cdot]_{1}).\]
_In particular, if \(\phi\) is surjective and \(\Phi_{E}\) is injective, then we call \((E_{2},N)\) a co-subalgebroid of \((E_{1},M)\)._
Obviously, condition \((i)\) does not hold automatically. For example, let \(N=M\), with \(\phi\) the identity map, and let \(E_{2}=TM\) be the tangent bundle. Let \(E_{1}=0\) be the trivial \(n\)-Lie algebroid with zero anchor map and zero \(n\)-bracket. Let \(X_{1},\cdots,X_{n-1}\in\Gamma(TM)\) be some non-zero vector fields. Then there is a unique \(n\)-Lie algebroid morphism \(\Phi_{E}:\phi^{!}E_{2}\to 0\) covering \(\phi=id_{M}\); the pull-back map on sections is the zero map, and in particular preserves brackets. But condition \((i)\) would tell us \(0\sim_{\phi}\rho_{2}(X_{1}\wedge\cdots\wedge X_{n-1})\), i.e. \(\rho_{2}(X_{1}\wedge\cdots\wedge X_{n-1})=0\).
**Remark 5.3**.: _On the open set of all \(x\in M\) where the pullback map \(\Phi_{E}^{*}:\Gamma((E_{2})_{\phi(x)})\to\Gamma((E_{1})_{x})\) is non-zero, condition \((i)\) is automatic. First, for all sections \(\sigma_{1},\cdots,\sigma_{n}\) of \(E_{2}\) and \(f\in C^{\infty}(N)\), we have \(\Phi_{E}^{*}[\sigma_{1},\cdots,f\sigma_{n}]=[\Phi_{E}^{*}\sigma_{1},\cdots,(\phi^{*}f)\Phi_{E}^{*}\sigma_{n}]\). Using the Leibniz rule, one obtains the formula_
\[(\phi^{*}(\rho_{2}(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})f)-\rho_{1}(\Phi_ {E}^{*}(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1}))(\phi^{*}f))\Phi_{E}^{*} \sigma_{n}=0.\]
_This proves that \(\phi^{*}(\rho_{2}(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})f)=\rho_{1}(\Phi_{E}^{*}(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1}))(\phi^{*}f)\) at all points \(x\in M\) where \(\Phi_{E}^{*}\sigma_{n}|_{x}\neq 0\), for every \(\sigma_{1},\cdots,\sigma_{n-1}\in\Gamma(E_{2})\)._
**Corollary 5.4**.: _Let \(\psi=\phi^{*}:C^{\infty}(N)\to C^{\infty}(M)\). In Definition 5.2, \((1)\) is equivalent to_
\[\psi([\sigma_{1},\cdots,\sigma_{n-1},f]_{E_{2}})=[\Phi_{E}^{*}(\sigma_{1}), \cdots,\Phi_{E}^{*}(\sigma_{n-1}),\psi(f)]_{E_{1}},\ \forall\sigma_{1},\cdots,\sigma_{n-1}\in E_{2},f\in C^{\infty}(N).\]
_(C.f. relation \((1)\) of Definition 4.1.) Thus, the **comorphism** of \(n\)-Lie algebroids in Definition 5.2 is equivalent to the fact that_
\[(\Gamma(E_{2}),C^{\infty}(N))\stackrel{{(\Phi_{E}^{*},\psi)}}{{ \rightleftarrows}}(\Gamma(E_{1}),C^{\infty}(M))\]
_is a **morphism** of \(n\)-Lie-Rinehart algebras._
We remark that the first condition can be restated as: for each \(Y_{1},\cdots,Y_{n-1}\in\Gamma(E_{2})\), the vector field \(\rho_{1}(\Phi_{E}^{*}(Y_{1}\wedge\cdots\wedge Y_{n-1}))\) is \(\phi\)-related to \(\rho_{2}(Y_{1}\wedge\cdots\wedge Y_{n-1})\). We denote by \(\mathcal{LA}^{\vee}\) the category of \(n\)-Lie algebroids of rank \(n\) and \(n\)-Lie algebroid comorphisms.
**Definition 5.5** ([9]).: _Let \((M,\pi_{1})\) and \((N,\pi_{2})\) be two manifolds with \(n\)-vector fields. A smooth map \(\phi:M\to N\) is called \((\pi_{1},\pi_{2})\)-map if the induced brackets on functions satisfy:_
\[\{\phi^{*}f_{1},\cdots,\phi^{*}f_{n}\}_{1}=\phi^{*}\{f_{1},\cdots,f_{n}\}_{2},\]
_for all \(f_{1},\cdots,f_{n}\in C^{\infty}(N)\), or equivalently, \(\phi_{*}\pi_{1}=\pi_{2}\). A \((\pi_{1},\pi_{2})\)-map \(\phi:(M,\pi_{1})\to(N,\pi_{2})\) between Nambu-Poisson manifolds of the same order \(n\) is called a Nambu-Poisson map._
Now, let \(\mathcal{VB}_{Nambu}\) be the category of vector bundles with linear Nambu-Poisson structures of rank \(n\); morphisms in this category are vector bundle maps that are also Nambu-Poisson maps. The following result shows that there is a category equivalence between \(\mathcal{VB}_{Nambu}\) and \(\mathcal{LA}^{\vee}\). For any section \(\sigma\in\Gamma(E)\), let \(\varphi_{\sigma}\in C^{\infty}(E^{*})\) be the corresponding linear function on the dual bundle \(E^{*}\).
**Theorem 5.6**.: _Let \((E_{1},M,\rho_{1},[\cdot\,,\cdots,\cdot]_{1})\) and \((E_{2},N,\rho_{2},[\cdot\,,\cdots,\cdot]_{2})\) be two \(n\)-Lie algebroids of rank \(n\) over bases \(M\) and \(N\) respectively. A vector bundle morphism \(\Phi_{E}:\phi^{!}E_{2}\to E_{1}\) is an \(n\)-Lie algebroid comorphism if and only if the dual map \(\Phi_{E^{*}}:E_{1}^{*}\to E_{2}^{*}\) is a Nambu-Poisson map._
Proof.: Let \(p_{1}:E_{1}^{*}\to M\) and \(p_{2}:E_{2}^{*}\to N\) be the dual bundles corresponding to \(E_{1}\) and \(E_{2}\) respectively. To simplify notation, we denote all the pull-back maps \(\phi^{*},\Phi_{E}^{*},\Phi_{E^{*}}^{*}\) by \(\Phi^{*}\). For any vector bundle morphism \(\Phi_{E}:\phi^{!}E_{2}\to E_{1}\), and \(\sigma\in\Gamma(E_{2})\), we have that
\[\varphi_{\Phi^{*}\sigma}=\Phi^{*}\varphi_{\sigma}. \tag{29}\]
Given sections \(\sigma_{1},\cdots,\sigma_{n}\in\Gamma(E_{2})\) and functions \(f_{1},\cdots,f_{n}\in C^{\infty}(N)\), for all \(0\leq k\leq n-2\), we have
\[\Phi^{*}\{\varphi_{\sigma_{1}},\cdots,\varphi_{\sigma_{k}},p_{2}^{*}f_{k+1},\cdots,p_{2}^{*}f_{n}\}_{2}=0=\{\Phi^{*}\varphi_{\sigma_{1}},\cdots,\Phi^{*}\varphi_{\sigma_{k}},\Phi^{*}p_{2}^{*}f_{k+1},\cdots,\Phi^{*}p_{2}^{*}f_{n}\}_{1}, \tag{30}\]
\[\varphi_{\Phi^{*}[\sigma_{1},\cdots,\sigma_{n}]_{2}}=\Phi^{*}\varphi_{[\sigma _{1},\cdots,\sigma_{n}]_{2}}=\Phi^{*}\{\varphi_{\sigma_{1}},\cdots,\varphi_{ \sigma_{n}}\}_{2}, \tag{31}\]
\[\varphi_{[\Phi^{*}\sigma_{1},\cdots,\Phi^{*}\sigma_{n}]_{1}}=\{\varphi_{\Phi^{ *}\sigma_{1}},\cdots,\varphi_{\Phi^{*}\sigma_{n}}\}_{1}=\{\Phi^{*}\varphi_{ \sigma_{1}},\cdots,\Phi^{*}\varphi_{\sigma_{n}}\}_{1}, \tag{32}\]
\[p_{1}^{*}\Phi^{*}(\rho_{2}(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})f_{1})= \Phi^{*}p_{2}^{*}(\rho_{2}(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})f_{1})= \Phi^{*}\{\varphi_{\sigma_{1}},\cdots,\varphi_{\sigma_{n-1}},p_{2}^{*}f_{1}\} _{2}, \tag{33}\]
\[p_{1}^{*}(\rho_{1}(\Phi^{*}\sigma_{1}\wedge\cdots\wedge\Phi^{*}\sigma_{n-1}) \Phi^{*}f_{1})=\{\varphi_{\Phi^{*}\sigma_{1}},\cdots,\varphi_{\Phi^{*}\sigma_{ n-1}},p_{1}^{*}\Phi^{*}f_{1}\}_{1}=\{\Phi^{*}\varphi_{\sigma_{1}},\cdots, \Phi^{*}p_{2}^{*}f_{1}\}_{1}. \tag{34}\]
Equation (30) holds because of the local coordinate expressions of \(\pi_{1}\) and \(\pi_{2}\) on \(E_{1}^{*}\) and \(E_{2}^{*}\) respectively. Therefore, \(\Phi_{E}\) being an \(n\)-Lie algebroid comorphism is equivalent to the equality of the left hand sides of Equations (31), (32) and equality of the left hand sides of Equations (33), (34), while \(\Phi^{*}\) being a Nambu-Poisson map is equivalent to the equality of the corresponding right hand sides.
**Remark 5.7**.: _It is known that under suitable connectedness and simple connectedness assumptions, any Lie bialgebra integrates to a Poisson-Lie group, and any Lie bialgebroid integrates to a Poisson groupoid. These results do not hold in the context of Nambu structures of order \(\geq 3\). Therefore, we do not consider the integration problem of \(n\)-Lie algebroids in this paper._
Next we define morphisms of \(n\)-Lie algebroids \(\Phi_{E}:E_{1}\to E_{2}\).
**Definition 5.8**.: _Let \((E,M,\rho,[\cdot\,,\cdots\,,\cdot])\) be an \(n\)-Lie algebroid, and \(H\subseteq E\) a vector subbundle along \(N\subseteq M\). We say that a vector subbundle \(H\) is an \(n\)-Lie subalgebroid when it has the following properties:_
1. _If_ \(\sigma_{1}|_{N},\cdots,\sigma_{n}|_{N}\in\Gamma(H)\)_, then we have_ \([\sigma_{1},\cdots,\sigma_{n}]|_{N}\in\Gamma(H)\)_, where_ \(\sigma_{1},\cdots,\sigma_{n}\in\Gamma(E)\)_,_
2. \(\rho(\wedge^{n-1}H)\subseteq TN\)_._
_Therefore, an \(n\)-Lie subalgebroid is itself an \(n\)-Lie algebroid._
**Proposition 5.9**.: _Let \(H\subseteq E\) be an \(n\)-Lie subalgebroid along \(N\subseteq M\). Then \(H\) has an \(n\)-Lie algebroid structure, with anchor the restriction \(\rho:\wedge^{n-1}H\to TN\), and with the unique bracket such that_
\[[\sigma_{1}|_{N},\cdots,\sigma_{n}|_{N}]_{N}=[\sigma_{1},\cdots,\sigma_{n}]|_{N} \tag{35}\]
_whenever \(\sigma_{1}|_{N},\cdots,\sigma_{n}|_{N}\in\Gamma(H)\)._
Proof.: To prove that this bracket is well-defined, we have to show that \([\sigma_{1},\cdots,\sigma_{n}]|_{N}=0\) whenever \(\sigma_{n}|_{N}=0\). Let \(\sigma_{n}=\sum_{j}f_{j}^{n}\sigma_{j}^{n}\) where \(f_{j}^{n}\in C^{\infty}(M)\) vanish on \(N\). Thus, we have that
\[[\sigma_{1},\cdots,\sigma_{n}]|_{N}=\sum_{j}f_{j}^{n}|_{N}[\sigma_{1},\cdots, \sigma_{n-1},\sigma_{j}^{n}]|_{N}+\sum_{j}(\rho(\sigma_{1}\wedge\cdots\wedge \sigma_{n-1})f_{j}^{n})|_{N}\sigma_{j}^{n}|_{N}=0,\]
where we used that \(\rho(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})f_{j}^{n}=0\), since \(\rho(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})\in TN\) and the \(f_{j}^{n}\) vanish on \(N\).
Now we use \(n\)-Lie subalgebroids to define morphisms of \(n\)-Lie algebroids.
**Definition 5.10**.: _Given two \(n\)-Lie algebroids \((E_{1},M,\rho_{1},[\cdot\,,\cdots,\cdot]_{1})\) and \((E_{2},N,\rho_{2},[\cdot\,,\cdots,\cdot]_{2})\), a vector bundle map_
\[\Phi_{E}:E_{1}\to E_{2}\]
_is an \(n\)**-Lie algebroid morphism**, written \((E_{1},M,[\cdot\,,\cdots,\cdot]_{1})\overset{(\Phi_{E},\phi)}{\rightrightarrows}(E_{2},N,[\cdot\,,\cdots,\cdot]_{2})\), if its graph \(Gr(\Phi_{E})\subseteq E_{2}\times E_{1}^{-(n-1)}\) is an \(n\)-Lie subalgebroid along \(Gr(\phi)\)._
Here \(E_{1}^{-(n-1)}\) is \(E_{1}\) as a vector bundle, but the \(n\)-Lie bracket on the section space \(\Gamma(E_{1})\) is \((-1)^{n-1}\) times the bracket of \(E_{1}\), and the anchor is \((-1)^{n-1}\) times the anchor of \(E_{1}\). The category of \(n\)-Lie algebroids of rank \(n\) with morphisms will be denoted by \(\mathcal{LA}\). Having defined the category \(\mathcal{LA}\), it is natural to ask what corresponds to it on the dual side, in terms of the linear Nambu-Poisson structures on vector bundles. The answer will have to wait until we have the notion of Nambu-Poisson relations.
**Theorem 5.11**.: _With the above notations, \((E_{1},M,[\,\cdot\,,\cdot\,,\cdot\,]_{1})\stackrel{{(\Phi_{E}, \phi)}}{{\rightarrow}}(E_{2},N,[\,\cdot\,,\cdot\,,\cdot\,]_{2})\) is a morphism of \(n\)-Lie algebroids if and only if_
1. \[\rho_{2}\circ\Phi_{E}=\phi_{*}\circ\rho_{1}\] (36)
2. \[\Phi_{E}([\sigma_{1},\cdots,\sigma_{n}])=[\Phi_{E}(\sigma_{1}),\cdots,\Phi_{E }(\sigma_{n})],\] (37) _for all_ \(\sigma_{1},\cdots,\sigma_{n}\in\Gamma(E_{1})\)_._
Equation (37) can be written in local form as follows. Let \(\Psi_{E}^{!}(\sigma_{i})=\sum_{k_{i}}f_{k_{i}}\tau_{k_{i}},\ 1\leq i\leq n\), for some \(f_{k_{i}}\in C^{\infty}(M)\) and \(\tau_{k_{i}}\in\Gamma(E_{2})\), where \(\Psi_{E}^{!}\) is the induced bundle map \(E_{1}\rightarrow\phi^{!}E_{2}\); then
\[\Psi_{E}^{!}([\sigma_{1},\cdots,\sigma_{n}])=\sum_{k_{1}\cdots k_{n}}f_{k_{1}} \cdots f_{k_{n}}[\tau_{k_{1}},\cdots,\tau_{k_{n}}]+\sum_{i}\sum_{k_{i}}(-1)^{ n-i}[\sigma_{1},\cdots,\widehat{\sigma_{i}},\cdots,\sigma_{n},f_{k_{i}}].\]
Proof.: Assume that \((E_{1},M,[\,\cdot\,,\cdots\,,\cdot\,]_{1})\stackrel{{(\Phi_{E}, \phi)}}{{\rightarrow}}(E_{2},N,[\,\cdot\,,\cdot\,,\cdot\,]_{2})\) is a morphism of \(n\)-Lie algebroids. Then \(Gr(\Phi_{E})\subseteq E_{2}\times E_{1}^{-(n-1)}\) is a subalgebroid along \(Gr(\phi)\). By the definition of \(n\)-Lie subalgebroids, we have:
1. for all \(\sigma_{1},\cdots,\sigma_{n-1}\in\Gamma(E_{1})\) and \(\Phi_{E}(\sigma_{1}),\cdots,\Phi_{E}(\sigma_{n-1})\in\Gamma(E_{2})\), \[\rho((\Phi_{E}(\sigma_{1}),\sigma_{1}),\cdots,(\Phi_{E}(\sigma_{n-1}), \sigma_{n-1}))\] \[= (\rho_{2}(\Phi_{E}(\sigma_{1})\wedge\cdots\wedge\Phi_{E}(\sigma_{ n-1})),\rho_{1}(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1}))\in TGr(\phi).\] Then by the definition of \(TGr(\phi)\), we have the following expression: \[\rho_{2}\circ(\Phi_{E}(\sigma_{1})\wedge\cdots\wedge\Phi_{E}(\sigma_{n-1}))= \phi_{*}\circ\rho_{1}(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})\] (38) The Equation (38) is equivalent to the equation (36).
2. for all \((\Phi_{E}(\sigma_{1}),\sigma_{1}),\cdots,(\Phi_{E}(\sigma_{n}),\sigma_{n})\in \Gamma(Gr(\Phi_{E}))\), \[[(\Phi_{E}(\sigma_{1}),\sigma_{1}),\cdots,(\Phi_{E}(\sigma_{n}),\sigma_{n})]\] \[= ([\Phi_{E}(\sigma_{1}),\cdots,\Phi_{E}(\sigma_{n})]_{2},(-1)^{n-1} [\sigma_{1},\cdots,\sigma_{n}]_{1})\in\Gamma(E_{2}\times E_{1}^{-(n-1)}).\] Then by the definition of \(Gr(\Phi_{E})\), we get the following expression: \[[\Phi_{E}(\sigma_{1}),\cdots,\Phi_{E}(\sigma_{n})]_{2}=\Phi_{E}([\sigma_{1}, \cdots,\sigma_{n}]_{1})\] (39) The Equation (39) is exactly the Equation (37).
Conversely, the proof is obvious.
**Corollary 5.12**.: _Let \(\psi=\phi^{*}:C^{\infty}(N)\to C^{\infty}(M)\). In Theorem 5.11, (1) is equivalent to_
\[[\sigma_{1},\cdots,\sigma_{n-1},\psi(g)]_{1}=\sum_{k_{1},\cdots,k_{n-1}}f_{k_{1}}\cdots f_{k_{n-1}}\psi([\tau_{k_{1}},\cdots,\tau_{k_{n-1}},g]_{2}),\ \forall\sigma_{1},\cdots,\sigma_{n-1}\in\Gamma(E_{1}),\ g\in C^{\infty}(N).\]
_(C.f. relation (1) of Definition 4.2.) Thus, the **morphism** of \(n\)-Lie algebroids in Theorem 5.11 is equivalent to the fact that_
\[(\Gamma(E_{1}),C^{\infty}(M))\stackrel{{(\Phi_{E}^{!},\psi)}}{{ \leftarrow}}(\Gamma(E_{2}),C^{\infty}(N))\]
_is a **comorphism** of \(n\)-Lie-Rinehart algebras._
## 6 Applications to Nambu-Poisson manifolds
In this section, we revisit some notation and properties of Nambu-Poisson submanifolds and coisotropic submanifolds of Nambu-Poisson manifolds. Then we introduce the concept of a Nambu-Poisson relation, which generalizes the Poisson relations introduced by Weinstein in [27]. Finally, we prove that \(\Phi_{E}:E_{1}\to E_{2}\) is an \(n\)-Lie algebroid morphism if and only if the dual comorphism \(\Phi_{E^{*}}:E_{1}^{*}\dashrightarrow E_{2}^{*}\) is a Nambu-Poisson relation. Thus we obtain that there is a category equivalence between \(\mathcal{VB}_{Nambu}^{\vee}\) and \(\mathcal{LA}\), see Theorem 6.10.
### Nambu-Poisson submanifolds
A submanifold \(N\subseteq M\) is called a **Nambu-Poisson submanifold** if the Nambu-Poisson tensor \(\pi\) is everywhere tangent to \(N\), that is, \(\pi_{x}\in\wedge^{n}T_{x}N\subseteq\wedge^{n}T_{x}M\). In this case, \(\pi\) determines an \(n\)-vector field \(\pi_{N}\in\Gamma(\wedge^{n}TN)\) satisfying the following property:
\[\pi_{N}\sim_{i}\pi\]
where \(i:N\to M\) is the inclusion. Therefore, the corresponding Nambu-Poisson bracket \(\{\cdot\,,\cdots,\cdot\}_{N}\) is given by
\[\{i^{*}f_{1},i^{*}f_{2},\cdots,i^{*}f_{n}\}_{N}=i^{*}\{f_{1},f_{2},\cdots,f_{ n}\}.\]
The Jacobi identity for \(\pi_{N}\) follows from that for \(\pi\). We have the following easy observation for Nambu-Poisson submanifolds of Nambu-Poisson manifolds.
**Proposition 6.1**.: _The following are equivalent:_
1. \(N\) _is a Nambu-Poisson submanifold._
2. \(\pi^{\sharp}(\wedge^{n-1}T^{*}M|_{N})\subseteq TN\)_._
3. \(\pi^{\sharp}(\wedge^{n-2}T^{*}M|_{N}\wedge(TN)^{\circ})=0\)_._
4. _All Hamiltonian vector fields_ \(X_{f_{1}\cdots f_{n-1}}\)_,_ \(f_{1},\cdots,f_{n-1}\in C^{\infty}(M)\) _are tangent to_ \(N\)_._
5. _When_ \(N\) _is a closed embedded submanifold, these conditions (a)-(d) are also equivalent to the following property: the vanishing ideal of_ \[\mathcal{I}(N):=\{f\in C^{\infty}(M)|f|_{N}\equiv 0\}\] _is an_ \(n\)_-Lie algebra ideal; i.e.,_ \(\{f_{1},\cdots,f_{i-1},g,f_{i+1},\cdots,f_{n}\}\in\mathcal{I}(N)\) _whenever_ \(f_{j}\in C^{\infty}(M)\) _and_ \(g\in\mathcal{I}(N)\)_._
Proof.: If \(i:(N,\pi_{N})\hookrightarrow(M,\pi)\) is a Nambu-Poisson submanifold, then \(\pi_{N}\) is \(i\)-related to \(\pi\):
\[(i_{*})_{x}(\pi_{N,x})=\pi_{x},\quad\forall x\in N.\]
This is equivalent to
\[(i_{*})_{x}\circ\pi_{N,x}^{\sharp}\circ(i^{*})_{x}=\pi_{x}^{\sharp}. \tag{40}\]
Since \(i_{*}\) is injective, this proves that \(\pi_{N}\) is unique. Obviously, if \((N,\pi_{N})\) is a Nambu-Poisson submanifold, then we have \(\pi^{\sharp}(\wedge^{n-1}T^{*}M|_{N})\subseteq TN\).
Next, let \(i:N\hookrightarrow M\) be a submanifold such that \(\mathrm{Im}\pi_{x}^{\sharp}\subseteq(i_{*})_{x}(TN)\). We claim that there exists a unique smooth \(n\)-vector field \(\pi_{N}\) on \(N\) such that \((a)\) holds. In fact, since \(\mathrm{Im}\pi_{x}^{\sharp}\subseteq(i_{*})_{x}(TN)\), it is enough to check that for any \(\alpha_{1}\wedge\cdots\wedge\alpha_{n-1}\in\wedge^{n-2}T_{x}^{*}M\wedge(T_{x}N)^{\circ}=ker((i^{*})_{x})\) we have \(\pi_{x}^{\sharp}(\alpha_{1}\wedge\cdots\wedge\alpha_{n-1})=0\). By the \(n\)-skew-symmetry, for any \(\alpha_{n}\in T_{x}^{*}M\) we have that
\[<\pi_{x}^{\sharp}(\alpha_{1}\wedge\cdots\wedge\alpha_{n-1}),\alpha_{n}>=(-1)^ {n-j}<\pi_{x}^{\sharp}(\alpha_{1}\wedge\cdots\wedge\alpha_{j-1}\wedge\alpha_{ n}\wedge\alpha_{j+1}\wedge\cdots\wedge\alpha_{n-1}),\alpha_{j}>=0.\]
The \(n\)-skew-symmetry for \(\pi_{N}\) follows from that for \(\pi\). The smoothness of \(\pi_{N}\) holds automatically.
The Schouten brackets of \(i\)-related multivector fields are also \(i\)-related:
\[(i_{*})_{x}([\pi_{N},\pi_{N}])_{x}=[\pi,\pi]_{i(x)}=0.\]
Therefore, if \((b)\) holds, then \(N\) has a unique Nambu-Poisson structure such that it is a Nambu-Poisson submanifold.
For the equivalence between \((b)\) and \((c)\), by using
\[<\pi_{x}^{\sharp}(\alpha_{1}\wedge\cdots\wedge\alpha_{n-1}),\alpha_{n}>=(-1)^{ n-j}<\pi_{x}^{\sharp}(\alpha_{1}\wedge\cdots\wedge\alpha_{j-1}\wedge\alpha_{n} \wedge\alpha_{j+1}\wedge\cdots\wedge\alpha_{n-1}),\alpha_{j}>,\]
with \(\alpha_{1},\cdots,\alpha_{n-1}\in T_{x}^{*}M\) and \(\alpha_{n}\in(T_{x}N)^{\circ}\), we get
\[\pi_{x}^{\sharp}(\wedge^{n-1}T_{x}^{*}M)\subseteq T_{x}N\quad\Leftrightarrow \quad\pi_{x}^{\sharp}(\wedge^{n-2}T_{x}^{*}M\wedge(T_{x}N)^{\circ})=0.\]
The equivalence between \((b)\) and \((d)\) is obvious.
Let \(N\) be an embedded submanifold. If \((d)\) holds, then the functions vanishing on \(N\) form an \(n\)-Lie algebra ideal, since \(g|_{N}=0\) implies that
\[\{f_{1},\cdots,f_{i-1},g,f_{i+1},\cdots,f_{n}\}|_{N}=(-1)^{n-i}\{f_{1},f_{2}, \cdots,f_{i-1},f_{i+1},\cdots,f_{n},g\}|_{N}=0.\]
This is because \(X_{f_{1}\cdots f_{i-1}f_{i+1}\cdots f_{n}}\) is tangent to \(N\). This proves \((e)\). Conversely, if \((e)\) holds, then
\[\{f_{1},f_{2},\cdots,f_{i-1},f_{i+1},\cdots,f_{n},g\}|_{N}=(-1)^{n-i}\{f_{1}, \cdots,f_{i-1},g,f_{i+1},\cdots,f_{n}\}|_{N}=0\]
whenever \(g|_{N}=0\). It follows that \(<dg,X_{f_{1}\cdots f_{i-1}f_{i+1}\cdots f_{n}}>|_{N}=X_{f_{1}\cdots f_{i-1}f_{i+1}\cdots f_{n}}(g)|_{N}=0\) whenever \(g|_{N}=0\). The differentials \(dg|_{N}\) for \(g|_{N}=0\) span \((TN)^{\circ}\), hence this implies that \(X_{f_{1}\cdots f_{i-1}f_{i+1}\cdots f_{n}}|_{N}\in\Gamma(TN)\), which gives \((d)\).
### Coisotropic submanifolds
In this subsection, we introduce coisotropic submanifolds of Nambu-Poisson manifolds, which lead to the notion of a Nambu-Poisson relation. Coisotropic submanifolds of Nambu-Poisson manifolds were introduced in [9] for closed embedded submanifolds. However, locally an embedded submanifold is the same as an immersed submanifold. In this subsection, a submanifold means an immersed submanifold unless otherwise specified. The following proposition is from [9].
**Proposition 6.2** ([9]).: _The following are equivalent:_
1. \(\pi^{\sharp}(\wedge^{n-1}(TN)^{\circ})\subseteq TN\)_._
2. _For every_ \(f_{1},\cdots,f_{n-1}\) _such that_ \(f_{i}|_{N}=0,\ \ \forall i=1,\cdots,n-1\)_, the Hamiltonian vector field_ \(X_{f_{1}\cdots f_{n-1}}\) _is tangent to_ \(N\)_._
3. _The space of functions_ \(f\) _with_ \(f|_{N}=0\) _is an_ \(n\)_-Lie subalgebra (or Nambu-Poisson subalgebra) under the Nambu-Poisson bracket._
A submanifold \(N\subseteq M\) is called a coisotropic submanifold if it satisfies any of these equivalent conditions.
In [9], the authors give properties of coisotropic submanifolds with respect to arbitrary multivector fields. We now give a similar proposition.
**Proposition 6.3**.: _Let \((M_{1},\pi_{1})\) and \((M_{2},\pi_{2})\) be two manifolds with \(n\)-vector fields \(\pi_{1}\) and \(\pi_{2}\) respectively. Let \(\Phi:M_{1}\to M_{2}\) be a smooth map. Then \(\Phi_{*}\pi_{1}=\pi_{2}\) if and only if its graph_
\[Gr(\Phi)=\{(\Phi(m_{1}),m_{1})|m_{1}\in M_{1}\}\]
_is a coisotropic submanifold of \(M_{2}\times M_{1}\) with respect to \(\pi_{2}\oplus(-1)^{n-1}\pi_{1}\)._
Proof.: Note that a tangent vector to the graph consists of a pair \((\Phi_{*}X_{m_{1}},X_{m_{1}})\), where \(m_{1}\in M_{1},X_{m_{1}}\in T_{m_{1}}M_{1}\). Therefore, \((TGr(\Phi))^{\circ}\) consists of pairs of covectors \((\alpha,-\Phi^{*}\alpha)\), where \(\alpha\in T_{\Phi(m_{1})}^{*}M_{2}\). Thus, \(Gr(\Phi)\) is a coisotropic submanifold of \(M_{2}\times M_{1}\) with respect to \(\pi_{2}\oplus(-1)^{n-1}\pi_{1}\) if and only if \((\pi_{2}\oplus(-1)^{n-1}\pi_{1})^{\sharp}\) maps \((\alpha_{1},-\Phi^{*}\alpha_{1})\wedge\cdots\wedge(\alpha_{n-1},-\Phi^{*}\alpha_{n-1})\) into \(TGr(\Phi)\), for all \(\alpha_{1},\cdots,\alpha_{n-1}\in T_{\Phi(m_{1})}^{*}M_{2}\) and \(m_{1}\in M_{1}\). In other words, \(\Phi_{*}\pi_{1}=\pi_{2}\). The proof is finished. \(\blacksquare\)
In particular, if \(\pi_{1},\pi_{2}\) are Nambu-Poisson tensors on \(M_{1}\) and \(M_{2}\) respectively, then such a map \(\Phi\) is a Nambu-Poisson map. In [27], Weinstein introduced the notion of a Poisson relation. In a similar fashion for Nambu-Poisson manifolds, we can give the following definition:
**Definition 6.4**.: _Let \(M_{1},M_{2}\) be two Nambu-Poisson manifolds. A_ **Nambu-Poisson relation** _from \(M_{1}\) to \(M_{2}\) is a coisotropic submanifold \(N\subseteq M_{2}\times M_{1}^{(-1)^{n-1}}\), where \(M_{1}^{(-1)^{n-1}}\) is \(M_{1}\) with the Nambu-Poisson structure \((-1)^{n-1}\pi_{1}\)._
Nambu-Poisson relations are regarded as "comorphisms". We will thus write
\[N:M_{1}\dashrightarrow M_{2}\]
for a submanifold \(N\subseteq M_{2}\times M_{1}\) seen as such a "comorphism". However, we need to consider a compatibility condition for composing Nambu-Poisson relations: given submanifolds \(N\subseteq M_{2}\times M_{1}\) and \(H\subseteq M_{3}\times M_{2}\), the composition \(H\circ N\) need not be a submanifold.
**Definition 6.5**.: _We say that two relations \(N:M_{1}\dashrightarrow M_{2}\) and \(H:M_{2}\dashrightarrow M_{3}\) (given by submanifolds \(N\subseteq M_{2}\times M_{1}\) and \(H\subseteq M_{3}\times M_{2}\)) have_ **clean composition** _if_
* \(H\circ N\) _is a submanifold;_
* \(T(H\circ N)=TH\circ TN\)__**fiberwise**_._
By \((b)\), for all \(m_{i}\in M_{i}\) with \((m_{3},m_{2})\in H\) and \((m_{2},m_{1})\in N\), we get that
\[T_{(m_{3},m_{1})}(H\circ N)=T_{(m_{3},m_{2})}H\circ T_{(m_{2},m_{1})}N.\]
**Proposition 6.6**.: _Given two Nambu-Poisson relations \(N:M_{1}\dashrightarrow M_{2}\) and \(H:M_{2}\dashrightarrow M_{3}\) with clean composition \(H\circ N:M_{1}\dashrightarrow M_{3}\). Then \(H\circ N\) is again a Nambu-Poisson relation._
Proof.: We show that \(H\circ N\) is a coisotropic submanifold. Let
\[(\alpha_{3_{j}},-\alpha_{1_{j}})\in(T(H\circ N))^{\circ},\ \ 1\leq j\leq n-1\]
be given, with base point \((m_{3},m_{1})\in H\circ N\). By clean composition, we can pick a point \(m_{2}\in M_{2}\) with \((m_{3},m_{2})\in H\) and \((m_{2},m_{1})\in N\) such that
\[T_{(m_{3},m_{1})}(H\circ N)=T_{(m_{3},m_{2})}H\circ T_{(m_{2},m_{1})}N.\]
Thus, there are some differential \(1\)-forms \(\alpha_{2_{j}}\in T_{m_{2}}^{*}M_{2},\ \ 1\leq j\leq n-1\) such that \((\alpha_{3_{j}},-\alpha_{2_{j}})\in(TH)^{\circ}\) and \((\alpha_{2_{j}},-\alpha_{1_{j}})\in(TN)^{\circ}\). Let \(X_{i}=\pi_{i}^{\sharp}(\alpha_{i_{1}}\wedge\cdots\wedge\alpha_{i_{n-1}})\). Then we have \((X_{3},X_{2})\in TH\) and \((X_{2},X_{1})\in TN\), since \(H,N\) are coisotropic. Thus, we obtain that \(H\circ N\) is coisotropic since \((X_{3},X_{1})\in TH\circ TN=T(H\circ N)\). \(\blacksquare\)
Note that \(H\subseteq E\) is an \(n\)-Lie subalgebroid if and only if \(\{\sigma\in\Gamma(E)|\sigma|_{N}\in\Gamma(H)\}\) is an \(n\)-Lie subalgebra, with \(\{\sigma\in\Gamma(E)|\sigma|_{N}=0\}\) as an \(n\)-Lie ideal, together with \(\rho(\wedge^{n-1}H)\subseteq TN\). Therefore, for the dual picture, we have
\[\sigma|_{N}\in\Gamma(H) \Leftrightarrow \phi_{\sigma}\ \ \text{vanishes}\ \ \text{on}\ \ H^{\circ}\subseteq E^{*}|_{N},\] \[\sigma|_{N}=0 \Leftrightarrow \phi_{\sigma}\ \ \text{vanishes}\ \text{on}\ \ E^{*}|_{N}.\]
**Proposition 6.7**.: _Let \(E\to M\) be an \(n\)-Lie algebroid and \(H\to N\) be a vector bundle. Then \(H\) is an \(n\)-Lie subalgebroid if and only if \(H^{\circ}\subseteq E^{*}\) is a coisotropic submanifold._
Proof.: Let \(H^{\circ}\subseteq E^{*}\) be coisotropic. If \(\sigma_{1}|_{N},\cdots,\sigma_{n-1}|_{N}\in\Gamma(H)\) and \(f|_{N}=0\), then \(\phi_{\sigma_{1}},\cdots,\phi_{\sigma_{n-1}}\) and \(p^{*}f\) vanish on \(H^{\circ}\), so we have
\[\{\phi_{\sigma_{1}},\cdots,\phi_{\sigma_{n-1}},p^{*}f\}=p^{*}(\rho(\sigma_{1} \wedge\cdots\sigma_{n-1})f).\]
Since \(H^{\circ}\subseteq E^{*}\) is a coisotropic submanifold, we have \(\rho(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})f=0\) on \(N\), which implies that \(\rho(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})\) is tangent to \(N\). Therefore, \(\rho(\wedge^{n-1}H)\subseteq TN\), since \(\sigma_{1},\cdots,\sigma_{n-1}\) are arbitrary sections restricting to sections of \(H\). Similarly, if \(\sigma_{1}|_{N},\cdots,\sigma_{n}|_{N}\in\Gamma(H)\), then \(\phi_{\sigma_{1}}|_{H^{\circ}}=0,\cdots,\phi_{\sigma_{n}}|_{H^{\circ}}=0\), hence we have
\[\{\phi_{\sigma_{1}},\cdots,\phi_{\sigma_{n}}\}=\phi_{[\sigma_{1},\cdots,\sigma _{n}]}\]
which implies that \([\sigma_{1},\cdots,\sigma_{n}]|_{N}\in\Gamma(H)\). This shows that \(H\) is an \(n\)-Lie subalgebroid.
Conversely, if \(H\) is an \(n\)-Lie subalgebroid, then for all \(\sigma_{1}|_{N},\cdots,\sigma_{n}|_{N}\in\Gamma(H)\), and all \(f_{1}|_{N},\cdots,f_{n}|_{N}=0\), the Nambu-Poisson bracket
\[\{\phi_{\sigma_{1}},\cdots,\phi_{\sigma_{n}}\} = \phi_{[\sigma_{1},\cdots,\sigma_{n}]},\] \[\{\phi_{\sigma_{1}},\cdots,\phi_{\sigma_{n-1}},p^{*}f_{i}\} = p^{*}(\rho(\sigma_{1}\wedge\cdots\wedge\sigma_{n-1})f_{i}),\] \[\{\phi_{\sigma_{1}},\cdots,\phi_{\sigma_{k}},p^{*}f_{k+1},\cdots,p^{*}f_{n}\} = 0,\ \ \forall\ 0\leq k\leq n-2\]
all restrict to \(0\) on \(H^{\circ}\). Since these functions generate the vanishing \(n\)-Lie ideal of \(H^{\circ}\) inside \(C^{\infty}(E^{*})\), it follows that this ideal is an \(n\)-Lie subalgebra; that is, \(H^{\circ}\) is coisotropic.
**Remark 6.8**.: _In Poisson geometry, there exists a nice symmetry:_
* _For a Poisson manifold \((M,\pi)\), we have that \(N\subseteq M\) is a coisotropic submanifold if and only if \((TN)^{\circ}\subseteq T^{*}M\) is a Lie subalgebroid._
* _For a Lie algebroid_ \(E\)_, a vector subbundle_ \(H\subseteq E\) _is a Lie subalgebroid if and only if_ \((H)^{\circ}\subseteq E^{*}\) _is a coisotropic submanifold._
_However, for a Nambu-Poisson manifold \((M,\pi)\), where \(\pi\in\mathfrak{X}^{l}(M),\ \ l\geq 3,\) the first statement is no longer true, since \(T^{*}M\) is not an \(n\)-Lie algebroid._
**Definition 6.9**.: _We denote by \(\mathcal{VB}^{\vee}{}_{Nambu}\) the category of vector bundles with linear Nambu-Poisson structures of rank \(n\), with morphisms that are Nambu-Poisson relations._
**Theorem 6.10**.: _Let \(E_{1}\to M_{1}\) and \(E_{2}\to M_{2}\) be two \(n\)-Lie algebroids of rank \(n\). Then \(\Phi_{E}:E_{1}\to E_{2}\) is an \(n\)-Lie algebroid morphism if and only if the dual comorphism \(\Phi_{E^{*}}:E_{1}^{*}\dashrightarrow E_{2}^{*}\) is a Nambu-Poisson relation. We conclude that there is an equivalence of categories between \(\mathcal{VB}^{\vee}{}_{Nambu}\) and \(\mathcal{LA}\)._
Proof.: By the definition of \(n\)-Lie algebroid morphisms, we have that \(\Phi_{E}\) is an \(n\)-Lie algebroid morphism if and only if its graph is an \(n\)-Lie subalgebroid. By Proposition 6.7 and the definition of Nambu-Poisson relations, we conclude that this is the case if and only if the dual comorphism \(\Phi_{E^{*}}:E_{1}^{*}\dashrightarrow E_{2}^{*}\) is a Nambu-Poisson relation.
|
2301.13758 | Learning, Fast and Slow: A Goal-Directed Memory-Based Approach for
Dynamic Environments | Model-based next state prediction and state value prediction are slow to
converge. To address these challenges, we do the following: i) Instead of a
neural network, we do model-based planning using a parallel memory retrieval
system (which we term the slow mechanism); ii) Instead of learning state
values, we guide the agent's actions using goal-directed exploration, by using
a neural network to choose the next action given the current state and the goal
state (which we term the fast mechanism). The goal-directed exploration is
trained online using hippocampal replay of visited states and future imagined
states every single time step, leading to fast and efficient training.
Empirical studies show that our proposed method has a 92% solve rate across 100
episodes in a dynamically changing grid world, significantly outperforming
state-of-the-art actor critic mechanisms such as PPO (54%), TRPO (50%) and A2C
(24%). Ablation studies demonstrate that both mechanisms are crucial. We posit
that the future of Reinforcement Learning (RL) will be to model goals and
sub-goals for various tasks, and plan it out in a goal-directed memory-based
approach. | John Chong Min Tan, Mehul Motani | 2023-01-31T16:47:09Z | http://arxiv.org/abs/2301.13758v2 | # Learning, Fast and Slow:
###### Abstract
Model-based next state prediction and state value prediction are slow to converge. To address these challenges, we do the following: i) Instead of a neural network, we do model-based planning using a parallel memory retrieval system (which we term the _slow_ mechanism); ii) Instead of learning state values, we guide the agent's actions using goal-directed exploration, by using a neural network to choose the next action given the current state and the goal state (which we term the _fast_ mechanism). The goal-directed exploration is trained online using hippocampal replay of visited states and future imagined states every single time step, leading to fast and efficient training. Empirical studies show that our proposed method has a 92% solve rate across 100 episodes in a dynamically changing grid world, significantly outperforming state-of-the-art actor critic mechanisms such as PPO (54%), TRPO (50%) and A2C (24%). Ablation studies demonstrate that both mechanisms are crucial. We posit that the future of Reinforcement Learning (RL) will be to model goals and sub-goals for various tasks, and plan it out in a goal-directed memory-based approach.
## 1 Introduction
Humans learn quickly, while Reinforcement Learning (RL) takes millions of time steps to learn how to perform tasks such as locomotion (Schulman et al., 2017) or Atari games (Mnih et al., 2013; Hafner et al., 2020). We posit that the traditional focus of maximizing reward in an optimization fashion (Sutton and Barto, 2018) for RL would entail the need to constantly explore the environment even after solving in order to find the optimal path, leading to slow convergence to the solution path. This constant exploration may be required for optimization-based games such as chess and Go in order to continually improve (Silver et al., 2016, 2017; Schrittwieser et al., 2020), and indeed, human masters in these games spend years to perfect and hone their skill. However, in most real-life tasks such as navigation, locomotion or even deciding what to eat for lunch, optimality may not be required. Rather, fast learning and decision making should be prioritized in order to survive in a fast-paced world. Such a satisficing agent could perhaps be used in self-driving cars whereby the environment changes frequently. In such environments, a pursuit of optimality is not just sample intensive and impractical, but can be detrimental to adaptive learning as a once-optimal policy might need to be unlearned to do well should the environment change.
We introduce a type of online RL which does not seek to optimize, but rather, to satisfice. When we remove optimality as a hard constraint, we can develop agents which learn and adapt faster to changing environments. Our proposed approach consists of two parts (see **Fig. 1**):
**Goal-Directed Mechanism (Fast).** Humans are typically goal-directed, and imbuing this pursuit of a goal to an AI system could lead to efficient exploration of an environment. This is implemented via a goal-conditioned neural network.
**Memory-based Mechanism (Slow).** Humans typically use memory to guide selection of actions, and doing so can lead to finding a solution path based on past experiences. This is implemented via hash table storage and retrieval.
## 2 Preliminaries - Modeling the World
There have been a series of works that utilize world models to do next state prediction. Such model-based methods have been used successfully in MuZero for Atari games, chess, Go, shogi (Schrittwieser et al., 2020), as well as SimPLE (Kaiser et al., 2019), Dreamer (Hafner et al., 2019), Dreamer v2 for Atari games (Hafner et al., 2020) and Dreamer v3 for multiple domains (Hafner et al., 2023). These model-based methods are generally more sample efficient (Sutton and Barto, 2018), but the downside is that the world models take a long time to learn. This is notably so in MuZero which takes 80 GPU days to achieve superhuman performance in Atari games (see Table 3 in Hafner et al. (2020)), while a human just needs 2 hours to be able to perform sufficiently well in the games (Mnih et al., 2013). Moreover, such a next state prediction can be very lossy, as can be seen in Fig. 5 of Hafner et al. (2023) where the world model prediction deviates from the ground truth after just 5 frames.
### Difficulty of next state prediction
We perform an experiment to illustrate this point more concretely. Here, we contrast the performance of next action prediction (policy network) versus next state prediction (world model) given the current state and the goal state. The environment used was either a 10x10 grid or a 20x20 grid, with the actions from the set {Up, Down, Left, Right, Don't Move}. We use a two-layer Multi-Layer Perceptron (MLP) with 128 nodes each and output to a final softmax layer of 5 nodes for next action prediction, and 10/20 nodes for next state prediction. We train the model using categorical cross-entropy loss using 1000 samples (See **Appendix A** for more details). The correct next action and next state corresponds to the fastest next step to be taken in order to reach the goal, preferring moves along the x-axis first rather than y-axis. At epoch 50 for actions and epoch 200 for next state prediction, we introduce a change in some predictions by changing the preference to prefer moves on y-axis first rather than x-axis. We seek to find out two things: i) how fast it takes for the model's predictions to converge to the ground truth, ii) how fast the trained model takes to adapt to a prediction change.
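The setup above can be made concrete with a short sketch, assuming PyTorch. Only the two 128-unit hidden layers and the categorical cross-entropy loss come from the description; the optimizer, learning rate and batch handling are illustrative assumptions rather than the exact training code.

```python
# Minimal sketch of the two prediction heads compared in this experiment.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, out_dim),   # logits; softmax is folded into the loss
        )

    def forward(self, x):
        return self.net(x)

# Input: current (x, y) and goal (x, y); outputs are class logits.
action_model = MLP(in_dim=4, out_dim=5)   # {Up, Down, Left, Right, Don't Move}
state_model = MLP(in_dim=4, out_dim=10)   # e.g. one coordinate of the next state (10x10 grid)

loss_fn = nn.CrossEntropyLoss()           # categorical cross-entropy
optimizer = torch.optim.Adam(action_model.parameters(), lr=1e-3)

def train_step(inputs, targets):
    """One update of the next-action head on a batch of (state, goal) pairs."""
    optimizer.zero_grad()
    loss = loss_fn(action_model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```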
Figure 1: Learning, Fast and Slow. **(Left)** Fast mechanism for inference using Neural Network. **(Right)** Slow mechanism for inference using parallel memory retrieval.
### Results for next action/state prediction
**Figs. 2 and 3** detail the accuracy of predicting the next action and state respectively in a 10x10 or 20x20 grid world.
_Q1. How fast does the model take to converge?_
For the 10x10 grid, we can see that the next action prediction only takes 10 epochs to converge (accuracy of 0.99 and above), while the next state prediction takes approximately 200 epochs. For the 20x20 grid, the next action prediction takes about 20 epochs to converge, while the next state prediction has not converged even after 200 epochs. The next state prediction takes almost 20 times as long, just judging by the results of the 10x10 grid. This highlights the inefficiencies of trying to learn the next state prediction from observation.
_Q2. How fast does the model adapt to a prediction change?_
For both the 10x10 grid and 20x20 grid, the prediction change of the next action and next state was learned in a quicker time than the time it took for convergence originally. Notably, it only took 5 and 15 epochs to converge for the actions for the 10x10 grid and 20x20 grid respectively. Correspondingly, it took 150 and more than 200 epochs for the next state prediction to converge. The next state prediction takes almost 30 times as long, just judging by the results of the 10x10 grid. This again highlights the inefficiencies of trying to learn the next state prediction from observation.
**Interpretation of Results.** The results show that next action prediction is much faster to learn than next state prediction, and we design our RL agent with this in mind. We will want to utilize this next action prediction in the form of a goal-conditioned neural network to predict actions, very similar to the policy network in that of Actor-Critic models. Also, we will not want to use neural networks to do next state prediction, and instead, utilize memory retrieval to do model-based planning.
## 3 Incorporating Goals - Reward is not enough
Maintaining a value of each state (or state-action pair) is typical in RL and can serve as a way to cache intermediate states. If the environment is unchanging, this can be useful for determining how good the next state is, such as in unchanging board environments like Go (Silver et al., 2016). However, since correctly evaluating each state's value takes time, it will be difficult to evaluate the value exactly if the environment is constantly changing. Moreover, even within the same environment, a variant of the task usually entails a different reward function (i.e. navigating to different locations), and this leads to added difficulties in learning the state value function. In such situations, it may be better to specify the problem not in terms of maximizing reward, but rather, to fulfill a goal.
In contrast to the standpoint by Silver et al. (2021) that rewards are enough, and are "sufficient to express a wide variety of goals", we posit that rewards are not crucial to shape an agent's behavior if there is already a sufficiently good way to model goals into the system. For cases such as doing well in an Atari game with arbitrary external rewards associated with each state, we may need to model
such a value function to do well. However, if we are thinking about navigation in real-life whereby we already have an end-goal in mind, such value modeling may not be necessary.
Indeed, for sparse reward settings (Ecoffet et al., 2019, 2021), the usefulness of reward as a signal is diminished and curiosity-based intrinsic rewards such as that in Intrinsic Curiosity Module (ICM) (Pathak et al., 2017) may be needed to boost the reward signal. The fact that an agent requires alternate rewards to learn in a sparse reward setting hints that reward alone is not sufficient for decision making.
It is also worth highlighting that even for cases whereby reward is successfully used to solve the problem, for instance in Atari games, sticky actions (the next action has a high chance of repeating the previous one) may still be required to explore sufficiently large parts of the environment in order to solve it (Ecoffet et al., 2019, 2021). In fact, this sticky action is reminiscent of an agent with a goal and heading straight towards it, and is very different from traditional reward-based explore-exploit agents which tend to display erratic behavior as they may sometimes explore instead of exploit while heading towards an objective.
Hence, we posit that in order to have efficient learning for RL, it is necessary to include some form of goal-directed behavior. In fact, numerous works have utilized a form of goal-based learning (Schaul et al., 2015; Andrychowicz et al., 2017; Warde-Farley et al., 2018; Colas et al., 2019). Here, we propose to do this using a goal-conditioned neural network to predict the next action.
## 4 Memory for efficient learning
Traditional RL systems just keep track of scalar rewards. This is typical in TD-Learning or Q-Learning (Sutton and Barto, 2018) or their neural net equivalents such as Deep Q Networks (Mnih et al., 2013) or Proximal Policy Optimization (PPO) (Schulman et al., 2017). Recently, systems which leverage external memory, such as Go-Explore or its variants (Ecoffet et al., 2019, 2021; Tan and Motani, 2022), has been shown to lead to improvements over just traditional reward-based mechanisms for Reinforcement Learning. In Go-Explore, the memory stores the trajectory of the shortest path and best reward accumulated so far for any visited state. Utilizing such a memory can lead to faster identification of promising states than just relying on a value estimate alone. We need not follow the exact memory mechanism deployed in these works, but just incorporate the idea of leveraging external memory for more efficient learning than just using the neural network weights.
Combining external memory with cognitive architectures has also been done in work such as Soar (Laird, 2019). A memory retrieval mechanism based on state similarity to infer value is also done in Botvinick et al. (2019). More recently, there has been work which uses large scale memory retrieval for learning in Go, which can achieve better win rates by just changing the external dataset without even changing the parameters of the agent (Humphreys et al., 2022). We seek to build upon this work and instead of just treating external memory as a static database, we add and remove memories according to the agent's experience in order to make the agent more performant and adaptable to a changing environment.
### Memory as a proxy to world models
If we do not need to pursue optimality, we can leverage external memory for world modeling instead of learning the exact transition probabilities between states. Using external memory for world modeling has a few key advantages:
1. It solves the intractability problem of probability distributions if there is an unbounded number of outcomes, as probability can just be calculated on the small subset of transitions within the memory
2. It is quick to update, and a change in the stored memory can immediately lead to a change in agent behavior
3. The memory can be dynamically adapted to be in line with the agent's environment - we do not need to model the entire Markov Decision Process (MDP); we just need to model the portion which is relevant for the agent.
Rather than modeling probabilities of the transition to next states in the MDP, we utilize a hash table with the current state as the key, and the future action and states as the values. For instance, for an MDP denoted by **Fig. 4**, the corresponding hash table is **Table 1**.
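As a concrete illustration, the transition memory of Table 1 can be held in an ordinary dictionary. The sketch below is one possible reading of this design; names such as `store` are ours, not taken from the released code.

```python
# Hash-table world model: the key is the current state, the values are the
# (action, next state) pairs that have actually been observed.
from collections import defaultdict

memory = defaultdict(list)   # state -> list of (action, next_state)

def store(state, action, next_state):
    if (action, next_state) not in memory[state]:
        memory[state].append((action, next_state))

# Transitions of the MDP in Fig. 4
store(1, "A", 2)
store(1, "B", 3)
store(2, "C", 1)
store(3, "D", 2)

print(memory[1])   # [('A', 2), ('B', 3)] -- only observed transitions are stored
```

Because only observed transitions are stored, no transition probabilities need to be estimated, and adding or deleting an entry changes the agent's model of the world immediately.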
### Different types of memory
Typically, one refers to memory in the deep learning literature as that of the memory of the weights in the neural network, much like Long Term Potentiation in synapses of neurons (Lynch, 2004). However, there exists another form of memory which could be useful, and that is the kind of memory that is used in hard disks on computers - readable and writable, and provides reliable storage. While biological organisms typically use the former, the latter kind of memory has advantages of reliability and quick updating. The difference between memory in a neural network and memory of an external storage is illustrated in **Table 2**. Neural networks and external memory retrieval/storage have their own advantages and disadvantages, and we posit that a combination of both of them is best.
## 5 Algorithm
Having established the benefits of both the fast goal-directed mechanism and slow memory-based retrieval mechanism, we detail a workable algorithm to implement both mechanisms in a single agent. Of note, fast and slow mechanisms have been analyzed for various domains (Kahneman, 2011; Anthony et al., 2017; Botvinick et al., 2019; Pham et al., 2021), but ours is unique for the case of online RL.
We begin with an empty episodic memory and overall memory bank. At the beginning of each episode, we reset the episodic memory bank, while allowing the overall memory bank to carry over from previous episodes.
**Goal-Directed Exploration.** Firstly, our agent needs to determine an action to take given the current state and the goal state. One way to do this will be to choose the action directly from the goal-directed neural network. This network will take in a start state and goal state as inputs, and output the probabilities of taking the next action via a softmax layer output over all the discrete actions. Our model uses 2 MLP layers of 128 nodes as the hidden layers. Mathematically, \(p=f(s|g,\theta)\), where \(p\) is the probability vector, \(f(\cdot)\) is a learnable function mapper parameterized by the neural network weights \(\theta\), \(s\) is the start state, and \(g\) is the goal state. We treat these probabilities as the exploitation value, and add in count-based exploration similar to that in Upper Confidence Bounds for Trees
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Key** & **Value** & **Value** \\
**(State)** & **(Action)** & **(Next State)** \\ \hline
1 & A & 2 \\ \hline
1 & B & 3 \\ \hline
2 & C & 1 \\ \hline
3 & D & 2 \\ \hline \end{tabular}
\end{table}
Table 1: Memory hash table to store environmental transitions
Figure 4: A model of the world, with states labeled as S1, S2 and S3, and action transitions labeled as A, B, C, D
(UCT) in Monte Carlo Tree Search (MCTS) (Browne et al., 2012). The next action will then be given by:
\[a^{*}=\operatorname*{arg\,max}_{a}(p(a)-\alpha\sqrt{numvisits(a)}), \tag{1}\]
where \(p(a)\) is the probability of each action generated by the goal-directed network, \(\alpha\) is the exploration constant set to 1, \(numvisits(a)\) is the number of times the action \(a\) has been sampled from the current state \(s\) and is retrieved from episodic memory.
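A minimal sketch of this action choice is given below; `policy_probs` and `visit_counts` are assumed helpers (the goal-directed network's softmax output and the episodic visit counter respectively), not functions from the released code.

```python
import numpy as np

ALPHA = 1.0   # exploration constant in Eq. (1)

def choose_action(state, goal, policy_probs, visit_counts):
    """Pick the action maximising p(a) - alpha * sqrt(numvisits(a)), as in Eq. (1)."""
    p = np.asarray(policy_probs(state, goal))                      # exploitation term
    n = np.array([visit_counts(state, a) for a in range(len(p))])  # episodic counts
    return int(np.argmax(p - ALPHA * np.sqrt(n)))
```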
The beauty of this mechanism is that the goal-directed neural network can serve as a compass to guide the initial action. The action may not be the best possible one, but it just needs to be approximate, much like finding how to get to a tower in the middle of a forest and just making a first step towards the tower based on its general direction. Initially, we purely use the goal-directed mechanism as a guide, as the exploration term will be 0 when there is no memory of the current state in the episodic memory. Should the sequence of actions not achieve the desired results and we return to one of the already explored states in the episodic memory, it can then be influenced by the exploration term as it will bias actions that are not tried before.
**Memory-based Retrieval and Planning.** Secondly, we will query the memory-based retrieval of a sequence of actions to see if we are able to reach the goal state. This memory-based retrieval is done in parallel across \(B\) multiple branches, much like how parallel processing is done in minicolumns of the neocortex (Edelman and Mountcastle, 1982). Each branch will match the current state to memory and retrieve the corresponding next state and action. They will continue to match until maximum lookahead depth \(D\) is reached or until the goal state is found. We then select the branch with the shortest trajectory to the goal state, if there is a found trajectory. The algorithm for memory retrieval is detailed in **Algorithm 1**. If we manage to find a trajectory to the goal state, we then take the first action of this trajectory and override the action found by the goal-directed mechanism, as this action is found by lookahead and hence more precise. Note that we intentionally only use memory to obtain the next state for lookahead and not a neural network next state predictor. This is because such a next state predictor takes a long time to converge and using it for planning may lead to lossy lookahead, as explored in **Section 2.1**.
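One possible reading of this lookahead is sketched below (cf. Algorithm 1). It assumes the dictionary-style transition memory from Section 4.1 and random branch expansion; the exact branch-expansion policy of Algorithm 1 may differ.

```python
import random

def plan_with_memory(state, goal, memory, branches=100, depth=20):
    """Roll out `branches` paths through memory, up to `depth` steps each,
    and return the shortest action sequence that reaches `goal` (else None)."""
    best = None
    for _ in range(branches):
        s, trajectory = state, []
        for _ in range(depth):
            if not memory.get(s):
                break                                   # no stored transition from s
            action, next_state = random.choice(memory[s])
            trajectory.append(action)
            s = next_state
            if s == goal:
                if best is None or len(trajectory) < len(best):
                    best = list(trajectory)
                break
    return best
```

If a trajectory is found, its first action overrides the goal-directed proposal, precisely because it is grounded in transitions the agent has actually experienced.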
**Perform Action.** Next, we perform the desired action and obtain the next state and reward from the environment.
\begin{table}
\begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline & **Neural Network** & **External Memory Storage / Retrieval** \\ \hline Inference & Fast (one pass) & Slow (requires multiple lookahead retrievals) \\ \hline Learning & Slow (requires many gradient updates) & Fast (instantaneous change by changing memory bank) \\ \hline Generalization & Can interpolate well & Need the right abstraction space to store memory to generalize \\ \hline Storage & Unreliable. Previously learned input-output relations may be changed with update of weights & Reliable. Previously stored memory will never be changed unless intentionally discarded \\ \hline \end{tabular}
\end{table}
Table 2: Comparison between using neural network weights and external memory storage/retrieval
**Updating Memory Bank.** We update the episodic memory and the overall memory with this transition. The key of the memory transition is the current state, and the values are the action and the next state, as shown in **Table 1**. In order to cater for changes in dynamics of the environment, we remove all stored memories in the episodic memory and overall memory that conflict with the current transition (e.g. if a State 1 and Action 1 currently leads to State 2, we remove all memories with State 1 and Action 1 not leading to State 2). This also has the added effect of increasing the exploration bias in (1) for wrongly predicted states and hence could serve a similar function as ICM.
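One way to implement this update is sketched below; it stores the new transition and deletes any stored entry that disagrees with it. This is our illustrative reading of the rule, not the authors' exact code.

```python
def update_memory(memory, state, action, next_state):
    """Store (state, action, next_state) and drop conflicting memories:
    entries with the same state and action but a different next state."""
    entries = memory.setdefault(state, [])
    entries[:] = [(a, s2) for (a, s2) in entries
                  if not (a == action and s2 != next_state)]
    if (action, next_state) not in entries:
        entries.append((action, next_state))
```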
**Updating Goal-Directed Neural Network.** Hippocampal replay (see **Fig. 5**) has been known to help with memory consolidation and decision making (Joo & Frank, 2018). Previous works have attempted to model hippocampal replay by sampling from a replay buffer to learn the transitions (Mnih et al., 2013, 2015; Schaul et al., 2015). For efficient learning, we posit that hippocampal replay should also be used to train the goal-directed neural network. The algorithm for hippocampal replay is detailed in **Algorithm 2**.
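The replay targets described in Fig. 5 can be sketched as a relabelling step performed at every time step; all names below are illustrative, and the details of Algorithm 2 (e.g. how many pairs are sampled) are not reproduced here.

```python
def replay_training_pairs(visited_states, visited_actions, current_state,
                          imagined_states, imagined_actions, goal_state):
    """Build (input=(start, goal), target=next action) pairs for online training:
    (1) past trajectory relabelled with the current state as the goal,
    (2) imagined future trajectory (if any) relabelled with the actual goal."""
    pairs = []
    for s, a in zip(visited_states, visited_actions):
        pairs.append(((s, current_state), a))
    for s, a in zip(imagined_states, imagined_actions):
        pairs.append(((s, goal_state), a))
    return pairs
```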
Overall, we can keep repeating the entire algorithm until the episode is completed (i.e. reward 1 attained), or until a certain amount of time steps are reached. The overall goal-directed memory-based algorithm is detailed in **Algorithm 3** in **Appendix B**. Our source code is made publicly available at [https://github.com/tanchongmin/Learning-Fast-and-Slow](https://github.com/tanchongmin/Learning-Fast-and-Slow).
## 6 Experimental Setup
### Considerations
**Online Learning.** The key aim of the experiment was to evaluate the performance of an online learning agent. Hence, there is no training and testing phase and we evaluate the agent starting from the very first episode.
**No Oracle World Model.** We want to provide the agent with minimal hints or guidance to make it realistic. As such, there is no perfect world model given to the agent for use for planning - the agent has to learn about the world from its interactions, and it has to learn it fast.
### Environment
The environment used is a 2D grid world, where there are \(n\) by \(n\) squares, where \(n\) is the grid size. There are also some grid squares which are denoted as obstacles and are not traversable. The agent starts off at a grid square and is supposed to head towards the door (goal) position. We have two configurations of the environment used:
1. **Static.** There are no obstacles. The start point is at \((0,0)\) (top left) and end point is at \((n-1,n-1)\) (bottom right). This is to evaluate learning on typical RL environments.
2. **Dynamic.** The obstacles change mid-way (episode 50), and the start and end points vary randomly with each episode. This is a difficult environment to evaluate learning on a continuously changing environment, which is not frequently studied in RL. See **Fig. 6** for an illustration.
**State Space.** The agent is provided with both its own position and the door (goal) position.
**Reward.** This is a sparse reward environment and the agent will only be counted as completing the episode and receive a reward of 1 if it manages to reach the door before \(n\times n\) time steps. Otherwise, it will receive a reward of 0.
**Action Space.** The available action space is discrete from the set {Up, Down, Left, Right}. There is no wraparound, and the agent will remain in its existing position should it collide with the edges of the grid or with an obstacle.
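For reference, the navigation task can be reproduced in a few lines of code. The sketch below follows the specification above (n-by-n grid, non-traversable obstacles, no wraparound, sparse reward of 1 within n x n steps); it is an illustrative re-implementation, not the environment from the released code.

```python
class GridWorld:
    ACTIONS = {0: (0, -1), 1: (0, 1), 2: (-1, 0), 3: (1, 0)}   # Up, Down, Left, Right

    def __init__(self, n=10, obstacles=(), start=(0, 0), door=None):
        self.n, self.obstacles = n, set(obstacles)
        self.start = start
        self.door = door if door is not None else (n - 1, n - 1)
        self.reset()

    def reset(self):
        self.pos, self.t = self.start, 0
        return self.pos + self.door                   # state: own position and door position

    def step(self, action):
        dx, dy = self.ACTIONS[action]
        nxt = (self.pos[0] + dx, self.pos[1] + dy)
        if 0 <= nxt[0] < self.n and 0 <= nxt[1] < self.n and nxt not in self.obstacles:
            self.pos = nxt                            # otherwise stay in place
        self.t += 1
        reward = 1 if self.pos == self.door else 0
        done = reward == 1 or self.t >= self.n * self.n
        return self.pos + self.door, reward, done
```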
### Agents
We use the following agents:
1. **Fast & Slow Agent.** This is the proposed goal-directed (fast), memory-based (slow) agent. We use lookahead depth of 20 and 100 parallel branches for memory retrieval.
2. **Actor-Critic Agents - PPO, TRPO, A2C.** We use three competitive on-policy actor critic algorithms - PPO (Schulman et al., 2017), Trust Region Policy Optimization (TRPO) (Schulman
Figure 5: Hippocampal replay in mice, which showcases forward play (pre-play) and reverse play (replay), which are involved in memory retrieval and consolidation for processes such as decision-making. Extracted from Fig. 2 of (Joo & Frank, 2018), with additional illustrations of a blue and purple line for goal-directed learning at the bottom (\(S\) denoting start state, and \(G\) denoting goal state). There is replay occurring for both 1) past visited states and 2) future imagined states. We use these insights in designing **Algorithm 2** for consolidating learned experiences. We utilize this replay to learn a goal-directed policy 1) with any state along the past trajectory as the start state and the goal state as the current state (blue line), and 2) with any state along the future imagined trajectory (if any) as the start state and the goal state as the actual goal state (purple line).
et al., 2015) and Advantage Actor Critic (A2C) (Mnih et al., 2016). We use Stable Baselines 3 (Raffin et al., 2021) for reliable re-implementations of these RL algorithms. In order to give these methods the best performance in our environment, we do grid search over the following learning rates: \([0.1,0.01,0.001,0.0001]\) as well as their initial default values and select the best performing one for our environment. The eventual learning rates selected were 0.0003 for PPO (default), 0.001 for TRPO (default) and 0.0001 for A2C.
3. **Q-Learning Agent.** This agent uses Q-learning, with random action selection for first few episodes, and thereafter greedy action selection. The number of episodes for random selection was selected using grid search over the entire integer interval from 0 to 100. This serves as a baseline for the efficacy of value-based methods. Note that we did not use Deep Q Network (DQN) (Mnih et al., 2013) as experiments with it failed to learn within 100 episodes, which suggests that DQN is more sample inefficient than tabular Q-learning for our environment. Refer to **Appendix C** for details.
### Evaluation Criteria
We evaluate the agents across 100 episodes purely with online training (there is no test and training split). We use two different metrics for evaluation, as detailed below:
1. **Solve Rate.** This is the percentage of episodes in which the agent reaches the goal. This is a proxy for adaptability.
2. **Steps Above Minimum.** This is the number of time steps the agent takes above the minimum possible (computed using Breadth First Search). If the agent fails to complete the environment, the time step will then be the maximum time step. This is a proxy for efficiency.
## 7 Results
The steps per episode for Fast & Slow, PPO, TRPO, A2C and Q-Learning agents for the 10x10 static environment (minimum steps is 18) is shown in **Figs. 7 and 8**.
The solve rate and steps above minimum for Fast & Slow, PPO, TRPO and A2C for the 10x10 dynamic environment are detailed in **Tables 3 and 4** respectively. Due to the slow learning (> 50 episodes to converge) of the Q-Learning agent on the static environment, we do not evaluate it on the dynamic environment. Refer to **Appendix D** for the detailed results for each episode.
_Q3: How does the Fast & Slow approach compare to traditional actor-critic/value-based approaches in a static environment?_
**Adaptability.** In terms of solve rate, we can see that Fast & Slow and TRPO are the best (100%), followed by PPO (96%), A2C (95%) and then Q-learning (32%). In fact, Q-learning requires approximately 75 episodes before it learns via random exploration, highlighting the inefficiencies of such a value-based method. The actor-critic methods perform substantially better and solve the environment within 10 episodes. This is likely because the critic network is updated by the returns-to-go and hence learn the value of each state faster than one-step Bellman updates. For the Fast & Slow method, the ability to combine both mechanisms give it the edge, enabling it to solve the environment the fastest.
**Efficiency.** The Fast & Slow network has the lowest steps above minimum (7), followed by TRPO (366), PPO (576), A2C (1090) and Q-Learning (5949). The superiority of Fast & Slow is likely due to the benefit of the slow memory mechanism finding the shortest trajectory in memory, and also being able to repeat a successful solution path.
_Q4: How does the Fast & Slow approach compare to traditional actor-critic approaches in a dynamic environment?_
**Adaptability.** In terms of solve rate, we can see that Fast & Slow performs the best (92%), followed by PPO (54%), TRPO (50%), then A2C (24%). This highlights that traditional value-based methods can be slow to converge in the presence of varying goals in each episode. Having a goal-directed approach to infer the best action given the goal as in Fast & Slow may be the better approach for a continually changing environment. It is also to be noted that there is learning in all algorithms except in TRPO, as even when the obstacles change, the last 50 episodes still have a higher solve rate than the first 50. This shows that an explicit memory mechanism is useful for learning, and while actor-critic approaches do have some form of memory in the weights, it is not as fast to adapt to changes.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Agent** & \multicolumn{3}{c|}{**Steps Above Minimum**} \\ \cline{2-4} & First 50 episodes & Last 50 episodes & Total \\ \hline Fast \& Slow & **923** & **555** & **1478** \\ \hline PPO & 2872 & 2336 & 5208 \\ \hline TRPO & 2669 & 3001 & 5670 \\ \hline A2C & 4032 & 3774 & 7806 \\ \hline \end{tabular}
\end{table}
Table 4: Efficiency of methods evaluated by steps above minimum on a dynamic 10x10 navigation task. Lower is better (in bold).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Agent** & \multicolumn{3}{c|}{**Solve Rate(\%)**} \\ \cline{2-4} & First 50 episodes & Last 50 episodes & Total \\ \hline Fast \& Slow & **88** & **96** & **92** \\ \hline PPO & 50 & 58 & 54 \\ \hline TRPO & 56 & 44 & 50 \\ \hline A2C & 20 & 28 & 24 \\ \hline \end{tabular}
\end{table}
Table 3: Adaptability of methods evaluated by solve rate on a dynamic 10x10 navigation task. Higher is better (in bold).
**Efficiency.** In terms of steps above minimum, we can see that Fast & Slow performs the best (1478), followed by PPO (5208), TRPO (5670), then A2C (7806). In fact, Fast & Slow performs so well that it takes 4 times fewer steps above minimum than the other algorithms.
## 8 Ablation Studies
Having established the superior performance of our algorithm compared to other state-of-the-art algorithms, we conduct ablation studies to understand the components of the Fast & Slow approach. We ablate by removing the fast and/or slow mechanisms, and by changing the memory-retrieval hyperparameters: lookahead depth (5, 10 and 50) and number of parallel branches (10, 50 and 200).
The solve rate and steps above minimum for the ablation study are detailed in **Tables 5 and 6** respectively. More detailed results can be found in **Appendix E**. Note that the baseline Fast & Slow network uses 20 lookahead depth and 100 parallel branches.
_Q5: How much do the fast and slow mechanisms contribute to performance?_
We can see that both the fast and slow mechanisms are crucial: removing either one leads to poorer performance in terms of both adaptability and efficiency, though still comparable to the actor-critic methods analyzed in **Tables 3 and 4**. The biggest impact comes from removing both the fast and slow mechanisms; relying on the count-based mechanism alone is not sufficient for good performance.

The fast mechanism is actually more important than the slow one for adaptability, as the solve rate without the slow mechanism is 71%, compared to 51% without the fast mechanism. This may be because a good initial direction from the goal-directed mechanism helps far more than count-based exploration alone in reaching the end goal. However, the slow mechanism compensates near the end, as it is able to find the goal once it is near enough. Hence, the efficiency is similar without either the fast or the slow mechanism.
_Q6: How would performance vary if we change the hyperparameters of the Fast & Slow approach?_
In general, increasing the lookahead depth and the number of parallel threads helps to boost both the adaptability and the efficiency of the Fast & Slow approach. This makes intuitive sense, as searching deeper and with more branches uncovers more possible (shorter) trajectories to the goal state, which leads to a higher solve rate and better efficiency. A schematic sketch of such a depth- and branch-limited memory search is given below.
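The following is a schematic sketch, not the authors' implementation, of how a lookahead depth and a number of parallel branches could bound a search over remembered transitions; the memory format and the random branch sampling are illustrative assumptions.

```python
import random

def memory_lookahead(memory, state, goal, depth=20, branches=100):
    """Schematic depth- and branch-limited search over remembered transitions.
    `memory` maps a state to a list of (action, next_state) pairs seen previously.
    Returns the first action of the shortest remembered path to `goal`, or None."""
    best_len, best_action = None, None
    for _ in range(branches):          # each branch is one sampled rollout from memory
        s, first_action = state, None
        for step in range(depth):      # bounded by the lookahead depth
            if not memory.get(s):
                break
            action, s = random.choice(memory[s])
            if first_action is None:
                first_action = action
            if s == goal:
                if best_len is None or step + 1 < best_len:
                    best_len, best_action = step + 1, first_action
                break
    return best_action
```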
## 9 Discussion
Overall, it can be seen that Fast & Slow achieves significant performance gains over state-of-the-art actor-critic models and traditional value-based methods like Q-learning in a goal-based navigation environment with a quantifiable goal state. In fact, Fast & Slow also scales well to dynamic environments with larger grid sizes such as 20x20 and 40x40. For 20x20, Fast &
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{**Agent**} & \multicolumn{3}{c|}{**Solve Rate(\%)**} \\ \cline{2-4} & First 50 episodes & Last 50 episodes & Total \\ \hline Baseline & 88 & 96 & 92 \\ \hline No Slow & 70 & 72 & 71 \\ \hline No Fast & 52 & 50 & 51 \\ \hline No Fast, Slow & 26 & 18 & 22 \\ \hline
5 lookahead depth & 82 & 96 & 89 \\ \hline
10 lookahead depth & 84 & 94 & 89 \\ \hline
50 lookahead depth & 88 & **98** & **93** \\ \hline
10 parallel threads & 88 & 96 & 92 \\ \hline
50 parallel threads & 84 & **98** & 91 \\ \hline
200 parallel threads & **90** & 96 & **93** \\ \hline \end{tabular}
\end{table}
Table 5: Ablation study on adaptability of Fast & Slow agent evaluated by the solve rate of the agents on a dynamic 10x10 navigation task. Higher is better (in bold).
Slow achieves an 85% solve rate compared to the best actor-critic's 18% (a 4.7-fold increase in performance). For 40x40, Fast & Slow achieves a threefold improvement, which shows the benefit of our proposed method. See **Appendix F** for details.
The fast and slow mechanisms are both critical: the fast goal-directed mechanism gives an overall initial direction that helps an agent explore a new environment, while the slow memory retrieval mechanism lets the agent use past experience to form a trajectory to the goal that guides its actions.
Moreover, owing to the parallelism of the memory retrieval mechanism, Fast & Slow has runtimes competitive with existing algorithms and completes an episode on the 10x10 environment in about 2-3 seconds on a commercial off-the-shelf (COTS) CPU, making it suitable for real-world deployment.
## 10 Future Work
**Multi-Agent Learning.** The beauty of the memory mechanism is that an agent need not learn only through its own experiences: it can internalize other agents' experiences into its memory and have its behavior policy adjusted immediately once the new memories are incorporated. Hence, we can have multiple agents in the same environment learning from the best-performing one.
**Generic Goal Setting.** In order to utilize Fast & Slow in domains without a quantifiable goal, one way to do so will be to use Natural Language Processing (NLP) means to vectorize a goal state via Transformer-like architectures (Vaswani et al., 2017). This has been successfully used in SayCan (Brohan et al., 2022) and it can lead to generic applications of our proposed method.
**Scaling to continuous domains.** We can map our count-based approach to continuous domains by using density models (Bellemare et al., 2016), or seek out approaches to abstract continuous space into discrete spaces so as to apply our algorithm to continuous state/action domains.
**Memory Forgetting.** Implementing a memory forgetting mechanism such as using the Ebbinghaus forgetting curve (Murre & Dros, 2015) could help to bias memories towards more recent ones that are more relevant to the environment.
## Acknowledgements
This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-01-003[T]).
|
2309.10473 | Capacitive crosstalk in gate-based dispersive sensing of spin qubits | In gate-based dispersive sensing, the response of a resonator attached to a
quantum dot gate is detected by a reflected radio-frequency signal. This
enables fast readout of spin qubits and tune up of arrays of quantum dots, but
comes at the expense of increased susceptibility to crosstalk, as the resonator
can amplify spurious signals and induce fluctuations in the quantum dot
potential. We attach tank circuits with superconducting NbN inductors and
internal quality factors $Q_{\mathrm{i}}$>1000 to the interdot barrier gate of
silicon double quantum dot devices. Measuring the interdot transition in
transport, we quantify radio-frequency crosstalk that results in a ring-up of
the resonator when neighbouring plunger gates are driven with frequency
components matching the resonator frequency. This effect complicates qubit
operation and scales with the loaded quality factor of the resonator, the
mutual capacitance between device gate electrodes, and with the inverse of the
parasitic capacitance to ground. Setting qubit frequencies below the resonator
frequency is expected to substantially suppress this type of crosstalk. | Eoin G. Kelly, Alexei Orekhov, Nico Hendrickx, Matthias Mergenthaler, Felix Schupp, Stephan Paredes, Rafael S. Eggli, Andreas V. Kuhlmann, Patrick Harvey-Collard, Andreas Fuhrer, Gian Salis | 2023-09-19T09:40:49Z | http://arxiv.org/abs/2309.10473v1 | # Capacitive crosstalk in gate-based dispersive sensing of spin qubits
###### Abstract
In gate-based dispersive sensing, the response of a resonator attached to a quantum dot gate is detected by a reflected radio-frequency signal. This enables fast readout of spin qubits and tune up of arrays of quantum dots, but comes at the expense of increased susceptibility to crosstalk, as the resonator can amplify spurious signals and induce fluctuations in the quantum dot potential. We attach tank circuits with superconducting NbN inductors and internal quality factors \(Q_{\mathrm{i}}>1000\) to the interdot barrier gate of silicon double quantum dot devices. Measuring the interdot transition in transport, we quantify radio-frequency crosstalk that results in a ring-up of the resonator when neighbouring plunger gates are driven with frequency components matching the resonator frequency. This effect complicates qubit operation and scales with the loaded quality factor of the resonator, the mutual capacitance between device gate electrodes, and with the inverse of the parasitic capacitance to ground. Setting qubit frequencies below the resonator frequency is expected to substantially suppress this type of crosstalk.
High-bandwidth readout of spin qubits can be achieved by radio-frequency (RF) reflectometry [1], where an RF signal is reflected off a resonator that is either connected directly to a gate of the quantum dot (QD) that defines the spin qubit, or to additional QDs that serve as charge sensors [2; 3]. The former approach is known as gate-based sensing and avoids the additional footprint of the charge sensors and the necessary leads connected to them [4; 5; 6; 7; 8]. Rather than detecting the absolute charge state of the spin qubit system, this method detects charge susceptibility in the form of a quantum capacitance [9]. Pauli spin blockade leads to a spin-dependent tunneling between two neighbouring QDs, which is seen as a variation in the resonator load capacitance [10] and thereby enables the readout of spin states.
The sensitivity of gate-based dispersive readout can be improved by increasing the internal quality factor \(Q_{\mathrm{i}}\) and reducing the parasitic capacitance \(C_{\mathrm{p}}\) of the resonator [11]. Both can be achieved by using a superconducting inductor fabricated from a thin film of a high-kinetic inductance material such as NbN, which also enables a small resonator footprint and is compatible with the magnetic fields necessary for spin qubit operation [12; 13; 14; 8; 15].
In this work, we show that attaching a high-quality factor resonator to the gate of a spin qubit device drastically increases the sensitivity of that gate to crosstalk with control pulses applied to neighbouring gates, e.g., to manipulate the spin state via electric-dipole spin resonance (EDSR) [16; 17; 18; 19; 20; 21; 22]. We introduce a method of quantifying such AC crosstalk in a dispersive readout setup and apply it to a double QD in a Si fin field-effect transistor (finFET) device with a tank circuit connected to the barrier gate. The tank circuit is composed of a high-kinetic-inductance NbN nanowire, providing a high \(Q_{\mathrm{i}}\approx 1500\) and a low \(C_{\mathrm{p}}\). The resonator is excited whenever control pulses on neighbouring gate lines spectrally overlap with its resonance frequency, giving rise to a strongly amplified modulation of the barrier gate and thereby of the double-QD confinement potential. The amplitude of the crosstalk voltage induced on the barrier gate is measured in transport by analysing the corresponding broadening of an interdot charge transition line. This provides an efficient way of characterising AC crosstalk on the device level that does not rely on the tune-up and calibration of qubits [23; 24; 25; 26]. In addition to unintentional driving of neighbouring qubits [27], this ring-up is expected to lead to increased qubit decoherence in systems with strong spin-orbit interaction (SOI), intrinsic to holes in Si [16; 19] and Ge [28; 22], which possess highly anisotropic electric-field dependent \(g\)-tensors [29; 30; 31].
We find that in our device, the main contribution to this type of crosstalk comes from capacitance between the bondpads of neighbouring gate electrodes. Our electrical circuit model predicts that the crosstalk scales proportionally with the loaded quality factor \(Q_{\mathrm{l}}\) and with the ratio between the crosstalk capacitance \(C_{\mathrm{ct}}\) and \(C_{\mathrm{p}}\). Above the resonator frequency \(f_{\mathrm{r}}\), the crosstalk induced on the barrier gate saturates at a value of \(C_{\mathrm{ct}}/C_{\mathrm{p}}\), whereas for frequencies below \(f_{\mathrm{r}}\) it is suppressed. These findings can aid in the design of spin qubit architectures with gate-based readout.
The QD devices consist of a fin patterned from bulk silicon, along with two gate layers each consisting of a silicon-oxide dielectric and a TiN gate metal patterned in a self-aligned process [32]. For the first device (device A), a double QD is formed by accumulating holes underneath gates P1 and P2 in the second gate layer (GL2), while the tunnel coupling between the dots is tuned by the barrier gate B in the first gate layer (GL1), see Fig. 1(a). Lead gates LL and LR (GL1) are used to accumulate charge reservoirs which
are tunnel coupled to the respective dots and contacted to source and drain contacts made of PtSi. All gates and the contacts are connected to tungsten bondpads through vias in a silicon oxide encapsulation layer. Devices similar to the ones used here have been shown to host hole spin qubits with operation of both single- and two-qubit gates [16, 33].
High-kinetic-inductance superconducting nanowire inductors with a wire width of 400 nm are fabricated by dry etching a \(\sim\)12 nm thick film of NbN exhibiting a nominal sheet inductance of 66 pH per square, deposited on an intrinsic silicon substrate by DC magnetron sputtering. A scanning electron microscopy image of a typical inductor is shown in Fig. 1(d). One end of such an inductor with a nominal inductance of 1.5 µH was connected to the barrier gate of device A by wirebonding from the inductor chiplet to the QD chiplet. The other end of the inductor was connected to a (multiplexed) readout line on the PCB [Fig. 1(a)]. Such multi-module assemblies consisting of a resonator chiplet separate from the spin qubit device chiplet offer advantages in terms of separation of fabrication steps and choice of materials [34, 15, 35].
The resonator formed by this inductor together with \(C_{\mathrm{p}}\) has a resonance frequency \(f_{\mathrm{r}}\) of \(\sim\)299 MHz. The magnitude and phase response are plotted in Fig. 1(b). The superconducting nature of the inductor leads to a large \(Q_{\mathrm{i}}\) of \(1480\pm 480\), as determined from a fit of the resonance circle in the complex plane with the method outlined in [36]. The large error bar in \(Q_{\mathrm{i}}\) arises from the resonator being overcoupled, with a loaded quality factor of \(Q_{\mathrm{l}}=370\pm 50\) that is dominated by the coupling quality factor of \(Q_{\mathrm{c}}=500\pm 56\). Modeling the parasitic capacitance as a lumped element attached to the device side of the inductor provides an estimate of \(C_{\mathrm{p}}=0.19\) pF. The resonance frequency does not exactly match the point at which the magnitude of the reflection coefficient \(|S_{11}|\) is minimal as displayed in Fig. 1(b). This is due to a rotation of the resonance circle in the complex plane, typically attributed to non-ideal interference effects [37].
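As a consistency check, the loaded quality factor follows from the internal and coupling quality factors through the standard relation \(1/Q_{\mathrm{l}}=1/Q_{\mathrm{i}}+1/Q_{\mathrm{c}}\); the short sketch below simply evaluates this relation with the fitted values quoted above.

```python
# Standard relation for a resonator coupled to a feed line: 1/Q_l = 1/Q_i + 1/Q_c.
Q_i = 1480.0   # internal quality factor from the resonance-circle fit
Q_c = 500.0    # coupling quality factor
Q_l = 1.0 / (1.0 / Q_i + 1.0 / Q_c)
print(f"Q_l = {Q_l:.0f}")   # ~374, consistent with the quoted 370 +/- 50
```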
The resonator can be operated as a gate-based dispersive sensor by probing the reflected signal at resonance while sweeping the plunger gates. The obtained charge stability diagram of the double QD system is shown in Fig. 1(c) with clear dot-to-lead transitions visible down to the last hole. Interdot transitions are also visible because gate B has different lever arms to the two dots. The signal amplitude is smaller than that of the dot-to-lead transitions.
Surprisingly, the charge stability diagram can also be observed in \(\Delta V_{\mathrm{out}}\) when driving a neighbouring plunger gate. Such an indirect excitation of the resonator indicates the presence of a finite crosstalk capacitance \(C_{\mathrm{ct}}\) between that plunger gate and gate B. The measured phase of \(\Delta V_{\mathrm{out}}\) when exciting an AC amplitude \(\Delta V_{\mathrm{P1}}\) on gate P1 at resonance is shown in Fig. 2(a) around the \((0,1)-(1,0)\) charge region. We note that such crosstalk is distinct from the harmonic voltage conversion observed in Ref. [38], where it is induced by transitions of single charges at interdot and dot-to-lead transitions.
The crosstalk in this device is quantified by measuring the broadening of the interdot charge transition line in the source-drain current \(I_{\mathrm{SD}}\) and relating this to a ring-up peak voltage amplitude \(\delta V_{\mathrm{B}}\) on gate B. We first determine the line broadening when directly exciting the tank circuit with
Figure 1: **(a)** Reflectometry setup with a NbN nanowire inductor on a separate Si chiplet, wire bonded on one side to the barrier gate of a Si finFET double QD device, and on the other side to a multiplexed readout line on a printed circuit board (PCB). Total parasitic capacitance \(C_{\mathrm{p}}\) is given by capacitance to ground \(C_{\mathrm{p},0}\) and sum of crosstalk capacitances \(C_{\mathrm{ct}}\). **(b)** Normalised reflection amplitude and phase of the resonator and corresponding fit, giving \(Q_{\mathrm{i}}=1478\pm 480\) and \(Q_{\mathrm{c}}=500\pm 56\). **(c)** Charge stability diagram of the finFET double-QD at fixed \(V_{\mathrm{B}}=-0.845\) V obtained by reflectometry at 299.3 MHz revealing the few-hole regime with \((N_{1},N_{2})\) holes in the two QDs. **(d)** False-coloured scanning electron microscope image of a similar NbN nanowire inductor with a wire width of 400 nm.
an oscillation amplitude \(\Delta V_{\rm B}\) [see Fig.1(a)] at a frequency that is on resonance with the tank circuit. The typical bias triangles with a bright interdot line are shown in a map of \(I_{\rm SD}\) in Fig. 2(b) for an applied source-drain bias \(V_{\rm SD}\) of \(5\,\)mV and with \(\Delta V_{\rm B}=0\). Fig. 2(c) shows scans of the interdot line along the DC voltage \(V_{\rm P2}\) and for different amplitudes \(\Delta V_{\rm B}\). The broadening is fit by a time-averaged sinusoidally-shifted Lorentzian function, see Fig. 2(d) and supplementary material Sec. I for details. The amplitude of the broadening (i.e., half the distance between the extreme positions of the fitted Lorentzian peaks) is more than 400 times larger than \(\Delta V_{\rm B}\) and is a consequence of a resonant ring-up of the voltage on gate B. The amplitude \(\delta V_{\rm B}\) of this ring-up is obtained by multiplying the broadening amplitude scanned along \(V_{\rm Pi}\) by \(\beta_{\rm B,Pi}\), where \(\beta_{\rm B,Pi}\) denotes the ratio between voltage changes on gate B and on gate P\(i\) required to stay on the interdot line. We find \(\beta_{\rm B,P2}=-0.23\) and \(\beta_{\rm B,P1}=0.40\), see Sec. II of the supplementary material.
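A minimal sketch of such a fit model is given below: a Lorentzian peak whose centre is displaced sinusoidally by the ring-up amplitude and then averaged over one drive period. The functional form and parameter names are illustrative; the exact fit function is defined in Sec. I of the supplementary material.

```python
import numpy as np

def broadened_interdot_peak(V, V0, gamma, height, dV_ring, n_phase=360):
    """Time average of a Lorentzian peak whose centre oscillates as V0 + dV_ring*sin(phi).
    V: array of plunger/detuning voltages; gamma: half-width; dV_ring: ring-up amplitude."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_phase, endpoint=False)
    centres = V0 + dV_ring * np.sin(phi)                       # instantaneous peak positions
    lor = height * gamma**2 / ((np.asarray(V)[:, None] - centres[None, :])**2 + gamma**2)
    return lor.mean(axis=1)                                    # average over the drive period
```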
Figure 2(e) shows the obtained amplitude \(\delta V_{\rm B}\) as a function of \(\Delta V_{\rm B}\) by individually fitting the scans of the interdot line along P1 and P2 and adjusting for the relative voltage ratios \(\beta_{\rm B,P1}\) and \(\beta_{\rm B,P2}\) respectively. We find a linear relationship with an average amplification factor of \(100\pm 3\). This value is consistent with the amplification factor as calculated from numerical simulation of the readout circuit (see Sec. IV of the supplementary material).
Using the same method but for indirect excitation of the tank circuit by a resonant AC drive with amplitude \(\Delta V_{\rm P1}\) (\(\Delta V_{\rm P2}\)) on gate P1 (P2), we find a significant ring-up of gate B with an amplitude \(\delta V_{\rm B}\) that is 26.2 (20.3) times larger than the exciting amplitude on the P1 (P2) gate [Fig. 2(f)]. As we show next, this crosstalk-induced excitation of the resonator and thereby of the potential on gate B occurs for any signal that contains spectral components within the bandwidth of the resonator frequency.
In spin qubit experiments, baseband signals are typically applied to plunger gates when transitioning from a qubit manipulation point to a readout point in charge configuration space. The repetition of such baseband signals may lead to harmonics that excite the resonator gate. To illustrate this effect, we apply a square wave [Fig. 3(a), upper] or sawtooth wave [Fig. 3(a), lower] of varying frequency \(f_{\rm bb}\) to gate P1. We fit the broadening of the interdot line \(\delta V_{e}\) by sweeping the interdot detuning \(V_{e}\) (see Sec. III of the supplementary material for definition) across the \((0,1)-(1,0)\) transition, as indicated in Fig. 2(b). When varying the baseband frequency \(f_{\rm bb}\), a crosstalk-induced broadening of the interdot line is observed every time a harmonic \(n\) of the signal matches the resonator frequency \(f_{\rm r}\), see Fig. 3(a). Figure 3(b) shows the fitted interdot peak broadening \(\delta V_{e}\) as a function of \(f_{\rm r}/f_{\rm bb}\) for a sawtooth wave. Excitations occur whenever \(f_{\rm bb}\times n=f_{\rm r}\), indicated by the white dots in Fig. 3(a), with the expected Fourier amplitudes scaling with \(\frac{1}{n}\). Similarly, only odd harmonics are observed for a square wave excitation. The adverse effect of this ring-up on the operation of QDs as qubits can be reduced for larger \(n\) by filtering the baseband pulses. However, fast ramp times between different charge states may be necessary to fulfill diabaticity requirements when initialising spin states via rapid adiabatic passage [39; 40].
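The harmonic condition can be illustrated with a short sketch: for an ideal sawtooth the relative Fourier amplitude of harmonic \(n\) scales as \(1/n\) for all \(n\), whereas an ideal square wave contains only odd harmonics. The listed baseband frequencies are simply \(f_{\mathrm{r}}/n\) for the resonator frequency quoted above; this is an illustration, not the measurement analysis.

```python
f_r = 299.3e6  # resonator frequency in Hz

def harmonic_weight(n, waveform):
    """Relative Fourier amplitude of harmonic n for ideal waveforms (fundamental = 1)."""
    if waveform == "sawtooth":
        return 1.0 / n                       # all harmonics, amplitude ~ 1/n
    if waveform == "square":
        return 1.0 / n if n % 2 else 0.0     # odd harmonics only
    raise ValueError(waveform)

# Baseband frequencies whose n-th harmonic lands on the resonator: n * f_bb = f_r
for n in range(1, 8):
    f_bb = f_r / n
    print(f"n={n}: f_bb={f_bb/1e6:6.1f} MHz, "
          f"sawtooth={harmonic_weight(n, 'sawtooth'):.2f}, "
          f"square={harmonic_weight(n, 'square'):.2f}")
```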
When manipulating a spin qubit, a series of sinusoidal drive pulses are applied to the gates, see Fig. 3(c). We focus
Figure 2: **(a)** Charge stability diagram at the \((0,1)-(1,0)\) transition, detected in the phase of \(V_{\rm out}\) by indirect excitation on gate P1. **(b)** Finite bias triangles measured in \(I_{\rm SD}\) at the \((0,1)-(1,0)\) transition, and axis of the detuning voltage \(V_{\rm\epsilon}\). **(c)**\(I_{\rm SD}\) as a function of \(V_{\rm P2}\) while varying the AC drive amplitude \(\Delta V_{\rm B}\) applied to gate B at frequency \(f_{\rm r}\). The peak broadening (white dots, arrow shows twice the broadening amplitude) is a measure of the resonator ring-up amplitude on gate B. **(d)** Line-cuts of \(I_{\rm SD}\) for two different sinusoidal drive amplitudes \(\Delta V_{\rm B}\) and their corresponding fits. **(e)** Extracted ring-up amplitude \(\delta V_{\rm B}\) on gate B as a function of \(\Delta V_{\rm B}\), yielding an average amplification of \(\Delta V_{\rm B}\) by a factor of \(100\pm 3\) (solid line). **(f)**\(\delta V_{\rm B}\) as a function of AC drive amplitudes \(\Delta V_{\rm P1}\) (\(\Delta V_{\rm P2}\)) applied to neighbouring gates P1 (P2), demonstrating amplification of the drive signal by factors 26.2 (20.3). In (e) and (f), diamonds (circles) represent data obtained for DC scans of the interdot lines along P1 (P2).
on a typical Rabi experiment where the duration \(t\) of the drive pulses is varied in order to observe Rabi oscillations induced by EDSR. The fitted interdot peak broadening for such a pulse train with a repetition rate \(1/T\) of \(1\,\mathrm{MHz}\) is shown in Fig. 3(d), where both the drive pulse frequency \(f_{\mathrm{drive}}\) and the pulse duration \(t\) are swept. The baseband amplitude is set to zero. The observed peak broadening as a function of \(f_{\mathrm{drive}}\) and \(t\) matches the sinc function \(\sin(x)/x\) expected for the Fourier transformation of a sinusoidal pulse of a finite length, with \(x=\pi t(f_{\mathrm{drive}}-f_{\mathrm{r}})\). Note that the observed pattern resembles but is not related to the typical Rabi chevron pattern observed when varying \(f_{\mathrm{drive}}\) and \(t\) for pulses applied to a qubit. While the fringes of the sinc function can be reduced by using a Gaussian envelope of the drive pulses, the broadening of the Fourier spectrum remains and therefore extends the crosstalk into a bandwidth \(1/t\) around \(f_{\mathrm{r}}\). We additionally resolve the effect of the repetition of the pulses with period \(T\) as lines spaced by the repetition rate, as displayed in Fig. 3(f) for a fixed \(t=250\,\mathrm{ns}\).
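The sinc envelope can be made explicit with a small sketch: the spectral weight of a rectangular sinusoidal pulse of length \(t\) evaluated at the resonator frequency is \(|\sin(x)/x|\) with \(x=\pi t(f_{\mathrm{drive}}-f_{\mathrm{r}})\), so the fringes vanish whenever \(f_{\mathrm{drive}}-f_{\mathrm{r}}\) is a multiple of \(1/t\). The numbers below are illustrative.

```python
import numpy as np

def weight_at_resonator(f_drive, t, f_r):
    """|sin(x)/x| with x = pi * t * (f_drive - f_r): spectral weight of a
    rectangular sinusoidal pulse of duration t, evaluated at f_r."""
    return abs(np.sinc(t * (f_drive - f_r)))   # np.sinc(u) = sin(pi*u)/(pi*u)

f_r = 299.3e6
t = 250e-9                                      # pulse length quoted in the text
for detuning in (0.0, 2e6, 4e6, 6e6):           # Hz away from the resonator
    print(f"{detuning/1e6:4.1f} MHz: {weight_at_resonator(f_r + detuning, t, f_r):.3f}")
```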
The indirect excitation of the resonator can be reproduced in a discrete-element circuit model, see Sec. V of the supplementary material for details of the circuit diagram. The resonator is indirectly excited by a drive signal on the neighbouring gate through a mutual capacitance \(C_{\mathrm{ct}}\). The total parasitic capacitance to ground, \(C_{\mathrm{p}}\), is given by the sum of \(C_{\mathrm{p},0}\) and \(C_{\mathrm{ct}}\), which is kept constant at \(0.19\,\mathrm{pF}\). The amplification factors between \(\delta V_{\mathrm{B}}\) and \(\Delta V_{\mathrm{P1}}\) (\(\Delta V_{\mathrm{P2}}\)) are found to match those in Fig. 2(f) by choosing \(C_{\mathrm{ct}}\) equal to \(16.0\,\mathrm{fF}\) (\(11.9\,\mathrm{fF}\)) when exciting on P1 (P2). This can mostly be accounted for by the mutual capacitance between the corresponding bondpads, for which we find values of \(12.1\,\mathrm{fF}\) (\(7.3\,\mathrm{fF}\)) using Ansys Maxwell (Table 1). We assign the remaining coupling capacitance of around \(4\,\mathrm{fF}\) to mu
Figure 4: **(a)** Measured signal \(|\Delta V_{\mathrm{out}}|\) (dots) transmitted through resonator attached to gate B of device B (inset) when applying an AC drive to one of the neighbouring gates \(i\). The simulated transmitted signal \(|\Delta V_{\mathrm{out}}|\) is indicated as solid lines. **(b)** Simulated amplification factor \(\delta V_{\mathrm{B}}/\Delta V_{\mathrm{P2}}\) of the crosstalk as a function of the drive frequency on the neighbouring gate P2.
Figure 3: **(a)** Measurement of \(I_{\mathrm{SD}}\) while applying a square (upper) and sawtooth (lower) wave with frequency \(f_{\mathrm{bb}}\) and peak amplitude at gate P1 of \(3.15\,\mathrm{mV}\). **(b)** Fitted peak broadening \(\delta V_{\mathrm{\varepsilon}}\), revealing the amplitudes of the harmonics \(n=f_{\mathrm{r}}/f_{\mathrm{bb}}\) of the sawtooth wave applied. **(c)** Typical Rabi pulse sequence with a square wave component (period \(T\)) alternating between the manipulation point \(V_{\mathrm{m}}\) and the readout point \(V_{\mathrm{r}}\), superposed with a sinusoidal qubit drive pulse of length \(t\). **(d)** Peak broadening \(\delta V_{\mathrm{\varepsilon}}\) when Rabi pulses with frequency \(f_{\mathrm{drive}}\) and length \(t\) are applied repeatedly with period \(T=1\,\mathrm{\SIUnitSymbolMicro s}\) and peak amplitude of \(0.7\,\mathrm{mV}\) at gate P1. **(e)** Same as (d) but varying the baseband pulse period \(T\).
tual capacitance between bond wires and to the PCB side of our setup. Capacitances between neighbouring gates at the device level were found to be on the order of \(\sim\!0.5\,\mathrm{fF}\) and therefore negligible.
To further confirm that the observed crosstalk is dominated by the capacitance between bondpads, another Si finFET device (device B) was measured. This device has two nanogates in GL1 and three nanogates in GL2, as depicted in the inset of Fig. 4(b), with a \(750\,\mathrm{nH}\) inductor attached to one of the first-gate-layer gates, named gate B. This leads to the formation of a resonator with a resonance frequency of \(454\,\mathrm{MHz}\). Three of the other gates, labelled gate 1 (nearest neighbour), gate 2 (next-nearest neighbour), and gate 3 (next-next nearest neighbour), were individually excited with an AC drive tone of varying frequency, and the voltage amplitude \(|\Delta V_{\mathrm{out}}|\) transmitted through the resonator was measured. The transmitted signal peaks at the resonator frequency. The peak amplitude is a measure of the voltage amplitude at gate B induced by AC crosstalk. A discrete-element circuit model (see Sec. V of the supplementary material) was used to model the results. In this model, the various crosstalk capacitances between the different gates were obtained from electrostatic simulations, taking into account the bondpad layout, in which the order of the bondpads is the same as that of the nanogates. The measured \(|\Delta V_{\mathrm{out}}|\) is presented in Fig. 4(a) along with the simulated results from the circuit model (solid lines). The transmission magnitude decreases with the distance of the excited gate to gate B, in good agreement with the simulation. We thus attribute the dominant source of crosstalk to the capacitance between gate electrodes in the bondpad layer.
The frequency dependence of the AC crosstalk \(\delta V_{\mathrm{B}}/\Delta V_{\mathrm{P2}}\) is simulated for device A, see Fig. 4(b) for the case of \(C_{\mathrm{ct}}=11.9\,\mathrm{fF}\). The maximum value of \(20.3\) is reached at \(f_{\mathrm{r}}\). For our typical case, \(C_{\mathrm{p}}\gg C_{\mathrm{ct}}\), this maximum value is well approximated by \(Q_{\mathrm{l}}C_{\mathrm{ct}}/C_{\mathrm{p}}\). Below \(f_{\mathrm{r}}\), the crosstalk reaches a minimum and is suppressed. Above \(f_{\mathrm{r}}\), it saturates at a value of approximately \(C_{\mathrm{ct}}/C_{\mathrm{p}}=6\%\). This suggests that placing qubit frequencies well below the resonator frequency is optimal to suppress crosstalk in architectures with gate-based dispersive qubit readout. Note that our model does not account for higher-order resonator modes, where additional resonances at higher frequencies would lead to crosstalk amplitudes well above this saturation value.
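These scalings can be reproduced with a strongly simplified single-node model (not the full circuit model of the supplementary material): gate B is loaded by the inductor, whose series resistance is chosen to reproduce the loaded quality factor, and by \(C_{\mathrm{p,0}}\), and is driven through \(C_{\mathrm{ct}}\). The sketch below evaluates the resulting capacitive-divider transfer function; the component values follow the device-A numbers quoted in the text, while the series resistance is an assumption.

```python
import numpy as np

L = 1.5e-6                # resonator inductance (H)
C_ct = 11.9e-15           # crosstalk capacitance to the driven gate (F)
C_p0 = 0.19e-12 - C_ct    # remaining parasitic capacitance to ground (F), C_p = C_p0 + C_ct
f_r = 1.0 / (2.0 * np.pi * np.sqrt(L * (C_p0 + C_ct)))   # ~299 MHz
R = 2.0 * np.pi * f_r * L / 370.0                          # series loss set by Q_l ~ 370 (assumption)

def crosstalk(f):
    """|V_B / V_P2| for a drive applied through C_ct to the resonator node."""
    w = 2.0 * np.pi * f
    Y_node = 1j * w * C_p0 + 1.0 / (R + 1j * w * L)         # admittance of gate B to ground
    Y_ct = 1j * w * C_ct
    return abs(Y_ct / (Y_ct + Y_node))                      # capacitive divider

for f in (0.1 * f_r, f_r, 10 * f_r):
    print(f"{f/1e6:7.1f} MHz: {crosstalk(f):.3f}")
# On resonance |V_B/V_P2| is close to Q_l*C_ct/C_p; well above f_r it saturates near C_ct/C_p ~ 6%.
```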
In conclusion, gate-based dispersive sensing has been used to measure the charge stability diagram of a Si finFET double QD device down to the last hole with a superconducting NbN tank circuit attached to the barrier gate. The high quality factor of the resonator comes at the expense of increased AC crosstalk, where a ring-up of the gate electrode attached to the tank circuit is observed when other gate electrodes are driven with frequency components matching the resonator frequency. We have demonstrated a method of quantifying such crosstalk by measuring the source-drain current through the double QD device and fitting the broadening of the interdot current peak due to the ring-up of the resonator gate.
These findings identify some limitations of gate-based dispersive sensing as a qubit readout technique. The crosstalk amplitude on resonance scales as \(Q_{\mathrm{l}}C_{\mathrm{ct}}/C_{\mathrm{p}}\) and reaches the limit \(C_{\mathrm{ct}}/C_{\mathrm{p}}\) above the resonator frequency \(f_{\mathrm{r}}\). Pulses applied to QD gates that contain spectral components above \(f_{\mathrm{r}}\) should therefore be avoided, even more so if higher modes of the resonator exist. Below the resonator frequency, crosstalk is found to be largely suppressed. This suggests to place qubit frequencies well below \(f_{\mathrm{r}}\) to increase the bandwidth for qubit driving. In general, crosstalk can be reduced by optimising the bondpad layout, improving signal routing or using differential signalling schemes [41, 42, 27]. Unfortunately, an increase of \(C_{\mathrm{p}}\) or decrease of \(Q_{\mathrm{l}}\), while reducing the crosstalk, decreases the sensitivity of the readout circuit to the device capacitance, i.e., the spin readout signal. One way to overcome this would be to reduce the interaction of the spin qubits with the resonator in the qubit manipulation phase, e.g. using a transistor between tank circuit and QD gate [43] or by switching \(C_{\mathrm{p}}\) by varactor diodes.
See supplementary material for details on the fit function for the broadening of the interdot peak, measured ratios of voltage changes on each gate to stay on the interdot line, definition of detuning voltage, circuit models of both devices and the device bondpad layout.
This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement number 847471, and by the NCCR SPIN under grant number 51NF40-180604 of the Swiss National Science Foundation. We thank the Cleanroom Operations Team of the Binnig and Rohrer Nanotechnology Center (BRNC) for their help and support.
|
2309.12264 | GRB221009A gamma-ray events from non-standard neutrino self-interactions | The flux of high-energy astrophysical neutrinos observed by the present
generation of neutrino detectors has already indicated a few hints of new
physics beyond the Standard Model. In this work, we show that high-energy
gamma-ray observations can also be considered as a complementary probe for
unveiling the source of high-energy astrophysical neutrino events and new
physics. Recently, the LHAASO collaboration has reported O(5000) gamma-ray
events in the energy range between 0.5 TeV -18 TeV from gamma-ray burst
GRB221009A within 2000 seconds after the initial outburst. We showed that
attenuated high-energy gamma rays can be produced from the interaction of
astrophysical neutrinos with CMB neutrinos through non-standard
self-interaction of neutrinos mediated by light scalar bosons. The non-standard
interaction of neutrinos recently took a lot of attention in cosmology for its
role in reducing Hubble tension. We have constrained the parameter space of
non-standard self-interacting neutrinos from the flux of photons observed by
LHAASO and showed consistency of the same with the resulting parameter space
from Hubble tension requirements and other recent constraints from
laboratory/cosmology. | Mansi Dhuria | 2023-09-21T17:14:47Z | http://arxiv.org/abs/2309.12264v1 | # GRB221009A gamma-ray events from non-standard neutrino self-interactions
###### Abstract
The flux of high-energy astrophysical neutrinos observed by the present generation of neutrino detectors has already indicated a few hints of new physics beyond the Standard Model. In this work, we show that high-energy gamma-ray observations can also be considered as a complementary probe for unveiling the source of high-energy astrophysical neutrino events and new physics. Recently, the LHAASO collaboration has reported \(\mathcal{O}(5000)\) gamma-ray events in the energy range between 0.5 TeV -18 TeV from gamma-ray burst GRB 221009A within 2000 seconds after the initial outburst. We showed that attenuated high-energy gamma rays can be produced from the interaction of astrophysical neutrinos with CMB neutrinos through non-standard self-interaction of neutrinos mediated by light scalar bosons. The non-standard interaction of neutrinos recently took a lot of attention in cosmology for its role in reducing Hubble tension. We have constrained the parameter space of non-standard self-interacting neutrinos from the flux of photons observed by LHAASO and showed consistency of the same with the resulting parameter space from Hubble tension requirements and other recent constraints from laboratory/cosmology.
## I Introduction
The results from various cosmological and astrophysical observations have convinced us that neutrinos play a significant role in multi-messenger astronomy as one of the messengers used to study exotic astrophysical phenomena as well as in understanding the evolution of the universe. In the standard model (SM) of particle physics, neutrinos are considered to interact very weakly through the weak force, thus making them challenging to detect. However, recently, there has been a lot of debate about the possibility of moderately strong non-standard interaction of neutrinos with each other through a new mediator, typically known as self-interaction of neutrinos (\(\nu\)SI). It plays an important role in reducing the Hubble tension [1; 2; 3; 4; 5; 6], allowing KeV sterile neutrino as viable Dark matter (DM) candidate [7; 8; 9; 10] and supernova neutrino emission [11]. As the new mediator required to include self-interaction between neutrinos naturally invokes the existence of physics beyond the Standard Model (BSM), one can study its implications in explaining other issues in astrophysics and cosmology that might include BSM physics. In this work, we have considered the presence of self-interaction of neutrinos in explaining the flux of high-energy photons obtained from gamma-ray bursts.
Recently, on 09 October 2022, a prodigiously bright gamma-ray burst (dubbed GRB221009A) was first recorded by the Burst Alert Telescope (BAT) on the Swift satellite [12] and later confirmed by the Fermi Gamma-ray Burst Monitor (GBM) [13; 14] and Fermi-LAT [15; 16] at redshift \(z_{0}\sim 0.15\)[17; 18]. The extremely energetic gamma rays emitted from the cosmic burst were detected by the Large High Altitude Air Shower Observatory (LHAASO) [19] and the Carpet-2 experiment [20]. More specifically, the square kilometer array (KM2A) of LHAASO [19] reported the observation of around 5000 very-high-energy photons with energies up to 18 TeV in a 2000 sec time window, while the Carpet-2 experiment reported the detection of 251 TeV photon-like shower events [20]. The reported redshift corresponds to a co-moving distance of around 643 Mpc from the Earth. The observation of these events is quite astonishing, as the flux of such photons would be severely attenuated by the pair production of electrons and positrons via interaction with extra-galactic background light (EBL) (\(\gamma+\gamma_{\rm EBL}\to e^{-}e^{+}\)). Therefore, the photons would hardly arrive at the Earth. This has prompted many proposals invoking BSM physics. The observation of such events has been explained through sterile neutrino decay [21; 22; 23; 24], scalar decay [25], Lorentz invariance violation [26; 27], axion-photon conversion [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39], the inverse Compton mechanism [40; 41; 42], etc.
It has been observed that in addition to photons, neutrinos can also be produced through the decay of kaons and muons emitted during gamma-ray bursts. The IceCube collaboration has also performed dedicated searches for co-relating some of the GRB events with diffuse extra-galactic neutrino background of very high energy neutrinos [43; 44] and also constrained the time-integrated flux of neutrinos from GRB221009A at Earth [45; 46]. It has been argued in [21; 22; 23; 24] that neutrinos emitted from GRB can convert into sterile neutrinos through mixing or dipole interaction. The produced sterile neutrino can then decay into photons through the same process after propagating for a long distance so that EBL would not significantly attenuate this secondary photon flux. Thus, the observation of photons can be explained through the decay of sterile neutrino. However, in this case, the flux of photons would depend on the mixing angle between sterile and active neutrinos. The mixing angle required to explain a non-zero number of photons (\(N_{\gamma}>1\)) would be very high (0.1-1), which is ruled out by experiments. Further, the resulting parameter space also gets ruled out by various astrophysical and
cosmological observations [24].
In this work, we consider the possibility of producing such events from a direct scattering of active neutrinos with cosmic microwave background (CMB) neutrinos, rather than from the decay of sterile neutrinos produced through mixing with active neutrinos. The high-energy neutrinos produced during the GRB can scatter with CMB neutrinos through the newly proposed self-interactions of neutrinos mediated by scalar bosons. If the mediator also interacts with the corresponding leptonic partner of the neutrino in a specific model of BSM physics, the interaction of high-energy neutrinos with CMB neutrinos can produce high-energy photons and background CMB photons at the one-loop level. Even though the loop-level cross-section is suppressed relative to the cross-section for elastic scattering of high-energy neutrinos with CMB neutrinos by a factor of order \(10^{-2}-10^{-3}\), there is a non-zero probability of producing photons from such scatterings. We find that the mean free path for the scattering can be such that the high-energy astrophysical photons are produced close to the Earth and hence are not attenuated by the EBL. More interestingly, some of the parameter space required to obtain a relevant number of gamma-ray events is also compatible with the parameter space required to resolve/reduce the Hubble tension. In fact, the results are also compatible with the region allowed by \((g-2)_{\mu}\) constraints and are safe from other astrophysical and cosmological constraints.
The plan of the rest of the paper is as follows: In §II, we motivate the role of self-interacting neutrinos in cosmology, specifically for alleviating the Hubble tension. In §II(a) and §II(b), we discuss a toy model of particle physics involving the interaction of the scalar mediator with the neutrino and its leptonic partner. We then explain the possibility of producing astrophysical photons from the scattering of high-energy astrophysical neutrinos with CMB neutrinos. In §III, we estimate the effect of such interactions on the astrophysical flux of photons generated from loop-level scattering. We also show that some of the resulting parameter space is compatible with constraints from the Hubble tension requirement and the \((g-2)_{\mu}\) discrepancy. Finally, in §IV, we discuss our results with conclusions and future directions.
## II Self-interacting neutrinos
The self-interacting neutrinos refer to a scenario where neutrinos interact strongly with each other secretly through a new interaction mediated by either scalar or vector boson. This will allow neutrinos to remain in thermal equilibrium with each other till later times. Thus, the epoch of neutrino decoupling gets delayed until even close to the onset of matter-radiation equality. The effect of the same on the CMB power spectrum has been analyzed in detail in literature [1; 2; 3; 4; 5; 6] in the context of Hubble tension. Below, we have briefly discussed the impact of strong self-interactions in increasing the value of the Hubble constant measured by CMB in the context of the \(\Lambda\)CDM model.
The position of the CMB multipole for a particular mode \(k\) is given by [4]
\[l\approx\frac{(m\pi-\phi_{\nu})}{\theta_{*}},\ \ \text{with}\ \theta_{*}=\frac{r_{s}^{*}}{D_{A}^{*}}\,, \tag{1}\]
where \(m\pi\) corresponds to the position of the peaks, \(\phi_{\nu}\) corresponds to the phase shift, \(D_{A}^{*}\) is the distance to the surface of last scattering, and \(r_{s}^{*}\) corresponds to the radius of the sound horizon at the epoch of recombination. The quantities \(D_{A}^{*}\) and \(r_{s}^{*}\) can be expressed as functions of the Hubble parameter \(H(z)\) as [4]: \(D_{A}^{*}=\int_{0}^{z^{*}}\frac{1}{H(z)}\,dz\) and \(r_{s}^{*}=\int_{z^{*}}^{\infty}\frac{c_{s}(z)}{H(z)}\,dz\), where \(c_{s}(z)\approx 1/\sqrt{3}\) is the speed of sound in the baryon-photon plasma. The phase shift (\(\phi_{\nu}\)) depends on the ratio of the free-streaming neutrino energy density to the total radiation energy density [4]. A decrease in the number density of free-streaming neutrinos will decrease the phase shift \(\phi_{\nu}\), which leads to a shift in the position of the CMB multipoles towards higher \(l\) values. This change can be avoided by increasing the value of \(\theta_{*}\), which can be achieved either by decreasing the value of \(D_{A}^{*}\) while keeping \(r_{s}^{*}\) unchanged or by increasing the value of \(r_{s}^{*}\) while keeping \(D_{A}^{*}\) unchanged. In the standard cosmological model, the Hubble parameter evolves with redshift \(z\) as \(H(z)=H_{0}\sqrt{\Omega_{r}(1+z)^{4}+\Omega_{m}(1+z)^{3}+\Omega_{\Lambda}}\), with \(\Omega_{m}\), \(\Omega_{r}\) and \(\Omega_{\Lambda}\) being the fractions of the energy density in matter, radiation, and vacuum, respectively. If we slightly increase the values of \(H_{0}\) and \(\Omega_{\Lambda}\) such that \(H(z)\) increases at low redshift while remaining essentially unchanged at high redshifts, one can decrease \(D_{A}^{*}\) in such a way that the positions of the observed CMB multipoles are not changed. In this way, the presence of self-interacting neutrinos favors a higher value of \(H_{0}\) and reduces the discrepancy between the values of \(H_{0}\) measured from the CMB and from low-redshift observations.
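This argument can be illustrated numerically: the sketch below evaluates \(D_{A}^{*}\) while holding the physical matter density \(\Omega_{m}h^{2}\) fixed, so that \(H(z)\) at high redshift is essentially unchanged, and shows that raising \(H_{0}\) (and hence \(\Omega_{\Lambda}\)) lowers \(D_{A}^{*}\). The fiducial density values and \(z^{*}\) are illustrative assumptions, not fitted numbers.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458      # speed of light in km/s
OMEGA_R = 9.1e-5         # radiation density today (illustrative)
OMEGA_M_H2 = 0.143       # physical matter density Omega_m * h^2, held fixed (illustrative)
Z_STAR = 1090.0          # redshift of last scattering (illustrative)

def hubble(z, H0):
    """H(z) in km/s/Mpc for a flat universe with fixed Omega_m * h^2."""
    h = H0 / 100.0
    om = OMEGA_M_H2 / h**2
    ol = 1.0 - om - OMEGA_R
    return H0 * np.sqrt(OMEGA_R * (1 + z)**4 + om * (1 + z)**3 + ol)

def D_A_star(H0):
    """Comoving distance to last scattering, int_0^z* c dz / H(z), in Mpc."""
    val, _ = quad(lambda z: C_KM_S / hubble(z, H0), 0.0, Z_STAR, limit=200)
    return val

for H0 in (67.4, 73.0):
    print(f"H0 = {H0}: D_A* = {D_A_star(H0):.0f} Mpc")
# A larger H0 (with larger Omega_Lambda) lowers D_A* while leaving high-z H(z) nearly unchanged.
```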
### The model
The minimal model of self-interaction typically involves neutrinophilic interaction given by \(\mathcal{L}\supset g_{\nu}\nu_{i}\nu_{i}\), where \(i=e,\mu,\tau\) corresponds to different flavors of active neutrinos. As IceCube has searched only for track-like events from GRB221009A, we consider self-interaction only between muon flavors of neutrinos. Interestingly, the interaction of scalar with muons is also considered in the literature for resolving the discrepancy between the theoretical and experimental values of \((g-2)_{\mu}\)[47]. Therefore, in this work, we consider a model in which the real singlet scalar at low energies couples both to muon neutrinos as well as muons. The interaction couplings are given as:
\[\mathcal{L}\supset g_{\mu}\phi\bar{\mu}\mu+g_{\nu_{\mu}}\phi\nu_{\mu}\nu_{\mu}. \tag{2}\]
Assuming the Majorana nature of neutrinos, we have used Weyl notation to represent neutrino coupling to the scalar. For muons, we are considering Dirac notation to represent its coupling with the scalar boson. The interaction coupling \(g_{\nu_{\mu}}\) can be generally estimated from the mechanism of neutrino mass generation, while \(g_{\mu}\) can be obtained from the non-renormalizable coupling involving Higgs field and a new scalar field. As we are not interested in the UV completion of the model, we just assume that both the couplings can be the same or different depending on the particular model used to generate such couplings.
### Scattering of self-interacting neutrinos with CMB neutrinos
Neutrinos are often considered to be astounding "messengers" because of their capability to carry insight into various astrophysical processes and events happening in the universe. In the SM, neutrinos can traverse large distances on their way to Earth because the probability of an interaction between neutrinos and matter is very small. In the presence of new non-standard interactions of neutrinos, the high-energy neutrinos emitted from various astrophysical objects such as gamma-ray bursts, galactic sources, etc., might scatter with other new particles while traveling to Earth. Thus, there is a possibility that the mean free path of neutrinos becomes smaller. In the case of self-interacting neutrinos, the mean free path can be affected by the secret self-interaction of astrophysical neutrinos with the cosmic neutrino background. The high-energy astrophysical neutrinos can scatter strongly with CMB neutrinos while traveling to Earth and impact the flux of astrophysical neutrinos observed from various sources. In other words, the flux of high-energy astrophysical neutrinos obtained from various sources can provide complementary probes to constrain the strength of non-standard interactions of neutrinos. This effect has been analyzed in [48; 49], which derive constraints on the self-interacting neutrino coupling from the flux of astrophysical neutrino point sources obtained from NGC and IceCube results. The Feynman diagram representing the s-channel scattering of astrophysical neutrinos (\(\nu_{\mu a}\)) with CMB neutrinos (\(\nu_{\mu b}\)) is shown in fig. 1.
In this work, we have proposed that the s-channel scattering of astrophysical neutrinos can also produce high-energy gamma rays at the radiative level if the mediator interacts with both muon neutrinos and muons. Thus, the flux of high-energy gamma rays obtained from various sources can also be considered to constrain the strength of non-standard interactions of neutrinos. The one-loop level Feynman diagram for producing photons from radiative s-channel scattering of astrophysical neutrinos (\(\nu_{\mu a}\)) with CMB neutrinos (\(\nu_{\mu b}\)) has been shown in fig 2.
We will analyze constraints on the strength of non-standard interaction of neutrinos from the flux of high energy gamma-ray events observed by GRB221009A. We will also analyze if the self-interaction coupling required to produce such high-energy photons is consistent with the values required to address the Hubble tension and other recent laboratory/cosmological constraints.
## III Impact on astrophysical gamma-ray flux from GRB221009A
As discussed earlier, the LHAASO collaboration has recently reported an extremely bright and long-duration gamma-ray burst, named GRB221009A [12]. They have detected \(\mathcal{O}(5000)\) photon events with energies ranging from 0.5 TeV to 18 TeV within a time window of 2000 seconds at redshift \(z=0.15\). The violent reactions around the GRB normally produce a large number of pions or kaons, which can further decay into photons and neutrinos. As these photons interact with background photons and annihilate into electron-positron pairs, the flux of astrophysical photons is significantly attenuated. Thus, the detection cannot be explained by photons propagating directly from the GRB through the extragalactic background light. Interestingly, the observed flux of high and very-high-energy pho
Figure 1: Feynman diagram for scattering of high energy astrophysical neutrinos with CMB neutrinos. Here, \(\nu_{\mu a}\) and \(\nu_{\mu b}\) correspond to high-energy astrophysical neutrinos and background CMB neutrinos respectively.
Figure 2: Feynman diagram for scattering of high energy astrophysical neutrinos (\(\nu_{\mu a}\)) with CMB neutrinos (\(\nu_{\mu b}\)) into high energy astrophysical photons (\(\gamma_{a}\)) and CMB background photons (\(\gamma_{b}\)).
tons can be obtained from neutrinos emitted during GRB events through the interaction of emitted high-energy astrophysical neutrinos with CMB neutrinos. Thus, GRB events would provide a unique opportunity to probe the non-standard interaction of neutrinos.
In this section, we calculate the probability of producing high-energy gamma rays from the scattering of high-energy astrophysical neutrinos with CMB neutrinos. The unattenuated \(\gamma\) flux of GRB 221009A, obtained by extrapolating the flux measured by Fermi-LAT in the energy range (0.1 - 1) GeV to higher energies (around a TeV), is given by [22]
\[\phi_{\gamma}^{0}(E_{\gamma})=\frac{2.1\times 10^{-6}}{\rm cm^{2}s^{-1}TeV} \left(\frac{E_{\gamma}}{\rm TeV}\right)^{-1.87\pm 0.04}. \tag{3}\]
The emission of neutrinos from GRB221009A has been analyzed by complementary experiments such as IceCube. The non-observation of track-like neutrino events in the energy range 0.8 TeV - 1 PeV has set constraints on the neutrino fluence, \(E_{\nu}^{2}\phi_{\nu_{I}}^{\rm int}\leq 3.9\times 10^{-5}\) TeV cm\({}^{-2}\)[45; 46]. As neutrinos would mainly be emitted from muon and kaon decay, they would mostly consist of astrophysical muon neutrinos (\(\nu_{\mu a}\)). Therefore, the ratio of the flux of neutrinos to the flux of unattenuated gamma rays will be given by
\[r_{\nu\gamma}=\frac{\phi_{\nu_{\mu a}}}{\phi_{\gamma}^{0}(E_{ \gamma})}. \tag{4}\]
By dividing the neutrino fluence by the duration (\(\Delta\tau\sim 600\) sec) of the intense gamma-ray emission, one gets the ratio of the fluxes \(r_{\nu\gamma}\lesssim 3\times 10^{-2}\)[22].
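As a rough cross-check, dividing the IceCube fluence bound by \(\Delta\tau\) and comparing with eq. (3) at a reference energy of 1 TeV (chosen here purely for illustration) indeed gives a ratio of about \(3\times 10^{-2}\):

```python
# Rough check of r_nu_gamma at E = 1 TeV (reference energy chosen for illustration).
fluence_limit = 3.9e-5        # TeV cm^-2, bound on E^2 * time-integrated nu_mu flux
delta_tau = 600.0             # s, duration of intense gamma-ray emission
E = 1.0                       # TeV

phi_nu = fluence_limit / E**2 / delta_tau      # cm^-2 s^-1 TeV^-1
phi_gamma0 = 2.1e-6 * E**(-1.87)               # cm^-2 s^-1 TeV^-1, eq. (3)
print(f"r_nu_gamma ~ {phi_nu / phi_gamma0:.3f}")   # ~0.03
```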
As shown in fig. 1, the high energy neutrinos emitted from muon/kaon decay produced during GRB at redshift z=0.15 can scatter with CMB neutrinos. The optical depth of neutrino would be given by:
\[\tau_{\nu_{\mu}}=\frac{d}{\lambda_{\nu_{\mu}}}, \tag{5}\]
where \(\lambda_{\nu_{\mu}}\) corresponds to the total mean free path of neutrinos. The mean free path for neutrinos annihilating into gamma rays is then \(\lambda_{\nu_{\mu}\rightarrow\gamma}=\lambda_{\nu_{\mu}}/\mathcal{BR}(\nu_{\mu a}\nu_{\mu b}\rightarrow\gamma_{a}\gamma_{b})\), consistent with eq. (14) below.
Thus, the probability of receiving gamma rays on earth from the scattering of astrophysical neutrinos with CMB neutrinos in the distance interval [x, x+dx] will be given by:
\[e^{-x/\lambda_{\nu_{\mu}\rightarrow\gamma}}\;\frac{dx}{\lambda_{ \nu_{\mu}\rightarrow\gamma}}\;e^{-(d-x)/\lambda_{\gamma}}, \tag{6}\]
where \(\lambda_{\nu_{\mu}\rightarrow\gamma}\) is the mean free path for the scattering of neutrinos into gamma rays and \(\lambda_{\gamma}\) corresponds to the mean free path of gamma rays. Multiplying eq. (6) by neutrino flux and integrating over \(x\), the secondary gamma-ray flux from the neutrino scattering will be given by:
\[\phi_{\nu_{\mu}}^{\gamma}=\phi_{\nu_{\mu}}\frac{1}{(\lambda_{\mu\rightarrow\gamma}/\lambda_{\gamma})-1}\left[e^{-d/\lambda_{\mu\rightarrow\gamma}}-e^{-d/\lambda_{\gamma}}\right] \tag{7}\]
For \(\phi_{\nu_{\mu a}}=r_{\nu\gamma}\times\phi_{\gamma}^{0}(E_{\gamma})=0.03\; \phi_{\gamma}^{0}(E_{\gamma})\) using eq. (4), the secondary gamma ray flux will be:
\[\phi_{\nu_{\mu}}^{\gamma}=0.03\;\frac{\phi_{\gamma}^{0}}{(\lambda_{\mu \rightarrow\gamma}/\lambda_{\gamma})-1}\left[e^{-d/\lambda_{\mu\rightarrow \gamma}}-e^{-d/\lambda_{\gamma}}\right] \tag{8}\]
In the above expression, the second exponential factor corresponds to the gamma-ray flux produced directly in the GRB and can be ignored, since \(\lambda_{\gamma}\ll\lambda_{\mu\rightarrow\gamma}\sim d\approx 10^{27}\) cm. Hence, there is a possibility that the gamma-ray flux produced from the scattering of neutrinos will not be exponentially attenuated, in contrast to the gamma-ray flux produced directly in the GRB. In the following subsections, we numerically estimate the mean free path for astrophysical neutrinos and present results for the flux of astrophysical gamma rays produced from the scattering of muon neutrinos.
### Mean free path of neutrinos
The mean free path of neutrinos emitted from astrophysical sources can be calculated from the interaction rate of incident neutrinos with the background CMB neutrinos. The value of \(\lambda_{\nu_{\mu}\rightarrow\gamma}\) will be given by [49]
\[\lambda_{\nu_{\mu}\rightarrow\gamma}=\frac{1}{\Gamma(\nu_{\mu a }\nu_{\mu b}\rightarrow\gamma_{a}\gamma_{b})}, \tag{9}\]
where \(\Gamma(\nu_{\mu a}\nu_{\mu b}\rightarrow\gamma_{a}\gamma_{b})\) is the interaction rate of the incident neutrino with CMB neutrinos. The cross-section for the production of gamma rays from the scattering of astrophysical neutrinos with CMB neutrinos (shown in Feynman diagram given in fig. 2) is given by [50; 51]
\[\sigma(\nu_{\mu a}\nu_{\mu b}\rightarrow\gamma_{a}\gamma_{b})= \frac{81\alpha^{2}s}{4\pi^{3}}\frac{(g_{\mu}g_{\nu_{\mu}})^{2}}{(s-m_{\phi}^{2 })^{2}+m_{\phi}^{2}\Gamma_{\phi}^{2}}\] \[\times\left|1+\sum_{f}Q_{\mu}^{2}m_{\mu}^{2}C_{0}^{\gamma} \right|^{2}, \tag{10}\]
where scalar Passarino-Veltman function \(C_{0}^{\gamma}\) is given by,
\[C_{0}^{\gamma}(s,m_{\mu})=\frac{1}{2s}{\rm ln}^{2}\left(\frac{ \sqrt{1-4m_{\mu}^{2}/s}-1}{\sqrt{1-4m_{\mu}^{2}/s}+1}\right). \tag{11}\]
By using the expression of cross section given in eq. (10), the thermal interaction rate is given by [49]
\[\Gamma(\nu_{\mu a}\nu_{\mu b}\rightarrow\gamma_{a}\gamma_{b})= \int\frac{d^{3}p}{(2\pi)^{3}}f_{i}(\vec{p_{i}})v_{\rm Mol}\sigma(s(E_{\nu},\vec {p})). \tag{12}\]
Today, the CMB neutrino background has a thermal distribution with total number density \(n_{\rm tot}\approx 340{\rm cm^{-3}}\) and temperature \(T_{\nu}=1.9\) K. Given this, the background CMB neutrino can be considered as non-relativistic with \(m_{\nu_{\mu}}>T_{\nu}\). In this case, the center of mass energy
becomes independent of the momentum and we can take \(s=2m_{\nu_{\mu}}E_{\nu_{\mu a}}\) and \(v_{\rm{Mol}}=1\). Using this, the integral can be easily solved in the lab frame and the interaction rate for scattering with non-relativistic background neutrinos reduces to
\[\Gamma(\nu_{\mu a}\nu_{\mu b}\rightarrow\gamma_{a}\gamma_{b})=\sigma(2E_{\nu_{ \mu a}}m_{\nu_{\mu}})n_{\nu_{\mu b}}, \tag{13}\]
Using this, the mean free path (MFP) for secondary gamma rays can be calculated from
\[\lambda_{\nu_{\mu}\rightarrow\gamma}=\frac{1}{\sigma(2E_{\nu_{\mu a}}m_{\nu_{ \mu}})n_{\nu_{\mu b}}}, \tag{14}\]
For the scattering process \(\nu_{\mu a}\nu_{\mu b}\rightarrow\gamma_{a}\gamma_{b}\), the energy of the high-energy gamma ray is approximately equal to the energy of the emitted astrophysical neutrino. Thus, we can keep \(E_{\nu_{\mu a}}\approx E_{\gamma_{a}}\). Using this, we have calculated the mean free path for neutrino elastic scattering as well as for neutrinos annihilating into gamma rays. The results are shown in fig. 3.
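A numerical sketch of eqs. (10)-(14) is given below. The neutrino mass, the per-species background density and the mediator width entering the Breit-Wigner denominator are not specified explicitly in the text, so the values used here (\(m_{\nu_{\mu}}\sim 0.1\) eV, \(n_{\nu_{\mu b}}\approx 56\) cm\({}^{-3}\), \(\Gamma_{\phi}=g_{\nu_{\mu}}^{2}m_{\phi}/16\pi\)) are illustrative assumptions.

```python
import numpy as np

ALPHA = 1.0 / 137.036
HBARC = 1.973e-5          # eV*cm, used to convert eV^-2 to cm^2
M_MU = 105.66e6           # muon mass in eV
Q_MU = 1.0                # muon charge in units of e

def C0_gamma(s):
    """Passarino-Veltman function of eq. (11); complex arithmetic covers s < 4*m_mu^2."""
    beta = np.sqrt(complex(1.0 - 4.0 * M_MU**2 / s))
    return np.log((beta - 1.0) / (beta + 1.0))**2 / (2.0 * s)

def sigma_cm2(E_nu, m_nu, m_phi, g_nu, g_mu):
    """Eq. (10) with s = 2*m_nu*E_nu for background neutrinos at rest; returns cm^2."""
    s = 2.0 * m_nu * E_nu
    gamma_phi = g_nu**2 * m_phi / (16.0 * np.pi)          # assumed mediator width
    bw = (s - m_phi**2)**2 + (m_phi * gamma_phi)**2       # Breit-Wigner denominator
    loop = abs(1.0 + Q_MU**2 * M_MU**2 * C0_gamma(s))**2
    sigma_natural = 81.0 * ALPHA**2 * s / (4.0 * np.pi**3) * (g_mu * g_nu)**2 / bw * loop
    return sigma_natural * HBARC**2

def mean_free_path_cm(E_nu, m_nu=0.1, m_phi=1.0e6, g_nu=0.01, g_mu=0.01, n_nu=56.0):
    """Eq. (14): lambda = 1 / (sigma * n_nu), in cm."""
    return 1.0 / (sigma_cm2(E_nu, m_nu, m_phi, g_nu, g_mu) * n_nu)

for E_TeV in (1.0, 5.0, 18.0):
    print(f"E = {E_TeV:5.1f} TeV: lambda ~ {mean_free_path_cm(E_TeV * 1e12):.2e} cm")
```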
In fig. 3, the black dashed line gives the mean free path for elastic scattering of the incident astrophysical neutrino with CMB neutrinos for \(g_{\nu_{\mu}}=0.01\), while the red solid line gives the mean free path for \(g_{\nu_{\mu}}=g_{\mu}=0.01\) when the interaction of the incident astrophysical neutrino with CMB neutrinos produces gamma rays. The dip in the curve shows the resonance enhancement of the cross-section at \(s\approx 4m_{\phi}^{2}\). In both cases, the mean free path satisfies \(\lambda\gtrsim d\approx 645\) Mpc \(\approx 2\times 10^{27}\) cm. Thus, the survival factor for neutrinos traveling to Earth, \(e^{-d/\lambda_{\mu\rightarrow\gamma}}\), will be around \(\mathcal{O}(1)\). Therefore, all the gamma rays produced from neutrino scattering will reach the Earth.
### Astrophysical gamma-ray flux
Now we calculate the secondary flux of astrophysical photons emitted from the scattering of incident astrophysical neutrinos with CMB neutrinos by using eqs. (8)-(14) for different values of the couplings (\(g_{\nu_{\mu}}\), \(g_{\mu}\)), fixing the mass of the mediator around \(m_{\phi}\sim\) MeV. We assume that the background CMB photons will not acquire much energy; therefore, we can keep \(E_{\gamma a}\approx E_{\nu a}\). Further, we have compared our results with the unattenuated and attenuated gamma-ray flux coming directly from GRB events. As discussed above, the final expression for calculating the secondary gamma-ray flux is given by:
\[\phi_{\nu_{\mu}}^{\gamma}(E_{\gamma})=\ \frac{0.03\phi_{\gamma}^{0}(E_{\gamma})}{( \lambda_{\mu\rightarrow\gamma}/\lambda_{\gamma})-1}\left[e^{-d/\lambda_{\mu \rightarrow\gamma}}-e^{-d/\lambda_{\gamma}}\right]. \tag{15}\]
For calculating the flux, we need to determine both \(\lambda_{\mu\rightarrow\gamma}\) and \(\lambda_{\gamma}\). The value of \(\lambda_{\mu\rightarrow\gamma}\) has already been calculated using eq. (14). We have obtained the value of \(\lambda_{\gamma}\) at different energies by using publicly available data for the optical depth of photons (calculated at redshift z=0.15) given in [52] for energies up to 30 TeV. The behavior of the secondary flux as a function of energy is shown in fig. 4.
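As a rough illustration of how eqs. (15) and (16) are evaluated in practice, the sketch below uses placeholder inputs: a power-law stand-in for the unattenuated GRB spectrum \(\phi_{\gamma}^{0}\), a crude stand-in for \(\lambda_{\gamma}\) in place of the tabulated optical depth of [52], and a constant stand-in for \(\lambda_{\nu_{\mu}\rightarrow\gamma}\) in place of eq. (14). Only the structure of the formulas is meant to be faithful; the normalisations are arbitrary.

```python
import numpy as np

d_Mpc = 645.0                                   # assumed source distance (z ~ 0.15)

def phi_gamma_0(E_TeV):
    # placeholder unattenuated GRB photon spectrum (arbitrary normalisation)
    return 1.0e-4 * E_TeV**-2.0

def lambda_gamma(E_TeV):
    # placeholder photon mean free path; the paper instead uses the tabulated
    # optical depth of Ref. [52], with lambda_gamma = d / tau_gamma
    tau = 0.3 * E_TeV
    return d_Mpc / np.maximum(tau, 1e-6)

def lambda_nu_to_gamma(E_TeV):
    # placeholder for eq. (14); the previous sketch computes this from sigma(s)
    return 1.0e3 * np.ones_like(E_TeV)

def secondary_flux(E_TeV):
    """Eq. (15): secondary gamma-ray flux from scattering on CMB neutrinos."""
    l_nu, l_g = lambda_nu_to_gamma(E_TeV), lambda_gamma(E_TeV)
    return (0.03 * phi_gamma_0(E_TeV) / (l_nu / l_g - 1.0)
            * (np.exp(-d_Mpc / l_nu) - np.exp(-d_Mpc / l_g)))

E = np.linspace(1.0, 30.0, 300)                 # 1-30 TeV
flux = secondary_flux(E)

# eq. (16): event count for a 1 km^2 effective area and a 2000 s window
A_cm2, t_s = 1.0e10, 2000.0
N_gamma = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(E)) * A_cm2 * t_s
print("N_gamma (placeholder normalisation):", N_gamma)
```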
We can clearly see from the figure that the flux of astrophysical gamma rays produced from neutrino sources does not attenuate at high energies, even though its magnitude is lower than that of the direct gamma-ray flux at lower energies. The results are shown for three different benchmark values of the couplings \(g_{\nu_{\mu}}\) and \(g_{\mu}\). The choice of benchmark couplings is motivated by the Hubble tension requirement as well as by the region of \(g_{\mu}\) allowed by experimental results on \((g-2)_{\mu}\). We will discuss the specific choice of coupling parameters in the next subsection.
### Parameter Space of neutrino self-interaction coupling vs mass of the mediator
In this subsection, we estimate the total number of secondary gamma-ray events observed from the scattering of astrophysical neutrinos with CMB neutrinos. The number of events can be computed by multiplying the flux with an effective detection area and the observation time. Using an effective area of 1 km\({}^{2}\) and an observation time window of 2000 s [22], the number of events in the energy range \(E_{\gamma}\sim(1-30)\) TeV can be calculated from
\[N_{\gamma}=\int_{1\rm TeV}^{30\rm TeV}\phi_{\nu_{\mu}}^{\gamma}(E_{\gamma}) \ dE_{\gamma}\ dA\ dt, \tag{16}\]
where \(\phi_{\nu_{\mu}}^{\gamma}(E_{\gamma})\) corresponds to the flux given in eq. (15). As \(N_{\gamma}\) depends on the neutrino self-interaction coupling (\(g_{\nu_{\mu}}\)) and the mass of the mediator (\(m_{\phi}\)), we can use eq. (16) to constrain the parameter space of \(g_{\nu_{\mu}}\) as a function of \(m_{\phi}\). Thus, we have obtained the \(g_{\nu_{\mu}}-m_{\phi}\) parameter space by considering different observed numbers of events. In particular, we consider three cases for the observed number of events: (i) \(N_{\gamma}=100\), (ii) \(N_{\gamma}=1000\), (iii) \(N_{\gamma}=5000\). The values of \(g_{\nu_{\mu}}\) and \(g_{\mu}\) depend on the underlying model of BSM physics. As we do not consider any UV-complete model of BSM physics, we assume that the values of the two couplings can either be different or the same. Therefore, while obtaining the parameter space, we have considered two possibilities: (a) \(g_{\nu_{\mu}}=g_{\mu}\), (b) \(g_{\nu_{\mu}}\neq g_{\mu}\). For the second case, we have fixed the value of \(g_{\mu}\) to one allowed by constraints from the recently measured value of \((g-2)_{\mu}\) [47]. The results are shown in figs. 5 and 6 for both cases, along with the parameter space allowed by the Hubble tension requirement and that ruled out by other cosmological/laboratory constraints. The dashed, dotted, and solid black curves in both figures show the parameter space required to observe \(N_{\gamma}=100\), \(N_{\gamma}=1000\), and \(N_{\gamma}=5000\) events, respectively. Below, we discuss all other constraints shown in figs. 5 and 6.

Figure 3: The black dashed line corresponds to the mean free path of neutrino for the elastic scattering of an incident astrophysical neutrino with CMB neutrinos for interaction coupling \(g_{\nu_{\mu}}=0.01\). The red solid line shows the mean free path of neutrino for \(g_{\nu_{\mu}}=g_{\mu}=0.01\) when the interaction of an incident astrophysical neutrino with CMB neutrinos produces gamma rays. The dip in the curve shows the resonance enhancement of the cross-section at \(s\approx 4m_{\phi}^{2}\).
Constraints from Hubble Tension: As discussed in [1], the strength of the neutrino self-interaction required to obtain the right value of the Hubble constant can be categorized into two regimes, dubbed strongly interacting neutrinos (SI\(\nu\)) and moderately interacting neutrinos (MI\(\nu\)). The values of \(G_{\mathit{eff}}\) in the two regimes are given as:
\[G_{\mathit{eff}}=\begin{cases}(4.7^{+0.4}_{-0.6}\ \mathrm{MeV})^{-2},&\mathrm{SI }\nu\\ (89^{+171}_{-61}\ \mathrm{MeV})^{-2},&\mathrm{MI}\nu.\end{cases} \tag{17}\]
The resulting parameter space along with the Hubble tension constraints is shown separately in figs. 5(b) and 6(b) for both cases by superimposing the allowed ranges of \(G_{\rm eff}\) as blue-shaded and green-shaded bands, respectively.
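Assuming the usual definition \(G_{\rm eff}=g_{\nu_{\mu}}^{2}/m_{\phi}^{2}\) used in the self-interacting-neutrino literature (e.g., [1]), the central values of eq. (17) translate into couplings as in the short sketch below; the mediator masses chosen are arbitrary illustration points, not values from the text.

```python
import numpy as np

def g_nu_from_Geff(scale_MeV, m_phi_MeV):
    """G_eff quoted as (scale_MeV)^-2  ->  g_nu = m_phi * sqrt(G_eff)."""
    return m_phi_MeV * np.sqrt(1.0 / scale_MeV**2)

for label, scale in [("SI-nu", 4.7), ("MI-nu", 89.0)]:
    for m_phi in (0.5, 1.0, 5.0):               # mediator masses in MeV (arbitrary)
        print("%s: m_phi = %.1f MeV -> g_nu = %.2e"
              % (label, m_phi, g_nu_from_Geff(scale, m_phi)))
```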
Various other cosmological and laboratory constraints: The new interaction between neutrinos and the scalar mediator allows the light scalar mediator to be in thermal equilibrium before the onset of neutrino decoupling, thereby affecting the total number of relativistic degrees of freedom (\(\Delta N_{\mathrm{eff}}\)) in the universe. Therefore, the requirement \(\Delta N_{\mathrm{eff}}\lesssim 0.5\) puts a bound on the mass of a real scalar mediator of \(m_{\phi}\gtrsim 0.16\) MeV [53]. The ruled-out region is shown as a light-red shaded band in both figs. 5 and 6. The laboratory constraints originate from the possible decay of the kaon to the light scalar, \(K\to\mu\nu_{\mu}\phi\). The experimental bounds on the kaon decay rate therefore also put a bound on the coupling \(g_{\nu_{\mu}}\) for \(m_{\phi}\lesssim m_{\mu}\) [54]; the ruled-out parameter space from this bound is shown as a light-blue shaded region in both figures. In the presence of a new scalar mediator coupled to the muon, the experimentally allowed value of \(\Delta a_{\mu}=(g-2)_{\mu}/2\) puts a bound on the coupling parameter \(g_{\mu}\). Therefore, for the case \(g_{\nu_{\mu}}=g_{\mu}\) shown in figs. 5(a) and 5(b), we have considered constraints from the updated value of \((g-2)_{\mu}\) [47]; the allowed region is shown as a purple-shaded band in figs. 5(a) and 5(b). Similarly, the interaction coupling \(g_{\mu}\) is also constrained by beam-dump experiments [55]; the gray shaded region in figs. 5(a) and 5(b) excludes the corresponding parameter space. For \(g_{\nu_{\mu}}\neq g_{\mu}\), we have fixed the value \(g_{\mu}=5\times 10^{-4}\), which is consistent with the allowed experimental value of \(\Delta a_{\mu}\). Thus, the bound from \((g-2)_{\mu}\) does not apply to \(g_{\nu_{\mu}}\) in figs. 6(a) and 6(b), and the bound from beam-dump experiments also does not apply in this second case.
Finally, we find that a tiny amount of parameter space remains available for \(N_{\gamma}=100\) events in the case \(g_{\nu_{\mu}}=g_{\mu}\), which is also consistent with the region allowed by \((g-2)_{\mu}\). In this case, however, the region favored by the Hubble tension requirement is ruled out by all other constraints. For \(g_{\nu_{\mu}}\neq g_{\mu}\), parameter space remains available for \(N_{\gamma}\sim(100-5000)\) events. Interestingly, some of the parameter space for \(N_{\gamma}=5000\) events is also consistent with the MI\(\nu\) range allowed by the Hubble tension requirement and is free from other laboratory and cosmological constraints. Thus, we conclude that the scattering of astrophysical neutrinos with CMB neutrinos can be the origin of the high-energy events observed from the GRB, and part of the allowed region of \(g_{\nu_{\mu}}\) is also consistent with the values favored by the Hubble tension requirement and the \((g-2)_{\mu}\) discrepancy.
## IV Concluding remarks
In the last few years, neutrino astronomy has turned out to be extremely useful in providing a more comprehensive understanding of the universe. In this work, we have emphasized the role of neutrinos in explaining the high-energy gamma-ray events observed in GRB221009A through non-standard self-interactions of neutrinos. The model of self-interacting neutrinos is primarily motivated in cosmology by the need to resolve the discrepancy between the value of the Hubble constant measured by CMB observations and by low-redshift experiments. Including this interaction can delay the epoch of neutrino decoupling and also modify the CMB power spectrum obtained in the \(\Lambda\)CDM model. As a result, the comparison of the modified CMB power spectrum with the measured CMB power spectrum allows a higher value of the Hubble constant in the \(\Lambda\)CDM model, thus reducing the Hubble tension. A detailed CMB analysis allows only a rather specific range of the self-interaction coupling versus the mass of the mediator. In this work, we assume that the same interaction of the scalar boson with neutrinos can also produce secondary gamma rays from the scattering of astrophysical neutrinos with CMB neutrinos, provided the new scalar mediator interacts both with muon neutrinos and with their leptonic partner, the muon. The interaction of the scalar with muons is independently motivated by the discrepancy between the theoretical and experimental values of \((g-2)_{\mu}\). In essence, the interaction of an astrophysical neutrino with a CMB neutrino produces a scalar boson, which can further decay into a high-energy astrophysical photon and a CMB photon at the one-loop level through the interaction of the scalar mediator with muons.

Figure 4: The gray dashed line shows the unattenuated gamma-ray flux directly coming from the GRB. The black dashed line shows the attenuated gamma-ray flux directly coming from the GRB. The red, green, and black solid lines show the secondary flux of astrophysical high-energy gamma rays obtained from the scattering of astrophysical neutrinos with CMB neutrinos for different couplings \(g_{\nu_{\mu}}\) and \(g_{\mu}\), respectively. The mass of the mediator has been fixed to be \(m_{\phi}\approx 1\) MeV.
By considering a toy model of a light scalar interacting with muon neutrinos and muons, we have calculated the flux of high-energy astrophysical gamma rays produced through such a process and shown that one can obtain the required number of gamma-ray events from GRB221009A without the flux being attenuated while traveling to Earth. In fact, part of the resulting parameter space of the self-interacting neutrino coupling is also in agreement with the parameter space obtained from the Hubble tension requirement, lies in the allowed \((g-2)_{\mu}\) region, and is free from other laboratory and cosmological constraints. Thus, strong self-interactions between neutrinos in the astrophysical environment can explain the origin of the high-energy gamma-ray events observed in GRB221009A. In the future, as CMB observations become more precise, the constraints on the allowed region of neutrino self-interactions will become more stringent. It will therefore be interesting to study their consequences in astrophysical processes. In fact, the exotic high-energy astrophysical gamma rays emitted from the scattering of neutrinos can act as a complementary probe to unveil the sources of high-energy astrophysical neutrino events.
Figure 5: The dashed, dotted, and solid black curves correspond to the parameter space required to observe \(N_{\gamma}=100\), \(N_{\gamma}=1000\), and \(N_{\gamma}=5000\) events, respectively. The light-red shaded region represents the parameter space ruled out by constraints from BBN [53]. The light-blue shaded region shows the parameter space excluded by the constraint on the branching ratio of the kaon decay \(K\rightarrow\mu\nu_{\mu}\phi\) [54]. The gray shaded region excludes the parameter space from beam-dump experiments [55]. In the right-hand figure, the blue and green shaded bands correspond to the MI\(\nu\) and SI\(\nu\) regions allowed by Hubble tension constraints [1]. In the left-hand figure, we can see that the small amount of parameter space available for \(N_{\gamma}=100\) events is also consistent with the allowed region from \((g-2)_{\mu}\) [47].
## Acknowledgments
MD would like to acknowledge support through the DST-Inspire Faculty Fellowship of the Department of Science and Technology (DST), Government of India under the Grant Agreement number: IFA18-PH215. MD would also like to thank the organizers of "Annual Theory Discussion days-2023" held at PRL Ahmedabad, India where preliminary results of this work were presented.
|
2303.18074 | Instantaneous vacuum and States of Low Energy for a scalar field in
cosmological backgrounds | We construct the instantaneous vacuum state for a quantum scalar field
coupled to another classical scalar field as described in [1]. We then compare
it with the state of low energy constructed for a particular solution. We show
that under physically motivated conditions they become very similar. | Antonio Ferreiro, Silvia Pla | 2023-03-31T14:06:09Z | http://arxiv.org/abs/2303.18074v1 | # Instantaneous vacuum and States of Low Energy for a scalar field in cosmological backgrounds
###### Abstract
We construct the instantaneous vacuum state for a quantum scalar field coupled to another classical scalar field as described in [1]. We then compare it with the state of low energy constructed for a particular solution. We show that under physically motivated conditions they become very similar.
## 1 Introduction
In our current cosmological model, quantum effects of matter play a key role in our understanding of the early Universe. In particular, during the inflationary period and the subsequent reheating era, the generation of quantum fluctuations and particle production are ubiquitous. The best framework to study these effects is quantum field theory in curved spacetime, which in the last decades has been constructed in a mathematically consistent way. However, many of the relevant results have been established within a methodology that is rather impractical for actual numerical calculations and simulations, which are fundamental in most cosmological scenarios.
One example of this is the consistent characterization of suitable vacuum states in terms of the so-called Hadamard condition [2, 3]. This is a requirement on the singular behavior of the two-point correlation function at separate spacetime points. By satisfying this condition, the Wick polynomials of any degree can be guaranteed to exist, allowing the perturbative expansion of an interacting theory to be well-defined at all orders [4, 5, 6]. For the case of cosmological spacetimes, an equivalent notion has been proposed in terms of the behavior of the two-point function in the limit \(k\to\infty\)[7]. This condition, known as the adiabatic condition, has been shown to be equivalent to the Hadamard condition in the limit of infinite adiabatic order [8, 9, 10].
Obtaining a vacuum state that satisfies the Hadamard condition is, in general, a challenging task. However, in cosmological spacetimes one can successfully construct vacuum states that satisfy this condition by means of the low energy states, introduced for the first time in Ref. [11]. In this framework, the vacuum is defined as the state that minimizes the vacuum expectation |
2301.00042 | Quantifying the Expressive Capacity of Quantum Systems: Fundamental
Limits and Eigentasks | The expressive capacity of quantum systems for machine learning is limited by
quantum sampling noise incurred during measurement. Although it is generally
believed that noise limits the resolvable capacity of quantum systems, the
precise impact of noise on learning is not yet fully understood. We present a
mathematical framework for evaluating the available expressive capacity of
general quantum systems from a finite number of measurements, and provide a
methodology for extracting the extrema of this capacity, its eigentasks.
Eigentasks are a native set of functions that a given quantum system can
approximate with minimal error. We show that extracting low-noise eigentasks
leads to improved performance for machine learning tasks such as
classification, displaying robustness to overfitting. We obtain a tight bound
on the expressive capacity, and present analyses suggesting that correlations
in the measured quantum system enhance learning capacity by reducing noise in
eigentasks. These results are supported by experiments on superconducting
quantum processors. Our findings have broad implications for quantum machine
learning and sensing applications. | Fangjun Hu, Gerasimos Angelatos, Saeed A. Khan, Marti Vives, Esin TΓΌreci, Leon Bello, Graham E. Rowlands, Guilhem J. Ribeill, Hakan E. TΓΌreci | 2022-12-30T20:15:31Z | http://arxiv.org/abs/2301.00042v2 | # Fundamental Limits to Expressive Capacity of Finitely Sampled Qubit-Based Systems
###### Abstract
The expressive capacity for learning with quantum systems is fundamentally limited by the quantum sampling noise incurred during measurement. While studies suggest that noise limits the resolvable capacity of quantum systems, its precise impact on learning remains an open question. We develop a framework for quantifying the expressive capacity of qubit-based systems from finite numbers of projective measurements, and calculate a tight bound on the expressive capacity and the corresponding accuracy limit that we compare to experiments on superconducting quantum processors. We uncover the native function set a finitely-sampled quantum system can approximate, called eigentasks. We then demonstrate how low-noise eigentasks improve performance for tasks such as classification in a way that is robust to noise and overfitting. We also present experimental and numerical analyses suggesting that entanglement enhances learning capacity by reducing noise in eigentasks. Our results are broadly relevant to quantum machine learning and sensing applications.
Footnote †: These two authors contributed equally
## I Introduction
Learning with quantum systems is a promising application of near-term quantum processors, with several recent demonstrations in both quantum machine learning (QML) [1; 2; 3; 4; 5] and quantum sensing [6; 7; 8]. A broad class of such data-driven applications proceed by embedding data into the evolution of a quantum system, where the embedding, dynamics, and extracted outputs via measurement are all governed by a set of general parameters \(\mathbf{\theta}\). Depending on the learning scheme, different components of this general framework may be trained for optimal performance of a given task. Irrespective of the scheme, however, the fundamental role of the quantum system is that of a high-dimensional feature generator. Given inputs \(\mathbf{u}\), a set of frequencies for the occurrence of different measurement outcomes act as a feature vector to learn a function \(f(\mathbf{u})\) that minimizes the chosen loss function (see Fig. 1). The relationship between the physical structure of the model and the function classes that can be expressed with high accuracy, referred to as _expressivity_, is a fundamental question of basic importance to the success of quantum models. Recent results have begun to shed light on this important question and provide guidance on the choice of parameterized quantum models [9; 10; 11; 12; 13; 14; 15; 16]. Yet when it comes to experimental implementations, the presence of noise is found to substantially curtail theoretical expectations for performance [1; 2; 3].
Given an input \(\mathbf{u}\) to a general dynamical system, we define its Expressive Capacity (EC) as a measure of the accuracy with which \(K\) linearly independent functions \(\{f(\mathbf{u})\}\) of the input can be constructed from \(K\) readout features. This is a suitable generalization to noisy systems of the Information Processing Capacity introduced in Ref. [17]. A central challenge in determining the EC for _quantum_ systems is the fundamentally stochastic nature of measurement outcomes. Even when technical noise due to system parameter fluctuations is minimized as in an error-corrected quantum computer, there is a fundamental level of noise, the quantum sampling noise (QSN), which cannot be eliminated in learning with quantum systems. QSN therefore sets a fundamental limit to the EC of any physical system. Although QSN is well-understood theoretically, a formulation of its impact on learning is a challenging task as it is strongly determined by the quantum state of the system relative to the measurement basis, and is highly correlated when entanglement is present. Consequently, the impact of QSN is often ignored [18; 19; 20; 21] (with a few exceptions [14; 22]), even though it can place strong constraints on practical optimization [23] and performance [22]. In this article, we develop a mathematical framework to quantify the EC that exactly accounts for the structure of QSN, providing a tight bound for an \(L\)-qubit system under \(S\) measurements, and illustrate how a mathematical framework for its quantification can guide experimental design for QML applications.
Our work goes beyond simply defining the EC as a figure of merit, however. In particular, we offer a methodology to identify the native function set that is most accurately realizable by a given encoding under finite sampling. Equivalently, we show that this defines a construction of measured features that is optimally robust to noise in readout, thereby revealing how such a quantum system can be optimally employed for learning tasks. Finally, while the strength of the EC lies in its generality, we provide numerical examples that suggest that higher EC is typically indicative of improved performance on specific QML tasks. As such, the EC provides a metric whose optimization can be targeted for improved learning performance in a task-agnostic and parameter-independent manner.
This strategy for defining the noise-constrained EC naturally focuses on accessible noisy output features under a specified measurement scheme, as opposed to unmeasured degrees of freedom. This makes the EC an efficiently-computable quantity in practice, as we demonstrate using both numerical simulations and experiments on IBM Quantum's superconducting multi-qubit processors [24]. Our work also identifies enhancement in measurable quantum correlations as a general principle to increase the EC of quantum systems under finite sampling.
## II Learning with quantum systems
The most general approach to learning from data using a generic quantum system is depicted schematically in Fig. 1. A table with symbols and abbreviations used in the text can be found in Appendix A. For concreteness, we detail a specific realization for \(L\)-qubit systems that are measured projectively, which will be analyzed in the remainder of this work. Any learning scheme begins with embedding the data \(\mathbf{u}\) through a quantum channel parameterized by \(\mathbf{\theta}\) acting on a known initial state, \(\hat{\rho}(\mathbf{u};\mathbf{\theta})=\mathcal{U}(\mathbf{u};\mathbf{\theta})\hat{\rho}_{0}\). For an \(L\)-qubit quantum system, for example, we consider \(\hat{\rho}_{0}=\ket{0}\bra{0}^{\otimes L}\).
Any computation must be performed using outputs extracted from the quantum system via measurements in a specified basis parameterized by \(K\) operators \(\{\hat{M}_{k}\}\), \(k=0,\cdots,K-1\). For a projectively measured \(L\)-qubit system, the measurement basis is defined by the \(K=2^{L}\) projectors \(\hat{M}_{k}=\ket{\mathbf{b}_{k}}\!\bra{\mathbf{b}_{k}}\) corresponding to measurement of bitstrings \(\mathbf{b}_{k}\). A single measurement or "shot" yields a discrete outcome \(\mathbf{b}^{(s)}(\mathbf{u})\) for each observable: if the outcome of shot \(s\) is state \(k\), then \(\mathbf{b}^{(s)}(\mathbf{u})\leftarrow\mathbf{b}_{k}\). Measured features are then constructed by ensemble-averaging over \(S\) repeated shots: \(\bar{X}_{k}(\mathbf{u})=1/S\sum_{s}\delta(\mathbf{b}^{(s)}(\mathbf{u}),\mathbf{b}_{k})\). Hence \(\bar{X}_{k}(\mathbf{u})\) in this case is the measured frequency of occurrence of the bitstring \(\mathbf{b}_{k}\) in \(S\) repetitions of the experiment with the same input \(\mathbf{u}\). These measured features are formally random variables that are unbiased estimators of the expected value of the corresponding observable as computed from \(\hat{\rho}(\mathbf{u})\): explicitly,
\[\lim_{S\rightarrow\infty}\bar{X}_{k}(\mathbf{u})\equiv x_{k}(\mathbf{u})=\mathrm{Tr} \{\hat{M}_{k}\hat{\rho}(\mathbf{u};\mathbf{\theta})\}, \tag{1}\]
so that \(x_{k}\) is the quantum mechanical probability of occurrence of the \(k\)th bitstring.
In QML theory, it is standard to consider the limit \(S\rightarrow\infty\), and to thus use expected features \(\{x_{k}(\mathbf{u})\}\) for learning. However, for any practical implementation, measured features \(\{\bar{X}_{k}(\mathbf{u})\}\) must be constructed under finite \(S\), in which case their fundamentally quantum-stochastic nature can no longer be ignored. This quantum sampling noise, like any other source of noise, can unsurprisingly limit the EC. Completely unlike classical noise sources however, the statistics of quantum sampling noise are strongly determined by the state of the quantum system itself. This leads to a rich noise structure that changes dramatically based on, for example, the entanglement of the generated quantum state, as depicted in Fig. 1. In this work, we exactly account for this structure of quantum sampling noise to quantify its fundamental impact on EC. By further leveraging the complexity and quantum state dependence of sampling noise, we provide a practical, experimentally applicable methodology that maximizes the capacity for learning functions using finitely-sampled quantum systems, and also avoids overfitting in ML tasks.
We begin by observing that \(\bar{\mathbf{X}}\) are samples from a multinomial distribution with \(S\) trials and \(K=2^{L}\) categories, which can be decomposed into their expected value and an input-dependent sampling noise:
\[\bar{\mathbf{X}}(\mathbf{u})=\mathbf{x}(\mathbf{u})+\frac{1}{\sqrt{S}}\mathbf{\zeta}(\mathbf{u}), \tag{2}\]
where \(\mathbf{\zeta}(\mathbf{u})\) is a zero-mean random vector obeying multinomial statistics. As discussed in Appendix B and C, what makes quantum systems special is the fundamental relationship between the noise \(\mathbf{\zeta}(\mathbf{u})\) and the 'signal' \(\mathbf{x}(\mathbf{u})\). Precisely, the covariance \(\mathbf{\Sigma}(\mathbf{u})\) of \(\mathbf{\zeta}(\mathbf{u})\) depends on the generated quantum state: \(\mathbf{\Sigma}_{kk^{\prime}}(\mathbf{u})=\mathrm{Tr}\{\hat{M}_{k}\hat{M}_{k^{\prime}}\hat{\rho}(\mathbf{u})\}-\mathrm{Tr}\{\hat{M}_{k}\hat{\rho}(\mathbf{u})\}\mathrm{Tr}\{\hat{M}_{k^{\prime}}\hat{\rho}(\mathbf{u})\}\). This _quantum covariance_ of the measured observables therefore comprises non-linear functions of the signal \(\mathbf{x}(\mathbf{u})\) itself; at a given \(S\), we will show that this allows for more information to be extracted from systems with more quantum correlations between observables. Note that \(\mathbf{\zeta}\) can be straightforwardly modified to include other noise sources, such as gate or measurement errors (see Appendix B.2), with \(1/\sqrt{S}\) then interpreted as a general noise strength. However, our focus here remains on quantum sampling noise and its fundamental role in learning with quantum systems.

Figure 1: (a) Representation of the learning framework considered in this work: inputs \(\mathbf{u}\) are transformed to a set of outputs via a feature generator, here implemented using a finitely-sampled quantum system as shown in (b). Inputs are encoded in the state of a quantum system via a general quantum channel \(\mathcal{U}\). Information is extracted from the quantum system via projective measurements in the computational basis. The geometric structure of the quantum sampling noise in the high-dimensional measured feature space can strongly depend on the encoding, and the degree of entanglement generated upon parametric evolution. The learning scheme discussed in the present work optimally leverages the geometric structure of correlated noise. This framework describes a wide range of practical quantum systems, from quantum circuits used in QML, to quantum annealers exhibiting continuous evolution, and beyond, all defined by general parameters \(\mathbf{\theta}\). As shown in (a), learned estimates for desired functions are constructed via a trained linear estimator \(\hat{\mathbf{w}}\) applied to \(K\) measured observables \(\bar{\mathbf{X}}\) of the quantum system, with a resolution limited by quantum sampling noise with finite shots \(S\). Capacity then quantifies the error in the approximation of a target function via this scheme.
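To make the sampling statistics of Eq. (2) concrete, the following sketch draws measured features for a toy bitstring distribution and checks the empirical covariance of \(\mathbf{\zeta}\) against the multinomial form quoted above. The probability function used is a made-up placeholder, not an actual encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
L, S = 3, 2**10
K = 2**L                                   # number of bitstring outcomes

def probs(u):
    # placeholder bitstring probabilities x(u) (illustration only)
    z = np.cos((1.0 + u) * np.arange(1, K + 1))**2 + 0.05
    return z / z.sum()

def measured_features(u):
    """One realisation of Eq. (2): Xbar(u) = x(u) + zeta(u)/sqrt(S)."""
    return rng.multinomial(S, probs(u)) / S

u = 0.3
x = probs(u)
# For orthogonal projectors, Tr{M_k M_k' rho} = delta_kk' x_k, so the quantum
# covariance reduces to the multinomial form diag(x) - x x^T.
Sigma = np.diag(x) - np.outer(x, x)

samples = np.array([measured_features(u) for _ in range(4000)])
zeta = np.sqrt(S) * (samples - x)          # finite-S noise realisations
Sigma_emp = zeta.T @ zeta / len(zeta)      # empirical covariance (zeta has zero mean)
print("max |Sigma_emp - Sigma| =", round(np.abs(Sigma_emp - Sigma).max(), 4))
```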
The use of such a quantum system for the learning of functions under finite sampling is then depicted schematically in Fig. 1. For a target function \(f(\mathbf{u})\), an approximation \(f_{\mathbf{W}}(\mathbf{u})\) is obtained via a linear (for reasons clarified shortly) estimator applied to readout features under finite \(S\), \(f_{\mathbf{W}}(\mathbf{u})=\mathbf{W}\cdot\bar{\mathbf{X}}(\mathbf{u})\), where \(\bar{\mathbf{X}}=(\bar{X}_{0},\ldots,\bar{X}_{K-1})^{T}\). To quantify the fidelity of this approximation, we introduce the capacity [14; 17; 20] to construct the target function as the minimum achievable (normalized) mean squared error between the target and its estimate:
\[C[f]=1-\min_{\mathbf{W}\in\mathbb{R}^{K}}\frac{\mathbb{E}_{\mathbf{u}}[|f(\mathbf{u})-f_{ \mathbf{W}}(\mathbf{u})|^{2}]}{\mathbb{E}_{\mathbf{u}}[|f(\mathbf{u})|^{2}]}. \tag{3}\]
In the above, \(\mathbb{E}_{\mathbf{u}}\) refers to the expected value with respect to an input distribution \(p(\mathbf{u})\) over a compact input domain, which can be continuous or discrete: \(\mathbb{E}_{\mathbf{u}}[f]\equiv\int\mathrm{d}\mathbf{u}\,p(\mathbf{u})f(\mathbf{u})\simeq \frac{1}{N}\sum_{n}f(\mathbf{u}^{(n)})\) for i.i.d. sampling obeying \(\mathbf{u}^{(n)}\sim p(\mathbf{u})\) for all \(n\in[N]\). Minimizing error in the approximation of \(f(\mathbf{u})\) by \(f_{\mathbf{W}}(\mathbf{u})\) over the input domain to determine capacity thus requires finding \(\tilde{\mathbf{w}}=\operatorname*{argmin}_{\mathbf{W}}\mathbb{E}_{\mathbf{u}}[|f-f_{\mathbf{W }}(\mathbf{u})|^{2}]\) (via a resource-efficient pseudoinverse). This capacity is constructed such that \(0\leq C[f]\leq 1\).
The choice of a linear estimator and a mean squared error loss function may appear restrictive at first glance, but the generality of our formalism averts such limitations. For example, the use of a linear estimator applied directly to readout features precludes classical nonlinear post-processing of measurements; however, this is simply to ensure the calculated capacity is a measure of the quantum system itself, and not of a classical nonlinear layer. Importantly, our formalism is general enough to incorporate such processing in a calculation of capacity, via a simple redefinition of readout features \(\bar{\mathbf{X}}\). Hence the use of a linear estimator does not necessarily lose generality. Secondly, while higher-order loss functions may be used, the mean squared loss effectively describes the Taylor expansion of a wide range of loss functions (see Appendix C.5).
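A minimal sketch of Eq. (3) is shown below: noisy features are generated for a toy feature map, the optimal linear estimator is obtained by least squares (i.e., a pseudoinverse), and the capacity for a single target function is estimated empirically over sampled inputs. The feature map and the target function are placeholders chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
K, S, N = 8, 2**10, 400

def probs(u):
    # placeholder feature map x(u) for a 3-qubit system (illustration only)
    z = np.cos((1.0 + u) * np.arange(1, K + 1))**2 + 0.05
    return z / z.sum()

def features(u):
    return rng.multinomial(S, probs(u)) / S       # noisy measured features

u = rng.uniform(-1.0, 1.0, N)                     # inputs drawn from p(u)
Xbar = np.array([features(ui) for ui in u])       # N x K feature matrix
f = np.sin(2.5 * u)                               # target function f(u)

w, *_ = np.linalg.lstsq(Xbar, f, rcond=None)      # optimal linear estimator
f_hat = Xbar @ w
C = 1.0 - np.mean((f - f_hat)**2) / np.mean(f**2) # empirical version of Eq. (3)
print("estimated capacity C[f] =", round(C, 3))
```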
To extend the notion of capacity to a task-independent measure of the expressivity of a physical system, we can evaluate the function capacity over a complete orthonormal set of basis functions \(\{f_{\ell}\}_{\ell\in\mathbb{N}}\), equipped with the inner product \(\langle f_{\ell},f_{\ell^{\prime}}\rangle_{p}=\int_{-1}^{1}f_{\ell}(\mathbf{u})f_{ \ell^{\prime}}(\mathbf{u})p(\mathbf{u})\mathrm{d}\mathbf{u}=\delta_{\ell\ell^{\prime}}\). The _total Expressive Capacity_ (EC) is then \(C_{T}\equiv\sum_{\ell=0}^{\infty}C[f_{\ell}]\), which effectively quantifies how many linearly-independent functions can be expressed from a linear combination of \(\{\bar{X}_{k}(\mathbf{u})\}\). Our main result, which is proven in Appendix C.4, is that the EC for an \(L\)-qubit system whose readout features are stochastic variables of the form of Eq. (2) is given by
\[C_{T}(\mathbf{\theta})=\mathrm{Tr}\left(\!\left(\mathbf{G}+\frac{1}{S}\mathbf{V} \right)^{\!-\!1}\!\mathbf{G}\right)=\sum_{k=1}^{K}\frac{1}{1+\beta_{k}^{2}(\bm {\theta})/S}. \tag{4}\]
The first equality is written in terms of the expected feature Gram and covariance matrices \(\mathbf{G}\equiv\mathbb{E}_{\mathbf{u}}[\mathbf{x}\mathbf{x}^{T}]\) and \(\mathbf{V}\equiv\mathbb{E}_{\mathbf{u}}[\mathbf{\Sigma}]\) respectively; we later demonstrate that these expected quantities can be accurately estimated under finite \(S\) sampling. The second equality expresses the total capacity in a finite-dimensional linear space, in terms of the eigenvalues \(\{\beta_{k}^{2}\}_{k\in[K]}\) satisfying the generalized eigenvalue problem \(\mathbf{V}\,\mathbf{r}^{(k)}=\beta_{k}^{2}\mathbf{G}\mathbf{r}^{(k)}\). Inspecting this expression, we first note that it is independent of the particular set \(\{f_{\ell}\}_{\ell\in\mathbb{N}}\) chosen, which would have required an evaluation over an infinite set of functions and its numerical evaluation therefore would be subject to inaccuracies due to truncation [17]. Instead, \(C_{T}\) can be interpreted as the sum of capacities to construct \(K\) individual functions living in an otherwise infinite-dimensional function space; the identity of these special functions is closely connected with the generalized eigenvectors \(\{\mathbf{r}^{(k)}\}\), and will be clarified shortly. Secondly, in the absence of noise, \(\lim_{S\rightarrow\infty}C_{T}=\mathrm{Rank}\{\mathbf{G}\}=K=2^{L}\), provided no special symmetries exist (see Appendix C.6). Such theoretical exponential growth in expressive capacity with \(L\) is often cited as a motivator for ML on quantum systems [20; 14; 25]. From the perspective of infinite-shot capacity, this also implies that all \(L\)-qubit systems with \(K\) measured features are equivalent, regardless of encoding. Such universality has also been pointed out for classical dynamical systems subject to zero input and output noise [17].
However, our expression for \(C_{T}\) is also valid for any _noisy_ physical system, corresponding to finite \(S\). In particular, Eq. (4) shows that the EC of a qubit-based physical system satisfies \(C_{T}\leq K\) at finite \(S\), and can be fully characterized in terms of the spectrum \(\{\beta_{k}^{2}\}\), which is ultimately determined by parameters \(\mathbf{\theta}\) governing the physical system and embedding via the Gram (\(\mathbf{G}\)) and covariance (\(\mathbf{V}\)) matrices. Related characterizations of noise-constrained capacity have been attempted for Gaussian quantum systems [22], but to our knowledge no precise formulation exists that also encompasses non-Gaussian systems such as qubit systems. Furthermore, from the perspective of capacity, what makes one embedding or physical system different from another is simply its ability to accurately express functions in the presence of noise. Our expression for \(C_{T}\) thus provides a general, comprehensive, and straightforward metric to assess and compare this capacity across physical systems and their associated embedding under finite \(S\).
Furthermore, via the associated eigenvectors \(\{\mathbf{r}^{(k)}\}\), our analysis uncovers a finite set of orthogonal functions native to a particular encoding that is maximally resolvable through \(S\) measurements. This set of \(K\) orthonormal functions, the _eigentasks_\(y^{(k)}(\mathbf{u})=\sum_{j}r_{j}^{(k)}x_{j}(\mathbf{u})\), can be estimated from measured readout features as described in Appendix D.1. The eigentasks characterize an ordered set of functions that can be constructed with mean squared error \(\beta_{k}^{2}/S\), leading to a natural interpretation of \(\beta_{k}^{2}\) as noise-to-signal (NSR) eigenvalues, determined by fundamental sampling noise. As we will show, this experimentally extractable information can be utilized for optimal learning (with minimal degrees of freedom) with a noisy quantum system.
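The quantities entering Eq. (4) can be assembled directly, as in the sketch below: the Gram and covariance matrices are estimated over sampled inputs, the generalized eigenproblem \(\mathbf{V}\,\mathbf{r}^{(k)}=\beta_{k}^{2}\mathbf{G}\mathbf{r}^{(k)}\) is solved, and the EC and eigentasks follow. For clarity the sketch uses expected features from a placeholder map; the finite-\(S\) estimation actually used in this work is described in Appendix D.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
K, N, S = 8, 2000, 2**10

def probs(u):
    # placeholder bitstring probabilities x(u); stands in for a real encoding
    z = np.cos((1.0 + u) * np.arange(1, K + 1))**2 + 0.05
    return z / z.sum()

U = rng.uniform(-1.0, 1.0, N)
X = np.array([probs(u) for u in U])              # expected features x(u), N x K

G = X.T @ X / N                                  # Gram matrix E[x x^T]
V = np.diag(X.mean(axis=0)) - G                  # E[Sigma] for projective readout

beta2, R = eigh(V, G + 1e-12 * np.eye(K))        # V r = beta^2 G r (tiny ridge)
C_T = np.sum(1.0 / (1.0 + beta2 / S))            # Eq. (4)
print("NSR eigenvalues:", np.round(beta2, 2))
print("C_T at S = %d: %.2f (upper bound K = %d)" % (S, C_T, K))

Y = X @ R        # eigentasks y^(k)(u) = sum_j r_j^(k) x_j(u), lowest NSR first
```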
## III Experimental results
To demonstrate the above results in practice, we now show how the spectrum \(\{\beta_{k}^{2}\}\), the EC, and eigentasks can all be computed for real quantum devices in the presence of parameter fluctuations and device noise.
We emphasize at the outset that our approach for quantifying the EC of a quantum system is very general, and can be applied to a variety of quantum system models. For practical reasons, we perform experiments on IBM Quantum (IBMQ) processors, whose dynamics is described by a parameterized quantum circuit containing single and two-qubit gates. However, as an example of the general validity of our approach, in Appendix E we compute the EC for \(L\)-qubit quantum annealers via numerical simulations, governed by the markedly different model of continuous-time Hamiltonian dynamics.
On IBMQ devices, resource limitations restrict our computation of EC to 1D inputs \(u\) that are uniformly distributed, \(p(u)=\mathrm{Unif}[-1,1]\), see Fig. 2(a). We emphasize that this analysis can be straightforwardly extended to multi-dimensional and arbitrarily-distributed inputs given suitable hardware resources, without modifying the form of the Gram and covariance matrices.
We are only now required to specify the model of the \(L\)-qubit system in Eq. (1), which has been left completely general thus far. The specific ansatz we consider is tailored to be natively implementable on IBMQ processors; more general ansatz can also be considered (see Appendix B). It consists of \(\tau\in\mathbb{N}\) repetitions of the same input-dependent circuit block depicted in Fig. 2(a). The block itself is of the form \(\mathcal{R}_{x}(\mathbf{\theta}^{x}/2)\mathcal{W}(J)\mathcal{R}_{z}(\mathbf{\theta}^ {z}+\mathbf{\theta}^{I}u)\mathcal{R}_{x}(\mathbf{\theta}^{x}/2)\), where \(\mathcal{R}_{x/z}\) are Pauli-rotations applied qubit-wise, e.g. \(\mathcal{R}_{z}=\prod_{l}R_{z}(\theta_{l}^{z}+\theta_{l}^{l}u)\). The entangling gate acts between physically connected qubits in the device and can be written as \(\mathcal{W}(J)=\prod_{(l,l^{\prime})}\exp\{-i\frac{J}{2}\hat{\sigma}_{l}^{z} \hat{\sigma}_{l^{\prime}}^{z}\}\).
Note that for this ansatz, the choice \(J=0\pmod{\pi}\) yields either \(\mathcal{W}=\tilde{I}\) or \(\hat{\sigma}^{z}\otimes\hat{\sigma}^{z}\), both of which ensure \(\hat{\rho}(u)\) is a product state and measured features are simply products of uncorrelated individual qubit observables - equivalent to a noisy classical system. Starting from this _product system_ (PS), tuning the coupling \(J\neq 0\pmod{\pi}\) provides a controllable parameter to realize an _entangled system_ (ES). This control enables us to address a natural question regarding EC of quantum systems under finite \(S\): what is the dependence of EC and realizable eigentasks on \(J\), and hence on quantum correlations?
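A small statevector sketch of this ansatz is given below, assuming a simple chain connectivity for \(\mathcal{W}(J)\) (the device uses its own coupling map) and the standard convention \(R_{x/z}(\phi)=e^{-i\phi\sigma^{x/z}/2}\); it returns the bitstring probabilities \(x_{k}(u)\) for small \(L\). The random parameter ranges mirror those quoted in the caption of Fig. 2.

```python
import numpy as np

rng = np.random.default_rng(3)
L, tau, J = 4, 3, np.pi / 2
theta_x = rng.uniform(0, 2 * np.pi, L)
theta_z = rng.uniform(0, 2 * np.pi, L)
theta_I = rng.uniform(0, 10 * np.pi, L)

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_at(op, l):
    """Embed a single-qubit operator op on qubit l of the L-qubit register."""
    out = np.array([[1.0]])
    for m in range(L):
        out = np.kron(out, op if m == l else I2)
    return out

def rot(P, angle, l):
    """exp(-i angle/2 P_l) for a single-qubit Pauli P acting on qubit l."""
    Pl = kron_at(P, l)
    return np.cos(angle / 2) * np.eye(2**L) - 1j * np.sin(angle / 2) * Pl

def layer(u):
    """One block R_x(theta_x/2) W(J) R_z(theta_z + theta_I u) R_x(theta_x/2)."""
    U = np.eye(2**L, dtype=complex)
    for l in range(L):                    # first R_x(theta_x/2)
        U = rot(X, theta_x[l] / 2, l) @ U
    for l in range(L):                    # R_z(theta_z + theta_I u)
        U = rot(Z, theta_z[l] + theta_I[l] * u, l) @ U
    for l in range(L - 1):                # W(J): chain of exp(-i J/2 Z_l Z_{l+1})
        ZZ = kron_at(Z, l) @ kron_at(Z, l + 1)
        U = (np.cos(J / 2) * np.eye(2**L) - 1j * np.sin(J / 2) * ZZ) @ U
    for l in range(L):                    # second R_x(theta_x/2)
        U = rot(X, theta_x[l] / 2, l) @ U
    return U

def bitstring_probs(u):
    psi = np.zeros(2**L, dtype=complex)
    psi[0] = 1.0                          # |0...0>
    for _ in range(tau):
        psi = layer(u) @ psi
    return np.abs(psi)**2                 # x_k(u), k = 0..2^L - 1

print(np.round(bitstring_probs(0.3)[:4], 4))
```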
This calculation of EC requires extracting measured features from the quantum circuit under input \(u\), one example of which is shown for the IBMQ _ibmq_perth_ device in Fig. 2(a), for \(S=2^{14}\). For comparison, we also show ideal-device simulations (no device noise), where slight deviations are observed. The agreement with the experimental feature is improved when the effects of gate and readout errors, and qubit relaxation are included, hereafter referred to as "device noise" simulations, highlighting the non-negligible role of device errors.
The measured features under finite \(S\) are used to estimate the Gram and covariance matrices (see Appendix D), and to thus solve the eigenproblem for NSR eigenvalues \(\{\beta_{k}^{2}\}\). Typical NSR spectra computed for two random encodings on the device are shown in Fig. 2(b), for \(J=0\) (PS) and \(J=\pi/2\) (ES), together with spectra from device noise simulations, with which they agree well. We note that at lower \(k\), the device NSR eigenvalues are larger than those from ideal simulations, due to device noise contributions. For larger \(k\), device results deviate from the pure exponential increase (with order) seen in ideal simulations. The deviation is captured by device noise simulations and can therefore be attributed to device errors. The NSR spectra therefore can serve as effective diagnostic tools for quantum processors and encoding schemes. More examples will be provided later in the discussion.
The NSR spectra can be used to directly compute the EC of the corresponding quantum device for finite \(S\), via Eq. (4). As a rule of thumb, at a given \(S\) only NSR eigenvalues \(\beta_{k}^{2}\lesssim S\) contribute substantially to the EC. An NSR spectrum with a flatter slope therefore has more NSR eigenvalues below \(S\), which gives rise to a higher capacity. Fig. 2(b) shows that the ES generally exhibits an NSR spectrum with a flatter slope than the PS, yielding a larger capacity for function approximation across all sampled \(S\).

Figure 2: (a) IBMQ Perth device and quantum circuit schematic for the EC calculation, and for the classification task in Fig. 3. Here \(\tau=3\) layers, and random qubit rotation parameters are \(\theta_{l}^{x/z}\sim\mathrm{Unif}[0,2\pi]\) and \(\theta_{l}^{I}\sim\mathrm{Unif}[0,10\pi]\). On the right, the specific feature plotted is \(\bar{X}_{1}(u)=P_{000001}(u)\) for \(S=2^{14}\) shots. (b) Left panel: Device NSR spectrum \(\beta_{k}^{2}\) for ES, \(J=\pi/2\) (blue crosses) and PS, \(J=0\) (brown diamonds). Ideal (solid) and device noise (dashed) simulations are also shown. Note the agreement between device and simulation, along with the deviation from the more direct exponential growth of \(\beta_{k}^{2}\) with \(k\) seen in the ideal case, due to device errors. Right panel: \(C_{T}\) vs. \(S\) calculated from the left panel. At a given \(S\), the \(C_{T}\) can be approximated by performing the indicated sum over all \(\beta_{k}^{2}<S\). (c) EC (top panel) and ETC (lower panel) under \(S=2^{14}\) from the IBM device, and device noise simulations (dashed peach). Average metrics over 8 random encodings for device noise (solid peach) and ideal (solid gray) simulations are also shown. The \(S\rightarrow\infty\) EC of these encodings always attains the \(\max\{C_{T}\}=64\), indicated in dashed red.
To more precisely quantify the role of entanglement and quantum correlations in EC, we introduce the _expected total correlation_ (ETC) of the measured state over the input domain of \(u\)[26, 27],
\[\bar{\mathcal{T}}=\mathbb{E}_{u}\left[\sum_{l=1}^{L}\mathrm{S}(\hat{\rho}_{l}^ {M}(u))-\mathrm{S}(\hat{\rho}^{M}(u))\right], \tag{5}\]
where \(\hat{\rho}^{M}\) is the measured state: \(\hat{\rho}^{M}(u)\equiv\sum_{k}\hat{\rho}_{kk}(u)\ket{\mathbf{b}_{k}}\langle\mathbf{b} _{k}|\) and \(\mathrm{S}\) is the von Neumann entropy (see Appendix G). We now compute EC and ETC using \(S=2^{14}\) in Fig. 2(c) as a function of \(J\), together with both ideal and device noise simulations of the same. We note that product states by definition have \(\bar{\mathcal{T}}=0\)[28]; this is seen in ideal simulations for \(J=0\;(\text{mod}\;\pi)\). However, the actual device retains a small amount of correlation at this operating point, which is reproduced by device noise simulations. This can be attributed to gate or measurement errors as well as cross-talk, especially relevant for the transmon-based IBMQ platform with a parasitic always-on ZZ coupling.
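Since \(\hat{\rho}^{M}(u)\) is diagonal in the computational basis, Eq. (5) reduces to Shannon entropies of the bitstring distribution and of its single-qubit marginals, as in the sketch below. A placeholder distribution is used to keep the sketch self-contained; in practice \(x(u)\) would come from the encoding itself.

```python
import numpy as np

L = 3
K = 2**L
bits = (np.arange(K)[:, None] >> np.arange(L)[::-1]) & 1    # bitstring table

def shannon(p):
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))                           # entropy in nats

def total_correlation(x):
    """Integrand of Eq. (5) for one input: sum_l S(rho_l^M) - S(rho^M),
    with rho^M = diag(x) in the computational basis."""
    marginals = [np.bincount(bits[:, l], weights=x, minlength=2) for l in range(L)]
    return sum(shannon(m) for m in marginals) - shannon(x)

def probs(u):
    # placeholder bitstring distribution x(u); replace with a real encoding
    z = np.cos((1.0 + u) * np.arange(1, K + 1))**2 + 0.05
    return z / z.sum()

U = np.linspace(-1.0, 1.0, 400)
Tbar = np.mean([total_correlation(probs(u)) for u in U])    # E_u[...]
print("expected total correlation (nats):", round(Tbar, 4))
```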
With increasing \(J\), \(\bar{\mathcal{T}}\) increases and peaks around \(J\sim\pi/2\;(\text{mod}\;\pi)\); interestingly, \(C_{T}\) also peaks for the same coupling range. From the analogous plot of EC, we clearly see that at finite \(S\), increased ETC appears directly correlated with higher EC. We have observed very similar behaviour using completely different models of quantum systems (see Appendix Fig. 5[29, 30]). This indicates the utility of enhancing quantum correlations as a means of improving the general expressivity of quantum systems.
However, we see that at finite \(S\), even with increased quantum correlations, the maximum EC is still substantially lower than the upper bound of \(K=64\). Note that this remains true even for ideal simulations, and over several random encodings, so the underperformance cannot be attributed to device noise or poor ansatz choice respectively. These results clearly indicate that the resulting sampling noise at finite \(S\) is the fundamental limitation for QML applications on this particular IBM device, rather than other types of noise sources and errors.
## IV A Robust Approach to Learning
While we have demonstrated the EC as an efficiently-computable metric of general expressivity of a noisy quantum system, some important practical questions arise. First, does the general EC metric have implications for practical performance on _specific_ QML tasks? Secondly, given the limiting - and unavoidable - nature of correlated sampling noise, does the EC provide any insights on optimal learning using a particular noisy quantum system and the associated embedding?
Our formulation addresses both these important questions naturally, as we now discuss. Beyond being a simple figure of merit, we show in Appendix C that the EC is precisely the sum of capacities to approximate a particular set of orthogonal functions native to the given noisy quantum system: the eigentasks. Crucially, these eigentasks \(\bar{y}^{(k)}(u)=\sum r_{j}^{(k)}\bar{X}_{j}(u)\) can be directly estimated from a noisy quantum system via the generalized eigenvectors \(\{\mathbf{r}^{(k)}\}\), and are ordered by their associated NSR \(\{\beta_{k}^{2}\}\). We show a selection of estimated eigentasks from IBMQ, for an ES \((J=5\pi/3)\) and PS \((J=0)\) in Fig. 3(a). For both systems, the increase in noise with eigentask order is apparent when comparing two sampling values, \(S=2^{10}\) and \(S=2^{14}\). Furthermore, for any order \(k\), eigentasks for the PS are visibly noisier than those for the ES; this is consistent with NSR eigenvalues for the PS being larger than those for the ES, as seen in Fig. 2(b). This ability to more accurately resolve eigentasks provides a complementary perspective on the higher expressive capacity of the ES in comparison to the PS.
The resolvable eigentasks of a finitely-sampled quantum system are intimately related to its performance at specific QML applications. To demonstrate this result, we consider a concrete application: a binary classification task that is not linearly-separable. Samples \(u^{(n)}\), \(n\in[N]\), obeying the same distribution \(p(u)\) for \(u\in[-1,1]\) as considered for the EC evaluation, are separated into two classes, as depicted in Fig. 3(b). A selection of \(N_{\text{train}}=150\) total samples - with equal numbers from each class - are input to the IBMQ device, and readout features \(\bar{\mathbf{X}}(u^{(n)})\) are extracted using \(S=2^{14}\) shots. A linear estimator applied to these features is then trained using logistic regression to learn the class label associated with each input. Finally, the trained IBMQ device is used to predict class labels of \(N_{\text{test}}=150\) distinct input samples for testing.
This task can equivalently be cast as one of learning the likelihood function that discriminates the two input distribu
Figure 3: (a) Device eigentasks for ES (left) and PS (right), constructed from noisy features at \(S=2^{10}\) and \(S=2^{14}\). (b) Classification demonstration on IBMQ Perth. Binary distributions to be classified over the input domain are shown. (c) The classification task can be cast as learning the likelihood function separating the two distributions; this target function is shown in the upper panel. Lower panels show the trained estimate of this target using outputs from the ES and PS respectively, using \(K_{\mathrm{L}}=36\) eigentasks with \(S=2^{14}\).
tions, shown in Fig. 3(c), with minimum error. The set of up to \(K_{\mathrm{L}}\) eigentasks \(\tilde{y}^{(k)}(u)\), where \(K_{\mathrm{L}}\leq K\), serves as the native basis of readout features used to approximate _any_ target function using the quantum system. The noisier eigentasks of the PS therefore limit the accuracy with which it can be used to learn the target, in comparison to the ES. This is clear from the learned estimates shown in Fig. 3(c), using an equal number of \(K_{\mathrm{L}}=36\) eigentasks to ensure a fair comparison. The higher approximation capacity translates to improved classification performance, as we show via the training and testing classification accuracy in Fig. 4(a) for both ES and PS. We plot both as a function of the number of eigentasks \(K_{\mathrm{L}}\) used for learning, from which it is clear that the maximum testing accuracy using the ES exceeds that of the PS.
However, using eigentasks ordered by NSR reveals even more about learning using noisy quantum systems, and provides a path towards optimal learning. While intuition suggests that using more eigentasks can only be beneficial, weights learned when training with noisier eigentasks may not generalize well to unseen samples. For example, using all eigentasks (\(K_{\mathrm{L}}=K\)) yields a test accuracy far lower than that found in training. The observed deviation is a distinct signature of overfitting: the optimized estimator learns noise in the training set, and thus loses generalizability in testing. Crucially, an optimal number of eigentasks clearly emerges, around \(K_{\mathrm{L}}\simeq K_{c}(S)=\max\{k:\beta_{k}^{2}<S\}\), for which the NSR eigenvalue is closest to \(S\). Eigentasks \(k>K_{c}\) typically contribute more 'noise' to the function approximation task than 'signal'. Excluding these eigentasks therefore limits overfitting without adversely impacting performance.
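The truncation rule can be stated in a few lines of code: keep the \(K_{c}(S)\) eigentasks with \(\beta_{k}^{2}<S\) and train the linear classifier only on those. The sketch below uses a placeholder feature map and a synthetic binary task (scikit-learn assumed available); with a held-out set, using \(K_{\mathrm{L}}>K_{c}\) would exhibit the overfitting described above.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
K, S, N = 8, 2**10, 300

def probs(u):
    # placeholder bitstring probabilities (illustration only)
    z = np.cos((1.0 + u) * np.arange(1, K + 1))**2 + 0.05
    return z / z.sum()

def noisy_features(u):
    return rng.multinomial(S, probs(u)) / S      # finite-S measured features

U_train = rng.uniform(-1, 1, N)
X = np.array([probs(u) for u in U_train])
G = X.T @ X / N
V = np.diag(X.mean(axis=0)) - G
beta2, R = eigh(V, G + 1e-12 * np.eye(K))        # NSR spectrum and eigentask map

K_c = int(np.sum(beta2 < S))                     # truncation order K_c(S)
print("K_c(S) =", K_c, "of", K)

labels = (np.sin(3.0 * U_train) > 0).astype(int) # synthetic binary labels
Xbar = np.array([noisy_features(u) for u in U_train])
Y = Xbar @ R[:, :K_c]                            # keep only low-noise eigentasks
clf = LogisticRegression(max_iter=2000).fit(Y, labels)
print("training accuracy:", round(clf.score(Y, labels), 3))
```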
Fig. 4(b) also shows the classification accuracy as \(J\) is varied, where we highlight the striking similarity with Fig. 2(c): encodings with larger quantum correlations and thus higher expressive capacity will perform generically better on learning tasks in the presence of noise, because they generate a larger set of eigentasks that can be resolved at a given sampling \(S\). The NSR spectra and eigentasks therefore provide a natural truncation scheme to maximise testing accuracy, avoiding overfitting without any additional regularization (see also Appendix H and I).
## V Discussion
We have developed a straightforward approach to quantify the expressive capacity of any qubit-based system in the presence of fundamental sampling noise. Our analysis is built upon an underlying framework that determines the native function set that can be most robustly realized by a finitely-sampled quantum system: its eigentasks. We use this framework to introduce a methodology for optimal learning using noisy quantum systems, which centers around identifying the minimal number of eigentasks required for a given learning task. The resulting learning methodology is resource-efficient and robust to overfitting. We demonstrate that eigentasks can be efficiently estimated from experiments on real devices using a limited number of training points and finite shots. We also demonstrate across two distinct qubit evolution ansatze that the presence of measured quantum correlations enhances expressive capacity. Our work has direct application to the design of circuits for learning with qubit-based systems. In particular, we propose the optimization of expressive capacity as a meaningful goal for the design of quantum circuits with finite measurement resources.
## Acknowledgement
This research was developed with funding from the DARPA contract HR00112190072, AFOSR award FA9550-20-1-0177, and AFOSR MURI award FA9550-22-1-0203. The views, opinions, and findings expressed are solely the authors' and not the U.S. government's.
|
2309.14615 | Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading
Agents | In recent years, deep reinforcement learning (Deep RL) has been successfully
implemented as a smart agent in many systems such as complex games,
self-driving cars, and chat-bots. One of the interesting use cases of Deep RL
is its application as an automated stock trading agent. In general, any
automated trading agent is prone to manipulations by adversaries in the trading
environment. Thus studying their robustness is vital for their success in
practice. However, typical mechanism to study RL robustness, which is based on
white-box gradient-based adversarial sample generation techniques (like FGSM),
is obsolete for this use case, since the models are protected behind secure
international exchange APIs, such as NASDAQ. In this research, we demonstrate
that a "gray-box" approach for attacking a Deep RL-based trading agent is
possible by trading in the same stock market, with no extra access to the
trading agent. In our proposed approach, an adversary agent uses a hybrid Deep
Neural Network as its policy consisting of Convolutional layers and
fully-connected layers. On average, over three simulated trading market
configurations, the adversary policy proposed in this research is able to
reduce the reward values by 214.17%, which results in reducing the potential
profits of the baseline by 139.4%, ensemble method by 93.7%, and an automated
trading software developed by our industrial partner by 85.5%, while consuming
significantly less budget than the victims (427.77%, 187.16%, and 66.97%,
respectively). | Foozhan Ataiefard, Hadi Hemmati | 2023-09-26T02:07:26Z | http://arxiv.org/abs/2309.14615v1 | # Gray-box Adversarial Attack of Deep Reinforcement Learning-based Trading Agents*
###### Abstract
In recent years, deep reinforcement learning (Deep RL) has been successfully implemented as a smart agent in many systems such as complex games, self-driving cars, and chat-bots. One of the interesting use cases of Deep RL is its application as an automated stock trading agent. In general, any automated trading agent is prone to manipulations by adversaries in the trading environment. Thus studying their robustness is vital for their success in practice. However, typical mechanism to study RL robustness, which is based on white-box gradient-based adversarial sample generation techniques (like FGSM), is obsolete for this use case, since the models are protected behind secure international exchange APIs, such as NASDAQ. In this research, we demonstrate that a "gray-box" approach for attacking a Deep RL-based trading agent is possible by trading in the same stock market, with no extra access to the trading agent. In our proposed approach, an adversary agent uses a hybrid Deep Neural Network as its policy consisting of Convolutional layers and fully-connected layers. On average, over three simulated trading market configurations, the adversary policy proposed in this research is able to reduce the reward values by 214.17%, which results in reducing the potential profits of the baseline by 139.4%, ensemble method by 93.7%, and an automated trading software developed by our industrial partner by 85.5%, while consuming significantly less budget than the victims (427.77%, 187.16%, and 66.97%, respectively).
Deep Reinforcement Learning, Adversarial Attacks, Robustness, Automated Trading.
## I Introduction
The application of deep neural networks in the field of automated trading has gained huge interest in recent years. Given the high capacity of DNNs to approximate complex and nonlinear relations, their integration into reinforcement learning algorithms such as Q-learning has introduced a new family of solutions, i.e., Deep RL. Deep RL has been successfully applied to control-based tasks such as video games [1], Go [2], automated driving in simulations and the real world [3], and trading [4]. Deep RL in automated trading is a relatively new and under-studied topic. For instance, an ensemble method making decisions from three different Deep RL algorithms has been proposed in one study [5], and an inverse reinforcement learning approach has been proposed in another [6].
As efficient as these algorithms have been in solving complicated problems, they are still prone to adversarial perturbations to their inputs. Vision-based Deep RL policies have been shown to be vulnerable to adversarial examples, resulting in mis-classifications [7, 8]. In prior studies on the robustness of Deep RL agents, the attacking method has direct access to its victim's input. However, for many applications, such as trading, such access is considered to be almost infeasible. For vision-based agents, one study [9] found that it is possible to find an adversarial policy that interacts with the victim's environment, acting as another player.
Robustness to adversarial attacks is particularly important in a trading system, since an adversary agent can legally act as a trader but, under the hood, manipulate the market against a specific competitor or company/agent under attack. Thus the first step toward building robust Deep RL trader agents is to identify their weak points with respect to attacks, which first requires a realistic and powerful adversarial sample generator.
Therefore, in this paper, we propose a gray-box framework to create adversarial samples for Deep RL trading agents in a setting similar to trading in a real stock market. The gray-box assumption is that the trading agent's source code, policy architecture, DNN weights, and training algorithm are all unknown to the adversary. The only accessible data are the current state of the market and the decision of the trading agent (its chosen trading action in that given state, which is public in many trading platforms). Our framework uses a real-time agent-based trading market simulation named ABIDES [10]. ABIDES is among the few open-source trading simulations capable of mimicking real stock markets and has been used in several studies published in financial venues.
In order to show the effectiveness of our adversary policy, we trained three trading agents using the three most realistic configurations of the market in the simulator. After training, these agents are integrated in a trading environment where the adversary is allowed to trade as well. Three different aspects of the adversary are evaluated through three research questions (RQs), where we look at: (RQ1) how effective the proposed adversary is in changing the trader agent's decisions, (RQ2) to what extent it can change the trader's profits, and (RQ3) whether it can do so within a reasonable cost, while staying systematic (i.e., indirectly affecting the victim's learnt policy).
The contributions of this paper can be summarised as:
* Providing an end-to-end solution to create gray-box adversarial attacks for Deep RL trader agents.
* Experimentally evaluating the attacks on three agents (including an industrial agent) and three market scenarios.
* Reporting evidence that the proposed approach can create attacks, at a reasonable cost, which are successful in systematically (i.e., by affecting its learnt policy) changing the decisions of the trader for the worse.
The replication package of this paper including the network architectures and hyper-parameters is publicly available [11].
## II Background
### _Deep Reinforcement Learning for Trading_
The general optimization problem for stock market trading can be defined as a Markov Decision Process, which is solvable using deep reinforcement learning algorithms. The optimization target of the RL agent is to maximize profits. The elements of this RL problem are as follows:
* State(\(s\)): Vector containing the agent's remaining balance, owned shares, current price of shares, best bid and ask prices for shares, and technical indicators such as RSI.
* Action(\(a\)): The RL agent's choice of action given the current state \(s_{t}\). Actions for a trading agent can be to buy, hold, or sell a specific number of shares.
* Reward(\(r\)): The reward received by the RL agent for taking action \(a\) in the current environment state \(s\) and reaching state \(\hat{s}\): \[r(s,a,\hat{s})\in\mathbb{R}\]
* Policy(\(\pi\)): A deep neural network mapping the set of environment states \(S\) to the set of possible actions \(A\): \[\pi:S\to A\]
The most popular deep reinforcement learning algorithms employed in financial markets belong to one of the categories of actor-critic, actor-only, or critic-only approaches, or an ensemble of these techniques [12].
Deep Q-learning based algorithms are the most common among critic-only approaches used for trading agents. In this group of algorithms, a deep neural network is trained to approximate a Q-value function. The Q-value function provides a close estimate of the expected reward for an action \(a\) in the current state \(s\). The agent uses the Q-value to optimize a policy for choosing actions that are expected to return the maximum rewards in a given state. Critic-only algorithms are designed to work with discrete action spaces (i.e., buy, hold or neutral, sell), which limits the control over trading actions.
Another family of popular algorithms for trading is actor-only approaches, also called policy search approaches. These algorithms eliminate the need for predicting future rewards by learning the best trading strategies directly from the environment, using immediate rewards to optimize the parameters of the policy. The policy itself is, in essence, a probability distribution over actions representing a trading strategy.
The most recent applications of deep reinforcement learning in trading benefit from actor-critic approaches. In this category of RL algorithms, two networks are trained simultaneously: the first network learns the policy \(\pi\) (actor) and the second network learns an estimate of the value function \(V^{\pi}(s)\) (critic). \(V^{\pi}(s)\) predicts the future rewards that will be received from the environment, starting from state \(s\) and taking actions from the \(\pi\) network. To reach an efficient policy, the policy network is updated using policy gradients according to \(V\).
We test the robustness of three automated trading RL agents using our adversary approach: a baseline agent, the ensemble agent from [5], and an industrial agent, as described below.
#### II-A1 Baseline Agent
A typical actor-critic model with a two-headed fully-connected neural network: one head acts as the policy output (action) and the other head is the value function.
#### II-A2 Ensemble Agent
A more sophisticated model consisting of three actor-critic algorithms. Each action is selected from the best-performing agent among the PPO, A2C, and DDPG [13] algorithms. Both of these agents use the same reward function \(R_{t}\), defined as below:
\[R(s_{t},a,s_{t+1})=P_{t+1}-P_{t}\]
\(P_{t}\), the portfolio value at time \(t\), is the total value of the agent's assets, including the value of owned shares and the cash balance. All of the agents in this study use the same state vector \(S\), as defined above.
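As an illustration, the victims' per-step reward can be written as a small helper; the variable names below are illustrative and not taken from the released code:

```python
def portfolio_value(balance, shares, mid_price):
    """Total value of the trader's assets: cash balance plus market value of owned shares."""
    return balance + shares * mid_price

def trader_reward(p_prev, p_next):
    """R(s_t, a, s_{t+1}) = P_{t+1} - P_t: change in portfolio value over one step."""
    return p_next - p_prev

# Example: 10 shares held, no trade placed, price moves from 100.0 to 100.5 -> reward = 5.0
r = trader_reward(portfolio_value(1000.0, 10, 100.0),
                  portfolio_value(1000.0, 10, 100.5))
```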
#### II-A3 Industrial Agent
This is one of the agents developed by our industry partner, which outperformed the above two agents. Although the overall architecture is similar to the ensemble agent, it includes many detailed optimizations that, due to confidentiality, we cannot reveal. The source code of all the available agents can be found in the replication package [11].
### _Adversarial Policy in Deep Reinforcement Learning_
As discussed in Section V, previous adversarial sample generation methods against Deep RL agents assume direct access to the inputs of the victim or to its policy. In contrast, finding an adversarial policy by only interacting with the victim's environment has been achieved for vision-based agents in PvP environments such as simulated robotic games [9].
In this approach, instead of adding a perturbation to the victim's input, the adversary interacts with the same environment that contains its victim trading agent. By embedding the victim in the environment from the adversary's point of view, the attack is treated as a single-agent RL problem. The training goal for the adversary is to learn actions that change the victim's actions, minimizing its reward \(R_{victim}(s_{t},a_{victim},s_{t+1})\) accumulated throughout the trading episode. These actions may seem unintuitive from the human perspective.
Although finding an adversarial policy for simulation games following a deterministic model is quite different from doing so in a trading environment with uncertainty and volatility, we employ our version of this method to find a trading adversary agent that is able to alter the victim agent's decisions for the worse.
## III Adversarial Policy for Attacking Trading Agents
### _Adversary Policy_
In this research, we aim to demonstrate a gray-box approach for attacking a deep reinforcement learning trading agent, since the main exchange systems used by traders are very secure and almost unreachable from outside, meaning there is no simple way of manipulating the data received by trading algorithms. We also assume no access to the trading agent's source code, input, policy network architecture, or training algorithm. The only available data are the current state of the environment and the decision of the trading agent, i.e., its chosen action in that given state. The adversary agent is provided with the combination of the inputs to the trading agent's DNN policy and the output of that agent. Our proposed adversary agent uses a DNN as its policy, consisting of convolutional and fully connected layers as used in most computer vision tasks. The convolutional part of the network captures a more appropriate representation of the temporal information in the data, as well as the relations between different features. It is also an effective way of canceling out noise in the data, similar to noisy pixels in images. Overall, this helps to increase the decision certainty of the fully connected layers of the DNN compared to using the raw data points. We use a categorical cross-entropy loss to train the adversary using only 4 days of stock market data (8% of the test data). Given the intrinsic temporal dependencies present in stock market data, RNNs may constitute a more apt choice of architecture. However, in this study, we chose a less complex architectural design than that of the competing trading agents, with the primary objective of elucidating the influence of the adversary.
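The exact layer sizes of the adversary policy are not stated in the text; the PyTorch sketch below only illustrates the described structure (a convolutional front-end over a short window of state vectors followed by fully connected layers), with illustrative dimensions:

```python
import torch
import torch.nn as nn

class AdversaryPolicy(nn.Module):
    """Hybrid convolutional + fully-connected policy head (illustrative sizes, not the authors')."""
    def __init__(self, n_features=45, window=16, n_actions=3):
        super().__init__()
        # 1-D convolutions over the time axis of a short window of state vectors
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * window, 128), nn.ReLU(),
            nn.Linear(128, n_actions),          # logits over buy / hold / sell
        )

    def forward(self, x):                       # x: (batch, n_features, window)
        return self.fc(self.conv(x))
```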
### _Reward Function_
The reward function must represent the trading task and maximize the adversary's returns, while reducing the certainty of the trading policy's decisions. It also needs to be easy to optimize. We propose \(R\) as the reward function for our adversary agent:
\[R=(Balance+P-\hat{P})\times\alpha+|\pi(a|S)-\pi(\hat{a}|\hat{S})| \tag{1}\]
\(Balance\) is the amount of currency at the agent's disposal at each step, \(P\) is the value of the agent's portfolio (the value of its owned shares), and \(\hat{P}\) is the changed portfolio value after the adversary performs action \(a\). \(\alpha\) is a scaling factor determined in the training process. \(\pi\) is the victim's policy making its trading decisions given states \(S\) and \(\hat{S}\). Scaling the assets in \(R\) by \(\alpha\) encourages the adversary to place more emphasis on changing the trading agent's decisions and hence not to overfit on other components of the reward function, such as its own cumulative returns. To keep our proposed approach feasible in a real-world setting, we impose a soft constraint on the money spent by the adversary agent by giving it a fixed budget at the start of trading. Another important constraint that should not be overlooked by the agent while placing buy orders is market liquidity, i.e., the total number of shares available to buy in the market. The agent should be able to detect from the given state vector whether there are no shares available to buy. We describe the state vector in Section III-D2.
Finally, since the adversary agent's decisions are trades in a market that can generate profits or cause losses, changing the decision of the trading agent is not by itself an indicator of the adversary's performance. To address this issue, the adversary's reward function also takes into account the asset loss caused by the changes in the trader's decisions.
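A direct transcription of Eq. (1) is shown below; the value of \(\alpha\) is a placeholder, since the tuned value is not reported:

```python
def adversary_reward(balance, p, p_hat, pi_a, pi_a_hat, alpha=0.01):
    """Eq. (1): scaled asset term plus the absolute shift in the victim policy's
    output probability. alpha is the scaling factor tuned during training (placeholder here)."""
    return (balance + p - p_hat) * alpha + abs(pi_a - pi_a_hat)
```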
### _Advantage Actor Critic (A2C)_
An actor-critic algorithm in reinforcement learning is a policy gradient algorithm that tries to find an approximation of the value function as well as a policy at the same time. The value function is a prediction of future rewards given the current state of the agent; it tells the agent how good a state is to be in. Arbitrary fluctuations in price, the volume of trades, and other features of the trading market data mean that the environment is stochastic, with transitions unknown to the agent. To efficiently train the proposed adversary, we use A2C, the Advantage Actor Critic algorithm. A2C is a deterministic and synchronous implementation of A3C [14].
A2C benefits from the advantage function, reducing the policy gradient variance of each update and resulting in a more robust policy. The method gathers multiple gradient updates from different instances of the same policy using different data points. During each iteration, A2C averages over all of the gradients calculated by the different instances and updates the actor and critic networks accordingly. These more general gradient updates improve the speed and rate of model convergence, making the algorithm suitable for the problem of trading by reducing the effect of noisy or uncertain actions.
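A generic A2C loss consistent with this description is sketched below; the entropy and value-loss coefficients are standard defaults, not the authors' settings:

```python
import torch
import torch.nn.functional as F

def a2c_losses(logits, values, actions, returns):
    """One synchronous A2C update on a batch gathered from several environment copies.
    logits:  (B, n_actions) actor outputs;  values: (B,) critic estimates V(s)
    actions: (B,) actions taken;            returns: (B,) empirical (bootstrapped) returns
    """
    advantages = returns - values.detach()                  # A(s, a) = R - V(s)
    log_probs = F.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    actor_loss = -(chosen * advantages).mean()              # policy-gradient term
    critic_loss = F.mse_loss(values, returns)               # value regression
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
    return actor_loss + 0.5 * critic_loss - 0.01 * entropy  # illustrative loss weighting
```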
### _Real-time Trading Environment_
#### III-D1 Trading Simulation
Having dynamic trading market data that reacts to the agents' decisions is a crucial part of our research. We want the agents' orders to have a real-time impact on the environment so that the setup is as practical as possible. Therefore, we have chosen ABIDES, an agent-based trading market simulator that provides trading data with a latent space very similar to the real market, as shown in its experiments [10]. ABIDES provides an API for agents to place, cancel, or modify orders at their desired timing, and uses an exchange agent for market making. As we use OpenAI Gym's implementations for our experiments, we have integrated ABIDES into the Gym environment. At each time step, we gather the full Limit Order Book (LOB) from the simulation, as it represents the market state in the most accurate and detailed way. Trading agents are then provided with the top-10 bids and asks, together with useful indicators extracted from the LOB, to decide whether to place an order or not (Figure 1).
#### III-D2 Policy Input Encoding
The bids (\(bid_{i}\)) from buyers and asks (\(ask_{j}\)) from sellers that currently exist in the market are maintained in the simulation, ordered from best to worst. Each \(bid_{i}\) and \(ask_{j}\) is a price corresponding to a buyer or seller agent. The simulator identifies each agent by its id, shown as \(agent_{i}\) and \(agent_{j}\). The lists of bids and asks can be presented as ordered lists of tuples:
\[bids=\langle(bid_{i},agent_{i}),(bid_{i+1},agent_{i+1}),\dots\rangle,\quad bid_{i}>bid_{i+1} \tag{2}\]
\[asks=\langle(ask_{j},agent_{j}),(ask_{j+1},agent_{j+1}),\dots\rangle,\quad ask_{j}<ask_{j+1}\]
We generate an input vector as the state \(S_{t}\) of the trading environment at time \(t\). The elements of this vector are calculated using the historical stock price and the asks and bids vectors collected over time. We define the state vector as:
\[S_{t}=\left[B,V,asks,n_{asks},bids,n_{bids},RSI,CCI,MACD\right]\]
Where:
* \(B\in\mathbb{R}\): Is the remaining currency balance available to the agent at a given time step.
* \(V\in\mathbb{R}\): Is the number of shares in the agent's wallet, bought in previous time steps.
* \(asks\in\mathbb{R}^{10}\): 10 best asking prices in the market at time t.
* \(n_{asks}\in\mathbb{N}^{10}\): Number of shares available to buy at each asking price.
* \(bids\in\mathbb{R}^{10}\): 10 best bidding prices in the market at time \(t\).
* \(n_{bids}\in\mathbb{N}^{10}\): Number of shares demanded at each bidding price.
* \(RSI\in\mathbb{R}\): Relative Strength Index, calculated from the collected stock prices. RSI is a technical indicator that helps traders analyze the recent momentum of a stock and measure whether a stock is overbought or oversold in a trading market [15].
* \(CCI\in\mathbb{R}\): Commodity Channel Index is also calculated from collected stock prices. CCI is a technical indicator known for its proficiency in detecting cyclical trends in stock markets [16].
* \(MACD\in\mathbb{R}\): MACD is a momentum indicator that shows the relationship between two moving averages of price and is a well-known trend following technical indicator used to analyse stock markets [15].
This vector is recalculated and fed directly to the adversary's neural-network-based policy by the training environment at each time step.
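A minimal sketch of assembling this state vector is given below; the technical indicators are assumed to be computed elsewhere from the collected price history:

```python
import numpy as np

def build_state(balance, shares, asks, n_asks, bids, n_bids, rsi, cci, macd):
    """Assemble the state vector of Section III-D2. `asks`/`bids` are the 10 best prices
    (best first) taken from the LOB; `n_asks`/`n_bids` are the share volumes at those
    prices; the three technical indicators are assumed to be precomputed."""
    return np.concatenate([
        [balance, shares],
        np.asarray(asks, dtype=np.float32), np.asarray(n_asks, dtype=np.float32),
        np.asarray(bids, dtype=np.float32), np.asarray(n_bids, dtype=np.float32),
        [rsi, cci, macd],
    ]).astype(np.float32)
```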
#### III-D3 Training Environment
OpenAI Gym provides a suitable framework for training a vast range of agents on different datasets and simulations. However, it does not offer an environment in which agents bet against each other. We start by training the trading agent using our environment and save the best-performing policy checkpoints. In the next step, we train the adversary in a simulated environment where, at each time step, both agents in play are provided with \(S_{t_{i}}\). The adversary agent is additionally provided with the output of the trading policy, \(a\), and the profit made from this single action, in order to decide whether to place an order. Based on the adversary's decision, we update \(S_{t_{i}}\) to measure the impact of the market change caused by the adversary.
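This single-agent formulation can be sketched as a Gym environment in which the frozen victim is part of the environment dynamics. The `market` and `victim` objects below are stand-ins for the ABIDES-backed simulation and the trained trader; none of their method names are real ABIDES or Gym APIs, and the observation size is illustrative:

```python
import gym
import numpy as np

class AdversaryTradingEnv(gym.Env):
    """Single-agent view of the attack: the frozen victim policy is embedded in the environment."""

    def __init__(self, market, victim, alpha=0.01):
        self.market, self.victim, self.alpha = market, victim, alpha
        self.action_space = gym.spaces.Discrete(3)                     # buy / hold / sell
        self.observation_space = gym.spaces.Box(
            -np.inf, np.inf, shape=(47,), dtype=np.float32)            # state + victim action + profit

    def reset(self):
        self.market.reset()
        return self.market.state()

    def _reward(self, s, s_hat, pi_a, pi_a_hat):
        # Eq. (1): (Balance + P - P_hat) * alpha + |pi(a|S) - pi(a_hat|S_hat)|
        balance, p, p_hat = self.market.balance(), self.market.value(s), self.market.value(s_hat)
        return (balance + p - p_hat) * self.alpha + abs(pi_a - pi_a_hat)

    def step(self, adversary_action):
        s = self.market.state()                                         # S_t seen by both agents
        pi_a = self.victim.action_prob(s)                               # victim's chosen-action probability
        self.market.place_order(adversary_action)                       # adversary perturbs the LOB
        s_hat = self.market.state()                                      # \hat{S}_t after the attack
        pi_a_hat = self.victim.action_prob(s_hat)
        return s_hat, self._reward(s, s_hat, pi_a, pi_a_hat), self.market.done(), {}
```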
## IV Experimental Evaluation
Our research objective can be addressed with the following RQs: **RQ1: How effective is the proposed adversary in changing the trading algorithms' decisions?** To elaborate, in RQ1 we ignore the actual profit loss of the trading agent caused by the adversary and focus only on the trading agent's Softmax policy output. To answer this question, we compare the policy output of the trading agent before and after updating the simulation with orders from the adversary policy.
We also measure the victims' rewards for the actions they take naturally and under attack. The trading agents in this study use a reward function that represents the quality of their actions in terms of returns during training, and we take advantage of these preexisting functions to collect our data.
**RQ2: To what extent is the adversary algorithm able to change the trader's profits?** As mentioned in Section III-B, the trader's policy outputs are treated as trades in the market. Each of these trades can cause a loss or generate a profit, based on the stock price changes. However, changing the trader's decision in one step alone does not guarantee a trend in its profit/loss. The trader may be able to compensate for the losses of a single trade (or even become profitable after several consecutive steps) by changing its next decisions.
Fig. 1: Overview of Limit Order Book or LOB impacted by adversarial attacks in the trading environment architecture
Fig. 2: Returns of the industrial trader in a sample episode and returns of the same agent in the same episode while under attack by the proposed adversary.
In a close-to-real-world trading scenario, a successful attack performed by an adversary should compel the trader to lose profit in the market by changing its decisions. To measure the effects of adversarial attacks on the trader, we run the same market simulation twice in parallel: once without the adversary in play, and a second time while the adversary is attacking the trader by placing orders (see Figure 2).
**RQ3: How well is the proposed algorithm able to maximize the trading agent's portfolio loss while maintaining reasonable constraints? And is the adversary exploiting any specific trading patterns to attack its victims?** One of the key indicators of the proposed adversary's efficacy is the amount of resources it sacrifices. The adversary might be able to manipulate the trader; however, it should do so while maintaining a feasible loss margin for itself while trading. That is, it should not consume an unreasonable amount of its budget and should impose only limited profit damage on itself. We answer this RQ by tracking the adversary's assets, consisting of its balance and bought shares. Furthermore, we dive deeper into the details of our proposed adversary to explore the trading methods it uses to change its victim's decisions. Our proposed adversary is trained against three types of trading victims and is tasked with learning strategies fit for attacking each specific type of trader. To study the adversarial agent's trading behaviour against each victim, we also track the adversary's episode rewards in parallel with its direct trades with the victim, to gain insight into its strategy for placing adversarial market orders.
### _Evaluation Metrics_
#### IV-A1 RQ1 Performance Metrics
We seek to measure the severity of the changes in the trader's behavior in each state. The first metric we report is the average change in the Softmax output of the trader's policy network. The original state of the simulation, where there is no attacker placing orders, is denoted as \(S_{t}\), and the state when the attacker is present is denoted as \(\hat{S_{t}}\). The trader policy's average output change for an episode of \(N\) steps is defined as follows:
\[\Delta_{episode}=\frac{1}{N}\sum_{t=1}^{N}\big{|}\frac{\pi(a_{t}|S_{t})-\pi( \hat{a}_{t}|\hat{S_{t}})}{\pi(a_{t}|S_{t})}\big{|}\times 100 \tag{3}\]
As the second efficiency metric for the adversary method, we define the average reward over the \(N\) steps of an episode as:
\[\bar{R}=\frac{1}{N}\sum_{t=1}^{N}R_{a_{t}}^{S_{t}} \tag{4}\]
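Both RQ1 metrics reduce to simple averages over an episode, e.g.:

```python
import numpy as np

def delta_episode(p_natural, p_attack):
    """Eq. (3): mean relative change (in %) of the victim policy's chosen-action probability."""
    p_natural, p_attack = np.asarray(p_natural), np.asarray(p_attack)
    return np.mean(np.abs((p_natural - p_attack) / p_natural)) * 100.0

def mean_reward(rewards):
    """Eq. (4): average per-step reward over one episode."""
    return float(np.mean(rewards))
```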
We report natural rewards (no attack) alongside rewards under adversarial attack over 50 episodes for the 3 trading agents. To evaluate the difference between the natural and attack reward distributions, we use a goodness-of-fit test for each victim.
Since the reward data collected from our experiments belong to continuous distributions (\(R\sim D_{R}\)) and include 50 data points per experiment, the Kolmogorov-Smirnov (KS) statistical test is chosen to measure the distance between the distribution of natural rewards (\(D_{R_{natural}}\)) and the distribution of attack rewards (\(D_{R_{attack}}\)). The KS test is non-parametric and distribution-free, meaning it makes no assumption about the underlying distribution of the data, and it can be used to compare a sample with a reference probability distribution or to compare two samples. The null hypothesis of the KS test is that the two samples come from the same distribution (\(D\)); the alternative, accepted when the null hypothesis is rejected, is that they do not:
\[H_{0}:\;R_{natural},R_{attack}\overset{i.i.d.}{\sim}D \tag{5}\]
\[H_{1}:\;D_{R_{natural}}\neq D_{R_{attack}}\]
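The two-sample KS test is available in SciPy; a sketch of the comparison used here (50 per-episode rewards with and without the adversary) is:

```python
from scipy.stats import ks_2samp

def compare_reward_distributions(r_natural, r_attack, alpha=0.05):
    """Two-sample KS test; reject H0 (same distribution) when the p-value is below alpha."""
    stat, p_value = ks_2samp(r_natural, r_attack)
    return stat, p_value, p_value < alpha
```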
#### IV-A2 RQ2 Performance Metrics
In RQ2, we evaluate the impact on the trader's portfolio of the changes made to the trading environment states by the adversary. Two metrics are used, as follows:
* Cumulative reduction in returns per episode: Given that \(P\) is the trader's return without the presence of the adversary and \(\hat{P}\) is the trader's return under attack, it is calculated as: \[CR=\frac{1}{N_{e}}\sum_{t=1}^{N_{e}}\frac{P-\hat{P}}{P}\times 100\] (6) We measure the average change in cumulative returns over \(N_{e}=50\) episodes.
* Average reduction in returns per step: Given that \(p\) is the trader's return in an individual step without the presence of the adversary and \(\hat{p}\) is the trader's return while under attack, we define AOR as: \[AOR=\frac{1}{N}\sum_{t=1}^{N}\frac{p-\hat{p}}{p}\times 100\] (7) We report the best AOR over all \(50\) episodes, each with \(N\) steps, to understand how the portfolio losses are distributed within single episodes (a code sketch of both metrics follows this list).
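Assuming the per-episode returns \(P,\hat{P}\) and per-step returns \(p,\hat{p}\) are logged as arrays, the two metrics can be computed as:

```python
import numpy as np

def cumulative_reduction(P, P_hat):
    """Eq. (6): per-episode reduction in returns (%), averaged over the episodes."""
    P, P_hat = np.asarray(P), np.asarray(P_hat)        # one entry per episode
    return np.mean((P - P_hat) / P) * 100.0

def average_step_reduction(p, p_hat):
    """Eq. (7): per-step reduction in returns (%), averaged over the steps of one episode."""
    p, p_hat = np.asarray(p), np.asarray(p_hat)        # one entry per step
    return np.mean((p - p_hat) / p) * 100.0
```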
#### IV-A3 RQ3 Performance Metrics
This RQ explains the behavior of the adversarial agent. To evaluate the cost of the adversary policy trading in the market, we report the portfolio value returns of the attacker and compare them to the same metric, \(CR\), for the victim from RQ2. Here a successful attacker should have a smaller loss than the victim; otherwise, in most scenarios, the attack would be too costly to be worth it.
For the second part of this RQ, which seeks an answer to whether the attacker simply trades directly with the victim or has managed to disrupt its learnt policy more systematically, we report two performance metrics for the adversary. The first metric is the Mean Episode Reward of the adversary, gathered from OpenAI Gym. For the second metric, we introduce the Loss Hit-Ratio, which is intended to measure what fraction of the victim's losses is caused by trading directly with the adversary agent. In order to define the Loss Hit-Ratio, we maintain arrays of agent ids to keep track of each bid and ask present in the trading environment at any given time:
\[Bid_{ids}=[(AGENT_{1},shares_{1},price_{1}),\dots,(AGENT_{10},shares_{10},price_{10})]\]
\[Ask_{ids}=[(AGENT_{1},shares_{1},price_{1}),\dots,(AGENT_{10},shares_{10},price_{10})]\]
These vectors tell us which bid and ask belong to which agent. Therefore, we can exactly calculate the profit or loss obtained from a specific share bought through an order placed by the adversary. Given a completed exchange, if the victim sells shares to or buys shares from the adversary, the exchange is counted as a hit. Thus the Loss Hit Ratio for the victim is:
\[\text{Loss Hit Ratio}=\frac{\text{Returns from hits}}{\text{Total returns}}\]
A successful attack is expected to have a low Loss Hit Ratio, showing that the victim is not simply trading only with the adversary, as explained above.
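Assuming a log of the victim's completed exchanges annotated with the counterparty id, the Loss Hit Ratio can be computed as follows (the log format is illustrative):

```python
def loss_hit_ratio(fills, adversary_id):
    """Fraction of the victim's returns coming from fills made directly against the adversary.
    `fills` is an assumed log of the victim's completed exchanges: (counterparty_id, pnl)."""
    total = sum(pnl for _, pnl in fills)
    hits = sum(pnl for counterparty, pnl in fills if counterparty == adversary_id)
    return hits / total if total != 0 else 0.0
```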
### _Real-time Data Generation (Simulation)_
We ran the market simulation embedded in the training environment over 50 times for each experiment, using 3 distinct market configurations provided by its developers. These configurations use a specific number of different trading agents, such as noise agents and momentum agents, including a single market maker as well as an exchange agent for handling orders (the details of the different configurations can be found in the provided replication package). Each episode starts at 9:30 am, when the market opens, and ends at 4 pm. The trading and adversary agents under study collect price data and the existing orders in the LOB every 20 ms, which is exactly when the exchange agent wakes up to organize existing orders. They are also allowed to place orders at the same moment, since our experiments should mimic a real-time trading market.
### _Experimental Setup_
Training and evaluation of each Deep RL agent, for both the trader and the adversary, was done on a single machine running Ubuntu 20.04.2 LTS (Linux 5.8.0) equipped with an Intel Core i7-9700 CPU, 32 gigabytes of main memory, and 8 gigabytes of GPU memory on an NVIDIA GeForce RTX 2080 graphics card. The implementation uses PyTorch and OpenAI Gym.
### _Results_
#### IV-D1 RQ1 Results
Table I shows \(\bar{R}\) and \(\Delta_{episode}\) for the three trading agents in 2 scenarios: 1) with the adversary in the environment (attack \(\bar{R}\)) and 2) without the adversary (natural \(\bar{R}\)), following equation 4. The highest value of both metrics that we have observed is included as well.
The first observation from the results is that the reward functions of the traders show a considerable negative impact caused by the adversary. All of the trading algorithms show a positive mean reward (\(\bar{R}\)) in the trading environment, meaning that their decisions initially generate orders with acceptable returns over the course of an episode or trading day. But the mean reward received by the trading agents when making decisions under attack shows that our proposed adversary was able to force the victims to make incorrect trades and has impaired the trading agents' ability to make a reliable prediction of the future stock price. Although the trading agents are provided with the same technical indicators, they are still vulnerable to seeing the adversary's orders in the LOB.
Looking at the \(\Delta\) measurements in Table I, we see a wide range (from 16.2% to 47.3%). The overall pattern is as expected: the baseline is the easiest to fool, then the ensemble method, and the Industrial model is the hardest to manipulate. However, we see that even smaller manipulations of the policy's Softmax output (e.g., 16.2% in the Industrial-Config2 case) can result in large declines in the reward values (from 0.919 to -1.094 in this example).
The reported distance between the natural and attack reward distributions in Table II shows a considerable difference in the victims' performance while under attack. Since all of the p-values are far smaller than 0.05, meaning the distances between \(D(R_{natural})\) and \(D(R_{attack})\) are estimated with confidence, we can safely claim that the null hypothesis defined in equation (5) is rejected.
To sum up RQ1, the average Natural \(\bar{R}\) over all 9 trader-config pairs is 0.623 and the average Attack \(\bar{R}\) is -0.711, which shows a (0.623-(-0.711))/0.623 = 214.17% reduction in the reward value. This shows the effectiveness of our proposed adversary in forcing the agent to make non-optimal trades in the market, which are reflected in its reward function.
#### IV-D2 RQ2 Results
In RQ2, we report \(CR\) and \(AOR\) from equations 6 and 7. Both metrics are measured for various environment settings and against the different trading algorithms, similar to RQ1. The results are presented in Table III.
Considering the \(CR\)s reported in the experiment, we can see that the proposed adversary is able to target the victims' returns by manipulating their trade decisions effectively. The results show that the adversary is not only able to predict its victim's decision boundary (RQ1), but also learns to predict the trend of the market price (represented by the returns and their reductions) by integrating a good representation of the market and the victim's trading strategy (RQ2). This makes our method efficient in generating targeted attacks (on profits) against trading agents as well as un-targeted attacks (only altering the victims' outputs).
Looking at the reported \(AOR\)s, it is clear that our adversary is able to force the victim into making trading decisions that tend to work against the market trend. It reduces even the best trading agent's returns not only over the course of trading, but also in individual steps. \(AOR\) gives us a better understanding of the intensity of the attacks, which were able to reduce immediate profits (on average over the three market configs per trader) by 139.4% for the baseline trader, 93.7% for the ensemble, and 85.5% for the industrial trader in the adversary's weakest attack.
#### IV-D3 RQ3 Results
To answer this RQ, we first look at the losses of the victims (\(CR\)) vs. the adversary's (Adversary Portfolio Loss) in Table IV. We can see that the adversary is able to reach its goal by spending a small percentage of its starting budget (a 100% loss would mean using all of the assigned budget to fool the trader; the initial budget of the adversary is set equal to the victim's to allow a fair comparison).
Looking at the example results against the baseline victim, our adversary (on average over the three market configs) had to consume \((87.94/16.66)-1=427.77\%\) less budget than its victim, \((74.46/25.93)-1=187.16\%\) less against the ensemble trading victim, and \((66.82/40.02)-1=66.97\%\) less against the best trading victim. The table also shows that although the adversary has to place larger, and possibly more, trades subject to negative returns in the market in order to manipulate the better trading victims, even against these victims it was still able to reach its preferred outcome with less budget than the victim.
To gain better insight into how the adversary operates and to analyze its learned strategy, we report the mean episode rewards of the adversary alongside the loss hit ratio in Table IV. Note that the rewards are relatively high in all of the experiments, even against victims where the adversary has performed worse, which means the agent's value function perceives the adversary's trades to be efficient enough.
Another interesting finding, verified by the loss hit ratio, is that only a small percentage of the victims' losses is caused by trading directly with the adversary, which is an indicator of the adversary's strategy of disrupting the natural trading course of the victim. By combining the two observations of high rewards and a low loss hit ratio, we conclude that the adversary has learned a winning strategy: rather than interacting with the victim directly, in most scenarios it changes the limit order book towards observations that are more out-of-distribution compared to the training observations the victim is familiar with.
## V Related Work
Previous studies on adversarial sample generation for DNNs mostly focus on directly modifying the inputs. Early work found that deep neural networks are prone to mis-classification when perturbations undetectable by human vision are added to the input [7]. Furthermore, these examples were shown to generalize over a variety of DNN architectures and training sets [17]. Later studies introduced the Fast Gradient Sign Method, or FGSM [18], which exploits the gradients of the DNN, approximating the model to generate adversarial examples.
An early study on adversarial example generation using FGSM was done on several deep reinforcement learning algorithms (DQN [19], A3C [14], TRPO [20]). It found that FGSM is able to degrade the agent's policy regardless of the environment, architecture, and training algorithm. This method was applied in a white-box manner to generate the FGSM perturbation. The authors then used the transferability of adversarial examples to attack RL agents in a black-box manner with access only to the DNN structure and training environment [8].
Gradient-based adversarial example generation methods have been studied for RL applications in the trading domain as well [21, 22]. Both of these methods attack the input channel of the victim directly and use historical stock exchange datasets. However, these assumptions render both approaches infeasible for real-world trading scenarios.
A universal adversarial perturbation threat model was introduced to study the vulnerability of RL by generating fake orders in a stock market dataset [23]. The authors apply the perturbations to the test dataset by iterating over all orders. This approach still assumes low-level access to the inputs, making custom changes to entries of the trading dataset.
In an interesting study, the authors benchmark the collision avoidance ability of autonomous driving agents [24]. Their approach tests the robustness of RL agent behaviours in environments where they interact with other agents. Trading in a stock market is very similar to such environments, especially zero-sum games where the money lost by one agent is another agent's profit. In addition, some studies have shown that RL agents trained in collaboration with, or against, other agents might become closely dependent on them and fail against different agents [25]. We deal with this issue by using numerous noise agents in the stock exchange simulation that is used to train the victims.
## VI Conclusion and Future Work
This paper introduces a Deep RL adversarial trading agent that can be used to test the lower-bound robustness of trading agents in a scenario very close to a real-world stock market. The proposed approach also shows that despite using complex deep neural networks in their policies, trading agents are still prone to natural, but out-of-distribution, attacks by an adversary. We tested our approach on three different market simulation settings against three different trading agents.
Some potential extensions to this work include: (a) using the adversary to develop a defence method against such threats and (b) training anomaly detection methods to alert automated trading agents, or even the exchanges, of such possible risks.
|
2301.00056 | A Bayesian Neural Network Approach to identify Stars and AGNs observed
by XMM Newton | In today's era, a tremendous amount of data is generated by different
observatories and manual classification of data is something which is
practically impossible. Hence, to classify and categorize the objects there are
multiple machine and deep learning techniques used. However, these predictions
are overconfident and won't be able to identify if the data actually belongs to
the trained class. To solve this major problem of overconfidence, in this study
we propose a novel Bayesian Neural Network which randomly samples weights from
a distribution as opposed to the fixed weight vector considered in the
frequentist approach. The study involves the classification of Stars and AGNs
observed by XMM Newton. However, for testing purposes, we consider CV, Pulsars,
ULX, and LMX along with Stars and AGNs which the algorithm refuses to predict
with higher accuracy as opposed to the frequentist approaches wherein these
objects are predicted as either Stars or AGNs. The proposed algorithm is one of
the first instances wherein the use of Bayesian Neural Networks is done in
observational astronomy. Additionally, we also make our algorithm to identify
stars and AGNs in the whole XMM-Newton DR11 catalogue. The algorithm almost
identifies 62807 data points as AGNs and 88107 data points as Stars with enough
confidence. In all other cases, the algorithm refuses to make predictions due
to high uncertainty and hence reduces the error rate. | Sarvesh Gharat, Bhaskar Bose | 2022-12-30T21:29:50Z | http://arxiv.org/abs/2301.00056v1 | # A Bayesian Neural Network Approach to identify Stars and AGNs observed by XMM Newton +
###### Abstract
In today's era, a tremendous amount of data is generated by different observatories, and manual classification of the data is practically impossible. Hence, multiple machine and deep learning techniques are used to classify and categorize the objects. However, their predictions are overconfident and cannot identify whether the data actually belongs to one of the trained classes. To address this major problem of overconfidence, in this study we propose a novel Bayesian Neural Network which randomly samples weights from a distribution, as opposed to the fixed weight vector considered in the frequentist approach. The study involves the classification of Stars and AGNs observed by XMM-Newton. For testing purposes, we additionally consider CVs, Pulsars, ULXs, and LMXs along with Stars and AGNs; the algorithm refuses to predict most of these additional objects, as opposed to the frequentist approaches wherein they are all predicted as either Stars or AGNs. The proposed algorithm is one of the first instances wherein Bayesian Neural Networks are used in observational astronomy. Additionally, we apply our algorithm to identify Stars and AGNs in the whole XMM-Newton DR11 catalogue. The algorithm identifies 62807 data points as AGNs and 88107 data points as Stars with enough confidence. In all other cases, the algorithm refuses to make predictions due to high uncertainty and hence reduces the error rate.
keywords: methods: data analysis - methods: observational - methods: miscellaneous
## 1 Introduction
Over the last few decades, a large amount of data has been regularly generated by different observatories and surveys. The classification of this enormous amount of data by professional astronomers is time-consuming as well as practically impossible. To simplify the process, various citizen science projects (Desjardins et al., 2021) (Cobb, 2021) (Allf et al., 2022) (Faherty et al., 2021) have been introduced, which have reduced the required time to some extent. However, there are many instances wherein classifying the objects is not simple and may require domain expertise.
In this modern era, wherein machine learning and neural networks are widely used in multiple fields, there has been significant development in the use of these algorithms in astronomy. Though these algorithms are accurate in their predictions, there is certainly some overconfidence (Kristiadi et al., 2020) (Kristiadi et al., 2021) associated with them. Besides that, these algorithms tend to classify every input as one of the trained classes (Beaumont and Haziza, 2022) irrespective of whether it actually belongs to those classes; e.g., an algorithm trained to classify stars will also predict AGNs as stars. To solve this major issue, in this study we propose a Bayesian Neural Network (Jospin et al., 2022) (Charnock et al., 2022) which refuses to make a prediction whenever it is not confident about it. The proposed algorithm is applied to the data collected by XMM-Newton (Jansen et al., 2001). We perform a binary classification to separate Stars and AGNs (Malek et al., 2013) (Golob et al., 2021). Additionally, to test our algorithm on inputs which do not belong to the trained classes, we consider data observed from CVs, Pulsars, ULXs, and LMXs. Although the algorithm does not refuse to predict all of these objects, the number of objects it predicts for these 4 classes is far smaller than for the trained classes.
For the trained classes, the algorithm gives its predictions for almost 64% of the data points and avoids predicting the output whenever it is not confident about its predictions. The accuracy achieved in this binary classification task, whenever the algorithm gives a prediction, is 98.41%. On the other hand, only 14.6% of the data points from untrained classes are predicted as one of the trained classes by the algorithm. This decrease from 100% to 14.6% on out-of-class inputs is what distinguishes our model from other frequentist algorithms.
## 2 Methodology
In this section, we discuss the methodology used to perform this study. This section is divided into the following subsections.
* Data Collection and Feature Extraction
* Model Architecture
* Training and Testing
### Data Collection and Feature Extraction
In this study, we make use of the data provided in "XMM-DR11 SEDs" Webb et al. (2020). We further cross-match the collected data with different VizieR (Ochsenbein et al., 2000) catalogues; please refer to Table 1 for all the catalogues used in this study. As the proposed algorithm is a supervised Bayesian algorithm, this is one of the important steps for our algorithm to work.
The provided data has 336 different features, which can increase the computational complexity to a large extent, and also has many missing data points. Therefore, in this study we consider a set of 18 features corresponding to each observed source. The considered features for all the sources are available in our GitHub repository, and more information about them is available on the official webpage 1 of the observatory. After cross-matching and reducing the number of features, we were left with a total of 19136 data points. The data distribution can be seen in Table 2. We also plot the sources (see Figure 1) based on their "RA" and "Dec" to confirm that the sky coverage of the considered sources matches the actual coverage of the telescope.
Footnote 1: [http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html](http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html)
The collected data is further split into training and test sets using an 80:20 split. The exact numbers of data points are given in Table 2.
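A minimal sketch of this split is shown below; scikit-learn is an assumption, as the paper does not state the tool used, and `features`/`labels` are illustrative names for the cross-matched data:

```python
from sklearn.model_selection import train_test_split

# `features` is the (19136, 18) array of selected columns and `labels` holds the
# corresponding Star/AGN classes for the trained-class sources.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.20, random_state=0)
```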
### Model Architecture
The proposed model has one input, one hidden, and one output layer (see Figure 2) with 18, 512, and 2 neurons respectively. The 18 neurons in the input layer correspond to the number of input features considered in this study. Further, to increase the non-linearity of the output, we make use of "ReLU" (Fukushima, 1975) (Agarap, 2018) as the activation function for the first 2 layers. The output layer makes use of "Softmax" to make the predictions, so that the output of the model is the probability of the input belonging to a particular class (Nwankpa et al., 2018) (Feng and Lu, 2019).
The "optimizer" and "loss" used in this study are "Adam" (Kingma et al., 2020) and "Trace Elbo" (Wingate and Weber, 2013)(Ranganath et al., 2014) respectively. The overall idea of BNN (Izmailov et al., 2021)(Jospin et al., 2022)(Goan and Fookes, 2020) is to have a posterior distribution corresponding to all weights and biases such that, the output distribution produced by these posterior distributions is similar to that of the categorical distributions defined in the training dataset. Hence, convergence, in this case, can be achieved by minimizing the KL divergence between the output and the categorical distribution or just by maximizing the ELBO (Wingate and Weber, 2013)(Ranganath et al., 2014). We make use of normal distributions which are initialized with random mean and variance as prior (Fortuin et al., 2021), along with the likelihood derived from the data to construct the posterior distribution.
### Training and Testing
The proposed model is constructed using PyTorch (Paszke et al., 2019) and Pyro (Bingham et al., 2019). The training of the model is conducted on Google Colaboratory, making use of an NVIDIA K80 GPU (Carneiro et al., 2018). The model is trained over 2500 epochs with a learning rate of 0.01. Both of these parameters, i.e., the number of epochs and the learning rate, have to be tuned, which is done by iterating the algorithm multiple times with varying parameter values.
The algorithm is then asked to make 100 predictions for every sample in the test set. Every time it makes a prediction, the corresponding prediction probability varies; this is due to the random sampling of weights and biases from the trained distributions. The algorithm then considers the mean and standard deviation of those probabilities to decide whether to proceed with the classification or not.
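Assuming a trained model and variational guide, this abstention rule can be sketched with Pyro's `Predictive` utility; the standard-deviation threshold shown is illustrative, as the exact cut is not quoted in the paper:

```python
import torch
from pyro.infer import Predictive

def classify_with_abstention(model, guide, x, n_samples=100, std_threshold=0.2):
    """Draw repeated posterior predictions and abstain when they disagree too much."""
    predictive = Predictive(model, guide=guide, num_samples=n_samples,
                            return_sites=["_RETURN"])
    probs = predictive(x)["_RETURN"]               # (n_samples, n_objects, 2)
    mean, std = probs.mean(0), probs.std(0)
    confident = std.max(dim=-1).values < std_threshold
    labels = mean.argmax(dim=-1)
    return labels, confident                       # predict only where `confident` is True
```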
| Class | Catalogue |
| --- | --- |
| AGN | VERONCAT (Véron-Cetty and Véron, 2010) |
| LMX | NG531JSCKO (Lin et al., 2015) |
| | RITTERLMXB (Ritter and Kolb, 2003) |
| | LMXBCAT (Liu et al., 2007) |
| | INTREFCAT (Ebisawa et al., 2003) |
| | M31XMMKRAY (Stiele et al., 2008) |
| | M31CFCXO (Hofmann et al., 2013) |
| | RASSMASS (Haakonsen and Rutledge, 2009) |
| Pulsars | ATNF (Manchester et al., 2005) |
| | FERMIL2PSR (Abdo et al., 2013) |
| CV | CVC (Drake et al., 2014) |
| ULX | XSEG (Drake et al., 2014) |
| Stars | CSSC (Skiff, 2014) |

Table 1: Catalogues used to create labeled data
| Class | Training Data | Test Data |
| --- | --- | --- |
| AGN | 8295 | 2040 |
| LMX | 0 | 49 |
| Pulsars | 0 | 174 |
| CV | 0 | 36 |
| ULX | 0 | 261 |
| Stars | 6649 | 1628 |
| Total | 14944 | 4188 |

Table 2: Data distribution after cross-matching all the data points with the catalogues mentioned in Table 1
Figure 1: Sky map coverage of considered data points
## 3 Results and Discussion
The proposed algorithm is one of the initial attempts to implement Bayesian Neural Networks in observational astronomy, and it has shown significant results. The algorithm gives predictions with an accuracy of more than 98% whenever it agrees to make predictions for the trained classes.
Table 3 shows the confusion matrix of the classified data. To calculate the accuracy, we use the following formula:
\[\text{Accuracy}=\frac{a_{11}+a_{22}}{a_{11}+a_{12}+a_{21}+a_{22}}\times 100\]
In our case, the calculated accuracy is
\[\text{Accuracy}=\frac{1312+986}{1312+6+31+986}\times 100=98.4\%\]
As accuracy is not the only measure for evaluating a classification model, we further calculate the precision, recall, and F1 score for both classes, as shown in Table 4.
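For reference, the accuracy and the per-class precision and recall follow directly from the 2x2 confusion matrix (rows are true classes, columns are predicted classes):

```python
def binary_metrics(a11, a12, a21, a22):
    """Accuracy (%), plus precision and recall per class, from a 2x2 confusion matrix."""
    accuracy = (a11 + a22) / (a11 + a12 + a21 + a22) * 100
    precision = (a11 / (a11 + a21), a22 / (a22 + a12))   # column-wise
    recall = (a11 / (a11 + a12), a22 / (a22 + a21))      # row-wise
    return accuracy, precision, recall

print(binary_metrics(1312, 6, 31, 986))   # accuracy ~ 98.4% for the values quoted above
```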
Although the results obtained by this simple BNN could also be obtained with complex frequentist models, the uniqueness of the algorithm is that it agrees to classify only 14% of the unknown-class samples as one of the trained classes, as opposed to frequentist approaches wherein all those samples are classified as one of these classes. Table 5 shows the percentage of data from untrained classes which is predicted as a Star or an AGN.
As the algorithm gives significant results on labelled data, we use it to identify possible Stars and AGNs in the raw data 2. The algorithm identifies almost 7.1% of the data as AGNs and 10.04% as Stars; numerically, these numbers are 62807 and 88107 respectively. Although there is a high probability that more Stars and AGNs exist than these numbers suggest, the algorithm simply refuses to give a prediction when it is not confident enough.
Footnote 2: [http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html](http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html)
## 4 Conclusions
In this study, we propose a Bayesian approach to identify Stars and AGNs observed by XMM-Newton. The proposed algorithm avoids making predictions whenever it is unsure about them. Implementing such algorithms will help reduce the number of wrong predictions, which is one of the major drawbacks of algorithms using the frequentist approach. This is important to consider, as there always exists a situation wherein the algorithm receives an input on which it was never trained. The proposed algorithm also identifies 62807 AGNs and 88107 Stars in Data Release 11 of XMM-Newton.
## 5 Conflict of Interest
The authors declare that they have no conflict of interest.
## Data Availability
The raw data used in this study is publicly available from the XMM-Newton data archive. All the code corresponding to the algorithm, and the predicted objects along with their predictions, will be made publicly available on "GitHub" and "paperswithcode" by June 2023.
|
2309.04412 | Cosmology from Cross-Correlation of ACT-DR4 CMB Lensing and DES-Y3
Cosmic Shear | Cross-correlation between weak lensing of the Cosmic Microwave Background
(CMB) and weak lensing of galaxies offers a way to place robust constraints on
cosmological and astrophysical parameters with reduced sensitivity to certain
systematic effects affecting individual surveys. We measure the angular
cross-power spectrum between the Atacama Cosmology Telescope (ACT) DR4 CMB
lensing and the galaxy weak lensing measured by the Dark Energy Survey (DES) Y3
data. Our baseline analysis uses the CMB convergence map derived from ACT-DR4
and $\textit{Planck}$ data, where most of the contamination due to the thermal
Sunyaev Zel'dovich effect is removed, thus avoiding important systematics in
the cross-correlation. In our modelling, we consider the nuisance parameters of
the photometric uncertainty, multiplicative shear bias and intrinsic alignment
of galaxies. The resulting cross-power spectrum has a signal-to-noise ratio $=
7.1$ and passes a set of null tests. We use it to infer the amplitude of the
fluctuations in the matter distribution ($S_8 \equiv \sigma_8 (\Omega_{\rm
m}/0.3)^{0.5} = 0.782\pm 0.059$) with informative but well-motivated priors on
the nuisance parameters. We also investigate the validity of these priors by
significantly relaxing them and checking the consistency of the resulting
posteriors, finding them consistent, albeit only with relatively weak
constraints. This cross-correlation measurement will improve significantly with
the new ACT-DR6 lensing map and form a key component of the joint 6x2pt
analysis between DES and ACT. | S. Shaikh, I. Harrison, A. van Engelen, G. A. Marques, T. M. C. Abbott, M. Aguena, O. Alves, A. Amon, R. An, D. Bacon, N. Battaglia, M. R. Becker, G. M. Bernstein, E. Bertin, J. Blazek, J. R. Bond, D. Brooks, D. L. Burke, E. Calabrese, A. Carnero Rosell, J. Carretero, R. Cawthon, C. Chang, R. Chen, A. Choi, S. K. Choi, L. N. da Costa, M. E. S. Pereira, O. Darwish, T. M. Davis, S. Desai, M. Devlin, H. T. Diehl, P. Doel, C. Doux, J. Elvin-Poole, G. S. Farren, S. Ferraro, I. Ferrero, A. Ferté, B. Flaugher, J. Frieman, M. Gatti, G. Giannini, S. Giardiello, D. Gruen, R. A. Gruendl, G. Gutierrez, J. C. Hill, S. R. Hinton, D. L. Hollowood, K. Honscheid, K. M. Huffenberger, D. Huterer, D. J. James, M. Jarvis, N. Jeffrey, H. T. Jense, K. Knowles, J. Kim, D. Kramer, O. Lahav, S. Lee, M. Lima, N. MacCrann, M. S. Madhavacheril, J. L. Marshall, J. McCullough, Y. Mehta, J. Mena-Fernández, R. Miquel, J. J. Mohr, K. Moodley, J. Myles, A. Navarro-Alsina, L. Newburgh, M. D. Niemack, Y. Omori, S. Pandey, B. Partridge, A. Pieres, A. A. Plazas Malagón, A. Porredon, J. Prat, F. J. Qu, N. Robertson, R. P. Rollins, A. Roodman, S. Samuroff, C. Sánchez, E. Sanchez, D. Sanchez Cid, L. F. Secco, N. Sehgal, E. Sheldon, B. D. Sherwin, T. Shin, C. Sifón, M. Smith, E. Suchyta, M. E. C. Swanson, G. Tarle, M. A. Troxel, I. Tutusaus, C. Vargas, N. Weaverdyck, P. Wiseman, M. Yamamoto, J. Zuntz | 2023-09-08T16:22:36Z | http://arxiv.org/abs/2309.04412v1 | # Cosmology from Cross-Correlation of ACT-DR4 CMB Lensing and DES-Y3 Cosmic Shear
###### Abstract
Cross-correlation between weak lensing of the Cosmic Microwave Background (CMB) and weak lensing of galaxies offers a way to place robust constraints on cosmological and astrophysical parameters with reduced sensitivity to certain systematic effects affecting individual surveys. We measure the angular cross-power spectrum between the Atacama Cosmology Telescope (ACT) DR4 CMB lensing and the galaxy weak lensing measured by the Dark Energy Survey (DES) Y3 data. Our baseline analysis uses the CMB convergence map derived from ACT-DR4 and _Planck_ data, where most of the contamination due to the thermal Sunyaev Zel'dovich effect is removed, thus avoiding important systematics in the cross-correlation. In our modelling, we consider the nuisance parameters of the photometric uncertainty, multiplicative shear bias and intrinsic alignment of galaxies. The resulting cross-power spectrum has a signal-to-noise ratio \(=7.1\) and passes a set of null tests. We use it to infer the amplitude of the fluctuations in the matter distribution (\(S_{8}\equiv\sigma_{8}(\Omega_{\rm m}/0.3)^{0.5}=0.782\pm 0.059\)) with informative but well-motivated priors on the nuisance parameters. We also investigate the validity of these priors by significantly relaxing them and checking the consistency of the resulting posteriors, finding them consistent, albeit only with relatively weak constraints. This cross-correlation measurement will improve significantly with the new ACT-DR6 lensing map and form a key component of the joint 6x2pt analysis between DES and ACT.
keywords: gravitational lensing: weak, cosmology: large-scale structure of Universe, observations, cosmological parameters
## 1 Introduction
Observations of the \(z\sim 1100\) Cosmic Microwave Background (CMB) and the Large Scale Structure (LSS) at \(z\lesssim 3\) give a remarkably consistent picture of the physics and contents of the Universe. Measurements of the primary CMB temperature and polarization anisotropies from _Planck_ 2018 (Planck Collaboration, 2020), ACT Data Release 4 (DR4) (Aiola et al., 2020) and SPT-3G (Dutcher et al., 2021) achieve sub-percent precision on the six main parameters of the spatially flat Lambda Cold Dark Matter (\(\Lambda\)CDM) cosmological model. This model allows us to predict several derived parameters, which can be measured using different probes at lower redshifts. One such derived parameter is the matter clustering parameter \(\sigma_{8}\), which describes the amplitude of fluctuations in the over-density of matter on scales of \(8\,h^{-1}\,\)Mpc. Large photometric and spectroscopic surveys of galaxies have recently begun to place constraints on this parameter comparable in precision to those obtained from CMB predictions. The most recent results from the Dark Energy Survey (DES-Y3, Abbott et al., 2022), the Kilo-Degree Survey (KiDS-1000, Heymans et al., 2021) and the Hyper Suprime-Cam survey (HSC-Y3, More et al., 2023; Miyatake et al., 2023; Sugiyama et al., 2023) all combine galaxy clustering and galaxy weak lensing measurements to infer the value of \(\sigma_{8}\) and the total matter abundance \(\Omega_{\rm m}\), with the best-constrained parameter combination given by \(S_{8}\equiv\sigma_{8}\,(\Omega_{\rm m}/0.3)^{0.5}\).
As the statistical uncertainty from these two different sets of experiments shrank, a discrepancy emerged: high redshift CMB observations favour a value scattering around \(S_{8}\approx 0.83\) (ACT-DR4: \(0.830\pm 0.043\); _Planck_ PR3: \(0.834\pm 0.016\); SPT-3G 2018: \(0.797\pm 0.041\)), whilst low redshift galaxy and lensing observations appear close to a lower value of \(S_{8}\approx 0.77\) (DES-Y3: \(0.776\pm 0.017\); KiDS-1000: \(0.766^{+0.020}_{-0.014}\), HSC-Y3: \(0.775^{+0.043}_{-0.038}\)). This disagreement is marginally statistically significant but remains consistent when comparing different experiments (see Abdalla et al., 2022, for a review; here, we attempt to include a representative sub-sample of the latest results). This disagreement could be due to unaccounted-for systematics in one (or both) types of experiment or due to a missing piece of physics affecting structure growth at different redshifts and/or physical scales. The prospect of modifications to the current understanding of non-linear structure formation and baryonic feedback contributions is pointed to by Amon and Robertson et al. (2023), Amon and Efstathiou (2022), Gu et al. (2023) and references therein. A number of other explanations include new dark sector physics, including interacting dark energy and dark matter (e.g. Poulin et al., 2023), and ultra-light axions (e.g. Rogers et al., 2023).
Along with these two principal probes, several other probes are sensitive to an intermediate range of redshifts. Gravitational lensing of the primary CMB is sensitive to a broad range of redshifts and large angular scales, and it largely agrees with the primary CMB itself on the value of \(S_{8}\): the latest ACT results from the newly produced DR6 lensing map (Qu et al., 2023; Madhavacheril et al., 2023; MacCrann et al., 2023) find \(S_{8}=0.840\pm 0.028\). Cross-correlations of this CMB lensing signal with galaxy surveys are now being detected at increasing signal-to-noise, and are therefore able to provide useful constraints. These cross-correlations are sensitive to lower redshifts and smaller scales compared to the CMB lensing auto-spectrum and generally prefer values of \(S_{8}<0.8\), in agreement with the galaxy clustering and weak lensing measurements (e.g. Robertson et al., 2021: \(0.64\pm 0.08\), Krolewski et al., 2021: \(0.784\pm 0.015\), Chang et al., 2023: \(0.74^{+0.034}_{-0.029}\), Marques et al., 2023: \(0.75^{+0.04}_{-0.05}\)).
Here, we focus specifically on one of these cross-correlations: the one between CMB lensing (\(\kappa_{\rm C}\)) and galaxy weak lensing (\(\gamma_{\rm E}\)), which we will refer to as \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\). To measure this, we use a combination of the ACT-DR4 CMB lensing map (Darwish et al., 2021) and the DES-Y3 galaxy shape catalogue (Gatti and Sheldon et al., 2021). The cross-correlation lensing kernel peaks between those of each probe individually (see lower panel of Figure 1) and hence probes a somewhat different redshift range from the galaxy weak lensing alone. CMB lensing-galaxy weak lensing cross-correlations are not sensitive to galaxy bias and also provide useful information on the systematics of both probes. Specifically, the extra high-redshift lensing bin from the CMB has long been proposed as a useful way of calibrating multiplicative biases in the difficult measurement of galaxy lensing shear and shift biases in the estimated mean photometric redshift of the galaxy samples (e.g. Das et al., 2013).
A number of analyses have already detected this cross-correlation signal (Hand et al., 2015; Liu and Hill, 2015; Kirk et al., 2016; Singh et al., 2017; Harnois-Deraps et al., 2016, 2017; Omori et al., 2019; Marques et al., 2020; Robertson et al., 2021; Chang et al., 2023). Some of these early works compress the signal-to-noise available from their data into a single phenomenological parameter \(A_{\rm cross}\), the amplitude of the cross-correlation power spectrum relative to that predicted by primary CMB data. Note that we denote this parameter by \(A_{\rm cross}\) to distinguish it from the parameter measuring the smearing of the peaks in the primary CMB power spectrum, \(A_{\rm lens}\), as introduced in Calabrese et al. (2008). Robertson et al. (2021) also explicitly measure \(S_{8}\) jointly with other cosmological and systematics parameters, finding a 1D marginalised constraint of \(S_{8}=0.64\pm 0.08\), which is consistent with low redshift weak lensing only constraints but inconsistent with results derived from high redshift CMB measurements. Omori et al. (2023) and Chang et al. (2023) measure the real-space equivalents of the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) data vector and the CMB lensing-galaxy clustering cross-correlation between SPT and DES-Y3, finding \(S_{8}=0.74^{+0.034}_{-0.029}\). They then combine these cross-correlations with the three DES-Y3 data vectors and one SPT lensing data vector for a full '6x2pt' 1 analysis using information from this wide range of kernels spanning a large range of redshifts, finding \(S_{8}=0.792\pm 0.012\)(Abbott et al., 2023).
Footnote 1: So called because it involves six combinations of the two-point correlation functions of CMB lensing \(\kappa\), galaxy lensing \(\gamma\) and galaxy positions \(g\): \(\langle\kappa\kappa\rangle,\langle\gamma\gamma\rangle,\langle gg\rangle, \langle\kappa\gamma\rangle,\langle\kappa g\rangle,\langle\gamma g\rangle\).
In addition, Robertson et al. (2021) and Marques et al. (2020) also assess the consistency of their \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) only data with the priors on multiplicative shear and redshift calibration biases, which are derived by the weak lensing experiments using a combination of simulations and deep ancillary observational data. For these two types of parameters, there is very little constraining power available from current \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) data, but the results are indeed consistent with the priors derived without the assistance of the high redshift CMB lensing bin (which is independent of the calibration parameters).
Another physical effect that affects the amplitude of the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) signal is the intrinsic alignment of galaxies (IA), which can mimic the alignment caused by the weak lensing cosmic shear signal (for a review see Troxel and Ishak, 2015). The amplitude of the power spectrum of intrinsic alignments is highly degenerate with the lensing amplitude and forms a contribution to the observed power spectrum of \(\mathcal{O}(10\%)\)(Hall and Taylor, 2014). Models for the power spectrum of IAs motivated by galaxy formation physics are relatively uncertain but are expected to have redshift and scale dependencies which help to break this degeneracy (Vlah et al., 2020, and references therein).
With \(450\,\mathrm{deg}^{2}\) of overlapping ACT-DR4 and DES-Y3 data, we have the necessary ingredients to perform a full tomographic analysis using the four redshift bins defined by DES-Y3. We include a set of four redshift calibration parameters, four shear calibration parameters, and two parameters describing the IA amplitude and redshift dependence. The current signal-to-noise from ACT-DR4 and DES-Y3 allows us to put constraints on \(S_{8}\) from \(\kappa_{\rm C}\gamma_{\rm E}\) which, although weaker than those of Abbott et al. (2023) (primarily due to a smaller available overlapping sky area), provide an opportunity to favour or disfavour the somewhat inconsistent values for \(S_{8}\) from \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) currently found in the literature. Furthermore, we are careful to ensure our methods are adequate for the incoming three-fold increase in constraining power available from the ACT-DR6 lensing map relative to ACT-DR4. The ACT-DR6 lensing map covers most of the DES survey footprint, allowing for a factor of \(\approx 9\) increase in the area of overlap between the two surveys, compared to the ACT-DR4 lensing map considered in this work. This will bring our constraining power up to a level comparable to the best current measurements of \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) from Chang et al. (2023). Our analysis is performed in harmonic space, rather than real space as in that work, and thus has different sensitivity to behaviour at different redshifts and scales, and may thus provide useful verification of earlier results.
We have structured the paper in the following manner:
* In Section 2, we describe the theory predicting our observable: the angular cross-power spectrum between CMB weak lensing and galaxy weak lensing.
* In Section 3, we briefly describe the overall features of the ACT and DES surveys. We discuss the ACT-DR4 lensing map and DES-Y3 cosmic shear catalogue, which we use as inputs to our analysis.
* In Section 4, we describe cross-power spectrum estimation from these inputs, including the generation of the simulations we use for pipeline validation and estimating the covariance matrix for the data.
* In Section 5, we describe the framework in which we compare the data vector to theory predictions, including the parameterisation of the cosmological model and galaxy weak lensing nuisance model. We also describe our inference pipeline in terms of likelihood, prior, and sampling methodology choices.
* In Section 6, we describe the validation of this pipeline. We conduct a series of null tests on the blinded data vector to ensure there is no significant detectable contamination from un-modelled observational and astrophysical effects. We also inject simulated data into our inference pipeline and show we can recover the input model parameters in an unbiased way. We demonstrate the stability of our measurement of the cosmological parameters to different choices of the underlying modelling and splitting our data vector into sub-samples in a number of ways.
* In Section 7, we show our constraints on cosmological and weak lensing galaxy nuisance parameters. We first infer the value of the lensing amplitude \(A_{\rm cross}\) with respect to the prediction from a standard \(\Lambda\)CDM cosmology. We then show our measurement of the parameters in the full model, including cosmology and galaxy weak lensing nuisance parameters. We also explore our constraining power on the nuisance parameters when DES simulation- and deep data-derived priors are relaxed and when using only high- and low-redshift sub-samples of our data.
* In Section 8, we review our conclusions and discuss their implications.
## 2 Theory
Gravitational lensing of the light from cosmic sources such as the CMB and galaxies allows us to probe the distribution of matter intervening between these sources and the observer. Weak lensing convergence (\(\kappa\)) is the weighted integral of the matter density contrast \(\delta(z,\hat{n})\)(e.g. Schneider, 2005, and references therein)
\[\kappa(\hat{n})=\int W(z)\delta(z,\hat{n})dz, \tag{1}\]
where \(W(z)\) is the lensing weight as a function of redshift and \(\hat{n}\) is the direction on the sky. \(W(z)\) represents the lensing efficiency of the matter distribution along the line of sight. Weak lensing shear (\(\boldsymbol{\gamma}\)), which is a spin-2 quantity with two components, \((\gamma_{1},\gamma_{2})\), is related to \(\kappa\) through the following harmonic space relation
\[\gamma_{\ell m}^{\rm E}=-\sqrt{\frac{(\ell-1)(\ell+2)}{\ell(\ell+1)}}\kappa_{ \ell m}, \tag{2}\]
where \(\gamma_{\ell m}^{\rm E}\) are the E-mode spherical harmonic coefficients of the \(\boldsymbol{\gamma}(\hat{n})\) map (Castro et al., 2005). At linear order in deflection, weak lensing by large-scale structure only contributes to the E-mode signal in the shear. This work uses the correlation between the convergence reconstructed from the observed CMB (\(\kappa_{\rm C}\)) and the weak lensing shear measured by galaxy imaging surveys (\(\boldsymbol{\gamma}\)). \(\kappa_{\rm C}\) is reconstructed from the observed CMB maps using quadratic estimators (Darwish et al., 2021), whereas \(\boldsymbol{\gamma}\) is estimated from the measurement of galaxy ellipticities \(\boldsymbol{e}\equiv(e_{1},e_{2})\), with \(e_{1}\) and \(e_{2}\) being the two components of the galaxy ellipticities (Gatti and Sheldon et al., 2021). Even though, in principle, shear can be estimated from a simple average of ellipticities, the DES-Y3 analysis uses the METACALIBRATION method (Huff and Mandelbaum, 2017):
\[\langle\boldsymbol{\gamma}\rangle\approx\langle\boldsymbol{R}\rangle^{-1} \langle\boldsymbol{e}\rangle, \tag{3}\]
where the matrix \(\boldsymbol{R}\) is the shear response for the galaxies, measured by repeating the ellipticity measurement on sheared versions of the galaxy images:
\[R_{i,j}=\frac{e_{i}^{+}-e_{i}^{-}}{\Delta\gamma_{j}}, \tag{4}\]
where \(e^{\pm}\) is the measurement on an image sheared by a small amount \(\pm\gamma\) and \(\Delta\gamma=2\gamma\).
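As a minimal illustration of Equations (3) and (4), the following Python sketch computes the diagonal of the mean response from ellipticities remeasured on artificially sheared images and applies it to the mean shear. The array inputs and the applied shear value are illustrative assumptions and not the actual DES-Y3 pipeline.

```python
import numpy as np

def mean_metacal_response(e_sheared_plus, e_sheared_minus, gamma_applied=0.01):
    """Mean diagonal response <R_jj> (Eq. 4), averaged over galaxies.

    e_sheared_plus / e_sheared_minus: arrays of shape (n_gal, 2) containing
    ellipticities remeasured on images sheared by +/- gamma_applied.
    """
    delta_gamma = 2.0 * gamma_applied
    return np.mean((e_sheared_plus - e_sheared_minus) / delta_gamma, axis=0)

def response_corrected_shear(e, response_diag):
    """Mean shear estimate <gamma> ~ <R>^-1 <e> (Eq. 3), keeping only the
    diagonal of the response matrix, as is done for the DES-Y3 catalogue."""
    return np.mean(e, axis=0) / response_diag
```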
We model the correlation between \(\kappa_{\rm C}\) and \(\gamma_{\rm E}\) in spherical harmonic space. The angular power spectrum between the CMB convergence \(\kappa_{\rm C}\) and the E-mode of the galaxy shear \(\gamma_{\rm E}\) at multipole \(\ell\), under the Limber approximation (Limber, 1953; LoVerde & Afshordi, 2008), is (e.g. Kaiser, 1992)
\[C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}=\int_{0}^{z_{H}}dz\frac{H(z)}{\chi^{2}(z)c}W_{\kappa}^{\rm CMB}(z)W_{\gamma}^{\rm g}(z)P_{\delta\delta}\Big{(}k=\frac{\ell+0.5}{\chi(z)},z\Big{)}, \tag{5}\]
where \(P_{\delta\delta}(k,z)\) is the matter power spectrum at redshift \(z\), \(\chi(z)\) and \(a(z)\) denote the comoving distance and the scale factor at \(z\), respectively, \(c\) is the speed of light, and \(H(z)\) is the Hubble parameter as a function of \(z\). \(W_{\kappa}^{\rm CMB}(z)\) and \(W_{\gamma}^{\rm g}(z)\) are the lensing weights for the CMB and the source galaxies, respectively. The lensing weight for the CMB is given by:
\[W_{\kappa}^{\rm CMB}(z)=\frac{3H_{0}^{2}\Omega_{\rm m,0}}{2H(z)c} \frac{\chi(z)}{a(z)}\frac{\chi(z^{*})-\chi(z)}{\chi(z^{*})}, \tag{6}\]
where \(z^{*}\) is the redshift of the surface of last scattering of the CMB, and \(\Omega_{\rm m,0}\) and \(H_{0}\) are the matter density and Hubble parameters at the current epoch. The lensing weight for the source galaxies depends on their redshift distribution, \(n(z)\):
\[W_{\gamma}^{\rm g}(z)=\frac{3H_{0}^{2}\Omega_{\rm m,0}}{2H(z)c}\frac{\chi(z)}{a(z)}\int_{z}^{z_{H}}dz^{\prime}n(z^{\prime})\frac{\chi(z^{\prime})-\chi(z)}{\chi(z^{\prime})}. \tag{7}\]
We use the Core Cosmology Library (CCL, Chisari et al., 2019) to compute \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\).2 We model the non-linear contributions to \(P_{\delta\delta}(k)\) using the halofit model (Smith et al., 2003; Takahashi et al., 2012). We also include contributions to the observed power spectrum from astrophysical and experimental effects, which we fully describe in Section 5.2.
Footnote 2: [https://github.com/LSSTDESC/CCL](https://github.com/LSSTDESC/CCL)
In Figure 1, we show the source redshift distribution \(n(z)\) used in this work and the product of the lensing weight functions \(W_{\kappa}^{\rm CMB}(z)W_{\gamma}^{\rm g}(z)\). The latter shows the redshift range of the matter distribution that contributes to the cross-correlation \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\).
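To make the model above concrete, the following sketch evaluates Equation (5) with pyccl (CCL) for a single tomographic bin. The cosmological parameter values and the Gaussian toy \(n(z)\) are placeholders rather than the DES-Y3 products.

```python
import numpy as np
import pyccl as ccl

# Illustrative flat LCDM cosmology with a halofit non-linear power spectrum.
cosmo = ccl.Cosmology(Omega_c=0.26, Omega_b=0.049, h=0.674,
                      sigma8=0.81, n_s=0.965,
                      matter_power_spectrum="halofit")

# Toy source redshift distribution standing in for one DES-Y3 tomographic bin.
z = np.linspace(0.0, 2.0, 200)
nz = np.exp(-0.5 * ((z - 0.8) / 0.25) ** 2)

kappa_cmb = ccl.CMBLensingTracer(cosmo, z_source=1100.0)  # W_kappa^CMB, Eq. (6)
shear = ccl.WeakLensingTracer(cosmo, dndz=(z, nz))        # W_gamma^g,   Eq. (7)

ell = np.arange(100, 1901)
cl_kg = ccl.angular_cl(cosmo, kappa_cmb, shear, ell)      # Eq. (5), Limber approximation
```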
## 3 Data
We use overlapping CMB weak lensing and galaxy weak lensing data from ACT and DES, respectively. We rely extensively on the work of these collaborations in reducing their raw data and preparing science-ready CMB lensing maps and cosmic shear catalogues, but we perform our own analysis to generate the cross-correlation \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) data vector.
### ACT CMB lensing data
We use the ACT-DR4 CMB lensing convergence maps from Darwish et al. (2021). These lensing maps are reconstructed using CMB temperature and polarization measurements by ACT in two frequency channels (98 and 150 GHz) during the 2014 and 2015 observing seasons (Aiola et al., 2020; Mallaby-Kay et al., 2021). The arcminute-resolution maps produced by the ACT Collaboration are described in Choi et al. (2020); Aiola et al. (2020); Madhavacheril et al. (2020). ACT-DR4 consists of lensing maps in two sky regions, Deep-56 (D56) and BOSS-North (BN), with respective sky areas 456 deg\({}^{2}\) and 1633 deg\({}^{2}\)(Darwish et al., 2021). We use the lensing map in the D56 region, which overlaps with the DES-Y3 footprint, as shown in Figure 2.
CMB lensing maps are obtained using the quadratic estimator (Hu & Okamoto, 2002). Signatures of extragalactic astrophysical processes present in the individual frequency maps, such as the Cosmic Infrared Background (CIB) and thermal Sunyaev-Zeldovich (tSZ) effect, lead to biases in the reconstructed convergence map (Osborne et al., 2014; van Engelen et al., 2014). These signals trace the large-scale structure and can lead to biases in the cross-correlation of \(\kappa_{\rm C}\) with other large-scale structure probes, such as galaxy weak lensing. For the range of redshifts (\(z\lesssim 1.0\)) probed by ACT-DR4 and DES-Y3 \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\), the biases due to tSZ are expected to be more prominent than those due to the CIB (Baxter et al., 2019), which is sourced by galaxies spanning a broad range of redshift with the peak between \(z\sim 1\) to 2 (Schmidt et al., 2015). ACT-DR4 provides two lensing maps: a _tSZ-free_ \(\kappa_{\rm C}\) map where the contamination due to the tSZ effect is deprojected (Madhavacheril & Hill, 2018), and a _with-tSZ_ \(\kappa_{\rm C}\) map where the tSZ deprojection is not performed. We refer to results obtained using this latter map as 'ACT-only'. The tSZ-free \(\kappa_{\rm C}\) map uses _Planck_ frequency maps along with the ACT data to perform the internal linear combination step required to deproject tSZ contamination and obtain the tSZ-free CMB map (Madhavacheril and Hill, 2018; Madhavacheril et al., 2020). Hence, we refer to results derived from this map as 'ACT+_Planck_'. In the ACT-DR4 analysis, the CMB lensing maps are reconstructed using Fourier modes between \(\ell^{\rm CMB}_{\rm min}\) and \(\ell^{\rm CMB}_{\rm max}\). The lower multipole, \(\ell^{\rm CMB}_{\rm min}\), is chosen to mitigate the effects of the atmospheric noise and the ACT mapmaker transfer function (Darwish et al., 2021). \(\ell^{\rm CMB}_{\rm max}\) is chosen to avoid contamination due to extragalactic foregrounds. The ACT-only convergence map is reconstructed with \(\ell^{\rm CMB}_{\rm min}=500\) and \(\ell^{\rm CMB}_{\rm max}=3000\). The tSZ-cleaned CMB map obtained using _Planck_ frequency maps contains information on large angular scales, at multipoles \(\ell<500\). Hence, using ACT+_Planck_ data and tSZ deprojection makes a wider range of CMB multipoles suitable for lensing reconstruction, with \(\ell^{\rm CMB}_{\rm min}=100\) and \(\ell^{\rm CMB}_{\rm max}=3350\). In Figure 3, we show the ACT+_Planck_ \(\kappa_{\rm C}\) map over the D56 region. This map is smoothed using a Gaussian kernel of 12 arcmin FWHM for visual purposes only. We use the \(\kappa_{\rm C}\) map without any additional smoothing in the analysis.

Figure 1: Top panel shows the DES-Y3 source galaxy redshift distribution, \(n(z)\), in four tomographic bins. The bottom panel shows the product of the respective galaxy weak lensing kernel with the CMB weak lensing kernel.

Figure 2: DES-Y3 and ACT-DR4 D56 footprints and their common footprint. The sky area common between them is around 450 deg\({}^{2}\).
ACT lensing reconstruction is performed using a lensing analysis mask applied to the individual frequency maps or CMB maps. While computing the angular power spectrum, we use the square of this mask as the mask implicit in the reconstructed \(\kappa_{\rm C}\) map. We also use 511 lensing reconstruction simulations made available by Darwish et al. (2021) to obtain the lensing reconstruction noise.
The ACT \(\kappa_{\rm C}\) maps are in the equirectangular plate carree (CAR) projection. We use the pixell package to convert the maps from CAR projection to the HEALPix pixelization at resolution \(\texttt{Nside}=2048\) (Gorski et al., 2005).3
Footnote 3: [https://github.com/simonsobs/pixell](https://github.com/simonsobs/pixell)
### DES-Y3 galaxy weak lensing data
DES is a photometric survey that carried out observations using the Dark Energy Camera (Flaugher et al., 2015) on the Cerro Tololo Inter-American Observatory (CTIO) Blanco 4-meter Telescope in Chile. We use the weak lensing source galaxies catalogue of the DES-Y3 data. The catalogue is derived from the DES-Y3 GOLD data products (Sevilla-Noarbe et al., 2021). The shape measurement of source galaxies is performed using the METACALIBRATION algorithm (Huff and Mandelbaum, 2017; Sheldon and Huff, 2017) and is discussed in Gatti and Sheldon et al. (2021). After various selection cuts are applied to reduce systematic biases, the catalogue contains the shape measurement \((e_{1},e_{2})\) of \(\sim 1\times 10^{8}\) galaxies. It spans an effective (unmasked) area of 4143 deg\({}^{2}\) with effective number density \(n_{\rm eff}=5.59\) gal/arcmin\({}^{2}\).
The galaxies in the source catalogue are distributed in four tomographic redshift bins shown in Figure 1. Photometric redshifts of these galaxies are estimated using the SOMPZ algorithm (Myles and Alarcon et al., 2021), which makes use of deep observations and additional colour bands from the DES deep fields (Hartley and Choi et al., 2022). The effective number of sources and the uncertainty in one component of the ellipticity measurement (\(\sigma_{e}\)) for each redshift bin are shown in Table 1.
The catalogue provides the inverse variance weight (\(w\)) for the shape measurement of each galaxy. When computing the angular power spectrum, we use these weights to form the mask to be applied to the shear maps. We use the sum-of-weights scheme discussed in Nicola et al. (2021) to prepare this mask. We discuss this procedure in Section 4.2.
#### 3.2.1 Blinding
We use catalogue-level blinding to guard ourselves against experimenter bias which may drive our analysis towards known values of cosmological parameters from existing experiments. We transform the shape catalogue in the same way as in the DES-Y3 analysis (Gatti and Sheldon et al., 2021). This blinding method involves changing ellipticity values with the transformation:
\[|\eta|\equiv 2\,\mathrm{arctanh}|e|\;\rightarrow\;f|\eta|,\]
where \(f\) is an _unknown factor_ between 0.9 and 1.1 (Gatti and Sheldon et al., 2021), which we keep the same for all four tomographic bins. This transformation keeps the ellipticity values below unity and re-scales the estimated shear. Note that the DES-Y3 analysis uses two-stage blinding; the first stage is at the catalogue level, and the second is at the level of summary statistics. In this work, we only perform catalogue-level blinding.
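A minimal sketch of this catalogue-level transformation is given below; the blinding factor shown in the usage comment is an arbitrary illustrative value, not the unknown factor applied to the real catalogue.

```python
import numpy as np

def blind_shapes(e1, e2, f):
    """Rescale |eta| = 2 arctanh|e| by the blinding factor f and map back,
    which keeps |e| < 1 and preserves each galaxy's position angle."""
    e = np.hypot(e1, e2)
    eta_blinded = f * 2.0 * np.arctanh(e)
    e_blinded = np.tanh(eta_blinded / 2.0)
    scale = np.divide(e_blinded, e, out=np.zeros_like(e), where=e > 0)
    return e1 * scale, e2 * scale

# Example with an illustrative (not the true, unknown) factor:
# e1_blind, e2_blind = blind_shapes(e1, e2, f=1.05)
```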
We performed all of our null and validation tests and an initial round of internal collaboration review of the manuscript with the blinding factor still included. During this stage, we did not plot or compare the data bandpowers with the theoretical \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\). We plotted the figures showing parameter inference from the blinded data without the axis values. After we finalized the analysis pipeline, we removed the blinding factor and updated the manuscript accordingly to discuss the results.

\begin{table}
\begin{tabular}{l c c c c} \hline \hline Redshift Bin & Bin-1 & Bin-2 & Bin-3 & Bin-4 \\ \hline \(z_{1}^{\rm PZ}-z_{2}^{\rm PZ}\) & 0.0-0.36 & 0.36-0.63 & 0.63-0.87 & 0.87-2.0 \\ \hline \(n_{\rm eff}\) & 1.476 & 1.479 & 1.484 & 1.461 \\ \hline \(\sigma_{\rm e}\) & 0.243 & 0.262 & 0.259 & 0.310 \\ \hline \(\bar{R}_{1}\) & 0.767 & 0.726 & 0.701 & 0.629 \\ \hline \(\bar{R}_{2}\) & 0.769 & 0.727 & 0.702 & 0.630 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the source galaxy catalogue. \(z_{1}^{\rm PZ}-z_{2}^{\rm PZ}\) is the range of photometric redshifts of the given tomographic bin (Myles and Alarcon et al., 2021), \(n_{\rm eff}\) is the effective number density of source galaxies in units of gal/arcmin\({}^{2}\), and \(\sigma_{\rm e}\) is the uncertainty in the measurement of one component of the shape (Amon and Gruen et al., 2022). \(\bar{R}_{1}\) and \(\bar{R}_{2}\) are the average METACALIBRATION responses for the two galaxy ellipticity components (Gatti and Sheldon et al., 2021).

Figure 3: ACT-DR4 \(\kappa_{\rm C}\) map reconstructed using ACT and _Planck_ data in the D56 region. The map shown here is smoothed with a Gaussian kernel of 12 arcmin FWHM for visual purposes. The x-axis indicates right ascension, and the y-axis indicates declination.
## 4 Method
In this work, we infer the cosmological, astrophysical and observational systematic parameters using the angular cross-power spectrum between the CMB lensing convergence and the tomographic galaxy weak lensing fields. In this section, we discuss the analysis methodology.
### Simulations
We use simulations of CMB convergence \(\kappa_{\rm C}\) and galaxy shape \(\mathbf{\gamma}\) with realistic noise to validate the analysis pipeline and obtain the covariance matrices for \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\). Weak lensing convergence and shear are not expected to be exact Gaussian random fields. A lognormal distribution provides a good approximation of weak lensing convergence and shear fields (Hilbert et al., 2011). Generating lognormal simulations is computationally cheap compared to N-body simulations and/or ray tracing. The suitability of lognormal simulations for estimating power spectrum covariance matrices is discussed in Friedrich & Gruen et al. (2018).
We simulate \(\kappa_{\rm C}\) and \(\mathbf{\gamma}\) signal maps as correlated lognormal random fields with zero mean using the publicly available code package FLASK (Xavier et al., 2016). We generate full sky, correlated signal maps of both convergence and shear at \(\texttt{Nside}=2048\), corresponding to 1.7 arcmin pixel resolution. To generate signal-only map realizations, the inputs to FLASK are (1) the theory angular power spectra describing the auto and cross spectra of the convergence fields (\(\kappa_{\rm C/g}\)) for the CMB and the source galaxies (\(C_{\ell}^{\kappa_{\rm C/g}\kappa_{\rm C/g}}\)), (2) the galaxy source redshift distribution \(n(z)\), and (3) the lognormal shift parameter which determines the skewness of the lognormal distribution for a given variance. The auto and cross power spectra are computed using CCL with the halofit matter power spectrum. \(n(z)\) is the DES-Y3 source galaxy redshift distribution, which is also used as input to CCL while computing \(C_{\ell}^{\kappa_{\rm C/g}\kappa_{\rm C/g}}\). We use the same lognormal shift parameter values as used in Friedrich & Andrade-Oliveira et al. (2021) and Omori & Baxter et al. (2023). These are 0.00453, 0.00885, 0.01918, and 0.03287 for the four DES-Y3 source redshift bins and 2.7 for the CMB. We then apply the ACT-D56 mask to the convergence fields and the DES-Y3 mask to the shear field to obtain signal-only maps over the respective survey footprints. The DES-Y3 mask used at this stage is a binary mask with a pixel value equal to zero if the pixel does not contain any source galaxy and a value of one otherwise.
We use the following procedure to obtain \(\kappa_{\rm C}\) and \(\mathbf{\gamma}\) maps that contain both correlated signal and realistic noise. In simulations, a particular realization of the reconstructed \(\kappa_{\rm C}\) is generally obtained by reconstructing the lensing convergence from a simulated CMB map that has been lensed by a given \(\kappa_{\rm C}\) signal realization. However, in this work, we do not perform such an end-to-end \(\kappa_{\rm C}\) reconstruction with our lognormal signal-only \(\kappa_{\rm C}\) maps. Instead, we use the existing ACT-DR4 \(\kappa_{\rm C}\) signal realizations and reconstruction simulations. The signal in these simulations is not correlated with our large scale structure simulations, so they cannot be used directly. Instead, we subtract the signal realization from these reconstructed \(\kappa_{\rm C}\) maps to obtain a realization of the \(\kappa_{\rm C}\) reconstruction noise. We then add these resultant noise maps to our lognormal \(\kappa_{\rm C}\) signal-only maps generated using FLASK, which are, by construction, correctly correlated with the \(\mathbf{\gamma}\) signal. We generate simulations of the noise in the shear (the uncertainty caused by the intrinsic galaxy shape) using a random rotation of the galaxy ellipticities in the DES-Y3 shear catalogue: \((e_{1}+ie_{2})\rightarrow\exp{(2i\phi)}(e_{1}+ie_{2})\), where \(\phi\) is a uniform random number in the range \([0,2\pi)\). A shear noise map is obtained using this catalogue where the galaxy shapes are rotated. We add these shear noise maps to the shear signal-only maps to obtain shear maps with realistic noise.
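The random-rotation step that produces the shear noise realisations can be written compactly as below (a sketch; the catalogue ellipticity arrays and the FLASK signal maps are assumed to exist already).

```python
import numpy as np

def rotate_ellipticities(e1, e2, rng):
    """Randomly rotate galaxy shapes, (e1 + i e2) -> exp(2 i phi)(e1 + i e2),
    which erases the cosmological shear signal but keeps the shape noise."""
    phi = rng.uniform(0.0, 2.0 * np.pi, size=e1.size)
    e_rot = np.exp(2j * phi) * (e1 + 1j * e2)
    return e_rot.real, e_rot.imag

# Usage sketch: a noise-only shear map built from the rotated catalogue is then
# added to a FLASK signal-only shear map to obtain one simulated noisy map:
# rng = np.random.default_rng(seed=1)
# e1_noise, e2_noise = rotate_ellipticities(e1, e2, rng)
# gamma_sim_map = gamma_signal_map + gamma_noise_map
```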
### Shear map making
We perform our analysis in harmonic space on the maps prepared in the HEALPix pixelization. Along with the shape measurements, the DES-Y3 shape catalogue contains weights and the METACALIBRATION response \((R_{1},R_{2})\) for each galaxy. The \(R_{1}\) and \(R_{2}\) are the diagonals of the response matrix \(\mathbf{R}\) discussed in Section 2. While estimating shear from the shape measurement, we do not use the response for each galaxy, but the average response as used in Gatti & Sheldon et al. (2021). We first subtract the non-zero mean of each ellipticity component for each galaxy using the weighted average and then correct for the response using the following expression:
\[\hat{e}_{i}=\frac{1}{R}\Big{(}e_{i}-\frac{\sum_{j}w_{j}e_{j}}{\sum_{j}w_{j}} \Big{)}, \tag{8}\]
where the average response \(\bar{R}\) for each ellipticity component of four tomographic bins is given in Table 1 and the labels \(i\) and \(j\) run over all of the galaxies in a given tomographic bin. The above subtraction is carried out for each tomographic bin separately. The shear map for a given bin is obtained from these mean subtracted and response-corrected galaxy shapes. The shear estimate for a given pixel \(p\) is the inverse variance weighted average of galaxy ellipticities
\[\mathbf{\gamma}(n_{p})=\frac{\sum_{i\in p}w_{i}\hat{\mathbf{e}}_{i}}{\sum_{i\in p}w_{ i}}, \tag{9}\]
where the summation is over all the galaxies that fall within the area of pixel \(p\).
To obtain the mask to be used with the shear maps, we use the sum-of-weights scheme, where we form the map from the inverse variance weights (\(w_{i}\)) given in the DES-Y3 catalogue (Nicola et al., 2021),
\[W(n_{p})=\sum_{i\in p}w_{i}. \tag{10}\]
We show the representative shear mask in Figure 4, along with the source galaxy number density map. These maps indicate the inhomogeneity in the galaxy count and their weights. Figure 5 shows the map of the shear magnitude obtained using the procedure discussed in this section.
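The map-making of Equations (8)-(10) amounts to weighted histograms over HEALPix pixels; the sketch below, using healpy and numpy with catalogue columns as assumed inputs, illustrates the procedure for one tomographic bin.

```python
import numpy as np
import healpy as hp

def make_shear_maps(ra_deg, dec_deg, e1, e2, w, R1_bar, R2_bar, nside=2048):
    """Inverse-variance-weighted shear maps (Eq. 9) from mean-subtracted,
    response-corrected ellipticities (Eq. 8), plus the sum-of-weights
    mask of Eq. (10)."""
    npix = hp.nside2npix(nside)
    pix = hp.ang2pix(nside, ra_deg, dec_deg, lonlat=True)

    # Eq. (8): subtract the weighted mean and correct by the average response.
    e1_hat = (e1 - np.average(e1, weights=w)) / R1_bar
    e2_hat = (e2 - np.average(e2, weights=w)) / R2_bar

    # Eq. (10): sum-of-weights map, used as the (unapodised) shear mask.
    wmap = np.bincount(pix, weights=w, minlength=npix)

    # Eq. (9): inverse-variance-weighted average of ellipticities per pixel.
    g1 = np.bincount(pix, weights=w * e1_hat, minlength=npix)
    g2 = np.bincount(pix, weights=w * e2_hat, minlength=npix)
    occupied = wmap > 0
    g1[occupied] /= wmap[occupied]
    g2[occupied] /= wmap[occupied]
    return g1, g2, wmap
```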
### Power spectrum bandpowers and covariance matrix
While computing the angular power spectrum with the partial sky map, one needs to consider the correlations between the spherical harmonic coefficients induced by the mask. This problem is addressed by the pseudo-\(C_{\ell}\) formalism, such as in the MASTER algorithm (Hivon et al., 2002). The algorithm deconvolves the effect of the mask and provides an estimate of the power spectrum (\(\hat{C}_{\mathbf{q}}\)) binned over a certain range of multipoles \(\ell\in\mathbf{q}\). The ensemble average of \(\hat{C}_{\mathbf{q}}\) is equal to the weighted average of the underlying angular power spectrum of the full sky map over a range of multipoles. The bandpower window function specifies the range of multipoles and the multipole weights. To compute these power spectrum bandpowers on the partial sky maps, we use the MASTER algorithm and its application for spin-2 fields (Hikage et al., 2011), as implemented in NAMASTER (Alonso et al., 2019).
In NAMASTER, we specify separate masks for the \(\kappa_{\mathrm{C}}\) and \(\mathbf{\gamma}\) fields. For \(\kappa_{\mathrm{C}}\), we use the ACT-DR4 analysis mask for the D56 region. We convert this mask from CAR pixelization to HEALPix pixelization at \(\texttt{Nside}=2048\) resolution using the reproject module of the pixell package. The original mask in CAR projection is apodized. We do not introduce any extra apodization in the mask after reprojecting to HEALPix. This analysis mask is applied to CMB maps used in the quadratic estimator while reconstructing \(\kappa_{\mathrm{C}}\); hence, the reconstructed \(\kappa_{\mathrm{C}}\) map has the mask implicit in it. We use the square of the analysis mask as the mask implicit in reconstructed \(\kappa_{\mathrm{C}}\).4 For the shear field, we use the sum of inverse variance weights mask as expressed in Equation (10) and depicted in Figure 4. This procedure is equivalent to dividing by the variance of the shear estimate in Equation (9). Compared to the \(\kappa_{\mathrm{C}}\) mask, the shear mask is highly non-uniform, as evident from Figure 4. We do not apodize this mask because apodization would lead to losing a substantial sky fraction. Using the simulations described above, we verify that our masking choices do not affect the recovered data vector. In the remainder of this section, we discuss the computation of the pseudo-\(C_{\ell}\) and the validation of the simulations at the power spectrum level.
Footnote 4: This information is specified in NAMASTER using masked_on_input = True keyword argument.
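A condensed sketch of the bandpower computation with NAMASTER (pymaster) is shown below; the map and mask arrays are assumed to have been prepared as described above, and the binning anticipates the \(\ell=100\)-1900, \(\Delta\ell=300\) choices justified later in this section.

```python
import numpy as np
import pymaster as nmt

def cross_bandpowers(kappa_map, mask_kappa_sq, gamma1_map, gamma2_map, weight_mask):
    """Decoupled kappa_C x gamma_E (and kappa_C x gamma_B) bandpowers."""
    # The reconstructed kappa map already carries the (squared) analysis mask,
    # which is what masked_on_input=True declares.
    f_kappa = nmt.NmtField(mask_kappa_sq, [kappa_map], masked_on_input=True)
    # Spin-2 shear field with the unapodised sum-of-weights mask of Eq. (10).
    f_gamma = nmt.NmtField(weight_mask, [gamma1_map, gamma2_map])

    # Bandpowers of width 300 between ell = 100 and 1900.
    edges = np.arange(100, 2200, 300)
    bins = nmt.NmtBin.from_edges(edges[:-1], edges[1:])

    wsp = nmt.NmtWorkspace()
    wsp.compute_coupling_matrix(f_kappa, f_gamma, bins)
    cl_coupled = nmt.compute_coupled_cell(f_kappa, f_gamma)
    cl_kappa_gammaE, cl_kappa_gammaB = wsp.decouple_cell(cl_coupled)
    return cl_kappa_gammaE, cl_kappa_gammaB
```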
In the reconstructed \(\kappa_{\mathrm{C}}\) map, lower multipoles are affected by the mean-field bias caused by statistical anisotropy due to non-lensing effects, such as the analysis mask, inhomogeneous noise and other non-idealities in the data. For ACT D56 \(\kappa_{\mathrm{C}}\), multipoles below \(\ell\approx 50\) are affected by the mean-field (Darwish et al., 2021). Hence, we neglect the first bandpower of \(C_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}\) computed in the range \(\ell=0-100\). This analysis uses the \(C_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}\) computed at multipoles above \(\ell_{\mathrm{min}}=100\). The redistribution of matter by baryonic processes also affects the weak lensing angular power spectrum at small scales. To accurately model these scales, one needs to consider the effect of baryons in the modelling. In this work, we do not consider the modelling of the baryons and choose \(\ell_{\mathrm{max}}=1900\) so that the effect of baryons on \(C_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}\) is negligible at the given statistical uncertainty. To assess the effect of baryons, we use the halo model with baryon modelling considered in HMCODE (Mead et al., 2015) as implemented in CAMB (Lewis et al., 2000; Howlett et al., 2012).
Figure 4: The source galaxy number density map in gal/arcmin\({}^{2}\) unit (\(top\)) and the weight map (\(bottom\)) for the galaxies in DES-Y3 Bin-4. For the weight map, the value in each pixel is the summation of the inverse variance weights of all the galaxies that fall within that pixel. The weight map is used as the shear field mask without apodization.
Figure 5: Map of the magnitude of shear (\(\sqrt{\gamma_{1}^{2}+\gamma_{2}^{2}}\)) for the tomographic Bin-4, where the value of shear in each pixel is estimated using Equation (9). The map is smoothed with a Gaussian kernel of 12 arcmin FWHM for visual purposes.
Modelling of baryons in HMCODE is done through two parameters: the halo concentration parameter \(A_{\rm HM}\) (HMCode_A_baryon) and the halo profile parameter \(\eta\) (HMCode_eta_baryon). We compute the theory \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) for the fiducial cosmological parameters and over a range of values of \(A_{\rm HM}\) and \(\eta\). For \(A_{\rm HM}\), we consider the range \(A_{\rm HM}=2\) to \(4.5\), and \(\eta\) is determined by the empirical relation \(\eta=1.03-0.11A_{\rm HM}\) (Mead et al., 2015). We compare the resulting variation in the theory \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) with the uncertainty on \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) expected for ACT-DR4 and DES-Y3. We find that, over the range of baryon parameters considered here, the relative effect of baryons on \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) for \(\ell>1000\) can be up to \(20\%\). At the redshift where \(W_{\kappa}^{\rm CMB}(z)W_{\gamma}^{\rm g}(z)\) has the peak, \(\ell_{\rm max}=1900\) corresponds to the comoving wavenumber of \(k_{\rm max}\equiv\ell_{\rm max}/\chi(z)=0.96,0.72,0.53,0.44\) Mpc\({}^{-1}\) for the four DES-Y3 tomographic redshift bins, respectively. The matter perturbations at these scales are non-linear and sensitive to baryonic processes (Mead et al., 2015). However, the effect of baryons is still well within two per cent of the statistical uncertainty on \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) up to \(\ell=1900\). Moreover, for the given noise level, the expected SNR of \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) is saturated beyond \(\ell\approx 1900\). Hence, we adopt \(\ell_{\rm max}=1900\) in this analysis. We choose the multipole bin width \(\Delta\ell=300\) with uniform weights for the power spectrum binning.
We compare the pseudo-\(C_{\ell}\) computed from simulated maps with the input theory power spectrum. In the left column of Figure 6, we compare the FLASK signal-only simulation bandpowers computed over the survey footprint and the input theory \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\). We compare the mean of the pseudo-\(C_{\ell}\) from 511 simulations with the binned input theory power spectrum. We perform the binning of the theory \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) while properly taking into account the effect of bandpower window as discussed in Section 2.1.3 of Alonso et al. (2019). As shown in the left panel of Figure 6, we see no significant bias between pseudo-\(C_{\ell}\) computed from simulated maps and the input \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) and conclude that our simulated signal maps are consistent with the appropriate cosmological signal. This also verifies that the mode decoupling by NAMASTER for the given masks gives an unbiased power spectrum estimate. We then compute the pseudo-\(C_{\ell}\) of the 511 simulations with signal and noise. In the right column of Figure 6, we compare the mean of these 511 bandpowers with the input theory. Here also, we do not see a significant bias and find that the bandpowers computed from the noisy simulations are consistent with the input theory. This validates the power spectrum computation part of the analysis pipeline.
We use these 511 simulation bandpowers to obtain the covariance matrix for the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) data vector. We expect this covariance matrix to accurately capture features of real data relevant to the power spectrum analysis. These include the non-Gaussianity of the signal modelled as the lognormal field, the inhomogeneous and non-Gaussian nature of \(\kappa_{\rm C}\) reconstruction noise, the inhomogeneous nature of shear noise arising from variations in the number count and the inverse variance weights. Each simulation bandpower realization is obtained using NAMASTER with the same mask treatment as applied to the data and hence captures the effect of using the partial sky.
Figure 6: Comparison of mean of \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) (\(\bar{C}_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\)) from signal only (_left panel_) and signal \(+\) noise (_right panel_) simulations with the theory \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\), enabling us to compare our pipeline against expectations. _Left top panel_: Comparison between input theory power spectra (dashed line) and the mean of the power spectrum of 511 signal-only simulated maps (points). _Left bottom panel_: The relative difference between the two. The error bars represent the uncertainty on the mean of 511 simulations. _Right top panel_: Comparison between input theory power spectra and the mean of the power spectrum of 511 simulated maps with signal and noise. _Right bottom panel_: The difference between the two in units of the uncertainty on the mean of \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\). The error bars are the standard deviation of the mean, i.e. \(\sigma[\bar{C}_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}]=\sigma[C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}]/\sqrt{N_{\rm sims}}\).

We also construct a theoretical covariance matrix for the pseudo-\(C_{\ell}\) using NAMASTER. This covariance matrix takes into account the effect of the mask, the Gaussian contribution based on the auto and cross theory \(C_{\ell}\) of \(\kappa_{\rm C}\) and \(\gamma_{\rm E}\), and the noise power spectrum \(N_{\ell}\) of the respective field. For the \(\kappa_{\rm C}\) noise power spectrum, we use the mean of the noise power spectra obtained from 511 maps of the \(\kappa_{\rm C}\) noise simulation. We obtain the shear noise power spectrum, \(N_{\ell}^{\gamma\gamma}\), using the following analytical expression (Nicola et al., 2021):
\[N_{\ell}^{\gamma\gamma}=A\frac{\sum_{i}w_{i}^{2}\sigma_{e,i}^{2}}{(\sum_{i}w_{i })^{2}}, \tag{11}\]
where \(A\) is the sky area and \(\sigma_{e,i}^{2}=(e_{i,1}^{2}+e_{i,2}^{2})/2\). The summation in the above equation is carried out only for the galaxies within the common region between the DES-Y3 and ACT D56 footprints. We find that using \(N_{\ell}^{\gamma\gamma}=\sigma_{e}^{2}/n_{\rm eff}\), with the \(\sigma_{e}^{2}\) and \(n_{\rm eff}\) values given in Table 1, leads to a somewhat lower \(N_{\ell}^{\gamma\gamma}\) than that obtained using Equation (11) evaluated for galaxies only over the ACT D56 region. This is because \(\sigma_{e}^{2}\) and \(n_{\rm eff}\) given in Table 1 are obtained from all the galaxies within the respective tomographic redshift bin. In contrast, for the cross-correlation, we only need \(\sigma_{e}^{2}\) and \(n_{\rm eff}\) for the region that overlaps with the ACT D56 region. In Figure 7, we compare the diagonals of the two covariance matrices. We find good agreement between the two covariance matrix estimates. In Figure 8, we show the correlation matrix obtained from the simulation covariance matrix as well as the Gaussian covariance matrix. With the choice of \(\Delta\ell=300\), we see no significant correlation between nearby bandpowers. Some of the matter lensing the source galaxies in two different redshift bins is the same. This leads to a correlation between the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) corresponding to these redshift bins. The Gaussian covariance part in Figure 8 clearly shows the non-zero off-diagonal terms that arise due to these correlations, and as expected, the correlations between the two highest redshift bins, Bin 3 and Bin 4, are relatively larger. We include this inter-redshift-bin correlation in our simulations, and it is therefore propagated into the parameter inference.
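The analytical shear noise term of Equation (11) that enters this Gaussian covariance is straightforward to evaluate from the catalogue columns; the sketch below assumes the galaxy sample has already been restricted to the overlap with the ACT D56 region, and that the area is supplied in steradians (an assumption about the intended unit).

```python
import numpy as np

def shear_noise_power(w, e1, e2, area_sr):
    """Analytical shear noise power spectrum N_ell^{gamma gamma} (Eq. 11).

    w, e1, e2 : catalogue weights and ellipticity components of galaxies in
                the overlap region.
    area_sr   : the corresponding sky area, assumed here to be in steradians.
    """
    sigma_e2 = 0.5 * (e1 ** 2 + e2 ** 2)            # per-galaxy shape variance
    return area_sr * np.sum(w ** 2 * sigma_e2) / np.sum(w) ** 2
```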
## 5 Likelihood and Inference
To evaluate the likelihood for the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) bandpowers, we make use of the Simons Observatory Likelihoods and Theories (SOLikeT) framework.5 SOLikeT is a unified framework for analysing cosmological data from CMB and LSS experiments being developed for the Simons Observatory (SO, Ade et al., 2019). Here we use the KappaGammaLikelihood module to compute the theory \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) bandpowers at a given set of parameters \(\mathbf{\theta}\). This computation uses CAMB (Lewis et al., 2000; Howlett et al., 2012) matter power spectra with Limber integrals evaluated by CCL (Chisari et al., 2019). The KappaGammaLikelihood module has been verified to reproduce published results.6
Footnote 5: [https://github.com/simonsobs/SOLikeT/](https://github.com/simonsobs/SOLikeT/)
Footnote 6: [https://github.com/simonsobs/SOLikeT/pull/S8#issuecomment-1213989444](https://github.com/simonsobs/SOLikeT/pull/S8#issuecomment-1213989444), where the measurement of Hand et al. (2015) is reproduced.
### Cosmological model and parameters
We consider a cosmology with fiducial parameters as given by the Planck Collaboration (2020) _Planck_ "base-\(\Lambda\)CDM" TT,TE,EE+lowE+lensing model, with values as described in Table 2. Parameters are held at these fixed values for simulations, while parameter inference runs are initialised centred around these values, then sampled within the prior ranges shown and marginalised over to give results on other parameters. Where no prior is shown, the values are kept fixed throughout. Our main results are the posteriors of the parameters \(\Omega_{m}\), \(\sigma_{8}\) and \(S_{8}\equiv\sigma_{8}\,(\Omega_{m}/0.3)^{0.5}\), where \(S_{8}\) is the standard parameter optimally constrained by galaxy lensing, in contrast to \(S_{8}^{\rm CMBL}\equiv\sigma_{8}\,(\Omega_{m}/0.3)^{0.25}\), which is optimally constrained by CMB lensing alone. The leftmost panel of Figure 9 shows a simulation of our data vector (as described in Section 4.1) plotted against predictions from cosmologies with different relevant \(S_{8}\) values, giving an idea of the constraining power of the data.

Figure 7: _Top:_ Square root of the diagonal of the analytical (dot) and simulation (cross) covariance matrices for four tomographic bins. _Bottom:_ The ratio of the diagonal of two covariance matrices, indicating agreement between the two within \(\pm 5\%\).

Figure 8: Correlation matrix obtained from the covariance matrix over the multipole range of \(\ell=100\) to 1900 with \(\Delta\ell=300\). The upper triangle shows the elements of the correlation matrix obtained from the analytical covariance matrix, and the lower triangle shows that from the simulation covariance matrix. Note that the colour scale is saturated at \(\pm 0.4\) to clearly show the fluctuations in the off-diagonal terms.
### Nuisance model and parameters
Systematic uncertainties on galaxy cosmic shear power spectra are frequently dealt with by marginalising over simple parameterised models. Here, we consider three such models, following the choices made in the baseline DES-Y3 analyses:
* **Multiplicative shear bias**: The process of measuring weak lensing shear from noisy images can induce biases on the inferred power spectrum (see, e.g. MacCrann et al., 2022). For current experiments, including DES-Y3, it has been shown to be adequate to model these using a single multiplicative parameter per tomographic bin \(m_{i}\)(Heymans et al., 2006; Huterer et al., 2006; Kitching et al., 2020), which modifies the power spectra as: \[C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E},i}\to(1+m_{i})C_{\ell}^{\kappa_{\rm C} \gamma_{\rm E},i}.\] (12)
In the second from the left panel of Figure 9 we show the effect of varying the \(m\) nuisance parameter on the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) spectra alongside our simulated measurements for tomographic bin 4.
* **Source redshift distribution calibration**: Galaxy shear power spectra are highly sensitive to the redshift distribution function \(n(z)\) within each tomographic bin of the sources used. Where the samples are selected using photometric information, as is the case in DES-Y3, the estimated \(n(z)\) may have significant uncertainties in both the overall mean redshift and detailed shape. Though more sophisticated parameterisations of these uncertainties exist and are expected to be important for near-future experiments, it has been shown that for the weak lensing source galaxies in DES-Y3, it is adequate to consider only the uncertainty on the mean of the \(n(z)\) within each tomographic bin (e.g. Cordero and Harrison et al., 2022). We include four additional nuisance parameters \(\Delta z_{i}\) for a shift in the mean of each tomographic bin: at each likelihood evaluation step, we shift the distribution in each tomographic bin \(i\) according to:
\[n_{i}(z)\to n_{i}(z+\Delta z_{i}). \tag{13}\]
The second from the right panel of Figure 9 shows the effect of varying the \(\Delta z\) nuisance parameter on the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) spectra alongside our simulated measurements for tomographic bin 4.
* **Galaxy Intrinsic Alignments**: The use of galaxy images as a proxy for gravitational shear relies on the assumption that intrinsic galaxy shapes are randomly oriented, which is not the case in reality. Physically close pairs of galaxies will tend to align their major axes towards overdensities local to them in positively correlated 'intrinsic-intrinsic' (II) alignments. Negative 'shear-intrinsic' (GI) correlations are also created when distant galaxies are tangentially sheared by lensing from foreground overdensities, which more nearby galaxies are gravitationally aligned towards. The power spectrum of the contaminating Intrinsic Alignments (IA) can be physically modelled in a number of ways (see Samuroff et al., 2023, and references therein). Here, we adopt the Nonlinear Linear Alignment (NLA) model (Hirata et al., 2007; Bridle and King, 2007), which makes the simplifying assumption that IAs are from E-mode GI alignments only, neglecting the intrinsic-intrinsic B-mode term which is also possible to consider in the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) observable. The NLA model treats the GI alignment power spectrum as a simple scaling of the matter power spectrum with a redshift evolution. We infer the two parameters \(A_{\rm IA}\) and \(\eta_{\rm IA}\) across all tomographic bins, corresponding to a substitution in the galaxy lensing kernel: \[W_{\gamma}^{\rm g}(z)\to W_{\gamma}^{\rm g}(z)-A_{\rm IA}C_{1}\rho_{\rm cr}\,\frac{ \Omega_{\rm m}}{G(z)}n(z)\Big{(}\frac{1+z}{1+z_{0}}\Big{)}^{\eta_{\rm IA}}.\] (14) Here, \(z_{0}\) is the pivot redshift, fixed to 0.62 as in Secco and Samuroff et al. (2022), \(G(z)\) is the linear growth factor and \(C_{1}=5\times 10^{-14}\,M_{\odot}^{-1}h^{-2}\)Mpc\({}^{3}\) is the normalisation constant. The rightmost panel of Figure 9 shows the effect of varying \(A_{\rm IA}\) on the theory spectra for tomographic bin 4. Whilst the more sophisticated Tidal Alignment and Tidal Torquing (TATT) model was adopted as fiducial for the DES-Y3 3x2pt analysis of Abbott et al. (2022), when considering only the shear part of the data, Secco and Samuroff et al. (2022) find a mild preference for the simpler NLA model (row three of Table 3 in that work), which we therefore choose to adopt for reasons of both model and implementation simplicity. A short code sketch showing how these three nuisance effects enter the theory prediction is given after this list.
Figure 9: Illustrative changes in predicted data vectors as the cosmological and nuisance parameters are individually varied, compared to the simulated data vector described in Section 4.1 (to be concise, we only show tomographic Bin 4 as this is the highest SNR bin). The left-most panel also shows cosmologies with \(S_{8}\) as found by _Planck_ 2018 primary CMB (Planck Collaboration, 2020) and the KiDS-1000+BOSS+2dFLenS galaxy clustering and weak lensing survey (Heymans et al., 2021) as examples of the range of values present in the current literature.
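As noted at the end of the list above, the following sketch illustrates how the three nuisance effects of Equations (12)-(14) enter the theory prediction for a single tomographic bin, again using pyccl; the function name and default parameter values are illustrative.

```python
import numpy as np
import pyccl as ccl

def model_cl_kappa_gammaE(cosmo, z, nz, ell,
                          m=0.0, dz=0.0, A_IA=0.0, eta_IA=0.0, z0=0.62):
    """C_ell^{kappa_C gamma_E} for one bin with a multiplicative shear bias
    (Eq. 12), a shift of the n(z) mean (Eq. 13) and NLA intrinsic alignments
    (Eq. 14)."""
    # Eq. (13): shift the redshift distribution, n(z) -> n(z + dz).
    nz_shifted = np.interp(z + dz, z, nz, left=0.0, right=0.0)

    # Eq. (14): NLA amplitude with a power-law redshift evolution; pyccl's
    # WeakLensingTracer applies the standard NLA normalisation internally
    # when an ia_bias array is supplied.
    a_ia_of_z = A_IA * ((1.0 + z) / (1.0 + z0)) ** eta_IA
    shear = ccl.WeakLensingTracer(cosmo, dndz=(z, nz_shifted),
                                  ia_bias=(z, a_ia_of_z))
    kappa_cmb = ccl.CMBLensingTracer(cosmo, z_source=1100.0)

    cl = ccl.angular_cl(cosmo, kappa_cmb, shear, ell)
    # Eq. (12): multiplicative shear calibration bias.
    return (1.0 + m) * cl
```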
### Likelihood computation
We compute a simple Gaussian likelihood (\(\mathcal{L}\)) between our data vector bandpowers and binned theory vector at a given set of cosmological and nuisance parameters \(\mathbf{\theta}\) using the covariance matrix, \(\mathbb{C}\), calculated in Section 4.3:
\[-2\ln\mathcal{L}=\sum_{\ell\ell^{\prime}}\big{[}\hat{C}_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}-C_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}(\mathbf{\theta})\big{]}\mathbb{C}_{\ell\ell^{\prime}}^{-1}\big{[}\hat{C}_{\ell^{\prime}}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}-C_{\ell^{\prime}}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}(\mathbf{\theta})\big{]}, \tag{15}\]
where \(\hat{C}_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}\) is the data vector and \(C_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}\) is the model power spectrum. The posterior probability for the parameters is then proportional to the likelihood multiplied by the priors (\(\Pi\)): \(P(\mathbf{\theta}|\hat{C}_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}})\propto \mathcal{L}(\hat{C}_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}|\mathbf{ \theta})\Pi(\mathbf{\theta})\). The choices of prior distributions are detailed in Section 5.4.
#### 5.3.1 Hartlap correction
Because the fiducial covariance matrix is estimated from a finite number of simulations, the inverse covariance matrix used in the likelihood computation is known to be a biased estimate of the true inverse covariance matrix (Anderson, 2003; Hartlap et al., 2007). To account for this, we apply the well-known Hartlap correction to the inverse covariance matrix:
\[\mathbb{C}^{-1}\rightarrow\alpha\mathbb{C}^{-1};\alpha\equiv\frac{N_{\mathrm{ sims}}-N_{\mathrm{data}}-2}{N_{\mathrm{sims}}-1}, \tag{16}\]
where \(N_{\mathrm{sims}}\) is the number of simulations and \(N_{\mathrm{data}}\) is the length of the data vector. We use the corrected covariance matrix for computing our likelihood. For our 511 simulations and 24 data points, the size of the Hartlap correction is \(\alpha=0.951\). The choice of \(\Delta\ell=300\) reduces the total number of data points in the data vector, which is optimal compared to \(\Delta\ell<300\) because, for a given number of simulations, fewer data points minimize the impact of the Hartlap correction.
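Putting Equations (15) and (16) together, the likelihood evaluation reduces to a few lines of numpy; the sketch below is illustrative and is not the SOLikeT implementation.

```python
import numpy as np

def log_like(data_cl, model_cl, cov_sims, n_sims=511):
    """Gaussian log-likelihood (Eq. 15) with the Hartlap factor (Eq. 16)
    applied to the inverse of the simulation-based covariance matrix."""
    n_data = data_cl.size
    alpha = (n_sims - n_data - 2.0) / (n_sims - 1.0)  # = 0.951 for 511 sims, 24 points
    icov = alpha * np.linalg.inv(cov_sims)
    resid = data_cl - model_cl
    return -0.5 * resid @ icov @ resid
```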
### Prior choice
In Table 2, we show the set of cosmological and nuisance parameters varied in our Monte Carlo chains. Fiducial cosmological parameters (the values at which simulations are performed and that in inference sampling runs are used to initialise the chains) are chosen to coincide with those of the _Planck_ Collaboration's "base-\(\Lambda\)CDM" TT,TE,EE+lowE+lensing model from Planck Collaboration (2020) and priors are wide enough to capture all reasonable cosmologies at the time of writing. Whilst our \(C_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}\) observable depends only weakly on the Hubble expansion parameter \(H_{0}\), we found in initial runs based on a simulated data vector that when this parameter was kept fixed, a sharp boundary appeared in the two-dimensional (\(\sigma_{8},\Omega_{\mathrm{m}}\)) plane, with the lower right section of the 'banana' shape being cut off. This did not affect the posterior on \(S_{8}\) but did lead to an artificial bi-modality in the one dimensional \(\Omega_{\mathrm{m}}\) constraint. Allowing \(H_{0}\) to vary removed this effect and is in line with the prior treatments of CMB lensing and cross-correlation data vectors (e.g. Chang et al., 2023; Madhavacheril et al., 2023).
### Posterior sampling
We sample from the posterior using the Markov Chain Monte Carlo Metropolis sampler distributed with Cobaya (Lewis & Bridle, 2002; Lewis, 2013; Torrado & Lewis, 2019, 2021). We first run a chain in our fiducial parameterisation and with the simulated data vector to convergence without defined scales for the mixed Gaussian-exponential proposal distribution used for taking steps.7 We subsequently use the proposal covariance matrix learned during this chain to speed up convergence for all subsequent chains. We regard chains as converged when the Gelman-Rubin criterion reaches a value \(R-1<0.01\), after the first 30% of each chain is removed as burn-in. For all of our chains, this results in a number of effective samples in the range of 1500-2000.
Footnote 7: As described in [https://cobaya.readthedocs.io/em/latest/sampler_ncnc.html#covariance-matrix-of-the-proposal-pdf](https://cobaya.readthedocs.io/em/latest/sampler_ncnc.html#covariance-matrix-of-the-proposal-pdf).
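For reference, a schematic version of the convergence criterion applied to a set of chains for a single parameter is sketched below; Cobaya's internal implementation differs in detail.

```python
import numpy as np

def gelman_rubin_minus_one(chains):
    """Schematic R - 1 for one parameter, given `chains` of shape
    (n_chains, n_samples) with the burn-in already removed."""
    n = chains.shape[1]
    means = chains.mean(axis=1)
    variances = chains.var(axis=1, ddof=1)
    W = variances.mean()                    # mean within-chain variance
    B = n * means.var(ddof=1)               # between-chain variance
    var_est = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_est / W) - 1.0

# Usage sketch:
# converged = gelman_rubin_minus_one(np.asarray(chain_samples)) < 0.01
```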
## 6 Validation of data and method
We validate the data vector using two null tests and check for systematic contamination from Galactic dust and stars. Before applying the parameter inference methodology to the data, we also validated it using simulations. This includes checking for the absence of bias in the inferred parameters, robustness to the choice of the covariance matrix, and robustness to the effect of the intrinsic galaxy alignment modelling.
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameter & Fiducial & Prior \\ \hline \multicolumn{3}{c}{**Cosmology Sampled**} \\ \(\Omega_{\mathrm{c}}h^{2}\) & \(0.120\) & \(\mathcal{U}[0.05,0.99]\) \\ \(\log(A_{\mathrm{s}}10^{10})\) & \(3.042\) & \(\mathcal{U}[1.6,4.0]\) \\ \(H_{0}\) & \(67.36\) & \(\mathcal{U}[40,100]\) \\ \hline \multicolumn{3}{c}{**Cosmology Fixed**} \\ \(\Omega_{\mathrm{b}}h^{2}\) & \(0.0224\) & - \\ \(n_{s}\) & \(0.9649\) & - \\ \(\sum m_{\nu}\,[\mathrm{eV}]\) & \(0.06\) & - \\ \hline \multicolumn{3}{c}{**Galaxy Intrinsic Alignment**} \\ \(A_{\mathrm{IA}}\) & \(0.35\) & \(\mathcal{N}(0.35,0.65)\) \\ \(\eta_{\mathrm{IA}}\) & \(1.66\) & \(\mathcal{N}(1.66,4)\) \\ \hline \multicolumn{3}{c}{**Galaxy redshift calibration**} \\ \(\Delta z_{1}\) & \(0.0\) & \(\mathcal{N}(0.0,0.018)\) \\ \(\Delta z_{2}\) & \(0.0\) & \(\mathcal{N}(0.0,0.015)\) \\ \(\Delta z_{3}\) & \(0.0\) & \(\mathcal{N}(0.0,0.011)\) \\ \(\Delta z_{4}\) & \(0.0\) & \(\mathcal{N}(0.0,0.017)\) \\ \hline \multicolumn{3}{c}{**Galaxy shear calibration**} \\ \(m_{1}\) & \(-0.006\) & \(\mathcal{N}(-0.006,0.009)\) \\ \(m_{2}\) & \(-0.020\) & \(\mathcal{N}(-0.020,0.008)\) \\ \(m_{3}\) & \(-0.024\) & \(\mathcal{N}(-0.024,0.008)\) \\ \(m_{4}\) & \(-0.037\) & \(\mathcal{N}(-0.037,0.008)\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The parameters and priors used in the model specification within a \(\Lambda\)CDM cosmology. Fiducial values are used for simulations and initialisation of inference chains. Priors are either Uniform \(\mathcal{U}[\min,\max]\) or Gaussian \(\mathcal{N}(\mu,\sigma)\). Unlisted other cosmological parameters and model choices are fixed to their default values in CAMB v1.3.5 (Lewis et al., 2022)
### Data vector null tests
We check the data for non-idealities that may be present. For each test, we compute the \(\chi^{2}\) of the statistic under consideration and the corresponding probability to exceed (PTE), setting PTE = 0.05 as the threshold below which the test is considered failed. Our unblinding decision was based on the \(\chi^{2}\) and PTEs computed with the bandpowers of the four redshift bins considered together. We consider a null test to be passed if the PTE exceeds this threshold.8
Footnote 8: Unblinding of the data vector was performed assuming a one-sided PTE threshold, with PTE \(<\) 0.05 indicating a failed null test. However, as demonstrated in this section, the data vectors for the full dataset pass the null test even when considering two-sided PTE distributions.
At linear order and under the Born approximation, weak lensing of galaxies by large-scale structure is not expected to give rise to B-modes in the shear map, so we do not expect any significant B-mode signal of cosmological origin. Moreover, obtaining E and B modes from partial-sky shear maps can cause mixing between the two. In the absence of such spurious B-modes, the correlation of the shear B-modes with \(\kappa_{\rm C}\) is expected to be consistent with zero. In Figure 10, we show the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm B}}\) bandpowers for the four redshift bins with their error bars. With the blinded data vector, we find the four data vectors together are consistent with zero with PTE = 0.81, indicating the absence of spurious B-modes in the shear data. The PTEs for the individual redshift bin bandpowers are 0.47, 0.75, 0.57, and 0.59, respectively.
We also correlate the \(\kappa_{\rm C}\) map with the shear map obtained from the DES-Y3 catalogue in which ellipticities are randomly rotated. The random rotation is expected to wash out any cosmological signal in the shear maps and, indeed, is how we obtain the shear noise for the mock shear simulations, as discussed in Section 4.1. Hence, the correlation of these maps with the \(\kappa_{\rm C}\) map tests for any non-cosmological features in the shear maps that may correlate with the \(\kappa_{\rm C}\) map. From Figure 10, with the blinded data vector, we find this correlation is also consistent with zero, with PTE = 0.19. The PTE values for the individual redshift bin bandpowers are 0.07, 0.20, 0.19, and 0.96. Note that we do not regard the 0.96 as a failure, as these per-bin PTE numbers were not part of our unblinding criteria; for the larger set of PTEs generated when per-bin calculations are included, it is more likely that an extreme value appears by chance, given that the PTEs are expected to be uniformly distributed. These tests provide an important check of the analysis pipeline.
We obtain the covariance matrices for both tests using 511 simulations. For the B-mode null test, the covariance matrix is obtained from \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm B}}\) bandpowers computed on the 511 simulations. For the rotation null test, we compute \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) bandpowers; hence the covariance matrix is the same as that used for the signal \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) bandpowers.
### Diagnostic tests using survey property maps
Any systematic effect or contamination (\(S\)) that simultaneously affects both observables, the \(\kappa_{\rm C}\) reconstructed from the observed CMB and the shear \(\mathbf{\gamma}\) estimated from galaxy shape measurements, can lead to a bias in the measurement of \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\). For example, Galactic dust can affect the CMB lensing reconstruction through its presence in the CMB map, and extinction by dust can affect the measurement of galaxy properties. The following statistic captures the amplitude of contamination to \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) coming from a given survey property \(S\) (Omori et al., 2019; Chang et al., 2023):
\[X_{\ell}^{S}=\frac{C_{\ell}^{\kappa_{\rm C}S}\,C_{\ell}^{\gamma_{\rm E}S}}{C_{\ell}^{SS}}. \tag{17}\]
In this work, we consider two survey properties: dust extinction, where \(S\) is the map of E(B-V) reddening (Schlegel et al., 1998), and stellar density, where \(S\) is the map of stellar density (Sevilla-Noarbe et al., 2021; Abbott et al., 2021; Sanchez et al., 2023). In Figure 11, we show \(X_{\ell}^{S}\) for both survey properties with their error bars. We obtained the error bars using the delete-one patch jackknife method, with 28 jackknife samples of the data and the survey property maps. For both survey properties, the effect on \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) is within a few per cent of the error on \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) and hence of little concern. For dust extinction \(X_{\ell}\), we obtain PTEs for the four bins of 0.895, 0.698, 0.769, and 0.659, indicating no significant detection of dust contamination in the cross-correlation. For the stellar density, the PTE values are 0.993, 0.999, 0.999, and 0.992. Whilst these PTE values are high and close to one, they indicate over-consistency with zero given the estimated error bars. The jackknife method generally overestimates the error bars (Norberg et al., 2009; Favole et al., 2021). Therefore, we do not regard a high PTE as problematic in the case of a diagnostic test.
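The contamination statistic of Equation (17) and its delete-one jackknife error bar can be sketched as below; the array shapes and patch count are assumptions for illustration rather than the actual pipeline code.

```python
import numpy as np

def x_ell(cl_kappa_s, cl_gamma_s, cl_ss):
    """Contamination statistic of Eq. (17):
    X_ell^S = C_ell^{kappa_C S} * C_ell^{gamma_E S} / C_ell^{S S}."""
    return cl_kappa_s * cl_gamma_s / cl_ss

def jackknife_error(x_ell_samples):
    """Delete-one jackknife error bar.

    `x_ell_samples` has shape (n_patches, n_bandpowers), one row per
    patch-removed realisation (28 patches in the text).
    """
    n = x_ell_samples.shape[0]
    mean = x_ell_samples.mean(axis=0)
    return np.sqrt((n - 1) / n * ((x_ell_samples - mean) ** 2).sum(axis=0))
```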
### Model validation
#### 6.3.1 IA model robustness
In order to assess the robustness of our inference to the model chosen for intrinsic galaxy alignments, we create two simulated data vectors, one without any Intrinsic Alignment (IA)
Figure 10: _Top:_ The power spectrum between \(\kappa_{\rm C}\) and the B-mode of the shear (\(C_{\ell}^{\kappa_{\rm C}\cap\eta}\)). _Bottom:_ The correlation between \(\kappa_{\rm C}\) and the E-mode of the shear map (\(C_{\ell}^{\kappa_{\rm C}\cap\eta_{\rm E}}\)) obtained from the catalogue in which DES-Y3 catalogue ellipticities are randomly rotated. The error bar indicates 1\(\sigma\) uncertainty of the statistic. We find both sets of bandpowers to be consistent with zero.
signal and one in which the observed angular power spectra \(C_{\ell}\) have additional power added from a Non-linear Linear Alignment (NLA) model as described in Equation (14), with the fiducial parameter values \(\{A_{\rm IA},\eta_{\rm IA}\}=\{0.35,1.66\}\) (corresponding to the mean posterior values from the DES-Y3 shear-only analysis in Table III of Secco and Samuroff et al., 2022) as shown in Table 2. In Figure 12, we show the stability of our measurement of \(S_{8}\) to the choice of IA model, with no significant parameter shifts observed when considering mismatched choices of model and data (e.g. when the NLA model is used on a data vector with no IA signal and vice-versa). We chose to include the full NLA model parameterisation for our fiducial inference runs on the data.
Within this parameterization, we choose to include an informative prior on \(A_{\rm IA}\sim\mathcal{N}(0.35,0.65)\) and \(\eta_{\rm LA}\sim\mathcal{N}(1.66,4)\), with prior widths a factor four wider than the posterior on NLA IA parameters found in DES-Y1 data from the 3x2pt analysis of Abbott et al. (2018). Note that the DES-Y1 data are in a sky region not included in our analysis so this prior can be regarded as independent. We refer to this prior as our 'fiducial prior' and it is shown throughout as black unfilled contours. In order to assess the impact of this choice, we also run an inference chain on the simulated data vector with broad priors \(\mathcal{U}(-5,5)\) on both parameters, matching the _priors_ used in the DES-Y3 analyses. Note that the latter is a broader prior in the sense that it is less localized in the \(A_{\rm IA}\) direction compared to the fiducial prior. The results can be seen in Figure 13. Here, it can be seen that whilst the 1D posteriors on \(\Omega_{\rm m}\) and \(\sigma_{8}\) are not affected, a mild degeneracy between \(A_{\rm IA}\) and \(S_{8}\) causes a widening of the posterior on \(S_{8}\) and shift of the peak to lower values. For positive values of \(A_{\rm IA}\), the constraint is relatively unaffected by the choice of prior, and we can indeed see some constraining power of the data appearing due to the similar upper limits from both the wide and the informative prior. For negative values of \(A_{\rm IA}\), it can be seen that the lower limit is dominated by the prior in the fiducial case, with the posterior extending significantly further for the wide prior. This lack of constraining power causes a 'projection effect', which lowers the inferred value of \(S_{8}\). However, the galaxy formation physics represented by the NLA IA model is expected to result in \(A_{\rm IA}>0\) for red galaxies, and this has been observationally shown to be the case (with KiDS+GAMA Johnston et al. 2019 find \(A_{\rm IA}^{\rm Red}=3.18^{+0.47}_{-0.46}\) and with DES-Y1 Samuroff et al. 2019 find \(A_{\rm IA}^{\rm Red}=2.38^{+0.32}_{-0.31}\)). For blue galaxies in the NLA model, \(A_{\rm IA}<0\) is possible, but observations have so far been consistent with zero and inconsistent with large negative values (Johnston et al. 2019 find \(A_{\rm IA}^{\rm Blue}=0.21^{+0.37}_{-0.36}\) and Samuroff et al. 2019 find \(A_{\rm IA}^{\rm Blue}=0.05^{+0.10}_{-0.09}\)). For weak lensing samples such as DES-Y3, which contain a mixture of red and blue galaxies, with red fraction \(f_{\rm Red}\sim 20\%\) this means \(A_{\rm IA}=A_{\rm IA}^{\rm Red}f_{\rm Red}+A_{\rm IA}^{\rm Blue}(1-f_{\rm Red})\) is even more constrained to be positive. Motivated by these considerations, we keep the informative prior on IA parameters for the fiducial analysis.
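As a rough illustration of this argument, taking the DES-Y1 central values of Samuroff et al. (2019) quoted above with a red fraction of 20% gives

\[A_{\rm IA}\approx f_{\rm Red}\,A_{\rm IA}^{\rm Red}+(1-f_{\rm Red})\,A_{\rm IA}^{\rm Blue}\approx 0.2\times 2.38+0.8\times 0.05\approx 0.52,\]

which is comfortably positive and well within the support of the fiducial prior. This is only an indicative central-value estimate for a mixed sample and neglects the quoted uncertainties.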
#### 6.3.2 Neutrino model robustness
We chose as our baseline model three neutrinos of degenerate mass, consistent with the model choice of Abbott et al. (2022), with \(\sum m_{\nu}=0.06\,\rm eV\) in our fiducial analysis. We find that differences in our \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) observable are at
Figure 11: Survey property correlation statistic \(X_{\ell}\) in units of the error bar of \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\), for the dust extinction map (top panel) and the stellar density (bottom panel). The error bars are obtained using \(X_{\ell}\) computed over 28 jackknife samples of the data. For both survey properties, \(X_{\ell}^{S}\) is less than 5% of the error bar on \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) and consistent with zero.
Figure 12: The stability of the recovery of the \(S_{8}\) parameter from our simulated data vector as we change the model used for the inference. The dashed vertical line represents the true value of the input to the simulation, and the shaded band is the error bar in the fiducial model setup. Note that the slightly (but not significantly) high value recovered in the upper rows is consistent with what is expected for this single realisation (see Figure 14). The final row shows the result for one model using a data vector which is the mean of 511 realisations.
most \(\sim 0.5\%\) when comparing three degenerate neutrinos to the normal hierarchy case and at most \(\sim 2\%\) when comparing to a single massive neutrino scenario.
The relevant row of Figure 12 shows the stability of our \(S_{8}\) measurement to the marginalisation over the sum of neutrino mass, with a uniform prior \(\sum m_{\nu}\left[\mathrm{eV}\right]\sim\mathcal{U}[0.0,1.0]\), in this model.
### Covariance matrix validation
In addition to the covariance matrix computation with FLASK simulations, as detailed in Section 4.3, we also construct an analytical Gaussian covariance matrix for the pseudo-\(C_{\ell}\) estimator. The covariance matrix estimated using simulation bandpowers is expected to model non-Gaussian contributions more correctly than the theoretical case but suffers from realisation noise effects since it is an average over a finite number of simulations. The results of parameter constraints obtained using the analytical covariance matrix are shown in the relevant row of Figure 12. As can be seen, both methods give consistent posteriors on our simulated data vector. Therefore, we choose to continue with the simulation-based covariance matrix. This choice is somewhat arbitrary, but is made on the understanding that, as the constraining power of the data improves with future data releases, the effects modelled in the simulations will become more significant.
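For concreteness, the simulation-based covariance described above amounts to the sample covariance over the mock bandpowers; a minimal sketch, assuming the 511 simulated data vectors are stacked in a single array, is:

```python
import numpy as np

def simulation_covariance(sim_bandpowers):
    """Sample covariance of the bandpowers over mock realisations.

    `sim_bandpowers` has shape (n_sims, n_bandpowers), with n_sims = 511 here.
    Being an average over a finite number of simulations, this estimate carries
    realisation noise, which is the trade-off against the analytic Gaussian
    covariance discussed above.
    """
    return np.cov(sim_bandpowers, rowvar=False)
```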
### Recovery of input model from mock data
In order to verify that our inference pipeline is capable of making an unbiased recovery of cosmological parameters, we run it on a data vector recovered from one of the 511 FLASK simulations described in Section 4.1. The results of this analysis are shown in Figure 13 for cosmological and IA parameters. In Figures 13 and 12, we show the posterior and prior, including those for observational nuisance parameters, but zoomed out to show the full shapes of the prior. The priors are specified in Table 2 and are shown as unfilled contours in the space of the inferred parameters, indicating the level of information gained from the data (or lack of it in the case of the prior-dominated nuisance parameters). As can be seen, all inferred parameters are recovered with biases smaller than the 68% credible interval, as may be expected from realisation noise. In addition to this full parameter inference on a single simulation, we have also inferred only the \(A_{\mathrm{cross}}\) parameter for this simulation _and_ a data vector which is the mean of the 511 simulations. \(A_{\mathrm{cross}}\) is a phenomenological parameter which modifies the overall amplitude of the lensing spectra with respect to that predicted by a model with a fixed set of cosmological parameters (here, the true input parameters to the simulation):
\[C_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}{}_{\mathrm{obs}}=A_{\mathrm{ cross}}C_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}{}_{\mathrm{true}} \tag{18}\]
As shown in Figure 14, this gives a mean value and 68% credible interval of \(A_{\mathrm{cross}}=1.00\pm 0.13\) for the mean data vector and \(A_{\mathrm{cross}}=1.10\pm 0.15\) for the fiducial realisation, confirming the finding that this single realisation has a random fluctuation towards higher clustering amplitude \(S_{8}\), as seen in the full inference in Figure 12.
### Internal consistency
In order to test the robustness of our result, we perform the inference on a number of different splittings of the full data set. These involve i) leaving out data from individual tomographic bins in turn and ii) only using data from an individual tomographic bin in turn. The results of these runs are shown in Figure 15. These checks were performed _before_ the blinding factor applied to the DES shear catalogue (see Section 3.2.1 for details) was removed in order to act as an
Figure 14: Recovery of the \(A_{\mathrm{cross}}\) parameter from our simulated data vector. All cosmological and nuisance parameters are held at their true input values, and \(A_{\mathrm{cross}}\) is an overall amplitude parameter for the \(C_{\ell}^{\kappa_{\mathrm{C}}\gamma_{\mathrm{E}}}\) spectra, which has the value one at the true input model.
Figure 13: Posteriors showing recovery of the input model from data simulations under the fiducial prior (blue) and an alternate prior choice on galaxy intrinsic alignment (IA) parameters (red), that is less localized in \(A_{\mathrm{IA}}\) direction. The fiducial prior (black) is informed by the DES-Y1 cosmic shear analyses (which use data independent from those used here) and expectations for the NLA model. The wide uninformative prior causes the mild correlation between \(A_{\mathrm{IA}}\) and \(S_{8}\) to drag the posterior on the latter to lower values. See text for further discussion.
extra confirmation of the adequacy of the analysis pipeline. As can be seen, each data split is consistent with all others within the expected scatter, and the behaviour of the error bars matches physical expectations, with progressively larger amounts of cosmic shear lensing signal contained in higher redshift tomographic bins.
## 7 Results
With the above work demonstrating that we have a data vector passing null tests for systematics contamination and that we have a working unbiased measurement pipeline, we now present the results using our fiducial ACT-DR4+_Planck_-tSZ deprojected data vector. We unblind the data vector by obtaining the numerical value of the blinding factor \(f\) (as described in Section 3.2.1) and applying the inverse of the blinding transformation to the catalogue shear values. We then re-make our maps and two-point data vectors and proceed. Figure 16 shows the data for the four tomographic bins along with the best-fit theory model \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\). As a validation, we also compute the full set of results for \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) with the ACT-DR4-only \(\kappa_{\rm C}\), and these are presented in Appendix A.
### Lensing amplitude \(A_{\rm cross}\)
We first consider the case in which we fix all other parameters to our fiducial values as in Table 2 and vary only the normalisation (\(A_{\rm cross}\)) of the observed \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) spectrum relative to the prediction from this model:
\[C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}{}_{\rm obs}=A_{\rm cross}C_{\ell}^{ \kappa_{\rm C}\gamma_{\rm E}}{}_{\rm Planck} \tag{19}\]
Under a uniform prior of \(A_{\rm cross}\sim\mathcal{U}[0.0,2.0]\) we find a measurement of \(A_{\rm cross}=0.84^{+0.16}_{-0.13}\) indicating agreement with the _Planck_ result but mildly favouring a lower amplitude. This compares to previous determinations of \(A_{\rm cross}\) using \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) data: from POLARBEAR (polarisation lensing)\(\times\)HSC of \(1.70\pm 0.48\)(Namikawa et al., 2019); from _Planck_\(\times\)HSC of \(0.81\pm 0.25\)(Marques et al., 2020) and ACT+_Planck_\(\times\)KiDS-1000 of \(0.69\pm 0.14\)(Robertson et al., 2021). Other previous measurements of \(A_{\rm cross}\) from \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) have used earlier _Planck_ data releases for the baseline cosmology so are not directly comparable: from ACT\(\times\)CS82 (Hand et al., 2015); from _Planck_\(\times\)CFHTLenS (Liu and Hill, 2015); from SPT+_Planck_\(\times\)DES-Science Verification (Kirk et al., 2016); from _Planck_\(\times\)RCSLenS and CFHTLenS (Harnois-Deraps et al., 2016); from _Planck_\(\times\)KiDS-450 (Harnois-Deraps et al., 2017); from _Planck_\(\times\)SDSS (Singh et al., 2017) and from SPT+_Planck_\(\times\)DES-Y1 (Omori et al., 2019).
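For a single amplitude parameter such as \(A_{\rm cross}\), a closed-form Gaussian-likelihood estimate also exists and provides a useful cross-check on the MCMC result. The sketch below assumes a fixed theory template and the simulation covariance; it is not the procedure actually used, which places a uniform prior on \(A_{\rm cross}\) and samples with Cobaya.

```python
import numpy as np

def fit_amplitude(data, template, covariance):
    """Closed-form Gaussian-likelihood estimate of a single scaling amplitude.

    `data` is the measured bandpower vector, `template` the fixed theory
    prediction it is scaled against (Eq. 19), and `covariance` the bandpower
    covariance; all names are illustrative.  Returns (A_hat, sigma_A).
    """
    icov_template = np.linalg.solve(covariance, template)
    sigma_a2 = 1.0 / float(template @ icov_template)
    a_hat = sigma_a2 * float(data @ icov_template)
    return a_hat, np.sqrt(sigma_a2)
```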
### Matter clustering \(S_{8}\) and other parameters
In Figure 17, we show the inferred posterior on cosmological parameters from the fiducial ACT-DR4+_Planck_\(\times\) DES-Y3 data vector. The ten galaxy weak lensing nuisance parameters (galaxy intrinsic alignment, redshift calibration, and shear calibration) are also varied but are prior-dominated and omitted in the plot, as is the \(H_{0}\) parameter (see Appendix B for the plot including them). We also show the constraints on the cosmological parameters from other experiments. We chose these experiments as being external data sets using very different techniques to measure the same cosmological parameters: the _Planck_ 2018 primary CMB result from Planck Collaboration (2020), and the KiDS-1000 3x2pt result from Heymans et al. (2021). Though of lower constraining power, our result contains fully independent data and probes a different set of redshift and physical scales to the two other experiments (see Figure 1). Additionally, we show in Figure 18 our 1D marginalised measurement of the \(S_{8}\) parameter alone against a further compilation of other measurements (SPT+_Planck_\(\times\)DES-Y3 Chang et al., 2023; ACT-DR4+_Planck_\(\times\)KiDS Robertson et al., 2021; DES-Y3 shear only Amon et al., 2022, Secco and Samuroff et al., 2022; KiDS-1000 shear only Asgari et al., 2021; DES-Y3 3x2pt Abbott et al., 2022; KiDS-1000 3x2pt Heymans et al., 2021; _Planck_ 2018 Primary CMB Planck Collaboration, 2020). Summary statistics from our inference for the full set of parameters are shown in Table 3. The marginalised mean values and 1D 68% credible regions for the matter density parameter and the amplitude of the fluctuations in the matter distribution are:
\[\Omega_{\rm m}=0.338^{+0.05}_{-0.17}\,,\qquad\sigma_{8}=0.79^{+0.16}_{-0.19}\,,\qquad S_{8}=0.782\pm 0.059\,.\]
This inference is drawn using informative priors on the nuisance parameters. The constraint on the matter density \(\Omega_{\rm m}\), while not the most precise available, is still a significant improvement over the assumed prior on \(\Omega_{\rm m}\). The value of \(S_{8}\) inferred in this analysis is consistent with that inferred from the _Planck_ TT,TE,EE+lowE CMB measurements, \(S_{8}=0.834\pm 0.016\) (Planck Collaboration, 2020), with a difference of \(0.85\sigma\) when the statistical uncertainties are added in quadrature to obtain the uncertainty on the difference. It is also consistent with the \(S_{8}=0.766^{+0.020}_{-0.014}\) inferred from the KiDS-1000 3x2pt analysis (Heymans et al., 2021), with around a \(0.3\sigma\) difference. A companion study performs the cross-correlation analysis of ACT-DR4 D56 \(\kappa_{\rm C}\) and DES-Y3 MAGLIM galaxies, finding \(S_{8}=0.75^{+0.04}_{-0.03}\) (Marques et al., 2023), which differs by only \(0.4\sigma\) from the \(S_{8}\) inferred in this work.
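For reference, the quoted \(0.85\sigma\) difference with respect to _Planck_ follows directly from adding the two statistical uncertainties in quadrature:

\[\frac{|0.834-0.782|}{\sqrt{0.016^{2}+0.059^{2}}}=\frac{0.052}{0.061}\approx 0.85.\]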
Figure 15: Stability of our 1D marginalised measurement of \(S_{8}\) when using different parts of the full fiducial data vector, with the fiducial result shown as the shaded band. In each row, we either remove a single DES tomographic bin or use the data from only one.
Figure 18: This work in context of the range of external CMB and LSS \(S_{8}\) measurements, showing the ACT-DR4+_Planck_\(\times\)DES-Y3 \(\kappa\gamma\)-only result (this work) alongside the _Planck_ 2018 Primary CMB and KiDS-1000+BOSS+2dFLenS 3x2pt results.
Figure 16: The points with error bar show \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) bandpowers with ACT+_Planck_ tSZ-free \(\kappa_{\rm C}\) and DES-Y3 shear, with four redshift bins. Error bars are the square root of the diagonal of the simulation covariance matrix, and \(z_{\rm mean}\) is the mean redshift of the source galaxy distribution taken from (Abbott et al., 2022). The curves show the best-fit theory \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) corresponding to best-fit parameters in Table 3.
Figure 17: The inferred cosmological parameters \(\sigma_{8}\), \(\Omega_{\rm m}\) and \(S_{8}=\sigma_{8}\,(\Omega_{\rm m}/0.3)^{0.5}\) from our fiducial DES-Y3\(\times\)ACT-DR4+_Planck_-tSZ deprojected data vector. To give context for our result, we also show results from other experiments with which we minimally share any data and cover the prominent range of \(S_{8}\) values available in the literature. These are from the CMB at high redshift (_Planck_ primary CMB) and LSS at lower redshifts (KiDS-1000+BOSS+2dFLenS 3x2pt). The full plot of this posterior, including nuisance parameters, is shown in Figure B3.
### \(S_{8}\) at different redshifts
As discussed in Section 1, across the multiple measurements of \(S_{8}\) from various observables, it has been noted that higher redshift probes often favour a higher value (e.g. the primary CMB in Planck Collaboration, 2020; Aiola et al., 2020; Dutcher et al., 2021), whilst lower redshift ones favour a lower value (e.g. galaxy weak lensing in Heymans et al., 2021; Abbott et al., 2022; More et al., 2023; Miyatake et al., 2023; Sugiyama et al., 2023). In light of this, we split our data vector into two different sub-sets and constrain the \(S_{8}\) parameter independently in each one. One subset contains only the spectra made with DES-Y3 tomographic bins 1 and 2 (covering redshifts \(0<z\leq 0.63\) and with the resulting \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) kernel peaking below \(z=0.5\)), and the other contains only tomographic bins 3 and 4 (covering redshifts \(0.63<z<2.0\) and with the resulting \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) kernel peaking above \(z=0.5\)). In Figure 19, we show the two posteriors on cosmological parameters, along with the one from our fiducial analysis with all four tomographic bins. For the sample at lower redshift (bins 1 and 2), we obtained \(\Omega_{\rm m}=0.385^{+0.073}_{-0.22}\), \(S_{8}=0.85^{+0.17}_{-0.13}\), and \(\sigma_{8}=0.80^{+0.19}_{-0.23}\). Consistently, for the sample at higher redshift (bins 3 and 4), we found \(\Omega_{\rm m}=0.357^{+0.052}_{-0.20}\), \(S_{8}=0.779\pm 0.073\), and \(\sigma_{8}=0.77^{+0.15}_{-0.19}\). Our analysis reveals that the constraining power is significantly stronger at higher redshifts, primarily due to the better overlap with the CMB lensing kernel. This suggests that the dominant contribution to the overall constraining power when utilizing the entire sample stems from these bins.
### Weak lensing nuisance parameters
The priors on weak lensing galaxy redshift and shear calibration detailed in Table 2 and used in the above inference runs are derived from a series of simulations and deep training data implemented as part of the DES-Y3 analysis pipelines. They are therefore informative and dominate the posterior for the nuisance parameters (as seen in Figure 10). It is interesting to use the CMB lensing from ACT-DR4 as an extra high redshift lensing bin to attempt to independently calibrate these nuisance parameters and validate the priors available from simulations. This has been previously advocated as a productive use of \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) data sets (e.g. Das et al., 2013). Though these simulation-derived priors are often given as uncorrelated, wider priors may result in degeneracies in 3x2pt analyses. In such a case, the \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) observable may provide useful degeneracy breaking thanks to the differences in redshift
| Parameter | Prior | Posterior |
| :--- | :--- | :---: |
| **Cosmology** | | |
| \(\Omega_{c}h^{2}\) | \(\mathcal{U}[0.05,0.99]\) | \(0.161^{+0.042}_{-0.073}\) |
| \(\log(A_{\rm s}10^{10})\) | \(\mathcal{U}[1.6,4.0]\) | - |
| \(H_{0}\) | \(\mathcal{U}[40,100]\) | - |
| \(\sigma_{8}\) | - | \(0.79^{+0.16}_{-0.19}\) |
| \(\Omega_{\rm m}\) | - | \(0.338^{+0.05}_{-0.17}\) |
| \(S_{8}=\sigma_{8}\,(\Omega_{\rm m}/0.3)^{0.5}\) | - | \(0.782\pm 0.059\) |
| **Galaxy Intrinsic Alignment** | | |
| \(A_{\rm IA}\) | \(\mathcal{N}(0.35,0.65)\) | \(0.31\pm 0.57\) |
| \(\eta_{\rm IA}\) | \(\mathcal{N}(1.66,4)\) | \(-1.0^{+3.8}_{-3.1}\) |
| **Galaxy redshift calibration** | | |
| \(\Delta z_{1}\) | \(\mathcal{N}(0.0,0.018)\) | \(0.001\pm 0.018\) |
| \(\Delta z_{2}\) | \(\mathcal{N}(0.0,0.015)\) | \(0.001\pm 0.015\) |
| \(\Delta z_{3}\) | \(\mathcal{N}(0.0,0.011)\) | \(-0.001\pm 0.011\) |
| \(\Delta z_{4}\) | \(\mathcal{N}(0.0,0.017)\) | \(0.000\pm 0.017\) |
| **Galaxy shear calibration** | | |
| \(m_{1}\) | \(\mathcal{N}(-0.006,0.009)\) | \(-0.0062\pm 0.0089\) |
| \(m_{2}\) | \(\mathcal{N}(-0.020,0.008)\) | \(-0.0198\pm 0.0080\) |
| \(m_{3}\) | \(\mathcal{N}(-0.024,0.008)\) | \(-0.0240\pm 0.0080\) |
| \(m_{4}\) | \(\mathcal{N}(-0.037,0.008)\) | \(-0.0370\pm 0.0080\) |

Table 3: 1D marginalised posterior mean and 68% credible interval for the parameters sampled during our main analysis.
Figure 19: Measurement of the cosmological parameters in two subsets of the data, one covering galaxy redshifts \(0<z\leq 0.63\) and with the resulting \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) kernel peaking below \(z=0.5\), and the other covering redshifts \(0.63<z<2.0\) and with the resulting \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) kernel peaking above \(z=0.5\). We find both subsets of the inferred parameters to be consistent.
and scale dependence. Here, we make use of only the highest redshift and highest signal tomographic bin (Bin 4), fix all other cosmological and nuisance parameters to their fiducial values, and infer only the redshift and shear calibration parameters (\(\Delta z_{4},m_{4}\)) with broad priors \(\Delta z_{4}\sim\mathcal{U}[-1,2]\) and \(m_{4}\sim\mathcal{U}[-1,1]\). These priors are a factor of 100 wider than the Gaussian priors applied in the main analysis and span the plausible range of possible calibration uncertainties. The inferred posterior is shown in Figure 20. Though the constraining power of our data is far lower than that of the DES-Y3 prior, the posterior is consistent, meaning the informative prior passes as accurate within the terms of this low-precision test.
## 8 Conclusions
In this analysis, we measured \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\), the angular power spectrum between the CMB weak lensing map of ACT-DR4 in the D56 region and the DES-Y3 cosmic shear catalogue consisting of around 100 million galaxies. The measurement is over the common sky area of around 450 deg\({}^{2}\) between the two surveys. To avoid one of the main extragalactic foreground biases which originate from the tSZ contamination in \(\kappa_{\rm C}\), we used the tSZ-free \(\kappa_{\rm C}\) map obtained using ACT-DR4 and _Planck_ data for the baseline analysis (Darwish et al., 2021). The analysis is carried out in harmonic space over the multipole range of \(\ell=100\) to 1900. Over this range, we measure the cross-correlation at SNR = 7.1. As demonstrated in Section 6.1, the measured data vector passes specific null tests, indicating the lack of significant detection of some of the non-idealities that generally affect \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) measurements. We also tested for contamination due to stars and Galactic dust. We found their effect is negligible compared with the statistical uncertainty and saw no significant evidence of their contamination. We performed the initial analysis with the blinding procedure described in Section 3.2.1. After the data vector passed the null tests and we confirmed that the analysis pipeline recovered the unbiased input values from simulations, we unblinded the catalogue and the parameters inferred from the unblinded data vector.
We used this \(C_{\ell}^{\kappa_{\rm C}\gamma_{\rm E}}\) measurement to infer the matter density parameter (\(\Omega_{\rm m}\)) and the amplitude of fluctuations in the matter distribution (\(S_{8}\)). We inferred \(\Omega_{\rm m}=0.338^{+0.05}_{-0.17}\), and \(S_{8}=0.782\pm 0.059\). Our main result is shown in Figure 17. These values were inferred using informative but well-motivated priors on the observational and astrophysical nuisance parameters, which were marginalized while inferring \(\Omega_{\rm m}\) and \(S_{8}\). We investigated the validity of the priors on galaxy intrinsic alignment parameters by significantly relaxing them and checking the consistency of the resulting posteriors. As depicted in Figure 13, we found the posteriors with this broader IA prior are consistent with those with the fiducial IA priors but have relatively weak constraints on the cosmological parameters. We also assessed the consistency between the inference obtained using various subsets of the data, shown in Figure 19. Our results are statistically consistent with many recent cosmic shear studies, including those utilizing the DES-Y3 data alone (Doux et al., 2022; Abbott et al., 2022, Secco & Samuroff et al., 2022, Amon et al., 2022) and cross-correlation with SPT and _Planck_ CMB lensing (Chang et al., 2023). Furthermore, our results are in agreement with \(S_{8}\) inferred using KiDS data (Heymans et al., 2021), although slightly higher than the value inferred using the cross-correlation with ACT-DR4 BOSS North and _Planck_ CMB lensing (Robertson et al., 2021). However, we note that these differences fall within \(\sim 1.4\sigma\), indicating a relatively minor deviation. We summarized this comparison in Figure 18.
Measurements of the clustering of matter from combinations of CMB and optical weak lensing are rapidly growing in precision, with the highest-yet SNR achieved being that of SPT+_Planck_\(\times\) DES-Y3 at 18\(\sigma\) (Chang et al., 2023). We note that this analysis is carried out on a substantially larger sky area of 3920 deg\({}^{2}\) than the 450 deg\({}^{2}\) area of the ACT D56 region considered in this work. For a given sky area, the comparatively higher SNR obtained in this work is owing to the lower \(\kappa_{\rm C}\) reconstruction noise of the ACT D56 observations. We find the \(S_{8}\) inferred in these two studies to be in statistical agreement, as depicted in Figure 18. Recently, the ACT Collaboration has completed an analysis of the reconstructed CMB lensing map using the DR6 sky area of 9400 deg\({}^{2}\), which overlaps with nearly the entire DES-Y3 survey footprint (Qu et al., 2023; Madhavacheril et al., 2023; MacCrann et al., 2023). These data will provide a great opportunity to continue the work done here by performing cross-correlation with various probes of large-scale structure, including galaxy lensing and galaxy density (Marques et al., 2023). Further on the horizon, correlations between the Simons Observatory (Ade et al., 2019) and CMB-S4 (Abazajian et al., 2016) lensing maps with shear data from the _Euclid_ satellite (Amendola et al., 2018) and the Rubin Observatory Legacy Survey of Space and Time (Ivezic et al., 2019) will be even more precise. These analyses, with their higher statistical precision, will need to be carried out with more careful theoretical modelling of astrophysical effects of baryons and galaxy intrinsic alignment along with the modelling of observational systematics (see, e.g. DES Collaboration
Figure 20: Constraints on the galaxy weak lensing nuisance parameters in DES-Y3 tomographic bin 4 (\(0.87<z<2.0\)). The DES-Y3 prior used in the main analysis is shown as unfilled contours and is consistent with the filled contours obtained with fixed cosmological parameters but broad priors on nuisance parameters.
& Kilo-Degree Survey Collaboration 2023) than required for the data analysed in this work.
## Acknowledgements
SS acknowledges support from the Beus Center for Cosmic Foundations. IH, EC, SG, and HJ acknowledge support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 849169). CS acknowledges support from the Agencia Nacional de Investigacion y Desarrollo (ANID) through FONDECYT grant no. 11191125 and BASAL project FB210003. KM acknowledges support from the National Research Foundation of South Africa. GSF acknowledges support through the Isaac Newton Studentship, the Helen Stone Scholarship at the University of Cambridge, and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 851274). LBN acknowledges support from the Simons Foundation. JCH acknowledges support from NSF grant AST-2108536, NASA grants 21-ATP21-0129 and 22-ADAP22-0145, the Sloan Foundation, and the Simons Foundation. SKC acknowledges support from NSF award AST-2001866. KMH acknowledges NSF award number 1815887. OD acknowledges support from SNSF Eccellenza Professorial Fellowship (No. 186879).
We acknowledge the support of the Supercomputing Wales project, which is part-funded by the European Regional Development Fund (ERDF) via Welsh Government. We thank Agnes Ferte and Jessie Muir for help loading MultiNest chains.
Support for ACT was through the U.S. National Science Foundation through awards AST-0408698, AST-0965625, and AST-1440226 for the ACT project, as well as awards PHY-0355328, PHY-0855887 and PHY-1214379. Funding was also provided by Princeton University, the University of Pennsylvania, and a Canada Foundation for Innovation (CFI) award to UBC. ACT operated in the Parque Astronomico Atacama in northern Chile under the auspices of the Agencia Nacional de Investigacion y Desarrollo (ANID). The development of multichroic detectors and lenses was supported by NASA grants NNX13AE56G and NNX14AB58G. Detector research at NIST was supported by the NIST Innovations in Measurement Science program. Computing for ACT was performed using the Princeton Research Computing resources at Princeton University, the National Energy Research Scientific Computing Center (NERSC), and the Niagara supercomputer at the SciNet HPC Consortium.
Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l'Espai (IEEC/CSIC), the Institut de Fisica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF's NOIRLab, the University of Nottingham, The Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, Texas A&M University, and the OzDES Membership Consortium.
Based in part on observations at Cerro Tololo Inter-American Observatory at NSF's NOIRLab (NOIRLab Prop. ID 2012B-0001; PI: J. Frieman), which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
The DES data management system is supported by the National Science Foundation under Grant Numbers AST-1138766 and AST-1536171. The DES participants from Spanish institutions are partially supported by MICINN under grants ESP2017-89838, PGC2018-094773, PGC2018-102012, SEV-2016-0588, SEV-2016-0597, and MDM-2015-0509, some of which include ERDF funds from the European Union. IFAE is partially funded by the CERCA program of the Generalitat de Catalunya. Research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP7/2007-2013) including ERC grant agreements 240672, 291329, and 306478. We acknowledge support from the Brazilian Instituto Nacional de Ciencia e Tecnologia (INCT) do e-Universo (CNPq grant 465376/2014-2).
This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics.
## Data availability
The data underlying this article are available in a GitHub repository at [https://github.com/itkharrison/actdr4kappa-x-desy3gamma-data](https://github.com/itkharrison/actdr4kappa-x-desy3gamma-data), in the Zenodo repository referenced there (the DOI generated will be added here on publication), and will be made available on the LAMBDA data service [https://lambda.gsfc.nasa.gov/](https://lambda.gsfc.nasa.gov/) upon publication. |
2309.09136 | Enhancing Quantised End-to-End ASR Models via Personalisation | Recent end-to-end automatic speech recognition (ASR) models have become
increasingly larger, making them particularly challenging to be deployed on
resource-constrained devices. Model quantisation is an effective solution that
sometimes causes the word error rate (WER) to increase. In this paper, a novel
strategy of personalisation for a quantised model (PQM) is proposed, which
combines speaker adaptive training (SAT) with model quantisation to improve the
performance of heavily compressed models. Specifically, PQM uses a 4-bit
NormalFloat Quantisation (NF4) approach for model quantisation and low-rank
adaptation (LoRA) for SAT. Experiments have been performed on the LibriSpeech
and the TED-LIUM 3 corpora. Remarkably, with a 7x reduction in model size and
1% additional speaker-specific parameters, 15.1% and 23.3% relative WER
reductions were achieved on quantised Whisper and Conformer-based
attention-based encoder-decoder ASR models respectively, comparing to the
original full precision models. | Qiuming Zhao, Guangzhi Sun, Chao Zhang, Mingxing Xu, Thomas Fang Zheng | 2023-09-17T02:35:21Z | http://arxiv.org/abs/2309.09136v1 | # Enhancing Quantised End-to-End ASR Models via Personalisation
###### Abstract
Recent end-to-end automatic speech recognition (ASR) models have become increasingly larger, making them particularly challenging to be deployed on resource-constrained devices. Model quantisation is an effective solution that sometimes causes the word error rate (WER) to increase. In this paper, a novel strategy of personalisation for a quantised model (PQM) is proposed, which combines speaker adaptive training (SAT) with model quantisation to improve the performance of heavily compressed models. Specifically, PQM uses a 4-bit NormalFloat Quantisation (NF4) approach for model quantisation and low-rank adaptation (LoRA) for SAT. Experiments have been performed on the LibriSpeech and the TEDLIUM 3 corpora. Remarkably, with a 7x reduction in model size and 1% additional speaker-specific parameters, 15.1% and 23.3% relative WER reductions were achieved on quantised Whisper and Conformer-based attention-based encoder-decoder ASR models respectively, comparing to the original full precision models.
Qiuming Zhao\({}^{1}\), Guangzhi Sun\({}^{2}\), Chao Zhang\({}^{1}\), Mingxing Xu\({}^{1}\), Thomas Fang Zheng\({}^{1,\dagger}\)
\({}^{1}\)Tsinghua University, China; \({}^{2}\)University of Cambridge, United Kingdom
[email protected]; [email protected]; {cz277,xumx,fzheng}@tsinghua.edu.cn
**Index Terms**: speaker adaptive training, quantisation, LoRA, Whisper, end-to-end ASR
Footnote \(\dagger\): Correspondence
## 1 Introduction
End-to-end neural network models have achieved state-of-the-art results in various Automatic Speech Recognition (ASR) tasks [1, 2, 3, 4]. However, these advancements in accuracy usually come at the cost of increasing model size, which not only substantially increases operational costs on the server but also presents significant challenges in deploying them on resource-constrained edge devices. More recently, universal large speech models, such as OpenAI Whisper [1] have become increasingly popular. These models adopt a very large amount of model parameters, making the deployment even more challenging and demanding.
Model quantisation has been widely adopted as an effective way to reduce model sizes and has been extensively studied and applied in both academia and industry [5, 6, 7]. It replaces floating-point weights with low-precision values to considerably reduce the model size and inference time without altering the model architecture. However, model quantisation often results in a degradation in model performance due to the loss of precision. Some studies employ quantisation-aware training (QAT) schemes [8, 9, 10] that consider the effects of quantisation during training, while others utilize more sophisticated quantisation methods [11, 12, 13]. Although these approaches mitigate the performance degradation from a generic-data perspective, the fact that the edge devices on which quantised models are deployed are often personalised remains under-explored. For these devices, such as personalised voice assistants or smart door locks, improving performance for the target speaker is the critical objective rather than generic performance. Consequently, this paper investigates the use of personalisation to compensate for the degradation due to quantisation.
This paper proposes a novel strategy of personalisation for a quantised model (PQM) which performs speaker adaptive training on a quantised end-to-end ASR model. The PQM strategy adopts the block-wise NormalFloat (NF4) quantisation [14] for model compression, which incurs a smaller performance loss compared to conventional uniform quantisation. The speaker adaptive training is performed using the Low-Rank Adaptation (LoRA) [15] approach. As the adaptation data for a particular speaker is often very limited, PQM includes a LoRA pretraining stage before the speaker adaptive training using existing similar data to achieve faster and better convergence. Moreover, to further mitigate the data scarcity issue, semi-supervised training is also explored in PQM where the quantised model can be trained with labels generated by the model itself.
The PQM strategy was implemented for the Conformer attention-based encoder-decoder (AED) model and the Whisper model as two examples of end-to-end ASR models in this paper. Experiments performed on the LibriSpeech and TED-LIUM 3 datasets demonstrated that, with nearly a 7-fold compression of the model, the PQM strategy achieves relative WER reductions of 15.1% and 23.3% on quantised Whisper and Conformer AED models respectively, compared to the original full-precision models. The main contributions of this paper can be summarised as follows.
* The PQM strategy, to the best of our knowledge, is the first work that investigates personalisation to compensate for quantisation degradation on edge devices.
* A LoRA pretraining-based initialisation and a semi-supervised training approach are proposed for speaker adaptive training.
* PQM has been validated on both Conformer AED and Whisper models across two datasets, with large improvements achieved compared to fully fine-tuned baselines.
The rest of this paper is organised as follows: Sec. 2 summarises related work. Sec. 3 explains the three stages and some details of the PQM strategy. Sec. 4 describes the experimental setup and results are provided in Sec. 5. The paper concludes in Sec. 6.
## 2 Related Work
### Quantisation
Quantisation is the process of discretising an input with higher information content to one with lower information content. Uniform quantisation is a commonly used quantisation method where the value range is uniformly divided into multiple intervals. This approach is widely supported in mobile inference frameworks such as TFLite [16], MNN [17], and NCNN [18]. Recently, Quantile Quantisation was proposed based on the _Lossy Minimum Entropy
_Encoding_[19], and QLoRA was introduced [14] which employs NF4 quantisation on the weights of pretrained neural networks to achieve near-lossless quantisation. Unlike uniform quantisation, Quantile Quantisation determines the quantisation steps and levels based on the data distribution. In data-dense regions, more quantisation levels are allocated, ensuring an equal number of quantised values in each quantisation bin. This allows for a more refined representation of the data, as well as better handling of outliers and adaptation to non-uniform distributions.
### Speaker Adaptation
The objective of speaker adaptation is to minimize the mismatch between speakers in training and testing conditions. Current neural network-based methods for speaker adaptation can be broadly categorized into two types: Embedding-based and Model-based.
Speaker embeddings map speakers into a continuous space using techniques like i-vectors [20, 21, 22] or neural network bottlenecks [23, 24, 25]. Model-based adaptation methods include three primary methods: Structured Transforms, Speaker Adaptive Training (SAT), and Regularization. Structured Transforms, such as the Learning Hidden Unit Contributions (LHUC) scheme [26] and parameterised activation functions [27], modify the model structure or its activations. SAT methods adjust model parameters to individual speakers using approaches like SAT-embedding [28] and SAT-LHUC [29]. Regularization techniques, such as L2 loss or KL divergence [30, 31], work to prevent overfitting to specific speakers.
## 3 Methodology
### Strategy Overview
The PQM strategy is illustrated in Fig. 1, which is divided into three stages. In stage 1, we apply block-wise NF4 quantisation to the model's primary weight parameters. In stage 2, we pretrain the randomly initialised LoRA using data from a large number of speakers, providing a more robust starting point for subsequent speaker adaptation. In stage 3, we perform speaker adaptive training on speaker-specific data, during which the entire model is frozen, and only the LoRA parameters corresponding to each speaker are updated.
Stage 2 of PQM is particularly beneficial for the application scenario where reasonable-sized training data of the target domain is available, e.g. assuming there is some in-house data, with a very limited amount of target speaker data.
### k-bit NormalFloat Quantisation
The block-wise NF4 quantisation is adopted in this paper and is applied to the weight matrices, which are the primary parameters of the model. While standard floating point quantisation applies the same set of quantisation bins to all weight matrices, the dynamic range of parameter values is not taken into account, resulting in heavily unbalanced quantisation bins. NF4, on the contrary, ensures each bin has an equal number of values by estimating the quantiles of the input matrices using the empirical cumulative normal distribution. This leverages the fact that the parameters of a weight matrix, in general, follow a normal distribution [14].
Specifically, for a \(k\)-bit quantisation, \(2^{k}+1\) quantiles (i.e. \(2^{k}\) quantisation bins) of a theoretical \(\mathcal{N}(0,1)\) distribution are first estimated which equally divides the area under the distribution curve. Then, these quantisation bins are normalised to be within the range \([-1,1]\), as illustrated in Fig. 2. Finally, the parameters of a weight matrix are normalised into the range \([-1,1]\) to find their corresponding quantiles by dividing the maximum absolute value of those parameters. In this way, a similar number of quantised values are obtained for each bin, allowing for a more refined representation of the model parameters. To ensure zero is exactly quantised to zero which is important for padding, asymmetric quantiles are used that ensure the mid bin has the quantisation level of zero (see Fig. 2).
To reduce the influence of extreme values in weight matrices (i.e. outliers) on the maximum absolute value normalisation, block-wise quantisation is applied which divides the weight matrices into small blocks and quantises each block with separate normalisation factors. In this way, outliers in the input tensor are confined to individual
Figure 1: Overview of the PQM strategy.
Figure 2: Illustration of the construction of quantiles for NF4 quantisation. It comprises 16 quantisation bins, where the midpoint of each bin represents the quantisation level.
blocks, reducing their overall impact on quantisation. As a result, block-wise quantisation allows for individual normalisation factors for each block, resulting in a more fine-grained overall quantisation.
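A minimal sketch of the quantile construction and block-wise absmax quantisation described above is given below. It is a simplified illustration (for example, the handling of the exact-zero level and of the extreme quantiles differs in the released NF4 implementation of [14]), not the implementation used in this work.

```python
import numpy as np
from scipy.stats import norm

def nf_levels(k=4, eps=1e-3):
    """Quantisation levels per Sec. 3.2 (simplified).

    2^k + 1 equal-probability quantiles of N(0,1) define 2^k bins; the level of
    each bin is its midpoint, rescaled to [-1, 1].  The middle level is forced
    to exactly zero, standing in for the asymmetric construction in Fig. 2.
    """
    nbins = 2 ** k
    probs = np.linspace(eps, 1 - eps, nbins + 1)     # avoid infinite quantiles
    edges = norm.ppf(probs)                          # 2^k + 1 quantiles of N(0,1)
    levels = 0.5 * (edges[:-1] + edges[1:])          # bin midpoints
    levels = levels / np.abs(levels).max()           # normalise to [-1, 1]
    levels[nbins // 2] = 0.0                         # ensure an exact zero level
    return np.sort(levels)

def quantise_blockwise(w, levels, block=64):
    """Block-wise quantisation: each block of `block` weights has its own
    absmax normalisation factor, confining outliers to individual blocks."""
    flat = w.reshape(-1).astype(np.float64)
    out = np.empty_like(flat)
    for start in range(0, flat.size, block):
        blk = flat[start:start + block]
        scale = np.abs(blk).max() + 1e-12
        idx = np.abs(blk[:, None] / scale - levels[None, :]).argmin(axis=1)
        out[start:start + block] = levels[idx] * scale   # de-quantised values
    return out.reshape(w.shape)
```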
### LoRA for Speaker Adaptation
Compared to full fine-tuning, LoRA adjusts only the low-rank subspace parameters of the model, thereby achieving higher computational efficiency and lower costs for computation and storage. In scenarios with limited speaker data, full fine-tuning methods may be prone to model overfitting, whereas LoRA can alleviate this issue.
For the pretrained ASR model with weight matrix \(W_{0}\in\mathbb{R}^{d\times k}\), its update is expressed through the following equation, where \(B\in\mathbb{R}^{d\times r}\) and \(A\in\mathbb{R}^{r\times k}\), with the rank \(r\ll\min(d,k)\).
\[W_{0}+\Delta W=W_{0}+BA \tag{1}\]
In speaker adaptive training, only the LoRA parameters corresponding to each speaker are updated. In this way, effective adaptation to different speakers can be achieved by updating a minimal set of parameters. Moreover, in cases where the base model is quantised, full-precision LoRA serves to some extent to restore full-precision performance to the base model.
Although the target speaker data is always limited, in reality, the target domain data of other speakers is usually available. Therefore, PQM leverages those data to find a better initialisation point for LoRA weights before performing speaker adaptation, referred to as LoRA pretraining. This allows the LoRA parameters to adapt to the target-domain ASR task, providing a more robust starting point for subsequent speaker adaptation. When using LoRA, each speaker corresponds to one set of LoRA parameters. The number of parameters is given by \(|\Theta|=2\times L_{\text{LoRA}}\times d_{\text{model}}\times r\), where \(L_{\text{LoRA}}\) is the number of weight matrices to apply LoRA, \(d_{\text{model}}\) is the attention layer dimension, and \(r\) is the rank.
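A minimal PyTorch sketch of such a LoRA-augmented linear layer is shown below; class and variable names are illustrative, and in practice one such set of \(A, B\) matrices is kept per target speaker and attached to each of the \(L_{\text{LoRA}}\) adapted weight matrices, with rank \(r=4\) when training LoRA from scratch and \(r=1\) when using pretrained LoRA, as described in Section 4.2.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W0 plus a trainable low-rank update B @ A (Eq. 1).

    A is initialised with a small random Gaussian and B with zeros, so the
    adapted layer starts out identical to the pretrained one (cf. Fig. 3)."""
    def __init__(self, base: nn.Linear, r: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # freeze W0 (and bias)
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T   # (W0 + B A) x

# One set of LoRA parameters per speaker, e.g. rank r=1 on a 512-dim layer:
layer = LoRALinear(nn.Linear(512, 512), r=1)
n_trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(n_trainable)   # 2 * d_model * r = 1024 per adapted weight matrix
```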
## 4 Experimental Setup
### Data
**LibriSpeech** is an English audiobook dataset. We selected 5 male speakers and 5 female speakers with the largest number of utterances from train-clean-360 as speaker adaptation data. Each speaker contributes approximately 150 utterances, resulting in a total speech duration of roughly 25 minutes. For LoRA pre-training, the train-clean-100 set was used which does not have any speaker overlap with the selected speakers.
**TED-LIUM 3** (TL3) is a TED talks dataset. We selected 16 speakers from the test set as speaker adaptation data. On average, each speaker has 161 utterances (14 minutes).
Speaker adaptation data for LibriSpeech and TL3 was split randomly, with 2/5 assigned to the train set, 1/5 to the dev set, and 2/5 to the test set. On average, each speaker has 6-10 minutes of training data, while the dev and test data remain constant across all experiments. We denote the partitioned test sets as _LibriSpeech-SA_ and _TL3-SA_ respectively in the results. Data partition details are provided1.
Footnote 1: Data partition details: [https://github.com/qmgzhao/PQM.git](https://github.com/qmgzhao/PQM.git)
### Model and training specifications
In order to verify the effectiveness of PQM, we use Whisper and Conformer AED models as two widely used models as examples.
**Whisper** is a transformer-based AED model released by OpenAI trained on 680k hours of audio. The base.en model with a full model size of 278MB was used. The encoder has 6 Transformer blocks with 2048 hidden dimensions, and the output size is 512. The decoder has 6 Transformer blocks with 2048 hidden dimensions. The Transformer-related weight matrices are all 512 by 512 dimensional. Feature processing and model training followed [1, 32].
**Conformer AED** is a hybrid CTC/attention-based encoder-decoder model, whose FP32 model size is about 131MB. The training follows ESPnet [33] with 0.3 CTC weight and 80-dim FBank features. The Conformer encoder has 12 blocks with 1024 hidden dimensions. The decoder uses a 6-block transformer architecture with 2048-dim linear units. The Transformer-related weight matrices are all 256 by 256.
The baseline system is the quantised system without personalisation. When training LoRA from scratch (rank=4), the LoRA parameter sizes of Whisper and Conformer are 1.2MB and 0.8MB. When using pretrained LoRA (rank=1), the LoRA parameter sizes of Whisper and Conformer are 0.3MB and 0.2MB. Identical hyper-parameter settings were used for all speakers. Models are evaluated using WER averaged across all utterances from the test set speakers.
## 5 Evaluation Results and Analysis
First, WER and model compression ratios of systems after quantising different parts are shown in Table 1. The majority of parameters in both the Whisper and Conformer models reside in the linear layers. Notably, applying NF4 quantisation to these layers has a very
| System | linear | conv | emb | WER (%) | Model Size (MB) | Comp. Ratio |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| Whisper | × | × | × | 10.02 | 277.8 | - |
| Whisper | ✓ | × | × | 10.46 | 130.8 | 2.12 |
| Whisper | ✓ | ✓ | × | 10.60 | 127.8 | 2.17 |
| Whisper | ✓ | × | ✓ | 11.37 | 42.2 | 6.58 |
| Whisper | ✓ | ✓ | ✓ | 11.22 | 38.3 | 7.25 |
| Conformer | × | × | × | 12.43 | 130.9 | - |
| Conformer | ✓ | × | × | 12.51 | 31.6 | 4.14 |
| Conformer | ✓ | ✓ | × | 12.69 | 23.4 | 5.59 |
| Conformer | ✓ | × | ✓ | 12.61 | 27.3 | 4.79 |
| Conformer | ✓ | ✓ | ✓ | 12.77 | 19.1 | 6.85 |

Table 1: WER on the LibriSpeech-SA using quantised Whisper and Conformer, where ✓ marks which of the linear, convolution (conv), and embedding (emb) layers are quantised.
Figure 3: LoRA. Initially, the pretrained weight parameters \(W_{0}\) are frozen. For \(A\), random Gaussian initialisation is employed, whereas \(B\) is initialised with zeros.
small impact on the performance. Furthermore, WER increased by 1.20% for Whisper and only 0.34% for Conformer upon NF4 quantisation. This suggests that models trained on smaller datasets are more robust to the quantisation noises under NF4 quantisation. In the following experiments, models with the highest compression ratios (i.e. the last row of each model in Table 1) are used.
Table 2 shows the performance of PQM on the Whisper base.en model. Compared to the baseline, the WER reduction achieved by fine-tuning all model parameters at full precision on target speaker data was largely reduced after model quantisation. As a result, the Whisper-FFT-NF4 model only achieved around 6.3% relative WER reduction after speaker adaptive training. When LoRA was applied in conjunction with NF4 quantisation (i.e. the PQM strategy), without LoRA pretraining, the performance was already on par with the Whisper-FFT-FP32 full precision model and achieved a 13.8% relative WER reduction on LibriSpeech-SA and a 9.9% relative WER reduction on TL3-SA tasks. Moreover, when LoRA pretraining was applied in PQM, the improvements were further enlarged, with 24.2% and 12.8% relative WER reductions on LibriSpeech-SA and TL3-SA sets respectively. Note that the LoRA pretraining for TL3-SA was in fact cross-data, as the pretraining was done on the LibriSpeech clean-100 training set while directly applied to the TL3-SA data for adaptive training. This underscores the effectiveness of pretraining LoRA on data that resembles speaker-specific data.
The same set of experiments was also performed for the Conformer model as shown in Table 3. Note that as the Conformer AED is trained on train-clean-100 already, we select 250 speakers from LibriSpeech train-clean-360 for LoRA pretraining. As before, the Conformer-LoRA-scratch-NF4 performed on par with the Conformer-FFT-FP32 full precision model, which achieved a 19.7% relative WER reduction compared to the baseline. The Conformer-LoRA-pretrain-NF4 model achieved a further WER reduction, resulting in a relative 25.3% WER reduction compared to the baseline.
Next, the influence of the number of training utterances was explored as shown in Fig. 4. It is evident that LoRA pretraining serves as a favourable starting point for speaker adaptive training. Moreover, the cost-effectiveness was maximized when 20 to 30 utterances were used for adaptive training with PQM.
Finally, to further alleviate the data scarcity issue, semi-supervised training for PQM was investigated for the Whisper and Conformer models, with results shown in Table 4. Labels generated by both Whisper-large and Whisper-base.en yielded improvements. Remarkably, with training guided by Whisper-large labels, 5.4% and 11.4% relative WER reductions were achieved on the Whisper and Conformer models respectively, compared to performing no speaker adaptive training. Note that this process did not require any human-annotated labels.
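A sketch of the pseudo-labelling step, assuming the openai-whisper package is used to produce the transcripts (paths and model choice are illustrative):

```python
import whisper

# A stronger model produces pseudo-transcripts for the target speaker's utterances;
# these texts then stand in for ground-truth references during LoRA adaptive training.
labeller = whisper.load_model("large")

def pseudo_label(wav_paths):
    return {path: labeller.transcribe(path)["text"] for path in wav_paths}

# labels = pseudo_label(["speaker01/utt_001.wav", "speaker01/utt_002.wav"])
```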
## 6 Conclusions
This paper proposes the PQM strategy to compensate for the performance loss due to model quantisation via personalisation. PQM adopted the NF4 quantisation approach together with LoRA-based speaker adaptive training and was applied to both the Conformer-based AED model and the Whisper model. Experiments on LibriSpeech and TL3 datasets using speaker adaptive training data partitions showed that personalisation largely improved the performance of quantised models. Specifically, using PQM, 15.1% and 23.3% relative WER reductions were achieved on quantised Whisper and Conformer-based AED models respectively, compared to the full precision models.
| System | Label Source | WER(%) |
| --- | --- | --- |
| Whisper-LoRA-pretrain-NF4 | No adaptive training | 9.52 |
| Whisper-LoRA-pretrain-NF4 | Ground truth | 8.51 |
| Whisper-LoRA-pretrain-NF4 | Whisper-large | 9.01 |
| Whisper-LoRA-pretrain-NF4 | Whisper-base.en | 9.44 |
| Conformer-LoRA-pretrain-NF4 | No adaptive training | 11.81 |
| Conformer-LoRA-pretrain-NF4 | Ground truth | 9.54 |
| Conformer-LoRA-pretrain-NF4 | Whisper-large | 10.46 |
| Conformer-LoRA-pretrain-NF4 | Whisper-base.en | 10.71 |

Table 4: WER on the LibriSpeech-SA using Whisper-LoRA-pretrain-NF4 and Conformer-LoRA-pretrain-NF4 with different label sources for semi-supervised speaker adaptive training.
Figure 4: WER on the LibriSpeech-SA using Whisper-LoRA-pretrain-NF4 under different numbers of utterances. Utt-num=0 refers to Whisper-LoRA-pretrain-NF4 without speaker adaptive training.
| System | LibriSpeech-SA WER(%) | TL3-SA WER(%) |
| --- | --- | --- |
| Whisper-baseline | 11.22 | 7.71 |
| Whisper-FFT-FP32 | 9.59 | 6.85 |
| Whisper-FFT-NF4 | 10.51 | 7.19 |
| Whisper-LoRA-scratch-NF4 | 9.67 | 6.95 |
| Whisper-LoRA-pretrain-NF4 | **8.51** | **6.72** |

Table 2: WER on the LibriSpeech-SA and TL3-SA using quantised Whisper models. Whisper-baseline: Whisper after NF4 quantisation without adaptive training. FFT refers to full fine-tuning which trains all model parameters. Scratch refers to initialising the LoRA weights randomly, and pretrain refers to the full PQM strategy with LoRA pretraining. |
2309.06563 | On Robust Recovery of Signals from Indirect Observations | Our focus is on robust recovery algorithms in statistical linear inverse
problem. We consider two recovery routines - the much studied linear estimate
originating from Kuks and Olman [42] and polyhedral estimate introduced in
[37]. It was shown in [38] that risk of these estimates can be tightly
upper-bounded for a wide range of a priori information about the model through
solving a convex optimization problem, leading to a computationally efficient
implementation of nearly optimal estimates of these types. The subject of the
present paper is design and analysis of linear and polyhedral estimates which
are robust with respect to the uncertainty in the observation matrix. We
evaluate performance of robust estimates under stochastic and deterministic
matrix uncertainty and show how the estimation risk can be bounded by the
optimal value of efficiently solvable convex optimization problem; "presumably
good" estimates of both types are then obtained through optimization of the
risk bounds with respect to estimate parameters. | Yannis Bekri, Anatoli Juditsky, Arkadi Nemirovski | 2023-09-12T20:01:03Z | http://arxiv.org/abs/2309.06563v1 | # On Robust Recovery of Signals from Indirect Observations
###### Abstract
Our focus is on robust recovery algorithms in statistical linear inverse problem. We consider two recovery routines--the much-studied linear estimate originating from Kuks and Olman [42] and polyhedral estimate introduced in [37]. It was shown in [38] that risk of these estimates can be tightly upper-bounded for a wide range of a priori information about the model through solving a convex optimization problem, leading to a computationally efficient implementation of nearly optimal estimates of these types. The subject of the present paper is design and analysis of linear and polyhedral estimates which are robust with respect to the uncertainty in the observation matrix. We evaluate performance of robust estimates under stochastic and deterministic matrix uncertainty and show how the estimation risk can be bounded by the optimal value of efficiently solvable convex optimization problem; "presumably good" estimates of both types are then obtained through optimization of the risk bounds with respect to estimate parameters.
_2020 Mathematics Subject Classification:_ 62G05, 62G10, 90C90
_Keywords:_ statistical linear inverse problems, robust estimation, observation matrix uncertainty
## 1 Introduction
In this paper we focus on the problem of recovering unknown signal \(x\) given noisy observation \(\omega\in\mathbf{R}^{m}\),
\[\omega=Ax+\xi, \tag{1}\]
of the linear image \(Ax\) of \(x\); here \(\xi\in\mathbf{R}^{m}\) is observation noise. Our objective is to estimate the linear image \(w=Bx\in\mathbf{R}^{\nu}\) of \(x\) known to belong to given convex and compact subset \(\mathcal{X}\) of \(\mathbf{R}^{n}\). The estimation problem above is a classical linear inverse problem. When statistically analysed, popular approaches to solving (1) (cf., e.g., [55, 31, 32, 50, 73, 40, 27, 65]) usually assume a special structure of the problem, when matrix \(A\) and set \(\mathcal{X}\) "fit each other," e.g., there exists a sparse approximation of the set \(\mathcal{X}\) in a given basis/pair of bases, in which matrix \(A\) is "almost diagonal" (see, e.g. [16, 13] for detail). Under these assumptions, traditional results focus on estimation algorithms which are both numerically straightforward and statistically (asymptotically) optimal with closed form analytical description of estimates and corresponding risks. In this paper, \(A\) and \(B\) are "general" matrices of appropriate dimensions, and \(\mathcal{X}\) is a rather general convex and compact set. Instead of deriving closed form expressions for estimates and risks (which under the circumstances seems to be impossible), we adopt an "operational" approach initiated in [15] and further developed in [34, 36, 37, 38], within which both the estimate and its risk are yielded by efficient computation, rather than by an explicit analytical description.
In particular, two classes of estimates were analyzed in [36, 37, 38] in the operational framework.
* Linear estimates. Since their introduction in [43, 44], _linear estimates_ are a standard part of the theoretical statistical toolkit. There is an extensive literature dealing with the design and performance analysis of linear estimates (see, e.g., [63, 17, 20, 18, 30, 71, 74]). When applied in the estimation problem we consider here, linear estimate \(\widehat{w}_{\rm lin}^{H}(\omega)\) is of the form \(\widehat{w}_{H}(\omega)=H^{T}\omega\) and is specified by a contrast matrix \(H\in{\bf R}^{m\times\nu}\).
* Polyhedral estimates. The idea of a _polyhedral estimate_ goes back to [60] where it was shown (see also [58, Chapter 2]) that such an estimate is near-optimal when recovering a smooth multivariate regression function known to belong to a given Sobolev ball from noisy observations taken along a regular grid. It has been recently reintroduced in [23] and [65] and extended to the setting to follow in [37]. In this setting, a polyhedral estimate \(\omega\mapsto\widehat{w}_{\rm poly}^{H}(\omega)\) is specified by a contrast matrix \(H\in{\bf R}^{m\times M}\) according to \[\omega\mapsto\widehat{x}^{H}(\omega)\in\mathop{\rm Argmin}_{x\in{\cal X}}\|H^{T}(\omega-Ax)\|_{\infty}\mapsto\widehat{w}_{\rm poly}^{H}(\omega):=B\widehat{x}^{H}(\omega).\] Our interest in these estimates stems from the results of [35, 37, 38] where it is shown that in the Gaussian case (\(\xi\sim{\cal N}(0,\sigma^{2}I_{m})\)), linear and polyhedral estimates with properly designed efficiently computable contrast matrices are near-minimax optimal in terms of their risks over a rather general class of loss functions and signal sets--ellitopes and spectratopes. 1
Footnote 1: Exact definitions of these sets are reproduced in the main body of the paper. For the time being, it suffices to point out two instructive examples: the bounded intersection of finitely many sets of the form \(\{x:\|Px\|_{p}\leq 1\}\), \(p\geq 2\), is an ellitope (and a spectratope as well), and the unit ball of the spectral norm in the space of \(m\times n\) matrices is a spectratope.
In this paper we consider an estimation problem which is a generalization of that mentioned above in which observation matrix \(A\in{\bf R}^{m\times n}\) is _uncertain_. Specifically, we assume that
\[\omega=A[\eta]x+\xi \tag{2}\]
where \(\xi\in{\bf R}^{m}\) is zero mean random noise and
\[A[\eta]=A+\sum\nolimits_{\alpha=1}^{q}\eta_{\alpha}A_{\alpha}\in{\bf R}^{m \times n} \tag{3}\]
where \(A,A_{1},...,A_{q}\) are given matrices and \(\eta\in{\bf R}^{q}\) is uncertain perturbation ("uncertainty" for short). We consider separately two situations: the first one in which the perturbation \(\eta\) is random ("random perturbation"), and the second one with \(\eta\) selected, perhaps in an adversarial fashion, from a given uncertainty set \({\cal U}\) ("uncertain-but-bounded perturbation"). Observation model (2) with random uncertainty is related to the linear regression problem with random errors in regressors [5, 8, 21, 22, 45, 68, 72] which is usually addressed through total least squares. It can also be seen as alternative modeling of the statistical inverse problem in which sensing matrix is recovered with stochastic error (see, e.g., [10, 11, 19, 25, 27, 51]). Estimation from observations (2) under uncertain-but-bounded perturbation of observation matrix can be seen as an extension of the problem of solving systems of equations affected by uncertainty which has received significant attention in the literature (cf., e.g., [14, 26, 41, 56, 61, 62, 64] and references therein). It is also tightly related to the system identification problem under uncertain-but-bounded perturbation of the observation of the state of the system [6, 9, 12, 33, 46, 52, 53, 57, 69].
In what follows, our goal is to extend the estimation constructions from [38] to the case of uncertain sensing matrix. Our strategy consists in constructing a tight efficiently computable
convex in \(H\) upper bound on the risk of a candidate estimate, and then building a "presumably good" estimate by minimizing this bound in the estimate parameter \(H\). Throughout the paper, we assume that the signal set \(\mathcal{X}\) is an ellitope, and the norm \(\|\cdot\|\) quantifying the recovery error is the maximum of a finite collection of Euclidean norms.
Our contributions can be summarized as follows.
1. In Section 2.1 we analyse the \(\epsilon\)-risk (the maximum, over signals from \(\mathcal{X}\), of the radii of \((1-\epsilon)\)-confidence \(\|\cdot\|\)-balls) and the design of presumably good, in terms of this risk, linear estimates in the case of random uncertainty.
2. In Section 3.1, we build presumably good linear estimates in the case of _structured norm-bounded uncertainty_ (cf. [1, Chapter 7] and references therein), thus extending the corresponding results of [33].
Developments in A and B lead to novel computationally efficient techniques for designing presumably good linear estimates for both random and uncertain-but-bounded perturbations.
Analysis and design of polyhedral estimates under uncertainty in sensing matrix form the subject of Sections 2.2 (random perturbations) and 3.2 (uncertain-but-bounded perturbations). The situation here is as follows:
1. The random perturbation case of the _Analysis problem_ (given a contrast matrix \(H\), find a provably tight efficiently computable upper bound on the \(\epsilon\)-risk of the associated estimate) is the subject of Section 2.2, where it is solved "in the full range" of our assumptions (ellitopic \(\mathcal{X}\), sub-Gaussian zero mean \(\eta\) and \(\xi\)). In contrast, the random perturbation case of the Synthesis problem, in which we want to minimize the above bound w.r.t. \(H\), turns out to be more involved--the bound to be optimized happens to be nonconvex in \(H\). When there is no uncertainty in the sensing matrix, this difficulty can be circumvented [38, Section 5.1]; however, when uncertainty in the sensing matrix is present, the strategy developed in [38, Section 5.1] happens to work only when \(\mathcal{X}\) is an ellipsoid rather than a general-type ellitope. The corresponding developments are the subject of Sections 2.2.4, 2.2.5, and 2.2.6.
2. In our context, analysis and design of polyhedral estimates under uncertain-but-bounded perturbations of the sensing matrix appear to be the most difficult; our rather limited results in this direction are presented in Section 3.2.
Notation and assumptions.We denote with \(\|\cdot\|\) the norm on \(\mathbf{R}^{\nu}\) used to measure the estimation error. In what follows, \(\|\cdot\|\) is a maximum of Euclidean norms
\[\|u\|=\max_{\ell\leq L}\sqrt{u^{T}R_{\ell}u} \tag{4}\]
where \(R_{\ell}\in\mathbf{S}^{\nu}_{+}\), \(\ell=1,...,L\), are given matrices with \(\sum_{\ell}R_{\ell}\succ 0\).
Throughout the paper, unless otherwise is explicitly stated, we assume that observation noise \(\xi\) is zero-mean sub-Gaussian, \(\xi\sim\mathcal{SG}(0,\sigma^{2}I)\), i.e., for all \(t\in\mathbf{R}^{m}\),
\[\mathbf{E}\left\{e^{t^{T}\xi}\right\}\leq\exp\left(\tfrac{\sigma^{2}}{2}\|t \|_{2}^{2}\right). \tag{5}\]
## 2 Random perturbations
In this section we assume that uncertainty \(\eta\) is sub-Gaussian, with parameters \(0,I\), i.e.,
\[\mathbf{E}\left\{e^{t^{T}\eta}\right\}\leq\exp\left(\tfrac{1}{2}\|t\|_{2}^{2} \right)\quad\forall t\in\mathbf{R}^{q}. \tag{6}\]
In this situation, given \(\epsilon\in(0,1)\), we quantify the quality of recovery \(\widehat{w}(\cdot)\) of \(w=Bx\) by its maximal over \(x\in\mathcal{X}\)\(\epsilon\)_-risk_
\[\mathrm{Risk}_{\epsilon}[\widehat{w}|\mathcal{X}]:=\sup_{x\in\mathcal{X}} \inf\left\{\rho:\,\mathrm{Prob}_{\xi,\eta}\{\|Bx-\widehat{w}(A[\eta]x+\xi)\|> \rho\}\leq\epsilon\right\} \tag{7}\]
(the radius of the smallest \(\|\cdot\|\)-ball centered at \(\widehat{w}(\omega)\) which contains \(Bx\) with probability at least \(1-\epsilon\), uniformly over \(x\in\mathcal{X}\)).
### Design of presumably good linear estimate
#### 2.1.1 Preliminaries: ellitopes
Throughout this section, we assume that the signal set \(\mathcal{X}\) is _a basic ellitope_. Recall that, by definition [35, 38], a basic ellitope in \(\mathbf{R}^{n}\) is a set of the form
\[\mathcal{X}=\{x\in\mathbf{R}^{n}:\,\exists t\in\mathcal{T}:\,x^{T}T_{k}x\leq t_{k},\,k\leq K\}, \tag{8}\]
where \(T_{k}\in\mathbf{S}^{n}_{+}\), \(T_{k}\succeq 0\), \(\sum_{k}T_{k}\succ 0\), and \(\mathcal{T}\subset\mathbf{R}^{K}_{+}\) is a convex compact set with a nonempty interior which is monotone: whenever \(0\leq t^{\prime}\leq t\in\mathcal{T}\) one has \(t^{\prime}\in\mathcal{T}\). We refer to \(K\) as _ellitopic dimension_ of \(\mathcal{X}\).
Clearly, every basic ellitope is a convex compact set with nonempty interior which is symmetric w.r.t. the origin. For instance,
**A.** Bounded intersection \(\mathcal{X}\) of \(K\) centered at the origin ellipsoids/elliptic cylinders \(\{x\in\mathbf{R}^{n}:\,x^{T}T_{k}x\leq 1\}\)\([T_{k}\succeq 0]\) is a basic ellitope:
\[\mathcal{X}=\{x\in\mathbf{R}^{n}:\exists t\in\mathcal{T}:=[0,1]^{K}:x^{T}T_{k} x\leq t_{k},\,k\leq K\}\]
In particular, the unit box \(\{x\in\mathbf{R}^{n}:\|x\|_{\infty}\leq 1\}\) is a basic ellitope.
**B.** A \(\|\cdot\|_{p}\)-ball in \(\mathbf{R}^{n}\) with \(p\in[2,\infty]\) is a basic ellitope:
\[\{x\in\mathbf{R}^{n}:\|x\|_{p}\leq 1\}=\big\{x:\exists t\in\mathcal{T}=\{t\in\mathbf{R}^{n}_{+}:\|t\|_{p/2}\leq 1\}:\underbrace{x_{k}^{2}}_{x^{T}T_{k}x}\leq t_{k},\,k\leq n\big\}.\]
In the present context, our interest for ellitopes is motivated by their special relationship with the optimization problem
\[\mathrm{Opt}_{*}(C)=\max_{x\in\mathcal{X}}x^{T}Cx,\ C\in\mathbf{S}^{n} \tag{9}\]
of maximizing a homogeneous quadratic form over \(\mathcal{X}\). As it is shown in [38], when \(\mathcal{X}\) is an ellitope, (9) admits "reasonably tight" efficiently computable upper bound. Specifically,
**Theorem 2.1**: [38, Proposition 4.6] _Given ellitope (8) and matrix \(C\), consider the quadratic maximization problem (9) along with its relaxation2_
Footnote 2: Here and below, we use notation \(\phi_{\mathcal{S}}(\cdot)\) for the support function of a convex set \(\mathcal{S}\subset\mathbf{R}^{n}\): for \(y\in\mathbf{R}^{n}\),
\[\phi_{\mathcal{S}}(y)=\sup_{s\in\mathcal{S}}y^{T}s.\]
\[\mathrm{Opt}(C)=\min_{\lambda}\Big{\{}\phi_{\mathcal{T}}(\lambda):\lambda\geq 0,\sum\nolimits_{k}\lambda_{k}T_{k}-C\succeq 0\Big{\}} \tag{10}\]
_The problem is computationally tractable and solvable, and \(\mathrm{Opt}(C)\) is an efficiently computable upper bound on \(\mathrm{Opt}_{*}(C)\). This upper bound is tight:_
\[\mathrm{Opt}_{*}(C)\leq\mathrm{Opt}(C)\leq 3\ln(\sqrt{3}K)\mathrm{Opt}_{*}(C).\]
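To illustrate, here is a minimal cvxpy sketch of the relaxation (10) for the simplest ellitope, the unit box (so \(T_{k}=e_{k}e_{k}^{T}\), \(\mathcal{T}=[0,1]^{K}\) and \(\phi_{\mathcal{T}}(\lambda)=\sum_{k}\lambda_{k}\)); the matrix \(C\) below is a random placeholder:

```python
import cvxpy as cp
import numpy as np

n = 6
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
C = (C + C.T) / 2                                   # arbitrary symmetric C

# Unit box: T_k = e_k e_k^T, T = [0,1]^n, hence phi_T(lambda) = sum_k lambda_k
lam = cp.Variable(n, nonneg=True)
prob = cp.Problem(cp.Minimize(cp.sum(lam)),
                  [cp.diag(lam) - C >> 0])          # sum_k lambda_k T_k - C is PSD
prob.solve()
print("Opt(C) =", prob.value, " (upper bound on max over the unit box of x^T C x)")
```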
#### 2.1.2 Tight upper bounding the risk of linear estimate
Consider a linear estimate
\[\widehat{w}^{H}(\omega)=H^{T}\omega\quad[H\in\mathbf{R}^{m\times\nu}]\]
**Proposition 2.1**: _In the setting of this section, synthesis of a presumably good linear estimate reduces to solving the convex optimization problem_
\[\min_{H\in\mathbf{R}^{m\times\nu}}\mathfrak{R}[H] \tag{11}\]
_where_
\[\begin{array}{rcl}\mathfrak{R}[H]&=&\min\limits_{\lambda_{\ell},\mu^{\ell},\kappa^{\ell},\varkappa^{\ell},\rho,\varrho}\left\{\left[1+\sqrt{2\ln(2L/\epsilon)}\right]\left[\sigma\max\limits_{\ell\leq L}\|HR^{1/2}_{\ell}\|_{\mathrm{Fro}}+\rho\right]+\varrho:\right.\\ &&\mu^{\ell}\geq 0,\;\varkappa^{\ell}\geq 0,\;\lambda_{\ell}+\phi_{\mathcal{T}}(\mu^{\ell})\leq\rho,\;\kappa^{\ell}+\phi_{\mathcal{T}}(\varkappa^{\ell})\leq\varrho,\;\ell\leq L,\\ &&\left[\begin{array}{c|c}\lambda_{\ell}I_{\nu q}&\frac{1}{2}\left[R^{1/2}_{\ell}H^{T}A_{1};...;R^{1/2}_{\ell}H^{T}A_{q}\right]\\ \hline\frac{1}{2}\left[A_{1}^{T}HR^{1/2}_{\ell},...,A_{q}^{T}HR^{1/2}_{\ell}\right]&\sum_{k}\mu^{\ell}_{k}T_{k}\end{array}\right]\succeq 0,\;\ell\leq L,\\ &&\left.\left[\begin{array}{c|c}\kappa^{\ell}I_{\nu}&\frac{1}{2}R^{1/2}_{\ell}[B-H^{T}A]\\ \hline\frac{1}{2}[B-H^{T}A]^{T}R^{1/2}_{\ell}&\sum_{k}\varkappa^{\ell}_{k}T_{k}\end{array}\right]\succeq 0,\;\ell\leq L\right\}\end{array} \tag{12}\]
_For a candidate contrast matrix \(H\), the \(\epsilon\)-risk of the linear estimate \(\widehat{w}^{H}_{\mathrm{lin}}(\omega)=H^{T}\omega\) is upper-bounded by \(\mathfrak{R}[H]\)._
#### 2.1.3 A modification
Let us assume that a \(K\)-repeated version of observation (2) is available, i.e., we observe
\[\omega^{K}=\{\omega_{k}=A[\eta_{k}]x+\xi_{k},\,k=1,...,K\} \tag{13}\]
with independent across \(k\) pairs \((\xi_{k},\eta_{k})\). In this situation, we can relax the assumption of sub-Gaussianity of \(\xi\) and \(\eta\) to the second moment boundedness condition
\[\mathbf{E}\{\xi\xi^{T}\}\preceq\sigma^{2}I_{m},\quad\mathbf{E}\left\{\eta\eta ^{T}\right\}\preceq I_{q}. \tag{14}\]
Let us consider the following construction. For each \(\ell\leq L\), given \(H\in\mathbf{R}^{m\times\nu}\) we denote
\[\begin{array}{rcl}\widetilde{\mathfrak{R}}_{\ell}[H]&=&\min\limits_{\lambda,\mu,\kappa,\varkappa}\left\{\sigma\|HR^{1/2}_{\ell}\|_{\mathrm{Fro}}+\lambda+\phi_{\mathcal{T}}(\mu)+\kappa+\phi_{\mathcal{T}}(\varkappa):\;\mu\geq 0,\;\varkappa\geq 0,\right.\\ &&\left[\begin{array}{c|c}\kappa I_{\nu}&\frac{1}{2}R^{1/2}_{\ell}[B-H^{T}A]\\ \hline\frac{1}{2}[B-H^{T}A]^{T}R^{1/2}_{\ell}&\sum_{k}\varkappa_{k}T_{k}\end{array}\right]\succeq 0,\\ &&\left.\left[\begin{array}{c|c}\lambda I_{\nu q}&\frac{1}{2}\left[R^{1/2}_{\ell}H^{T}A_{1};...;R^{1/2}_{\ell}H^{T}A_{q}\right]\\ \hline\frac{1}{2}\left[A_{1}^{T}HR^{1/2}_{\ell},...,A_{q}^{T}HR^{1/2}_{\ell}\right]&\sum_{k}\mu_{k}T_{k}\end{array}\right]\succeq 0\right\}\end{array} \tag{15}\]
and consider the convex optimization problem
\[\widetilde{H}_{\ell}\in\underset{H}{\mathrm{Argmin}}\,\widetilde{\mathfrak{R }}_{\ell}[H]. \tag{16}\]
We define the "reliable estimate" \(\widehat{w}^{(r)}(\omega^{K})\) of \(w=Bx\) as follows.
1. Given \(H_{\ell}\in\mathbf{R}^{m\times\nu}\) and observations \(\omega_{k}\) we compute linear estimates \(w_{\ell}(\omega_{k})=H_{\ell}^{T}\omega_{k}\), \(\ell=1,...,L\), \(k=1,...,K\);
2. We define vectors \(z_{\ell}\in\mathbf{R}^{\nu}\) as geometric medians of \(w_{\ell}(\omega_{k})\): \[z_{\ell}(\omega^{K})\in\operatorname*{Argmin}_{z}\sum_{k=1}^{K}\|R_{\ell}^{1/ 2}(w_{\ell}(\omega_{k})-z)\|_{2},\;\ell=1,...,L.\]
3. Finally, we select as \(\widehat{w}^{(r)}(\omega^{K})\) any point of the set \[\mathcal{W}(\omega^{K})=\bigcap_{\ell=1}^{L}\left\{w\in\mathbf{R}^{\nu}:\,\|R_{\ell}^{1/2}(z_{\ell}(\omega^{K})-w)\|_{2}\leq 4\widetilde{\mathfrak{R}}_{\ell}[H_{\ell}]\right\},\] or set \(\widehat{w}^{(r)}(\omega^{K})\) to a fixed point chosen once and for all, e.g., \(\widehat{w}^{(r)}(\omega^{K})=0\), if \(\mathcal{W}(\omega^{K})=\emptyset\). (A small numerical sketch of the aggregation in step 2 is given below.)
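A minimal numpy sketch of the aggregation in step 2 (Weiszfeld iterations), written for the case \(R_{\ell}=I\); the per-observation estimates below are synthetic placeholders:

```python
import numpy as np

def geometric_median(points, n_iter=200, tol=1e-9):
    """Weiszfeld iterations for argmin_z sum_k ||points[k] - z||_2."""
    z = points.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(points - z, axis=1), tol)   # avoid division by 0
        w = 1.0 / d
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

# toy data: K = 25 per-observation linear estimates of an 8-dimensional w = Bx
rng = np.random.default_rng(1)
W = rng.standard_normal((25, 8)) + 3.0
z_ell = geometric_median(W)        # the aggregated point z_ell(omega^K) for one ell
```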
We have the following analog of Proposition 2.1.
**Proposition 2.2**: _In the situation of this section, it holds_
\[\sup_{x\in\mathcal{X}}\mathbf{E}_{\eta_{k},\xi_{k}}\left\{\|R_{\ell}^{1/2}(w_ {\ell}(\omega_{k})-Bx)\|_{2}^{2}\right\}\leq\widetilde{\mathfrak{R}}_{\ell}^{ 2}[H_{\ell}],\;\ell\leq L, \tag{17}\]
_and_
\[\operatorname{Prob}\left\{\|R_{\ell}^{1/2}(z_{\ell}(\omega^{K})-Bx)\|_{2}\geq 4 \widetilde{\mathfrak{R}}_{\ell}[H_{\ell}]\right\}\leq e^{-0.1070K},\;\ell\leq L. \tag{18}\]
_As a consequence, whenever \(K\geq\ln[L/\epsilon]/0.1070\), the \(\epsilon\)-risk of the aggregated estimate \(\widehat{w}^{(r)}(\omega^{K})\) satisfies_
\[\operatorname{Risk}_{\epsilon}[\widehat{w}^{(r)}(\omega^{K})|\mathcal{X}] \leq\overline{\mathfrak{R}},\;\;\overline{\mathfrak{R}}=8\max_{\ell\leq L} \widetilde{\mathfrak{R}}_{\ell}[H_{\ell}].\]
Remark.Proposition 2.2 is motivated by the desire to capture situations in which sub-Gaussian assumption on \(\eta\) and \(\xi\) does not hold or is too restrictive. Consider, e.g., the case where the uncertainty in the sensing matrix reduces to zeroing out some randomly selected columns in the nominal matrix \(\overline{A}\) (think of taking picture through the window with frost patterns). Denoting by \(\gamma\) the probability to zero out a particular column and assuming that columns are zeroed out independently, model (2) in this situation reads
\[\omega=A[\eta]x+\xi,\,A[\eta]=(1-\gamma)\overline{A}+\sum\nolimits_{\alpha=1} ^{n}\eta_{\alpha}A_{\alpha}\]
where \(\eta_{1},...,\eta_{n}\) are i.i.d. zero mean random variables taking values \((\gamma-1)\rho\) and \(\gamma\rho\) with probabilities \(\gamma\) and \(1-\gamma\), and \(A_{\alpha},\,1\leq\alpha\leq n\), is an \(m\times n\) matrix with all but the \(\alpha\)-th column being zero and \(\operatorname{Col}_{\alpha}[A_{\alpha}]=\rho^{-1}\operatorname{Col}_{\alpha}[ \overline{A}]\). Scaling factor \(\rho\) is selected to yield the unit sub-Gaussianity parameter of \(\eta\) or \(\mathbf{E}\{\eta_{\alpha}^{2}\}=1\) depending on whether Proposition 2.1 or Proposition 2.2 is used. For small \(\gamma\), the scaling factor \(\rho\) is essentially smaller in the first case, resulting in larger "disturbance matrices" \(A_{\alpha}\) and therefore--in stricter constraints in the optimization problem (11), (12) responsible for the design of the linear estimate.
#### 2.1.4 Numerical illustration
In Figure 1 we present results of a toy experiment in which
* \(n=32,m=32\), and \(\nu=16\), \(\overline{A}x\in\mathbf{R}^{m}\) is the discrete time convolution of \(x\in\mathbf{R}^{n}\) with a simple kernel \(\varkappa\) of length \(9\) restricted onto the "time horizon" \(\{1,...,n\}\), and \(Bx\) cuts off \(x\) the first \(\nu\) entries. We consider Gaussian perturbation \(\eta\sim\mathcal{N}(0,\gamma^{2}I_{q})\), \(q=9\), and \(A[\eta]x=[A+\sum_{\alpha=1}^{q}\eta_{\alpha}A_{\alpha}]x\) which is the convolution of \(x\) with the kernel \(\varkappa_{\eta}\) restricted onto the time horizon \(\{1,...,n\}\), \(\gamma\) being the control parameter.
* \(L=1\) and \(\|\cdot\|=\|\cdot\|_{2}\),
* \(\mathcal{X}\) is the ellipsoid \(\{x:\sum_{i}i^{2}[Dx]_{i}^{2}\leq 1\}\), where \(D\) is the matrix of inverse Discrete Cosine Transform of size \(n\times n\).
* \(\xi\sim\mathcal{N}(0,\sigma^{2}I_{m})\), \(\sigma=10^{-4}\).
Figure 1: Distributions of \(\ell_{2}\)-recovery errors and upper bounds of the robust and "nominal" estimates for different values of the \(\gamma\) parameter.

In each cell of the plot we represent error distributions and upper risk bounds (horizontal bar) of four estimates (from left to right) for different uncertainty levels \(\gamma\): (1) robust estimate by Proposition 2.1 and upper bound \(\mathfrak{R}\) on its \(0.05\)-risk, (2) single-observation estimate \(w_{1}(\omega_{1})=H_{1}^{T}\omega_{1}\) yielded by the minimizer \(H_{1}\) of \(\widetilde{\mathfrak{R}}_{1}[H]\) over \(H\), see (15), and upper bound \(\widetilde{\mathfrak{R}}_{1}[H_{1}]\) on its _expected error risk_,3 (3) "nominal" estimate--estimate by Proposition 2.1 as applied to the "no uncertainty" case where all \(A_{\alpha}\) in (3) are set to \(0\) and upper bound \(\mathfrak{R}\) from (12) on its \(0.05\)-risk computed using the actual uncertainty level, (4) "nominal" estimate \(\widetilde{w}_{1}(\omega_{1})=\widetilde{H}_{1}^{T}\omega_{1}\) yielded by the minimizer \(\widetilde{H}_{1}\) of \(\mathfrak{R}_{1}[H]\) over \(H\) in the "no uncertainty" case and upper bound \(\mathfrak{R}_{1}[\widetilde{H}_{1}]\) on its "actual"--with uncertainty present--expected error risk.
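A sketch of how the ingredients of this toy experiment can be assembled (the kernel, the way \(\eta\) perturbs it, and the truncation convention are our own illustrative assumptions, not the exact configuration used above):

```python
import numpy as np
from scipy.fft import idct

n = m = 32
q = 9
kappa = np.exp(-0.5 * np.arange(q)); kappa /= kappa.sum()     # an illustrative kernel

def conv_matrix(kernel, n):
    """Matrix of the causal discrete convolution with `kernel`, truncated to n outputs."""
    M = np.zeros((n, n))
    for j in range(n):
        for i in range(len(kernel)):
            if j + i < n:
                M[j + i, j] = kernel[i]
    return M

A_nom = conv_matrix(kappa, n)                                  # nominal sensing matrix
A_pert = [conv_matrix(np.eye(q)[a], n) for a in range(q)]      # directions dA / d eta_alpha
D = idct(np.eye(n), axis=0, norm="ortho")                      # inverse DCT matrix
S = np.diag(np.arange(1, n + 1.0))                             # X = {x : ||S D x||_2 <= 1}
```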
### Design of presumably good polyhedral estimate
#### 2.2.1 Preliminaries on polyhedral estimates
Consider a slightly more general than (2), (3) observation scheme
\[\omega=Ax+\zeta \tag{19}\]
where \(A\in\mathbf{R}^{m\times n}\) is given, unknown signal \(x\) is known to belong to a given signal set \(\mathcal{X}\) given by (8), and \(\zeta\) is observation noise with probability distribution \(P_{x}\) which can depend on \(x\). For example, when observation \(\omega\) is given by (2), (3), we have
\[\zeta=\sum\nolimits_{\alpha=1}^{q}\eta_{\alpha}A_{\alpha}x+\xi \tag{20}\]
with zero mean sub-Gaussian \(\eta\) and \(\xi\).
When building polyhedral estimate in the situation in question, one, given tolerance \(\epsilon\in(0,1)\) and a positive integer \(M\), specifies a computationally tractable convex set \(\mathcal{H}\), the larger the better, of vectors \(h\in\mathbf{R}^{m}\) such that
\[\mathrm{Prob}_{\zeta\sim P_{x}}\{|h^{T}\zeta|>1\}\leq\epsilon/M\quad\forall x \in\mathcal{X}. \tag{21}\]
A polyhedral estimate \(\widehat{w}^{H}(\cdot)\) is specified by contrast matrix \(H\in\mathbf{R}^{M\times n}\) restricted to have all columns in \(\mathcal{H}\) according to
\[\omega\mapsto\widehat{x}^{H}(\omega)\in\underset{u\in\mathcal{X}}{\mathrm{ Argmin}}\left\{\|H^{T}[Au-\omega]\|_{\infty}\right\},\;\widehat{w}^{H}_{ \mathrm{poly}}(\omega)=B\widehat{x}^{H}(\omega). \tag{22}\]
It is easily seen (cf. [38, Proposition 5.1.1]) that the \(\epsilon\)-risk (7) of the above estimate is upper-bounded by the quantity
\[\mathfrak{p}[H]=\sup_{y}\left\{\|By\|:y\in 2\mathcal{X},\|H^{T}Ay\|_{\infty} \leq 2\right\}. \tag{23}\]
Indeed, let \(h_{1},...,h_{M}\) be the columns of \(H\). For \(x\in\mathcal{X}\) fixed, the inclusions \(h_{j}\in\mathcal{H}\) imply that the \(P_{x}\)-probability of the event \(Z_{x}=\{\zeta:|\zeta^{T}h_{j}|\leq 1\,\forall j\leq M\}\) is at least \(1-\epsilon\). When this event takes place, we have \(\|H^{T}[\omega-Ax]\|_{\infty}\leq 1\), which combines with \(x\in\mathcal{X}\) to imply that \(\|H^{T}[\omega-A\widehat{x}^{H}(\omega)]\|_{\infty}\leq 1\), so that \(\|H^{T}A[x-\widehat{x}^{H}(\omega)]\|_{\infty}\leq 2\), and besides this, \(x-\widehat{x}^{H}(\omega)\in 2\mathcal{X}\), whence \(\|Bx-\widehat{w}^{H}_{\mathrm{poly}}(\omega)\|\leq\mathfrak{p}[H]\) by definition of \(\mathfrak{p}[H]\). The bottom line is that whenever \(x\in\mathcal{X}\) and \(\zeta=\omega-Ax\in Z_{x}\), which happens with \(P_{x}\)-probability at least \(1-\epsilon\), we have \(\|Bx-\widehat{w}^{H}_{\mathrm{poly}}(\omega)\|\leq\mathfrak{p}[H]\), whence the \(\epsilon\)-risk of the estimate \(\widehat{w}^{H}_{\mathrm{poly}}\) indeed is upper-bounded by \(\mathfrak{p}[H]\).
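Since \(\mathcal{X}\) is convex, the optimization in (22) defining \(\widehat{x}^{H}(\omega)\) is itself a convex program. A minimal cvxpy sketch for the case where \(\mathcal{X}\) is the unit Euclidean ball (all data below are synthetic placeholders, and \(H\) is not an optimized contrast):

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, n, nu, M = 20, 15, 10, 30
A = rng.standard_normal((m, n))
B = rng.standard_normal((nu, n))
H = rng.standard_normal((m, M))                       # placeholder contrast matrix
x_true = rng.standard_normal(n); x_true /= np.linalg.norm(x_true)
omega = A @ x_true + 1e-3 * rng.standard_normal(m)

u = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(H.T @ (A @ u - omega), "inf")),
                  [cp.norm(u, 2) <= 1])               # X = unit Euclidean ball
prob.solve()
w_hat = B @ u.value                                   # polyhedral estimate of Bx
```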
To get a presumably good polyhedral estimate, one minimizes \(\mathfrak{p}[H]\) over \(M\times\nu\) matrices \(H\) with columns from \(\mathcal{H}\). Precise minimization is problematic, because \(\mathfrak{p}[\cdot]\), while being convex, is usually difficult to compute. Thus, the design routine proposed in [37] goes via minimizing an efficiently computable upper bound on \(\mathfrak{p}[H]\). It is shown in [38, Section 5.1.5] that when \(\mathcal{X}\) is ellipte
and \(\|u\|=\|Ru\|_{2}\), a reasonably tight upper bound on \(\mathfrak{p}[H]\) is given by the efficiently computable function
\[\mathfrak{p}_{+}[H]=2\min_{\lambda,\mu,\upsilon}\left\{\lambda+\phi_{\mathcal{T}}(\mu)+\sum\nolimits_{j}\upsilon_{j}:\;\mu\geq 0,\,\upsilon\geq 0,\;\left[\begin{array}{c|c}\lambda I_{\nu}&\frac{1}{2}RB\\ \hline\frac{1}{2}B^{T}R&A^{T}H\mathrm{Diag}\{\upsilon\}H^{T}A+\sum_{k}\mu_{k}T_{k}\end{array}\right]\succeq 0\right\}.\]
Synthesis of a presumably good polyhedral estimate reduces to minimizing the latter function in \(H\) under the restriction \(\text{Col}_{j}[H]\in\mathcal{H}\). Note that the latter problem still is nontrivial because \(\mathfrak{p}_{+}\) is nonconvex in \(H\).
Our objective here is to implement the outlined strategy in the case of observation \(\omega\) is given by (2), (3).
#### 2.2.2 Specifying \(\mathcal{H}\)
Our first goal is to specify, given tolerance \(\delta\in(0,1)\), a set \(\mathcal{H}_{\delta}\subset\mathbf{R}^{m}\), the larger the better, such that
\[h\in\mathcal{H}_{\delta},x\in\mathcal{X}\Rightarrow\text{Prob}_{\zeta\sim P _{x}}\{|h^{T}\zeta|>1\}\leq\delta. \tag{24}\]
Note that a "tight" sufficient condition for the validity of (24) is
\[\text{Prob}_{\xi}\{|h^{T}\xi|>1/2\} \leq\delta/2, \tag{25a}\] \[\text{Prob}_{\eta}\left\{\left|\sum\nolimits_{\alpha=1}^{q}[h^{T}A _{\alpha}x]\eta_{\alpha}\right|>1/2\right\} \leq\delta/2,\,\forall x\in\mathcal{X}. \tag{25b}\]
Note that under the sub-Gaussian assumption (5), \(h^{T}\xi\) is itself sub-Gaussian, \(h^{T}\xi\sim\mathcal{SG}(0,\sigma^{2}\|h\|_{2}^{2})\); thus, a tight sufficient condition for (25a) is
\[\|h\|_{2}\leq[\sigma\chi(\delta)]^{-1},\,\,\chi(\delta)=2\sqrt{2\ln(2/\delta)}. \tag{26}\]
Furthermore, by (6), r.v. \(\sum_{\alpha=1}^{q}[h^{T}A_{\alpha}x]\eta_{\alpha}=h^{T}[A_{1}x,...,A_{q}x]\eta\) is sub-Gaussian with parameters \(0\) and \(\|[h^{T}A_{1}x;...;h^{T}A_{q}x]\|_{2}^{2}\), implying the validity of (25b) for a given \(x\) whenever
\[\|[h^{T}A_{1}x;...;h^{T}A_{q}x]\|_{2}\leq\chi^{-1}(\delta).\]
We want this relation to hold true for every \(x\in\mathcal{X}\), that is, we want the operator norm \(\|\cdot\|_{\mathcal{X},2}\) of the mapping
\[x\mapsto\mathcal{A}[h]x,\,\,\mathcal{A}[h]=[h^{T}A_{1};h^{T}A_{2};...;h^{T}A_{ q}] \tag{27}\]
induced by the norm \(\|\cdot\|_{\mathcal{X}}\) on the argument and the norm \(\|\cdot\|_{2}\) on the image space to be upper-bounded by \(\chi(\delta)\):
\[\|\mathcal{A}[h]\|_{\chi,2}\leq\chi^{-1}(\delta). \tag{28}\]
Invoking [33, Theorem 3.1] (cf. also the derivation in the proof of Proposition 2.1 in Section A.2), a tight sufficient condition for the latter relation is
\[\text{Opt}[h]:=\min_{\lambda,\mu}\left\{\lambda+\phi_{\mathcal{T}}(\mu):\, \mu\geq 0,\,\left[\begin{array}{c|c}\lambda I_{q}&\frac{1}{2}\mathcal{A}[h]\\ \hline\frac{1}{2}\mathcal{A}^{T}[h]&\sum_{k}\mu_{k}T_{k}\end{array}\right] \succ 0\right\}\leq\chi^{-1}(\delta), \tag{29}\]
tightness meaning that \(\text{Opt}[h]\) is within factor \(O(1)\sqrt{\ln(K+1)}\) of \(\|\mathcal{A}[h]\|_{\mathcal{X},2}\).
The bottom line is that with \(\mathcal{H}_{\delta}\) specified by constraints (26) and (28) (or by the latter replaced with its tight relaxation (29)) we do ensure (24).
#### 2.2.3 Bounding the risk of the polyhedral estimate \(\widehat{w}^{H}\)
**Proposition 2.3**: _In the situation of this section, let \(\epsilon\in(0,1)\), and let \(H=[H_{1},...,H_{L}]\) be an \(m\times ML\) matrix with \(L\) blocks \(H_{\ell}\in\mathbf{R}^{m\times M}\) such that \(\operatorname{Col}_{j}[H]\in\mathcal{H}_{\delta}\) for all \(j\leq ML\) and \(\delta=\epsilon/ML\). Consider the optimization problem_
\[\mathfrak{p}_{+}[H]=2\min_{\lambda_{\ell},\mu^{\ell},\upsilon^{ \ell},\varrho}\left\{\rho:\,\mu^{\ell}\geq 0,\upsilon^{\ell}\geq 0,\,\lambda_{ \ell}+\phi_{\mathcal{T}}(\mu^{\ell})+\sum\nolimits_{j=1}^{M}\upsilon_{j}^{ \ell}\leq\rho,\,\ell\leq L\right.\] \[\left.\left[\begin{array}{c|c}\lambda_{\ell}I_{\nu}&\frac{1}{2} R_{\ell}^{1/2}B\\ \hline\frac{1}{2}B^{T}R_{\ell}^{1/2}&A^{T}H_{\ell}\text{Diag}\{\upsilon^{\ell} \}H_{\ell}^{T}A+\sum\nolimits_{k}\mu_{k}^{\ell}T_{k}\end{array}\right]\succeq 0, \,\ell\leq L\right\}. \tag{30}\]
_Then_
\[\operatorname{Risk}_{\epsilon}[\widehat{w}^{H}|\mathcal{X}]\leq\mathfrak{p}_ {+}[H].\]
#### 2.2.4 Optimizing \(\mathfrak{p}_{+}[H]\)--the strategy
Proposition 2.3 resolves the analysis problem--it allows to efficiently upper-bound the \(\epsilon\)-risk of a given polyhedral estimate \(\widehat{w}^{H}_{\text{poly}}\). At the same time, "as is," it does not allow to build the estimate itself (solve the "estimate synthesis" problem--compute a presumably good contrast matrix) because straightforward minimization of \(\mathfrak{p}_{+}[H]\) (that is, adding \(H\) to the decision variables of the right hand side of (30)) results in a nonconvex problem. A remedy, as proposed in [38, Section 5.1], stems from the concept of a cone compatible with a convex compact set \(\mathcal{H}\subset\mathbf{R}^{m}\) which is defined as follows:
Given positive integer \(J\) and real \(\varkappa\geq 1\) we say that a closed convex cone \(\mathbf{K}\subset\mathbf{S}_{+}^{m}\times\mathbf{R}_{+}\) is \((J,\varkappa)\)-compatible with \(\mathcal{H}\) if
1. whenever \(h_{1},...,h_{J}\in\mathcal{H}\) and \(\upsilon\in\mathbf{R}_{+}^{J}\), the pair \(\left(\sum\nolimits_{j=1}^{J}\upsilon_{j}h_{j}h_{j}^{T},\sum\nolimits_{j} \upsilon_{j}\right)\) belongs to \(\mathbf{K}\), and "nearly vice versa":
2. given \((\Theta,\varrho)\in\mathbf{K}\) and \(\varkappa\geq 1\), we can efficiently build collections of vectors \(h_{j}\in\mathcal{H}\), and reals \(\upsilon_{j}\geq 0\), \(j\leq J\), such that \(\Theta=\sum\nolimits_{j=1}^{J}\upsilon_{j}h_{j}h_{j}^{T}\) and \(\sum\nolimits_{j}\upsilon_{j}\leq\varkappa\varrho\).
**Example.** Let \(\mathcal{H}\) be the centered at the origin Euclidean ball of radius \(R>0\) in \(\mathbf{R}^{m}\). When setting
\[\mathbf{K}=\{(\Theta,\varrho):\,\Theta\succeq 0,\,\operatorname{Tr}(\Theta)\leq R^{2}\varrho\},\]
we obtain a cone \((M,1)\)-compatible with \(\mathcal{H}\). Indeed, for \(h_{j}\in\mathcal{H}\) and \(\upsilon_{j}\geq 0\) we have
\[\operatorname{Tr}\left(\sum\nolimits_{j}\upsilon_{j}h_{j}h_{j}^{T}\right)\leq R ^{2}\sum\nolimits_{j}\upsilon_{j},\]
that is \(\left(\Theta:=\sum\nolimits_{j}\upsilon_{j}h_{j}h_{j}^{T},\varrho:=\sum\nolimits_{j}\upsilon_{j}\right)\in\mathbf{K}\). Vice versa, given \((\Theta,\varrho)\in\mathbf{K}\), i.e., \(\Theta\succeq 0\) and \(\varrho\geq\operatorname{Tr}(\Theta)/R^{2}\), specifying \(f_{1},...,f_{m}\) as the orthonormal system of eigenvectors of \(\Theta\) and \(\lambda_{j}\) as the corresponding eigenvalues, and setting \(h_{j}=Rf_{j}\), \(\upsilon_{j}=R^{-2}\lambda_{j}\), we get \(h_{j}\in\mathcal{H}\), \(\Theta=\sum\nolimits_{j}\upsilon_{j}h_{j}h_{j}^{T}\), and \(\sum\nolimits_{j}\upsilon_{j}=\operatorname{Tr}(\Theta)/R^{2}\leq\varrho\).
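The eigenvalue decomposition just described is immediate to implement; a small numpy sketch (the radius and test data are illustrative):

```python
import numpy as np

def decompose(Theta, rho, R):
    """Given (Theta, rho) in K (Theta PSD, Tr(Theta) <= R^2 * rho), return columns h_j
    in the radius-R ball and weights v_j >= 0 with Theta = sum_j v_j h_j h_j^T."""
    lam, F = np.linalg.eigh(Theta)                    # Theta = F diag(lam) F^T
    lam = np.clip(lam, 0.0, None)
    return R * F, lam / R**2                          # h_j = R f_j,  v_j = lam_j / R^2

rng = np.random.default_rng(0)
m, R = 6, 2.0
G = rng.standard_normal((m, m)); Theta = G @ G.T
rho = np.trace(Theta) / R**2                          # smallest admissible rho
Hcols, v = decompose(Theta, rho, R)
assert np.allclose(Hcols @ np.diag(v) @ Hcols.T, Theta) and v.sum() <= rho + 1e-9
```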
Coming back to the problem of minimizing \(\mathfrak{p}_{+}[H]\) in \(H\), assume that we have at our disposal a cone \(\mathbf{K}\) which is \((M,\varkappa)\)-compatible with \(\mathcal{H}_{\delta}\). In this situation, we can replace the nonconvex problem
\[\min_{H=[H^{1},...,H^{L}]}\{\mathfrak{p}_{+}[H]:\,\operatorname{Col}_{j}[H^{\ell}]\in\mathcal{H}_{\delta},\,j\leq M,\,\ell\leq L\} \tag{31}\]
with the problem
\[\min_{\begin{subarray}{c}\bar{\lambda}_{\ell},\bar{\rho},\\ \Theta_{\ell},\varrho_{\ell},\bar{\varphi}\end{subarray}}\left\{\bar{\rho}:\,( \Theta_{\ell},\varrho_{\ell})\in\mathbf{K},\bar{\mu}^{\ell}\geq 0,\,\bar{ \lambda}_{\ell}+\phi_{\mathcal{T}}(\bar{\mu}^{\ell})+\varrho_{\ell}\leq\bar{ \rho},\,\ell\leq L,\right.\] \[\left.\left[\begin{array}{c|c}\bar{\lambda}_{\ell}I_{\nu}&\frac{ 1}{2}R_{\ell}^{1/2}B\\ \hline\frac{1}{2}B^{T}R_{\ell}^{1/2}&A^{T}\Theta_{\ell}A+\sum_{k}\!\bar{\mu}_{k }^{\ell}T_{k}\end{array}\right]\succeq 0,\ell\leq L\right\}. \tag{32}\]
Unlike (31), the latter problem is convex and efficiently solvable provided that \(\mathbf{K}\) is computationally tractable, and can be considered as "tractable \(\sqrt{\varkappa}\)-tight" relaxation of the problem of interest (31). Namely,
* Given a feasible solution \(H_{\ell},\lambda_{\ell},\mu^{\ell},v^{\ell},\rho\) to the problem of interest (31), we can set \[\Theta_{\ell}=\sum\nolimits_{j=1}^{M}\!v_{j}^{\ell}\mathrm{Col}_{j}[H_{\ell}] \mathrm{Col}_{j}^{T}[H_{\ell}],\,\,\,\varrho_{\ell}=\sum\nolimits_{j}\!v_{j} ^{\ell},\] thus getting \((\Theta_{\ell},\varrho_{\ell})\in\mathbf{K}\). By (i) in the definition of compatibility, \(\Theta_{\ell},\varrho_{\ell},\bar{\lambda}_{\ell}=\lambda_{\ell},\bar{\mu}^{ \ell}=\mu^{\ell},\bar{\rho}=\rho\) is a feasible solution to (32), and this transformation preserves the value of the objective
* Vice versa, given a feasible solution \(\Theta_{\ell},\varrho_{\ell},\bar{\lambda}_{\ell},\bar{\mu}^{\ell},\bar{\rho}\) to (32) and invoking (ii) of the definition of compatibility, we can convert, in a computationally efficient way, the pairs \((\Theta_{\ell},\rho_{\ell})\in\mathbf{K}\) into the pairs \(H_{\ell}\in\mathbf{R}^{m\times M}\), \(\bar{v}^{\ell}\in\mathbf{R}_{+}^{m}\) in such a way that the columns of \(H_{\ell}\) belong to \(\mathcal{H}_{\delta}\), \(\Theta_{\ell}=H_{\ell}\mathrm{Diag}\{\bar{v}^{\ell}\}H_{\ell}^{T}\), \(\sum_{j}\!\bar{v}_{j}^{\ell}\leq\varkappa_{\varrho_{\ell}}\). Assuming w.l.o.g. that all matrices \(R_{\ell}^{1/2}B\) are nonzero, we obtain \(\phi_{\mathcal{T}}(\bar{\mu}^{\ell})+\varrho_{\ell}>0\) and \(\bar{\lambda}_{\ell}>0\) for all \(\ell\). We claim that setting \[\gamma_{\ell}=\sqrt{[\phi_{\mathcal{T}}(\bar{\mu}^{\ell})+\varkappa_{\varrho _{\ell}}]/\bar{\lambda}_{\ell}},\,\lambda_{\ell}=\gamma_{\ell}\bar{\lambda}_ {\ell},\,\,\mu_{\ell}=\gamma_{\ell}^{-1}\bar{\mu}_{\ell},v^{\ell}=\gamma_{ \ell}^{-1}\bar{v}^{\ell},\,\,\rho=\sqrt{\varkappa_{\ell}}\bar{\rho}\] we get a feasible solution to (31). Indeed, all we need is to verify that this solution satisfies, for every \(\ell\leq L\), constraints of (30). To check the semidefinite constraint, note that \[\left[\begin{array}{c|c}\lambda_{\ell}I_{\nu}&\frac{1}{2}R_{\ell}^{1/2}B\\ \hline\frac{1}{2}B^{T}R_{\ell}^{1/2}&A^{T}H_{\ell}\mathrm{Diag}\{v^{\ell}\}H_ {\ell}^{T}A+\sum_{k}\!\mu_{k}^{\ell}T_{k}\end{array}\right]=\left[\begin{array} []{c|c}\gamma_{\ell}\bar{\lambda}_{\ell}I_{\nu}&\frac{1}{2}R_{\ell}^{1/2}B\\ \hline\frac{1}{2}B^{T}R_{\ell}^{1/2}&\gamma_{\ell}^{-1}\left[A^{T}H_{\ell} \mathrm{Diag}\{\bar{v}^{\ell}\}H_{\ell}^{T}A+\sum_{k}\!\bar{\mu}_{k}^{\ell}T_{ k}\right]\end{array}\right]\] and the matrix in the right-hand side is \(\succeq 0\) by the semidefinite constraint of (32) combined with \(\Theta_{\ell}=\sum_{j}\!\bar{v}_{j}^{\ell}\mathrm{Col}_{j}[H_{\ell}]\mathrm{ Col}_{j}^{T}[H_{\ell}]\). Furthermore, note that by construction \(\sum_{j}\!\bar{v}_{j}^{\ell}\leq\varkappa_{\varrho_{\ell}}\), whence \[\lambda_{\ell}+\phi_{\mathcal{T}}(\mu^{\ell})+\sum\nolimits_{j} \!v_{j}^{\ell} =\gamma_{\ell}\bar{\lambda}_{\ell}+\gamma_{\ell}^{-1}[\phi_{ \mathcal{T}}(\bar{\mu}^{\ell})+\varkappa_{\varrho_{\ell}}]=2\sqrt{\bar{ \lambda}_{\ell}[\phi_{\mathcal{T}}(\bar{\mu}^{\ell})+\varkappa_{\varrho_{\ell}}]}\] \[\leq 2\sqrt{\varkappa}\sqrt{\bar{\lambda}_{\ell}[\phi_{\mathcal{T}} (\bar{\mu}^{\ell})+\varrho_{\ell}]}\leq\sqrt{\varkappa}\left[\bar{\lambda}_{ \ell}+\phi_{\mathcal{T}}(\bar{\mu}^{\ell})+\varrho_{\ell}\right]\leq\sqrt{ \varkappa}\bar{\rho}=\rho\] (we have taken into account that \(\varkappa\geq 1\)).
We conclude that the (efficiently computable) optimal solution to the relaxed problem (32) can be efficiently converted to a feasible solution to problem (31) which is within the factor at most \(\sqrt{\varkappa}\) from optimality in terms of the objective. Thus,
* Given a \(\varkappa\)-compatible with \(\mathcal{H}_{\delta}\) cone \(\mathbf{K}\), we can find, in a computationally efficient fashion, a feasible solution to the problem of interest (31) with the value of the objective by at most the factor \(\sqrt{\varkappa}\) greater than the optimal value of the problem.
What we propose is to build a presumably good polyhedral estimate by applying the just outlined strategy to the instance of (31) associated with \({\cal H}={\cal H}_{\delta}\) given by (26) and (29). The still missing--and crucial--element in this strategy is a computationally tractable cone \({\bf K}\) which is \((M,\varkappa)\)-compatible, for some "moderate" \(\varkappa\), with our \({\cal H}_{\delta}\). For the time being, we have at our disposal such a cone only for the "no uncertainty in sensing matrix" case (that is, in the case where all \(A_{\alpha}\) are zero matrices), and it is shown in [38, Chapter 5] that in this case the polyhedral estimate stemming from the just outlined strategy is near minimax-optimal, provided that \(\xi\sim{\cal N}(0,\sigma^{2}I_{m})\).
When "tight compatibility"--with \(\varkappa\) logarithmic in the dimension of \({\cal H}\)--is sought, the task of building a cone \((M,\varkappa)\)-compatible with a given convex compact set \({\cal H}\) turns out to be highly nontrivial. To the best of our knowledge, for the time being, the widest family of sets \({\cal H}\) for which tight compatibility has been achieved is the family of ellitopes [39]. Unfortunately, this family seems to be too narrow to capture the sets \({\cal H}_{\delta}\) we are interested in now. At present, the only "tractable case" known to us here is the ball case \(K=1\), and even handling this case requires extending the compatibility results of [39] from ellitopes to spectratopes.
#### 2.2.5 Estimate synthesis utilizing cones compatible with spectratopes
For \(S^{ij}\in{\bf S}^{d_{i}}\), \(1\leq i\leq I\), \(1\leq j\leq N\), and \(g\in{\bf R}^{N}\), let \(S_{i}[g]=\sum_{j=1}^{N}g_{j}S^{ij}\). A _basic spectratope in \({\bf R}^{N}\)_ is a set \({\cal H}\subset{\bf R}^{N}\) represented as
\[{\cal H}=\{g\in{\bf R}^{N}:\exists r\in{\cal R}:S_{i}^{2}[g]\preceq r_{i}I_{d_{ i}},i\leq I\}; \tag{33}\]
here \({\cal R}\) is a compact convex monotone subset of \({\bf R}^{I}_{+}\) with nonempty interior, and \(\sum_{i}S_{i}^{2}[g]\succ 0\) for all \(g\neq 0\). We refer to \(d=\sum_{i}d_{i}\) as the _spectratopic dimension_ of \({\cal H}\). A spectratope, by definition, is a linear image of a basic spectratope.
As shown in [38], where the notion of a spectratope was introduced, spectratopes are convex compact sets symmetric w.r.t. the origin, and basic spectratopes have nonempty interiors. The family of spectratopes is rather rich--finite intersections, direct products, linear images, and arithmetic sums of spectratopes, as well as inverse images of spectratopes under linear embeddings, are spectratopes, with spectratopic representations of the results readily given by spectratopic representations of the operands.
Every ellitope is a spectratope. An example of a spectratope which is important to us is the set \({\cal H}_{\delta}\) given by (26) and (28) in the "ball case" where \({\cal X}\) is an ellipsoid (case of \(K=1\)). In this case, by one-to-one linear parameterization of signals \(x\), accompanied by the corresponding updates in \(A,A_{\alpha}\), and \(B\), we can assume that \(T_{1}=I_{n}\) in (8), so that \({\cal X}\) is the unit Euclidean ball,
\[{\cal X}=\{x\in{\bf R}^{n}:x^{T}x\leq 1\}.\]
In this situation, denoting by \(\|\cdot\|_{2,2}\) the spectral norm of a matrix, constraints (26) and (28) specify the set
\[\begin{array}{rcl}{\cal H}_{\delta}&=&\left\{h\in{\bf R}^{m}:\|h\|_{2}\leq( \sigma\chi(\delta))^{-1},\|{\cal A}[h]\|_{2,2}\leq\chi^{-1}(\delta)\right\}\\ &=&\left\{h\in{\bf R}^{m}:\;\exists r\in{\cal R}:S_{j}^{2}[h]\preceq r_{j}I_{d _{j}},j\leq 2\right\}\end{array} \tag{34}\]
where \({\cal R}=\{[r_{1};r_{2}]:0\leq r_{1},r_{2}\leq 1\}\),
\[S_{1}[h]=\sigma\chi(\delta)\left[\begin{array}{c|c}&h\\ \hline h^{T}&\end{array}\right]\in{\bf S}^{m+1},\;\;S_{2}[h]=\chi(\delta)\left[\begin{array}{c|c}&{\cal A}[h]\\ \hline{\cal A}^{T}[h]&\end{array}\right]\in{\bf S}^{n+q}\]
with \(d_{1}=m+1\), \(d_{2}=n+q\) (blank blocks are zero). We see that in the ball case \({\cal H}_{\delta}\) is a basic spectratope.
We associate with a spectratope \({\cal H}\), as defined in (33), linear mappings
\[{\cal S}_{i}[G]={\sum}_{p,q}G_{pq}S^{ip}S^{iq}:{\bf S}^{N}\to{\bf S}^{d_{i}}.\]
Note that
\[{\cal S}_{i}\left[{\sum}_{j}g_{j}g_{j}^{T}\right]={\sum}_{j}S_{i}^{2}[g_{j}], \ g_{j}\in{\bf R}^{N},\]
and
\[G\preceq G^{\prime}\Rightarrow\,{\cal S}_{i}[G]\preceq{\cal S}_{i}[G^{\prime}], \tag{35a}\] \[\{G\succeq 0\ \&\ {\cal S}_{i}[G]=0\,\forall i\}\Rightarrow\,G=0. \tag{35b}\]
A cone "tightly compatible" with a basic spectratope is given by the following
**Proposition 2.4**: _Let \({\cal H}\subset{\bf R}^{N}\) be a basic spectratope_
\[{\cal H}=\{g\in{\bf R}^{N}:\,\exists r\in{\cal R}:\,S_{i}^{2}[g]\preceq r_{i} I_{d_{i}},\,i\leq I\}\]
_with "spectratopic data" \({\cal R}\) and \(S_{i}[\cdot]\), \(i\leq I\), satisfying the requirements in the above definition._
_Let us specify the closed convex cone \({\bf K}\subset{\bf S}^{N}_{+}\times{\bf R}_{+}\) as_
\[{\bf K}=\big{\{}(\Sigma,\rho)\in{\bf S}^{N}_{+}\times{\bf R}_{+}:\,\exists r \in{\cal R}:\,{\cal S}_{i}[\Sigma]\preceq\rho r_{i}I_{d_{i}},i\leq I\big{\}}.\]
_Then_
(i) _whenever_ \(\Sigma=\sum_{j}\lambda_{j}g_{j}g_{j}^{T}\) _with_ \(\lambda_{j}\geq 0\) _and_ \(g_{j}\in{\cal H}\ \forall j\)_, we have_
\[\Big{(}\Sigma,\,{\sum}_{j}\lambda_{j}\Big{)}\in{\bf K},\]
(ii) _and "nearly" vice versa: when_ \((\Sigma,\rho)\in{\bf K}\)_, there exist (and can be found efficiently by a randomized algorithm)_ \(\lambda_{j}\geq 0\) _and_ \(g_{j}\)_,_ \(j\leq N\)_, such that_
\[\Sigma={\sum}_{j}\lambda_{j}g_{j}g_{j}^{T}\ \ {\rm with}\ \ {\sum}_{j}\lambda_{j}\leq \varkappa\rho\ \ {\rm and}\ \ g_{j}\in{\cal H},\ j\leq N.\]
_where_
\[\varkappa=4\ln(4DN),\,D={\sum}_{i}d_{i}.\]
For the proof and for the sketch of the randomized algorithm mentioned in (ii), see Section B.2 of the appendix.
#### 2.2.6 Implementing the strategy
We may now summarize our approach to the design of a presumably good polyhedral estimate. By reasons outlined at the end of Section 2.2.4, the only case where the components we have developed so far admit "smooth assembling" is the one where \({\cal X}\) is ellipsoid which in our context w.l.o.g. can be assumed to be the unit Euclidean ball. Thus, in the rest of this Section it is assumed that \({\cal X}\) is the unit Euclidean ball in \({\bf R}^{n}\). Under this assumption the recipe, suggested by the preceding
analysis, for designing a presumably good polyhedral estimate is as follows. Given \(\epsilon\in(0,1)\), we
\(\bullet\) set \(\delta=\epsilon/Lm\) and solve the convex optimization problem
\[\begin{array}{rl}\mathrm{Opt}=\min\limits_{\Theta_{\ell}\in\mathbf{S}^{m},\,\varrho_{\ell},\,\bar{\lambda}_{\ell},\,\bar{\mu}_{\ell},\,\bar{\rho}}&\!\!\Big\{\bar{\rho}:\;\bar{\mu}_{\ell}\geq 0,\;\Theta_{\ell}\succeq 0,\;\sigma^{2}\chi^{2}(\delta)\mathrm{Tr}(\Theta_{\ell})\leq\varrho_{\ell},\;\bar{\lambda}_{\ell}+\bar{\mu}_{\ell}+\varrho_{\ell}\leq\bar{\rho},\;\ell\leq L,\\ &\left[\begin{array}{c|c}\big[\mathrm{Tr}(A_{\alpha}^{T}\Theta_{\ell}A_{\beta})\big]_{\alpha,\beta=1}^{q}&\\ \hline&\sum\limits_{\alpha,\beta}A_{\alpha}^{T}\Theta_{\ell}A_{\beta}\end{array}\right]\preceq\chi^{-2}(\delta)\varrho_{\ell}I_{q+n},\;\ell\leq L,\\ &\left[\begin{array}{c|c}\bar{\lambda}_{\ell}I_{\nu}&\frac{1}{2}R_{\ell}^{1/2}B\\ \hline\frac{1}{2}B^{T}R_{\ell}^{1/2}&A^{T}\Theta_{\ell}A+\bar{\mu}_{\ell}I_{n}\end{array}\right]\succeq 0,\;\ell\leq L\Big\}\end{array} \tag{36}\]
--this is what under the circumstances becomes problem (32) with the cone \(\mathbf{K}\) given by Proposition 2.4 as applied to the spectratope \(\mathcal{H}_{\delta}\) given by (34). Note that by Proposition 2.4, \(\mathbf{K}\) is \(\varkappa\)-compatible with \(\mathcal{H}_{\delta}\), with
\[\varkappa=4\ln(4m(m+n+q+1)). \tag{37}\]
For instance, in the case of rank 1 matrices \(A_{\alpha}=f_{\alpha}g_{\alpha}^{T}\) and \(\|\cdot\|=\|\cdot\|_{2}\) (36) becomes
\[\begin{array}{rl}\mathrm{Opt}=\min\limits_{\Theta\in\mathbf{S}^{m},\,\varrho,\,\bar{\lambda},\,\bar{\mu},\,\bar{\rho}}&\!\!\Big\{\bar{\rho}:\;\bar{\mu}\geq 0,\;\Theta\succeq 0,\;\sigma^{2}\chi^{2}(\delta)\mathrm{Tr}(\Theta)\leq\varrho,\;\bar{\lambda}+\bar{\mu}+\varrho\leq\bar{\rho},\\ &\left[\begin{array}{c|c}\big[(f_{\alpha}^{T}\Theta f_{\beta})\,g_{\alpha}^{T}g_{\beta}\big]_{\alpha,\beta=1}^{q}&\\ \hline&\sum\limits_{\alpha,\beta=1}^{q}(f_{\alpha}^{T}\Theta f_{\beta})\,g_{\alpha}g_{\beta}^{T}\end{array}\right]\preceq\chi^{-2}(\delta)\varrho\,I_{q+n},\\ &\left[\begin{array}{c|c}\bar{\lambda}I_{\nu}&\frac{1}{2}B\\ \hline\frac{1}{2}B^{T}&A^{T}\Theta A+\bar{\mu}I_{n}\end{array}\right]\succeq 0\Big\};\end{array} \tag{38}\]
\(\bullet\) use the randomized algorithm described in the proof of Proposition 2.4 to convert the \(\Theta_{\ell}\)-components of the optimal solution to (36) into a contrast matrix. Specifically,
1. for \(\ell=1,2,...,L\) we generate matrices \(G_{\ell}^{k}=\Theta_{\ell}^{1/2}\mathrm{Diag}\{\varsigma^{k}\}O\), \(k=1,...,K\), where \(O\) is the orthonormal matrix of the \(m\times m\) Discrete Cosine Transform, and \(\varsigma^{k}\) are i.i.d. realizations of an \(m\)-dimensional Rademacher random vector;
2. for every \(k\leq K\), we compute the maximum \(\theta(G_{\ell}^{k})\) of values of the Minkowski function of \(\mathcal{H}_{\delta}\) as evaluated at the columns of \(G_{\ell}^{k}\), with \(\mathcal{H}_{\delta}\) given by (26), (28), and select among \(G_{\ell}^{k}\) matrix \(G_{\ell}\) with the smallest value of \(\theta(G_{\ell}^{k})\). Then the \(\ell\)-th block of the contrast matrix we are generating is \(H_{\ell}=G_{\ell}\theta^{-1}(G_{\ell})\).
With reliability \(1-2^{-K}L\) the resulting contrast matrix \(H\) (which definitely has all columns in \(\mathcal{H}_{\delta}\)) is, by (!), near-optimal, within factor \(\sqrt{\varkappa}\) in terms of the objective, solution to (31), and the \(\epsilon\)-risk of the associated polyhedral estimate is upper-bounded by \(2\sqrt{\varkappa}\mathrm{Opt}\) with \(\mathrm{Opt}\) given by (36).
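A numpy sketch of this randomized rounding for a single block \(\Theta_{\ell}\) in the ball case (it assumes \(\Theta_{\ell}\), the matrices \(A_{\alpha}\), \(\sigma\) and \(\chi(\delta)\) are given; the number of random draws is illustrative):

```python
import numpy as np
from scipy.fft import dct

def minkowski_H_delta(h, A_list, sigma, chi):
    """Minkowski function of H_delta from (26), (28) in the ball case."""
    cal_A = np.stack([h @ A_a for A_a in A_list])           # q x n matrix [h^T A_1; ...]
    return max(sigma * chi * np.linalg.norm(h),
               chi * np.linalg.norm(cal_A, 2))               # spectral norm of A[h]

def draw_contrast_block(Theta, A_list, sigma, chi, n_draws=10, rng=None):
    """Randomized rounding of one Theta_ell into a contrast block with columns in H_delta."""
    rng = np.random.default_rng() if rng is None else rng
    m = Theta.shape[0]
    w, V = np.linalg.eigh(Theta)
    root = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T   # Theta^{1/2}
    O = dct(np.eye(m), axis=0, norm="ortho")                 # orthonormal DCT matrix
    best, best_theta = None, np.inf
    for _ in range(n_draws):
        varsigma = rng.choice([-1.0, 1.0], size=m)           # Rademacher vector
        G = root @ np.diag(varsigma) @ O
        theta = max(minkowski_H_delta(G[:, j], A_list, sigma, chi) for j in range(m))
        if theta < best_theta:
            best, best_theta = G, theta
    return best / best_theta                                 # columns now belong to H_delta
```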
In Figure 2 we present error distributions and upper risk bounds (horizontal bar) of linear and polyhedral estimates in the numerical experiment with the model described in Section 2.1.3. In the plot cells, from left to right: (1) robust linear estimate by Proposition 2.1 and upper bound \(\mathfrak{R}\) on its \(0.05\)-risk, (2) robust linear estimate \(w_{1}(\omega_{1})\) yielded by Proposition 2.2 and upper bound \(\widetilde{\mathfrak{R}}_{1}\) on its expected error risk, (3) robust polyhedral estimate by Proposition 2.4 and upper bound on its \(0.05\)-risk.
#### 2.2.7 A modification
So far, our considerations related to polyhedral estimates were restricted to the case of sub-Gaussian \(\eta\) and \(\xi\). Similarly to what was done in Section 2.1.3, we are about to show that passing from observation (2) to its \(K\)-repeated, with "moderate" \(K\), version (cf. (13))
\[\omega^{K}=\{\omega_{k}=A[\eta_{k}]x+\xi_{k},\,\,\,k=1,...,K\}\]
with pairs \((\eta_{k},\xi_{k})\) independent across \(k\), we can relax the sub-Gaussianity assumption replacing it with moment condition (14). Specifically, let us set
\[\mathcal{H}=\left\{h\in\mathbf{R}^{m}:\sigma\|h\|_{2}\leq\tfrac{1}{8},\|\mathcal{A}[h]\|_{\mathcal{X},2}\leq\tfrac{1}{8}\right\},\quad\mathcal{A}[h]=[h^{T}A_{1};...;h^{T}A_{q}]\]
(cf. (26) and (28)).
Given an \(m\times M\) contrast matrix \(H\) with columns \(h_{j}\in\mathcal{H}\) and observation (13), we build the polyhedral estimate as follows.4
Footnote 4: Readers acquainted with the literature on robust estimation will immediately recognize that the proposed construction is nothing but a reformulation of the celebrated βmedian-of-meansβ estimate of [59] (see also [49, 29, 54, 48]) for our purposes.
1. For \(j=1,...,M\) we compute empirical medians \(y_{j}\) of the data \(h_{j}^{T}\omega_{k}\), \(k=1,...,K\), \[y_{j}=\text{median}\{h_{j}^{T}\omega_{k},\,1\leq k\leq K\}.\]
2. We specify \(\widehat{x}^{H}(\omega^{K})\) as a point from \(\text{Argmin}_{u\in\mathcal{X}}\,\|y-H^{T}Au\|_{\infty}\) and use, as the estimate of \(Bx\), the vector \(\widehat{w}_{\text{poly}}^{H}(\omega^{K})=B\widehat{x}^{H}(\omega^{K})\).
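A compact sketch of this estimate, assuming \(\mathcal{X}\) is the unit Euclidean ball and using cvxpy for the final convex program (all inputs are placeholders the caller must supply):

```python
import numpy as np
import cvxpy as cp

def mom_polyhedral_estimate(omegas, H, A, B):
    """omegas: K x m array of repeated observations, H: m x M contrast matrix;
    X is assumed to be the unit Euclidean ball."""
    y = np.median(omegas @ H, axis=0)                 # y_j = median_k h_j^T omega_k
    u = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm(y - H.T @ (A @ u), "inf")),
               [cp.norm(u, 2) <= 1]).solve()
    return B @ u.value
```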
Figure 2: Distributions of \(\ell_{2}\)-recovery errors and upper bounds of the robust linear and robust polyhedral estimates for different values of \(\gamma\) parameter.
**Lemma 2.1**: _In the situation of this section, let \(\xi_{k}\) and \(\eta_{k}\) satisfy moment constraint of (14), and let \(K\geq\overline{\kappa}=2.5\ln[M/\epsilon]\). Then estimate \(\widehat{w}^{H}_{\rm poly}(\omega^{K})\) satisfies_
\[{\rm Risk}_{\epsilon}[\widehat{w}^{H}_{\rm poly}(\omega^{K})|\mathcal{X}]\leq \mathfrak{p}[H]\]
_(cf. (23))._
As an immediate consequence of the result of Lemma 2.1, the constructions and results of Sections 2.2.3-2.2.6 apply, with \(\chi(\delta)=8\) and \(\mathcal{H}\) in the role of \(\mathcal{H}_{\delta}\), to our present situation in which the sub-Gaussianity of \(\xi,\eta\) is relaxed to the second moment condition (14) and instead of single observation \(\omega\), we have access to a "short"--with \(K\) logarithmic in \(M/\epsilon\)--sample of \(K\) independent realizations of \(\omega\).
## 3 Uncertain-but-bounded perturbations
In this section we assume that perturbation vector \(\eta\) in (2) is deterministic and runs through a given uncertainty set \(\mathcal{U}\), so that (2) becomes
\[\omega=A[\eta]x+\xi,\ A[\eta]=A+D[\eta], \tag{39}\]
where \(D[\eta]\) is (homogeneous) linear matrix-valued function of perturbation \(\eta\) running through \(\mathcal{U}\). As about observation noise \(\xi\), we still assume that its distribution \(P_{x}\) (which may depend on \(x\)) satisfies (5), i.e., is sub-Gaussian with zero mean and sub-Gaussian matrix parameter \(\sigma^{2}I_{m}\) for every \(x\in\mathcal{X}\).
In our present situation it is natural to redefine the notion of the \(\epsilon\)-risk of an estimate \(\omega\mapsto\widehat{x}(\omega)\): here we consider uniform over \(x\in\mathcal{X}\) and \(\eta\in\mathcal{U}\)\(\epsilon\)-risk
\[{\rm Risk}_{\epsilon}[\widehat{w}|\mathcal{X}]=\sup_{x\in\mathcal{X},\eta\in \mathcal{U}}\inf\Big{\{}\rho:{\rm Prob}_{\xi\sim P_{x}}\{\|\widehat{w}(A[\eta ]x+\xi)-Bx\|>\rho\}\leq\epsilon\Big{\}}.\]
Besides this, we, as before, assume that
\[\|y\|=\max_{\ell\leq L}\sqrt{y^{T}R_{\ell}y} [R_{\ell}\succeq 0,\sum_{\ell}R_{\ell}\succ 0]\]
### Design of a presumably good linear estimate
Observe that the error of the linear estimate \(\widehat{w}^{H}(\omega)=H^{T}\omega\) satisfies
\[\|\widehat{w}(A[\eta]x+\xi)-Bx\|\leq\|H^{T}\xi\|+\max_{x\in\mathcal{X},\eta\in \mathcal{U}}\big{\|}H^{T}D[\eta]x\big{\|}+\max_{x\in\mathcal{X}}\|[B-H^{T}A]x\| \tag{40}\]
Similarly to what was done in Section 2.1, design of a presumably good linear estimate \(\widehat{x}_{H}(\omega)\) consists in minimizing over \(H\) the sum of tight efficiently computable upper bounds on the terms in the right-hand side of (40). Recall that bounds on the first and the last term were already established in Section 2.1 (cf. (58) and (59) in the proof of Proposition 2.1). What is missing is a tight upper bound on
\[\mathfrak{s}(H)=\max_{x\in\mathcal{X},\eta\in\mathcal{U}}\big{\|}H^{T}D[\eta]x \big{\|}\,.\]
In the rest of this section we focus on building efficiently computable upper bound on \(\mathfrak{s}(H)\) which is convex in \(H\); the synthesis of the contrast \(H\) is then conducted by minimizing with respect to \(H\) the resulting upper bound on estimation risk.
We assume from now on that \(\mathcal{U}\) is a convex compact set in certain \(\mathbf{R}^{q}\). In this case \(\mathfrak{s}(H)\) is what in [33] was called the robust norm
\[\|\mathcal{Z}[H]\|_{\mathcal{X}}=\max_{Z\in\mathcal{Z}[H]}\|Z\|_{\mathcal{X}}, \ \|Z\|_{\mathcal{X}}=\max_{x\in\mathcal{X}}\|Zx\|\]
of the uncertain \(\nu\times n\) matrix
\[\mathcal{Z}[H]=\{Z=H^{T}D[\eta]:\eta\in\mathcal{U}\},\]
i.e., the maximum, over instances \(Z\in\mathcal{Z}[H]\), of operator norms of the linear mappings \(x\mapsto Zx\) induced by the norm with the unit ball \(\mathcal{X}\) on the argument space and the norm \(\|\cdot\|\) on the image space.
It is well known that, aside from a very restricted family of special cases, robust norms do not allow for efficient computation. We are about to list the generic cases known to us in which these norms admit efficiently computable upper bounds which are tight within logarithmic factors.
#### 3.1.1 Scenario uncertainty
This is the case where the nuisance set \(\mathcal{U}=\mathrm{Conv}\{\eta^{1},...,\eta^{S}\}\) is given as a convex hull of a moderate number of scenarios \(\eta^{s}\). In this case, \(\mathfrak{s}(H)\) is the maximum of operator norms:
\[\mathfrak{s}(H)=\max_{s\leq S}\max_{x\in\mathcal{X}}\|H^{T}D[\eta^{s}]x\|= \max_{s\leq S,\ell\leq L}\|\mathcal{M}_{s\ell}[H]\|_{\mathcal{X},2},\quad \mathcal{M}_{s\ell}[H]=R_{\ell}^{1/2}H^{T}D[\eta^{s}],\]
where, for \(Q\in\mathbf{R}^{\nu\times n}\), \(\|Q\|_{\mathcal{X},2}=\max_{x\in\mathcal{X}}\|Qx\|_{2}\) is the operator norm of the linear mapping \(x\mapsto Qx:\mathbf{R}^{n}\to\mathbf{R}^{\nu}\) induced by the norm \(\|\cdot\|_{\mathcal{X}}\) with the unit ball \(\mathcal{X}\) on the argument space, and the Euclidean norm \(\|\cdot\|_{2}\) on the image space. Note that this norm is efficiently computable in the ellipsoid case where \(\mathcal{X}=\{x\in\mathbf{R}^{n}:x^{T}Tx\leq 1\}\) with \(T\succ 0\) (that is, for \(K=1\), \(T_{1}=T\), \(\mathcal{T}=[0,1]\) in (8))--one has \(\|Q\|_{\mathcal{X},2}=\|QT^{-1/2}\|_{2,2}.\) When \(\mathcal{X}\) is a general ellitope, the norm \(\|\cdot\|_{\mathcal{X},2}\) is difficult to compute. However, it admits a tight efficiently computable convex in \(Q\) upper bound:5 it is shown in [33, Theorem 3.1] that the function
Footnote 5: We have already used it in the proof of Proposition 2.1 when upper-bounding the corresponding terms \(s_{\ell}(H)\) in the case of random uncertainty.
\[\mathrm{Opt}[Q]=\min_{\lambda,\mu}\left\{\lambda+\phi_{\mathcal{T}}(\mu):\mu \geq 0,\left[\begin{array}{c|c}\lambda I_{\nu}&\frac{1}{2}Q\\ \frac{1}{2}Q^{T}&\sum_{k}\mu_{k}T_{k}\end{array}\right]\succeq 0\right\}\]
satisfies \(\|Q\|_{\mathcal{X},2}\leq\mathrm{Opt}[Q]\leq 2.4\sqrt{\ln(4K)}\|Q\|_{\mathcal{X},2}\). As a result, under the circumstances,
\[\mathfrak{s}(H) =\max_{s\leq S,\ell\leq L}\mathrm{Opt}_{s\ell}[H],\] \[\mathrm{Opt}_{s\ell}[H] =\min_{\lambda_{\ell},\mu^{\ell}}\left\{\lambda_{\ell}+\phi_{ \mathcal{T}}(\mu^{\ell}):\mu^{\ell}\geq 0,\left[\begin{array}{c|c} \lambda_{\ell}I_{\nu}&\frac{1}{2}R_{\ell}^{1/2}H^{T}D[\eta^{s}]\\ \hline\frac{1}{2}D^{T}[\eta^{s}]HR_{\ell}^{1/2}&\sum_{k}\mu^{\ell}{}_{k}T_{k} \end{array}\right]\succeq 0\right\},\]
is an efficiently computable upper bound on \(\mathfrak{s}(H)\), convex in \(H\) and tight within the factor \(2.4\sqrt{\ln(4K)}\).
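For illustration, the following sketch (a toy instance, not from the original text) solves the semidefinite program defining \(\mathrm{Opt}[Q]\) with CVXPY in the single-ellipsoid case \(K=1\), \(\mathcal{T}=[0,1]\) (so that \(\phi_{\mathcal{T}}(\mu)=\mu\)) and compares it with the exact value \(\|QT^{-1/2}\|_{2,2}\); in this case the semidefinite bound is in fact exact, so the two printed numbers should agree up to solver tolerance.

```python
import numpy as np
import cvxpy as cp

# Toy instance of Opt[Q] for X = {x : x^T T1 x <= 1}  (K = 1, T = [0,1], phi_T(mu) = mu).
rng = np.random.default_rng(0)
nu, n = 3, 6
Q = rng.standard_normal((nu, n))
T1 = np.diag(rng.uniform(0.5, 2.0, size=n))            # T1 > 0

lam, mu = cp.Variable(nonneg=True), cp.Variable(nonneg=True)
Z = cp.Variable((nu + n, nu + n), PSD=True)            # Z models the block matrix in the LMI
constraints = [Z[:nu, :nu] == lam * np.eye(nu),
               Z[:nu, nu:] == 0.5 * Q,
               Z[nu:, nu:] == mu * T1]
prob = cp.Problem(cp.Minimize(lam + mu), constraints)
prob.solve(solver=cp.SCS)

exact = np.linalg.norm(Q @ np.diag(1.0 / np.sqrt(np.diag(T1))), 2)   # ||Q T1^{-1/2}||_{2,2}
print("Opt[Q] =", prob.value, "  ||Q||_{X,2} =", exact)
```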
#### 3.1.2 Box and structured norm-bounded uncertainty
In the case of _structured norm-bounded uncertainty_ function \(D[\eta]\) in the model (39) is of the form
\[D[\eta]=\sum\nolimits_{\alpha=1}^{q}P_{\alpha}^{T}\eta_{\alpha}Q_{\alpha}\quad[P_{\alpha}\in{\bf R}^{p_{\alpha}\times m},\,Q_{\alpha}\in{\bf R}^{q_{\alpha}\times n}],\] \[{\cal U}=\{\eta=(\eta_{1},...,\eta_{q})\}={\cal U}_{1}\times...\times{\cal U}_{q}, \tag{41}\] \[{\cal U}_{\alpha}=\left\{\begin{array}{ll}\{\eta_{\alpha}=\delta I_{p_{\alpha}}:|\delta|\leq 1\}\subset{\bf R}^{p_{\alpha}\times p_{\alpha}},\ q_{\alpha}=p_{\alpha}&\hbox{[``scalar'' perturbation blocks]},\ \alpha\leq q_{\rm s},\\ \{\eta_{\alpha}\in{\bf R}^{p_{\alpha}\times q_{\alpha}}:\|\eta_{\alpha}\|_{2,2}\leq 1\}&,\ q_{\rm s}<\alpha\leq q.\end{array}\right.\]
The special case of (41) where \(q_{\rm s}=q\), that is,
\[{\cal U}=\{\eta\in{\bf R}^{q}:\|\eta\|_{\infty}\leq 1\}\,\&\,\,A[\eta]=A+D[ \eta]=A+\sum\nolimits_{\alpha=1}^{q}\eta_{\alpha}A_{\alpha}\]
is referred to as box uncertainty. In this section we operate with structured norm-bounded uncertainty (41), assuming w.l.o.g. that all \(P_{\alpha}\) are nonzero. The main result here (for the underlying rationale and proof, see Section C.2) is as follows:
**Proposition 3.1**: _Let \({\cal X}\subset{\bf R}^{n}\) be an ellitope: \({\cal X}=P{\cal Y}\), where_
\[{\cal Y}=\{y\in{\bf R}^{n}:\exists t\in{\cal T}:y^{T}T_{k}y\leq t_{k},k\leq K\}\]
_is a basic ellitope. Given the data of structured norm-bounded uncertainty (41), consider the efficiently computable convex function_
\[\overline{\mathfrak{s}}(H)=\max_{\ell\leq L}{\rm Opt}_{\ell}(H),\]
\[{\rm Opt}_{\ell}(H)=\min_{\mu,\upsilon,\lambda,U_{s},V_{s},U^{t},V^{t}}\left\{\tfrac{1}{2}[\mu+\phi_{\mathcal{T}}(\upsilon)]:\ \mu\geq 0,\ \upsilon\geq 0,\ \lambda\geq 0,\right.\]
\[\left[\begin{array}{c|c}U_{s}&-A_{s\ell}[H]P\\ \hline-P^{T}A_{s\ell}^{T}[H]&V_{s}\end{array}\right]\succeq 0,\ s\leq q_{\rm s},\qquad\left[\begin{array}{c|c}U^{t}&-L_{t\ell}^{T}[H]\\ \hline-L_{t\ell}[H]&\lambda_{t}I_{p_{q_{\rm s}+t}}\end{array}\right]\succeq 0,\ t\leq q-q_{\rm s},\]
\[V^{t}-\lambda_{t}P^{T}R_{t}^{T}R_{t}P\succeq 0,\ t\leq q-q_{\rm s},\]
\[\left.\mu I_{\nu}-\sum\nolimits_{s}U_{s}-\sum\nolimits_{t}U^{t}\succeq 0,\ \ \sum\nolimits_{k}\upsilon_{k}T_{k}-\sum\nolimits_{s}V_{s}-\sum\nolimits_{t}V^{t}\succeq 0\right\}\]
_where_
\[A_{s\ell}[H] =R_{\ell}^{1/2}H^{T}P_{s}^{T}Q_{s},\,1\leq s\leq q_{\rm s}\] \[L_{t\ell}[H] =P_{q_{\rm s}+t}HR_{\ell}^{1/2},\,R_{t}=Q_{q_{\rm s}+t},\,1\leq t \leq q-q_{\rm s}.\]
_Then_
\[\mathfrak{s}(H)\leq\overline{\mathfrak{s}}(H)\leq\varkappa(K)\max[\vartheta(2\kappa),\pi/2]\,\mathfrak{s}(H),\]
_where \(\kappa=\max_{\alpha\leq q_{\rm s}}\min[p_{\alpha},q_{\alpha}]\) (\(\kappa=0\) when \(q_{\rm s}=0\)),_
\[\varkappa(K)=\left\{\begin{array}{ll}1,&K=1,\\ \frac{5}{2}\sqrt{\ln(2K)},&K>1,\end{array}\right.\]
_and \(\vartheta(k)\) is a universal function of integer \(k\geq 0\) specified in (75) such that_
\[\vartheta(0)=0,\,\,\vartheta(1)=1,\,\,\vartheta(2)=\pi/2,\,\,\vartheta(3)=1.73 48...,\,\,\vartheta(4)=2,\,\,\,\vartheta(k)\leq\tfrac{1}{2}\pi\sqrt{k},\,\,\,k \geq 1.\]
Note that the "box uncertainty" version of Proposition 3.1 was derived in [33].
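As a quick sanity check of the values of \(\vartheta(\cdot)\) quoted in the proposition, note that for \(k=2\) the choice \(\alpha=(\tfrac{1}{2},-\tfrac{1}{2})\) gives \(\mathbf{E}|\alpha_{1}u_{1}^{2}+\alpha_{2}u_{2}^{2}|=\tfrac{1}{2}\mathbf{E}|u_{1}^{2}-u_{2}^{2}|=2/\pi\), which matches \(\vartheta(2)=\pi/2\). The following tiny Monte Carlo sketch (purely illustrative) reproduces this number.

```python
import numpy as np

# Monte Carlo check of 1/theta(2) = 2/pi at alpha = (1/2, -1/2), ||alpha||_1 = 1.
rng = np.random.default_rng(0)
u = rng.standard_normal((10**6, 2))
estimate = np.abs(0.5 * u[:, 0] ** 2 - 0.5 * u[:, 1] ** 2).mean()
print(estimate, 2 / np.pi)      # the two numbers should agree to a few decimal places
```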
#### 3.1.3 Robust estimation of linear forms
Until now, we imposed no restrictions on the matrix \(B\). We are about to demonstrate that when we aim at recovering the value of a given linear form \(b^{T}x\) of signal \(x\in\mathcal{X}\), i.e., when \(B\) is a row vector:
\[Bx=b^{T}x\qquad[b\in\mathbf{R}^{n}], \tag{42}\]
we can handle much wider family of uncertainty sets \(\mathcal{U}\) than those considered so far. Specifically, assume on the top of (42) that \(\mathcal{U}\) is a spectratope:
\[\begin{array}{c}\mathcal{U}=\{\eta=Qv,v\in\mathcal{V}\},\,\mathcal{V}=\{v \in\mathbf{R}^{M}:\exists s\in\mathcal{S}:S_{\ell}^{2}[v]\preceq s_{\ell}I_{d_ {\ell}},\,\ell\leq L\},\\ S_{\ell}[v]=\sum_{i=1}^{M}v_{i}S^{i\ell},\,S^{i\ell}\in\mathbf{S}^{d_{\ell}} \end{array} \tag{43}\]
(as is the case, e.g., with structured norm-bounded uncertainty) and let \(\mathcal{X}\) be a spectratope as well:
\[\begin{array}{c}\mathcal{X}=\{x=Py,y\in\mathcal{Y}\},\,\mathcal{Y}=\{y\in \mathbf{R}^{N}:\exists t\in\mathcal{T}:T_{k}^{2}[y]\preceq t_{k}I_{f_{k}},\,k \leq K\},\\ T_{k}[y]=\sum_{j=1}^{N}y_{j}T^{jk},\,T^{jk}\in\mathbf{S}^{f_{k}}.\end{array} \tag{44}\]
The contrast matrix \(H\) underlying a candidate linear estimate becomes a vector \(h\in\mathbf{R}^{m}\), the associated linear estimate being \(\widehat{w}_{h}(\omega)=h^{T}\omega\). In our present situation \(\nu=1\), and we lose nothing when setting \(\|\cdot\|=|\cdot|\). Representing \(D[\eta]\) as \(\sum_{\alpha=1}^{q}\eta_{\alpha}A_{\alpha}\), we get
\[\mathfrak{r}_{b}(h)=\max_{x\in\mathcal{X},\eta\in\mathcal{U}}\Big{|}h^{T}{ \sum}_{\alpha}\eta_{\alpha}A_{\alpha}x\Big{|}=\max_{\eta\in\mathcal{U},x\in \mathcal{X}}\eta^{T}A[h]x,\quad A[h]=[h^{T}A_{1};...;h^{T}A_{q}].\]
In other words, \(\mathfrak{r}_{b}(h)\) is the operator norm \(\|A[h]\|_{\mathcal{X}\mathcal{U}_{*}}\) of the linear mapping \(x\mapsto A[h]x\) induced by the norm \(\|\cdot\|_{\mathcal{X}}\) with the unit ball \(\mathcal{X}\) on the argument space and the norm with the unit ball \(\mathcal{U}_{*}\)--the polar of the spectratope \(\mathcal{U}\)--on the image space. Denote
\[\begin{array}{c}\lambda[\Lambda]=[\mathrm{Tr}(\Lambda_{1});...;\mathrm{Tr} (\Lambda_{K})],\,\,\,\Lambda_{k}\in\mathbf{S}^{f_{k}},\\ \lambda[\Upsilon]=[\mathrm{Tr}(\Upsilon_{1});...;\mathrm{Tr}(\Upsilon_{L})], \,\,\,\Upsilon_{\ell}\in\mathbf{S}^{d_{\ell}},\end{array}\]
and for \(Y\in\mathbf{S}^{d_{\ell}}\) and \(X\in\mathbf{S}^{f_{k}}\)
\[R_{\ell}^{+,*}[Y]=\Big{[}\mathrm{Tr}(YR^{i\ell}R^{j\ell})\Big{]}_{i,j\leq M} \,,\quad T_{k}^{+,*}[X]=\Big{[}\mathrm{Tr}(XT^{ik}T^{jk})\Big{]}_{i,j\leq N}\,.\]
Invoking [33, Theorem 7], we arrive at
**Proposition 3.2**: _In the case of (43) and (44), efficiently computable convex function_
\[\mathfrak{t}_{b}(h)=\min_{\Lambda,\Upsilon}\left\{\tfrac{1}{2}\big[\phi_{\mathcal{T}}(\lambda[\Lambda])+\phi_{\mathcal{S}}(\lambda[\Upsilon])\big]:\ \begin{array}{l}\Lambda=\{\Lambda_{k}\in\mathbf{S}^{f_{k}}_{+},\,k\leq K\},\ \Upsilon=\{\Upsilon_{\ell}\in\mathbf{S}^{d_{\ell}}_{+},\,\ell\leq L\},\\[4pt] \left[\begin{array}{c|c}\sum_{\ell}R_{\ell}^{+,*}[\Upsilon_{\ell}]&\frac{1}{2}Q^{T}A[h]P\\ \hline\frac{1}{2}P^{T}A^{T}[h]Q&\sum_{k}T_{k}^{+,*}[\Lambda_{k}]\end{array}\right]\succeq 0\end{array}\right\} \tag{45}\]
_is an upper bound on \(\mathfrak{r}_{b}(h)\): one has \(\mathfrak{r}_{b}(h)\leq\mathfrak{t}_{b}(h)\) for all \(h\in\mathbf{R}^{m}\)._
### Design of the robust polyhedral estimate
On close inspection, the strategy for designing a presumably good polyhedral estimate developed in Section 2.2 for the case of random uncertainty works in the case of uncertain-but-bounded perturbations \(A[\eta]=A+\underbrace{\sum_{\alpha}\eta_{\alpha}A_{\alpha}}_{D[\eta]}\), \(\eta\in\mathcal{U}\), provided that the constraints (25) on the allowed columns \(h\) of the contrast matrices are replaced with the constraints
\[\mathrm{Prob}_{\xi}\{|h^{T}\xi|>1/2\}\leq\delta/2, \tag{46a}\] \[\left|\sum\nolimits_{\alpha=1}^{q}[h^{T}A_{\alpha}x]\eta_{ \alpha}\right|\leq 1/2\ \forall(x\in\mathcal{X},\eta\in\mathcal{U}). \tag{46b}\]
Assuming that \(\mathcal{U}\) and \(\mathcal{X}\) are the spectratopes (43), (44) and invoking Proposition 3.2, an efficiently verifiable sufficient condition for \(h\) to satisfy the constraints (46) is
\[\|h\|_{2}\leq[2\sigma\sqrt{2\ln(2/\delta)}]^{-1}\ \ \mathrm{and}\ \,\mathfrak{t}_{b}(h)\leq 1/2 \tag{47}\]
(see (26), (45)). It follows that in order to build an efficiently computable upper bound for the \(\epsilon\)-risk of a polyhedral estimate associated with a given \(m\times ML\) contrast matrix \(H=[H_{1},..,H_{L}]\), \(H_{\ell}\in\mathbf{R}^{m\times M}\), it suffices to check whether the columns of \(H\) satisfy constraints (47) with \(\delta=\epsilon/ML\). If the answer is positive, one can upper-bound the risk utilizing the following spectratopic version of Proposition 2.3:
**Proposition 3.3**: _In the situation of this section, let \(\epsilon\in(0,1)\), and let \(H=[H_{1},...,H_{L}]\) be \(m\times ML\) matrix with \(L\) blocks \(H_{\ell}\in\mathbf{R}^{m\times M}\) such that all columns of \(H\) satisfy (47) with \(\delta=\epsilon/ML\). Consider optimization problem_
\[\mathfrak{p}_{+}[H]=2\min_{\lambda_{\ell},\Upsilon^{\ell},\upsilon^{\ell},\rho}\Big\{\rho:\ \upsilon^{\ell}\geq 0,\,\Upsilon^{\ell}=\{\Upsilon^{\ell}_{k}\in\mathbf{S}^{f_{k}}_{+},k\leq K\},\,\ell\leq L \tag{48}\]
\[\lambda_{\ell}+\phi_{\mathcal{T}}(\lambda[\Upsilon^{\ell}])+\sum\nolimits_{j=1}^{M}\upsilon^{\ell}_{j}\leq\rho,\,\ell\leq L\]
\[\left[\begin{array}{c|c}\lambda_{\ell}I_{\nu}&\frac{1}{2}R_{\ell}^{1/2}BP\\ \hline\frac{1}{2}P^{T}B^{T}R_{\ell}^{1/2}&P^{T}A^{T}H_{\ell}\mathrm{Diag}\{\upsilon^{\ell}\}H_{\ell}^{T}AP+\sum\nolimits_{k}T^{+,*}_{k}[\Upsilon^{\ell}_{k}]\end{array}\right]\succeq 0,\,\ell\leq L\ \Bigg\}\]
_where_
\[\lambda[\Upsilon^{\ell}]=[\mathrm{Tr}(\Upsilon^{\ell}_{1});...;\mathrm{Tr}( \Upsilon^{\ell}_{K})],\ \ \text{and}\ \ T^{+,*}_{k}(V)=\left[\mathrm{Tr}(VT^{ik}T^{jk})\right]_{1\leq i,j\leq N}\ \text{for}\ V\in\mathbf{S}^{f_{k}}.\]
_Then_
\[\mathrm{Risk}_{\epsilon}[\widehat{w}^{H}|\mathcal{X}]\leq\mathfrak{p}_{+}[H].\]
Remarks. As already explained, when taken together, Propositions 3.2 and 3.3 allow one to efficiently compute an upper bound on the \(\epsilon\)-risk of the polyhedral estimate associated with a given \(m\times ML\) contrast matrix \(H\): when the columns of \(H\) satisfy (47) with \(\delta=\epsilon/ML\), this bound is \(\mathfrak{p}_{+}[H]\), otherwise it is, say, \(+\infty\). The outlined methodology can be applied to any pair of spectratopes \(\mathcal{X}\), \(\mathcal{U}\). However, to design a presumably good polyhedral estimate, we need to optimize the resulting risk bound in \(H\), and this seems to be difficult because the bound, same as its "random perturbation" counterpart, is nonconvex in \(H\). At present, we know only one generic situation where the synthesis problem admits a "presumably good" solution--the case where both
\(\mathcal{X}\) and \(\mathcal{U}\) are ellipsoids. Applying appropriate one-to-one linear transformations to perturbation \(\eta\) and signal \(x\), the latter situation can be reduced to that with
\[\mathcal{X}=\{x\in\mathbf{R}^{n}:\|x\|_{2}\leq 1\},\ \ \mathcal{U}=\{\eta\in \mathbf{R}^{q}:\|\eta\|_{2}\leq 1\}, \tag{49}\]
which we assume till the end of this section. In this case (47) reduces to
\[\|h\|_{2}\leq[2\sigma\sqrt{2\ln(2/\delta)}]^{-1}\ \ \text{and}\ \ \|\mathcal{A}[h]\|_{2,2}\leq 1/2 \tag{50}\]
where the matrix \(\mathcal{A}[h]\) is given by (27). Note that (50) is nothing but the constraint (29) where the ellitope \(\mathcal{X}\) is set to be the unit Euclidean ball (that is, when \(K=1\), \(T_{1}=I_{n}\), and \(\mathcal{T}=[0;1]\) in (8)) and the right hand side \(\chi^{-1}(\delta)\) in the constraint is replaced with \(1/2\). As a result, (46) can be processed in the same fashion as constraints (26) and (the single-ellipsoid case of) (29) were processed in Sections 2.2.3 and 2.2.4 to yield a computationally efficient scheme for building a presumably good, in the case of (49), polyhedral estimate. This scheme is the same as that described at the end of Section 2.2.5 with just one difference: the quantity \(\chi(\delta)\) in the first semidefinite constraint of (36) and (38) should now be replaced with the constant \(2\). Denoting by Opt the optimal value of problem (36) modified in the way just explained, the \(\epsilon\)-risk of the polyhedral estimate yielded by an optimal solution to the problem is upper-bounded by \(2\sqrt{\varkappa}\)Opt, with \(\varkappa\) given by (37).
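As a small illustration of how the condition (50) is checked in practice, here is a sketch (synthetic data only; the scaling of \(h\) is chosen so that the first condition holds, while whether the second one holds depends on the data) that tests a candidate contrast column \(h\).

```python
import numpy as np

def satisfies_50(h, A_list, sigma, delta):
    """Check the two conditions in (50) for a candidate contrast column h."""
    norm_ok = np.linalg.norm(h, 2) <= 1.0 / (2.0 * sigma * np.sqrt(2.0 * np.log(2.0 / delta)))
    A_h = np.vstack([h @ A_alpha for A_alpha in A_list])     # A[h] = [h^T A_1; ...; h^T A_q]
    spec_ok = np.linalg.norm(A_h, 2) <= 0.5                  # spectral norm ||A[h]||_{2,2}
    return bool(norm_ok and spec_ok)

# synthetic example
rng = np.random.default_rng(2)
m, n, q = 12, 8, 4
A_list = [rng.standard_normal((m, n)) for _ in range(q)]
sigma, delta = 0.05, 1e-3
h = rng.standard_normal(m)
h *= 0.9 / (2 * sigma * np.sqrt(2 * np.log(2 / delta)) * np.linalg.norm(h))  # enforce the first condition
print(satisfies_50(h, A_list, sigma, delta))
```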
## Appendix A Proofs for Section 2.1
### Preliminaries: concentration of quadratic forms of sub-Gaussian vectors
For the reader's convenience, we recall in this section some essentially known bounds for deviations of quadratic forms of sub-Gaussian random vectors (cf., e.g., [28, 66, 67]).
1\({}^{o}\).Let \(\xi\) be a \(d\)-dimensional normal vector, \(\xi\sim\mathcal{N}(\mu,\Sigma)\). For all \(h\in\mathbf{R}^{d}\) and \(G\in\mathbf{S}^{d}\) such that \(G\prec\Sigma^{-1}\) we have the well known relationship:
\[\ln\left(\mathbf{E}_{\xi}\left\{e^{h^{T}\xi+\frac{1}{2}\xi^{T}G \xi}\right\}\right) =-\tfrac{1}{2}\ln\text{Det}(I-\Sigma^{1/2}G\Sigma^{1/2})\] \[+h^{T}\mu+\tfrac{1}{2}\mu^{T}G\mu+\tfrac{1}{2}[G\mu+h]^{T} \Sigma^{1/2}(I-\Sigma^{1/2}G\Sigma^{1/2})^{-1}\Sigma^{1/2}[G\mu-h]. \tag{51}\]
Now, suppose that \(\eta\sim\mathcal{SG}(0,\Sigma)\) where \(\Sigma\in\mathbf{S}^{d}_{+}\), let also \(h\in\mathbf{R}^{d}\) and \(S\in\mathbf{R}^{d\times d}\) such that \(S\Sigma S^{T}\prec I\). Then for \(\xi\sim\mathcal{N}(h,S^{T}S)\) one has
\[\mathbf{E}_{\eta}\left\{e^{h^{T}\eta+\frac{1}{2}\eta^{T}S^{T}S\eta}\right\}= \mathbf{E}_{\eta}\left\{\mathbf{E}_{\xi}\left\{e^{\eta^{T}\xi}\right\}\right\} =\mathbf{E}_{\xi}\left\{\mathbf{E}_{\eta}\left\{e^{\eta^{T}\xi}\right\} \right\}\leq\mathbf{E}_{\xi}\left\{e^{\frac{1}{2}\xi^{T}\Sigma\xi}\right\},\]
so that
\[\ln\left(\mathbf{E}_{\eta}\left\{e^{h^{T}\eta+\frac{1}{2}\eta^{T} S^{T}S\eta}\right\}\right) \leq\ln\left(\mathbf{E}_{\xi}\left\{e^{\frac{1}{2}\xi^{T}\Sigma \xi}\right\}\right)\] \[=-\tfrac{1}{2}\ln\text{Det}(I-S\Sigma S^{T})+\tfrac{1}{2}h^{T} \Sigma h+\tfrac{1}{2}h^{T}\Sigma S^{T}(I-S\Sigma S^{T})^{-1}S\Sigma h\] \[=-\tfrac{1}{2}\ln\text{Det}(I-S\Sigma S^{T})+\tfrac{1}{2}h^{T} \Sigma^{1/2}(I-S\Sigma S^{T})^{-1}\Sigma^{1/2}h.\]
In particular, when \(\zeta\sim\mathcal{SG}(0,I)\), one has
\[\ln\left(\mathbf{E}_{\zeta}\left\{e^{h^{T}\zeta+\frac{1}{2}\zeta^{T}G\zeta} \right\}\right)\leq-\tfrac{1}{2}\ln\text{Det}(I-G)+\tfrac{1}{2}h^{T}(I-G)^{-1 }h=:\Phi(h,G).\]
Observe that \(\Phi(h,G)\) is convex and continuous in \(h\in{\bf R}^{d}\) and \(0\preceq G\prec I\) on its domain. Using the inequality (cf. [47, Lemma 1])
\[\forall v\in[0,1[\quad-\ln(1-v)\leq v+\frac{v^{2}}{2(1-v)}, \tag{52}\]
we get
\[\Phi(h,G)\leq\tfrac{1}{2}{\rm Tr}[G]+\tfrac{1}{4}{\rm Tr}[G(I-G)^{-1}G]+\tfrac {1}{2}h^{T}(I-G)^{-1}h=:\widetilde{\Phi}(h,G).\]
Finally, using
\[{\rm Tr}[G(I-G)^{-1}G]\leq(1-\lambda_{\max}(G))^{-1}{\rm Tr}[G^{2}],\quad h^{ T}(I-G)^{-1}h\leq(1-\lambda_{\max}(G))^{-1}h^{T}h,\]
we arrive at
\[\widetilde{\Phi}(h,G)\leq\tfrac{1}{2}{\rm Tr}[G]+\tfrac{1}{4}(1-\lambda_{ \max}(G))^{-1}({\rm Tr}[G^{2}]+2\|h\|_{2}^{2})=:\overline{\Phi}(h,G).\]
2\({}^{o}\).In the above setting, let \(Q\in{\bf S}^{d}_{+}\), \(\alpha>2\lambda_{\max}(Q)\), \(G=2Q/\alpha\), and let \(h=0\). By the Cramer argument we conclude that
\[{\rm Prob}\left\{\zeta^{T}Q\zeta\geq\alpha[\Phi(2Q/\alpha)+\ln\epsilon^{-1}] \right\}\leq\epsilon \tag{53}\]
where \(\Phi(\cdot)=\Phi(0,\cdot)\). In particular,
\[{\rm Prob}\left\{\zeta^{T}Q\zeta\geq\min_{\alpha>2\lambda_{\max}(Q)}\alpha[ \Phi(2Q/\alpha)+\ln\epsilon^{-1}]\right\}\leq\epsilon \tag{54}\]
Clearly, similar bounds hold with \(\Phi\) replaced with \(\widetilde{\Phi}\) and \(\overline{\Phi}\). For instance,
\[{\rm Prob}\left\{\zeta^{T}Q\zeta\geq\alpha[\overline{\Phi}(2Q/\alpha)+\ln \epsilon^{-1}]\right\}\leq\epsilon,\]
so, when choosing \(\alpha=2\lambda_{\max}(Q)+\sqrt{\frac{{\rm Tr}(Q^{2})}{\ln\epsilon^{-1}}}\) we arrive at the "standard bound"
\[{\rm Prob}\left\{\zeta^{T}Q\zeta\geq{\rm Tr}(Q)+2\|Q\|_{\rm Fro}\sqrt{\ln \epsilon^{-1}}+2\lambda_{\max}(Q)\ln\epsilon^{-1}\right\}\leq\epsilon. \tag{55}\]
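For intuition, the following sketch (illustrative only) checks the "standard bound" (55) numerically in the Gaussian case \(\zeta\sim\mathcal{N}(0,I_{d})\): the empirical frequency of the event on the left should not exceed \(\epsilon\).

```python
import numpy as np

rng = np.random.default_rng(3)
d, eps, n_samples = 10, 0.05, 200_000
B = rng.standard_normal((d, d))
Q = B @ B.T / d                                    # a PSD matrix Q

tr, fro, lmax = np.trace(Q), np.linalg.norm(Q, "fro"), np.linalg.eigvalsh(Q)[-1]
threshold = tr + 2 * fro * np.sqrt(np.log(1 / eps)) + 2 * lmax * np.log(1 / eps)

zeta = rng.standard_normal((n_samples, d))
quad = np.einsum("ij,jk,ik->i", zeta, Q, zeta)     # zeta_i^T Q zeta_i for each sample
print("empirical tail prob:", np.mean(quad > threshold), " <= eps =", eps)
```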
**Corollary A.1**: _Let \(\epsilon\in(0,1)\), \(W_{1},...,W_{L}\) be matrices from \({\bf S}^{d}_{+}\), and let \(\upsilon\sim{\cal SG}(0,V)\) be a \(d\)-dimensional sub-Gaussian random vector. Then_
\[{\rm Prob}\left\{\max_{\ell\leq L}\upsilon^{T}W_{\ell}\upsilon\geq\left[1+ \sqrt{2\ln(L/\epsilon)}\right]^{2}\max_{\ell\leq L}{\rm Tr}(W_{\ell}V)\right\} \leq\epsilon.\]
**Proof.** Let \(R^{2}=\max_{\ell\leq L}{\rm Tr}(W_{\ell}V)\). W.l.o.g. we may assume that \(\upsilon=V^{1/2}\zeta\) where \(\zeta\sim{\cal SG}(0,I)\). Let us fix \(\ell\leq L\). Applying (55) with \(Q=V^{1/2}W_{\ell}V^{1/2}\) and \(\epsilon\) replaced with \(\epsilon/L\), when taking into account that \(\upsilon^{T}W_{\ell}\upsilon=\zeta^{T}Q\zeta\) with
\[\lambda_{\max}(Q)\leq\|Q\|_{\rm Fro}\leq{\rm Tr}(Q)\leq R^{2},\]
we get
\[{\rm Prob}\left\{\upsilon^{T}W_{\ell}\upsilon\geq\left[1+\sqrt{2\ln(L/\epsilon )}\right]^{2}R^{2}\right\}\leq\frac{\epsilon}{L},\]
and the claim of the corollary follows. \(\Box\)
### Proof of Proposition 2.1
Let \(H\) be a candidate contrast matrix.
1\({}^{o}\).Observe that
\[\|\widehat{w}^{H}(\omega)-Bx\|\leq\|H^{T}\xi\|+\left\|H^{T}{\sum}_{\alpha=1}^{q} \eta_{\alpha}A_{\alpha}x\right\|+\|[B-H^{T}A]x\|. \tag{56}\]
Clearly,
\[\|[B-H^{T}A]x\|\leq\max_{\ell\leq L}\left\{\max_{x\in\mathcal{X}}x^{T}[B-H^{T} A]^{T}R_{\ell}[B-H^{T}A]x\right\}^{1/2},\]
so that by Theorem 2.1,
\[\forall x\in\mathcal{X}\quad\|[B-H^{T}A]x\|\leq\max_{\ell\leq L}\mathfrak{r}_ {\ell}(H) \tag{57}\]
where
\[\mathfrak{r}_{\ell}^{2}(H)=\min_{\upsilon}\left\{\phi_{\mathcal{T}}(\upsilon ):\,\upsilon\geq 0,\,\left[\begin{array}{c|c}I_{\nu}&R_{\ell}^{1/2}[B-H^{T}A]\\ \hline[B-H^{T}A]^{T}R_{\ell}^{1/2}&\sum_{k}\upsilon_{k}T_{k}\end{array} \right]\succeq 0\right\}.\]
Taking into account that \(\sqrt{u}=\min_{\lambda\geq 0}\{\frac{u}{4\lambda}+\lambda\}\) for \(u>0\), we get
\[\mathfrak{r}_{\ell}(H)=\min_{\upsilon,\lambda}\left\{\lambda+\frac{\phi_{ \mathcal{T}}(\upsilon)}{4\lambda}:\,\upsilon\geq 0,\lambda\geq 0,\,\left[ \begin{array}{c|c}I_{\nu}&R_{\ell}^{1/2}[B-H^{T}A]\\ \hline[B-H^{T}A]^{T}R_{\ell}^{1/2}&\sum_{k}\upsilon_{k}T_{k}\end{array} \right]\succeq 0\right\}.\]
Setting \(\mu=\upsilon/(4\lambda)\), by the homogeneity of \(\phi_{\mathcal{T}}(\cdot)\) we obtain
\[\mathfrak{r}_{\ell}(H)=\min_{\mu,\lambda}\left\{\lambda+\phi_{\mathcal{T}}( \mu):\,\mu\geq 0,\,\left[\begin{array}{c|c}\lambda I_{\nu}&\frac{1}{2}R_{ \ell}^{1/2}[B-H^{T}A]\\ \hline\frac{1}{2}[B-H^{T}A]^{T}R_{\ell}^{1/2}&\sum_{k}\mu_{k}T_{k}\end{array} \right]\succeq 0\right\}. \tag{58}\]
2\({}^{o}\).Next, by Corollary A.1 of the appendix,
\[\mathrm{Prob}\left\{\|H^{T}\xi\|\geq[1+\sqrt{2\ln(2L/\epsilon)}]\sigma\max_{ \ell\leq L}\sqrt{\mathrm{Tr}(HR_{\ell}H^{T})}\right\}\leq\epsilon/2. \tag{59}\]
Similarly, because
\[\left\|H^{T}{\sum}_{\alpha=1}^{q}\eta_{\alpha}A_{\alpha}x\right\|=\max_{\ell \leq L}\left\|R_{\ell}^{1/2}H^{T}[A_{1}x,...,A_{q}x]\eta\right\|_{2},\]
we conclude that for any \(x\in\mathcal{X}\)
\[\mathrm{Prob}\left\{\left\|H^{T}{\sum}_{\alpha=1}^{q}\eta_{\alpha}A_{\alpha}x \right\|\geq[1+\sqrt{2\ln(2L/\epsilon)}]\max_{\ell\leq L}s_{\ell}(H)\right\} \leq\epsilon/2\]
where \(s_{\ell}(H)=\left\{\max_{x\in\mathcal{X}}x^{T}\left[\sum_{\alpha}A_{\alpha}^{T} HR_{\ell}H^{T}A_{\alpha}\right]x\right\}^{1/2}\). Again, by Theorem 2.1, \(s_{\ell}(H)\) may be tightly upper-bounded by the quantity \(\mathfrak{s}_{\ell}(H)\) such that
\[\mathfrak{s}_{\ell}^{2}(H)=\min_{\upsilon}\left\{\phi_{\mathcal{T}}(\upsilon): \,\upsilon\geq 0,\,\left[\begin{array}{c|c}I_{\nu q}&[R_{\ell}^{1/2}H^{T}A_{1};... ;R_{\ell}^{1/2}H^{T}A_{q}]\\ \hline[A_{1}^{T}HR_{\ell}^{1/2},...,A_{q}^{T}HR_{\ell}^{1/2}]&\sum_{k}\upsilon_ {k}T_{k}\end{array}\right]\succeq 0\right\}.\]
Now, repeating the steps which led to (58) above, we conclude that
\[\mathfrak{s}_{\ell}(H)=\min_{\mu^{\prime},\lambda^{\prime}}\Big{\{} \lambda^{\prime}+\phi_{\mathcal{T}}(\mu^{\prime}):\,\mu^{\prime}\geq 0,\] \[\left[\begin{array}{c|c}\lambda^{\prime}I_{\nu q}&\frac{1}{2}[R_ {\ell}^{1/2}H^{T}A_{1};...;R_{\ell}^{1/2}H^{T}A_{q}]\\ \hline\frac{1}{2}[A_{1}^{T}HR_{\ell}^{1/2},...,A_{q}^{T}HR_{\ell}^{1/2}]&\sum_ {k}\mu_{k}^{\prime}T_{k}\end{array}\right]\succeq 0\right\}. \tag{60}\]
\(\mathbf{3^{o}}\).When substituting the above bounds into (56), we conclude that for every feasible solution \(\lambda_{\ell},\mu^{\ell},\kappa^{\ell},\varkappa^{\ell},\rho,\varrho\) to problem (12) associated with \(H\), the \(\epsilon\)-risk of the linear estimate \(\,\widehat{w}_{\ln}^{H}(\cdot)\) may be upper-bounded by the quantity
\[[1+\sqrt{2\ln(2L/\epsilon)}]\left[\sigma\max_{\ell\leq L}\|HR_{\ell}^{1/2}\|_{ \mathrm{Fro}}+\rho\right]+\varrho.\qed\]
## Appendix B Proofs for Section 2.2
### Proof of Proposition 2.3
All we need to prove is that if \(\lambda_{\ell},\mu^{\ell},\upsilon^{\ell},\rho\) is a feasible solution to the optimization problem (30), then the inequality
\[\mathrm{Risk}_{\epsilon}[\widehat{w}_{\mathrm{poly}}^{H}|\mathcal{X}]\leq 2\rho \tag{61}\]
holds. Indeed, let us fix \(x\in\mathcal{X}\). Since the columns of \(H\) belong to \(\mathcal{H}_{\delta}\), the \(P_{x}\)-probability of the event
\[\mathcal{Z}^{c}=\{\zeta:\|H^{T}\zeta\|_{\infty}>1\} [\zeta=\sum_{\alpha}\eta_{\alpha}A_{\alpha}x+\xi]\]
is at most \(ML\delta=\epsilon\). Let us fix observation \(\omega=Ax+\zeta\) with \(\zeta\) belonging to the complement \(\mathcal{Z}\) of \(\mathcal{Z}^{c}\). Then
\[\|H^{T}[\omega-Ax]\|_{\infty}=\|H^{T}\zeta\|_{\infty}\leq 1,\]
implying that the optimal value in the optimization problem \(\min_{u\in\mathcal{X}}\|H^{T}[Au-\omega]\|_{\infty}\) is at most \(1\). Consequently, setting \(\widehat{x}=\widehat{x}^{H}(\omega)\), we have \(\widehat{x}\in\mathcal{X}\) and \(\|H^{T}[A\widehat{x}-\omega]\|_{\infty}\leq 1\), see (22). We conclude that setting \(z=\frac{1}{2}[x-\widehat{x}]\), we have
\[\|H^{T}_{\ell}Az\|_{\infty}\leq 1,\ell\leq L\]
with \(z\in\mathcal{X}\), implying that \(z^{T}T_{k}z\leq t_{k}\), \(k\leq K\), for some \(t\in\mathcal{T}\). Now let \(u\in\mathbf{R}^{\nu}\) with \(\|u\|_{2}\leq 1\). Semidefinite constraints in (30) imply that
\[u^{T}R_{\ell}^{1/2}Bz \leq u^{T}\lambda_{\ell}I_{\nu}u+z^{T}\left[A^{T}H_{\ell}\mathrm{ Diag}\{\upsilon^{\ell}\}H^{T}_{\ell}A+\sum\nolimits_{k}\mu^{\ell}_{k}T_{k} \right]z\] \[\leq\lambda_{\ell}u^{T}u+\sum\nolimits_{j}v^{\ell}_{j}[H^{T}Az]^ {2}_{j}+\sum\nolimits_{k}\mu^{\ell}_{k}t_{k}\] \[\leq\lambda_{\ell}+\sum\nolimits_{j}v^{\ell}_{j}+\phi_{\mathcal{ T}}(\mu^{\ell})\leq\rho\]
(recall that \(\|u\|_{2}\leq 1\), \(\lambda_{\ell}\geq 0,\mu^{\ell}\geq 0,\upsilon^{\ell}\geq 0\), \(t\in\mathcal{T}\), and \(\|H^{T}_{\ell}Az\|_{\infty}\leq 1\)). We conclude that \(u^{T}R_{\ell}^{1/2}Bz\leq\rho\), \(\ell\leq L\), whenever \(\|u\|_{2}\leq 1\), i.e., \(\|R_{\ell}^{1/2}Bz\|_{2}\leq\rho\). The latter relation holds true for all \(\ell\leq L\), implying that \(\|Bz\|\leq\rho\), that is, \(\|Bx-\widehat{w}^{H}_{\mathrm{poly}}(\omega)\|=2\|Bz\|\leq 2\rho\) whenever \(\zeta\in\mathcal{Z}\). \(\qed\)
### Proof of Proposition 2.4
\(\mathbf{0^{o}}\).We need the following technical result.
**Theorem B.1**: [70, Theorem 4.6.1] _Let \(Q_{i}\in\mathbf{S}^{n}\), \(1\leq i\leq I\), and let \(\xi_{i}\), \(i=1,...,I\), be independent Rademacher (\(\pm 1\) with probabilities \(1/2\)) or \(\mathcal{N}(0,1)\) random variables. Then for all \(t\geq 0\) one has_
\[\mathrm{Prob}\left\{\left\|\sum\nolimits_{i=1}^{I}\xi_{i}Q_{i}\right\|\geq t \right\}\leq 2n\exp\left\{-\frac{t^{2}}{2v_{Q}}\right\}\]
_where \(\|\cdot\|\) is the spectral norm, and \(v_{Q}=\left\|\sum_{i=1}^{I}Q_{i}^{2}\right\|.\)_
\(1^{o}\). Proof of (i). Let \(\lambda_{j}\geq 0\), \(g_{j}\in{\cal H}\), \(j\leq M\), and \(\Sigma=\sum_{j}\lambda_{j}g_{j}g_{j}^{T}\). Then for every \(j\) there exists \(r^{j}\in{\cal R}\) such that \(S_{i}^{2}[g_{j}]\preceq[r^{j}]_{i}I_{d_{i}}\), \(i\leq I\). Assuming \(\sum_{j}\lambda_{j}>0\) and setting \(\kappa_{j}=[\sum_{j}\lambda_{j}]^{-1}\lambda_{j}\) and \(r=\sum_{j}\kappa_{j}r^{j}\in{\cal R}\), we have
\[{\cal S}_{i}\left[\sum\nolimits_{j}\lambda_{j}g_{j}g_{j}^{T}\right]=\sum \nolimits_{j}\lambda_{j}S_{i}^{2}[g_{j}]\preceq\sum\nolimits_{j}\lambda_{j}[ r^{j}]_{i}I_{d_{i}}=\left[\sum\nolimits_{j}\lambda_{j}\right]r_{i}I_{d_{i}},\]
implying that \((\Sigma,\sum_{j}\lambda_{j})\in{\bf K}\). The latter inclusion is true as well when \(\lambda=0\).
\(2^{o}\). Proof of (ii). Let \((\Sigma,\rho)\in{\bf M}\), and let us prove that \(\Sigma=\sum_{j=1}^{N}\lambda_{j}g_{j}g_{j}^{T}\) with \(g_{j}\in{\cal H}\), \(\lambda_{j}\geq 0\), and \(\sum_{j}\lambda_{j}\leq\varkappa\rho\). There is nothing to prove when \(\rho=0\), since in this case \(\Sigma=0\) due to \((\Sigma,0)\in{\bf K}\) combined with (35b). Now let \(\rho>0\), so that for some \(r\in{\cal R}\) we have
\[{\cal S}_{i}[\Sigma]\preceq\rho r_{i}I_{d_{i}},\,i\leq I, \tag{62}\]
let \(Z=\Sigma^{1/2}\), and let \(O\) be the orthonormal \(N\times N\) matrix of \(N\)-point Discrete Cosine Transform, so that all entries in \(O\) are in magnitude \(\leq\sqrt{2/N}\). For a Rademacher random vector \(\varsigma=[\varsigma_{1};...;\varsigma_{M}]\) (i.e., with entries \(\varsigma_{i}\) which are independent Rademacher random variables), let
\[Z^{\varsigma}=Z{\rm Diag}\{\varsigma\}O.\]
In this case, one has \(Z^{\varsigma}[Z^{\varsigma}]^{T}\equiv\Sigma\), that is,
\[\sum\nolimits_{p=1}^{N}{\rm Col}_{p}[Z^{\varsigma}]{\rm Col}_{p}^{T}[Z^{ \varsigma}]\equiv\Sigma.\]
Recall that
\[{\rm Col}_{j}[Z^{\varsigma}]=\sum\nolimits_{p}\varsigma_{p}O_{pj}{\rm Col}_{p} [Z],\]
and thus
\[S_{i}[{\rm Col}_{j}[Z^{\varsigma}]]=\sum\nolimits_{p}\varsigma_{p}O_{pj}S_{i}[ {\rm Col}_{p}[Z]].\]
Now observe that
\[\sum\nolimits_{p}\bigl{(}O_{pj}S_{i}[{\rm Col}_{p}[Z]]\bigr{)}^{2}=\sum\nolimits_{p}O_{pj}^{2}S_{i}^{2}[{\rm Col}_{p}[Z]]=\sum\nolimits_{p}O_{pj}^{2}{\cal S}_{i}[{\rm Col}_{p}[Z]{\rm Col}_{p}^{T}[Z]]\preceq\tfrac{2}{N}\,{\cal S}_{i}[\Sigma]\preceq\tfrac{2\rho r_{i}}{N}I_{d_{i}}\]
(we have used \(|O_{pj}|\leq\sqrt{2/N}\) and (62)). Applying Theorem B.1 to the matrices \(S_{i}[{\rm Col}_{j}[Z^{\varsigma}]]\) and properly normalizing the columns of \(Z^{\varsigma}\), which yields vectors \(g_{j}^{\varsigma}\) with weights \(\lambda_{j}\), we conclude that the event \(\Xi=\{\varsigma:g_{j}^{\varsigma}\in{\cal H}\ \forall j\}\)
satisfies \(\mathrm{Prob}(\Xi)\geq\frac{1}{2}\), while
\[\sum\nolimits_{j}\!\lambda_{j}g_{j}^{\varsigma}[g_{j}^{\varsigma}]^{T}=\sum \nolimits_{j}\!\mathrm{Col}_{j}[Z^{\varsigma}]\mathrm{Col}_{j}^{T}[Z^{\varsigma} ]\equiv\Sigma\;\;\mathrm{and}\;\;\sum\nolimits_{j}\!\lambda_{j}=\gamma\kappa \rho=2\gamma\rho=\varkappa\rho.\]
Thus, with probability \(\geq 1/2\) (whenever \(\varsigma\in\Xi\)), vectors \(g_{j}=g_{j}^{\varsigma}\) and \(\lambda_{j}\) meet the requirements in (ii).
Note that the proof of the proposition suggests an efficient randomized algorithm for generating the required \(g_{j}\) and \(\lambda_{j}\): we generate realizations \(\varsigma\) of a Rademacher random vector, compute the corresponding vectors \(g_{j}^{\varsigma}\), and terminate when all of them happen to belong to \(\mathcal{H}\). The probability of not terminating in the course of the first \(k\) rounds of randomization is then \(\leq 2^{-k}\).
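Here is a minimal sketch of this randomized procedure (illustrative only: the membership test `in_H` is a user-supplied stand-in for the set \(\mathcal{H}\), and the weights \(\lambda_{j}\) are taken to be the squared column norms, a convenient normalization rather than the specific scaling used in the proof).

```python
import numpy as np
from scipy.fft import dct

def decompose(Sigma, in_H, max_rounds=50, rng=None):
    """Randomized search for g_j, lambda_j with sum_j lambda_j g_j g_j^T = Sigma and g_j in H."""
    rng = np.random.default_rng() if rng is None else rng
    N = Sigma.shape[0]
    Z = np.linalg.cholesky(Sigma + 1e-12 * np.eye(N))  # any Z with Z Z^T ~ Sigma (small ridge for safety)
    O = dct(np.eye(N), norm="ortho")                   # orthonormal DCT matrix, entries of magnitude <= sqrt(2/N)
    for _ in range(max_rounds):
        sign = rng.choice([-1.0, 1.0], size=N)         # Rademacher vector varsigma
        Z_s = Z @ np.diag(sign) @ O                    # Z_s Z_s^T = Z Z^T for every sign pattern
        lam = np.sum(Z_s**2, axis=0)                   # lambda_j = squared norm of the j-th column
        g = Z_s / np.sqrt(np.maximum(lam, 1e-15))      # normalized candidate vectors g_j
        if all(in_H(g[:, j]) for j in range(N)):       # accept once every column passes the test
            return g, lam                              # then sum_j lam[j] * g_j g_j^T = Z Z^T
    raise RuntimeError("no admissible sign pattern found within max_rounds")
```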
### Proof of Lemma 2.1
The proof of the lemma is given by the standard argument underlying median-of-means construction (cf. [59, Section 6.5.3.4]). For the sake of completeness, we reproduce it here.
1\({}^{o}\).Observe that when (14) holds, \(h\in\mathcal{H}\), \(x\in\mathcal{X}\) and \(\zeta=\xi+\sum_{\alpha}\eta_{\alpha}A_{\alpha}x\), the probability of the event
\[\{|h^{T}\zeta|>1\}\]
is at most \(1/8\). Indeed, \(|h^{T}\zeta|>1\) implies that either \(|h^{T}\xi|>1/2\) or \(|\eta^{T}\mathcal{A}[h]x|>1/2\). By the Chebyshev inequality, the probability of the first of these events is at most \(4\mathbf{E}\{(h^{T}\xi)^{2}\}\leq 4\sigma^{2}\|h\|_{2}^{2}\leq\frac{1}{16}\) (we have used the first relation in (14) and taken into account that \(h\in\mathcal{H}\)). By a similar argument, the probability of the second event is at most \(4\mathbf{E}\{(\eta^{T}\mathcal{A}[h]x)^{2}\}\leq 4\|\mathcal{A}[h]x\|_{2}^{2}\leq\frac{1}{16}\).
2\({}^{o}\). Let \(\zeta_{k}=\omega_{k}-Ax\). By construction, \(z_{j}=y_{j}-h_{j}^{T}Ax\) is the median of the i.i.d. sequence \(h_{j}^{T}\zeta_{k}\), \(k=1,...,K\). When \(|z_{j}|>1\), at least \(K/2\) of the events \(\{|h_{j}^{T}\zeta_{k}|>1\}\), \(k\leq K\), take place. Because the probability of each of these \(K\) independent events is \(\leq 1/8\), it is easily seen6 that the probability that at least \(K/2\) of them happen is bounded by
Footnote 6: We refer to, e.g., [24, Section 2.3.2] for the precise justification of this obvious claim.
\[\pi(K):=\sum_{k\geq K/2}\binom{K}{k}(1/8)^{k}(7/8)^{K-k}\leq\sum_{k\geq K/2} \binom{K}{k}2^{-K}[(1/4)^{k}(7/4)^{K-k}]\leq(\sqrt{7}/4)^{K}\leq e^{-0.4K}.\]
In other words, the probability of each event \(E_{j}=\{\omega^{K}:|y_{j}-h_{j}^{T}Ax|>1\}\), \(j=1,...,M\), is bounded with \(\pi(K)\). Thus, none of the events \(E_{1},...,E_{M}\) takes place with probability at least \(1-M\pi(K)\), and in such case we have \(\|y-H^{T}Ax\|_{\infty}\leq 1\), and so \(\|y-H^{T}A\widehat{x}^{H}(\omega^{K})\|_{\infty}\leq 1\) as well. We conclude that for every \(x\in\mathcal{X}\), the probability of the event
\[\left\{x-\widehat{x}^{H}(\omega^{K})\in 2\mathcal{X},\,\|H^{T}A[x-\widehat{x}^{ H}(\omega^{K})]\|_{\infty}\leq 2\right\}\]
is at least \(1-M\pi(K)\geq 1-\epsilon\) when \(K\geq 2.5\ln[M/\epsilon]\), and when it happens, one has \(\|Bx-\widehat{w}_{\mathrm{poly}}^{H}(\omega^{K})\|\leq\mathfrak{p}[H]\).
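The sample-size requirement just used can be checked numerically; the following sketch (illustrative only) computes the exact binomial tail \(\pi(K)\) and compares it with \(e^{-0.4K}\) and with \(\epsilon/M\) for \(K=\lceil 2.5\ln[M/\epsilon]\rceil\).

```python
import numpy as np
from scipy.stats import binom

M, eps = 200, 0.01
K = int(np.ceil(2.5 * np.log(M / eps)))            # smallest K with K >= 2.5 ln(M/eps)
pi_K = binom.sf(np.ceil(K / 2) - 1, K, 1 / 8)      # pi(K) = P{Binomial(K, 1/8) >= K/2}
print("K =", K)
print("pi(K) =", pi_K, " exp(-0.4K) =", np.exp(-0.4 * K))
print("M * pi(K) =", M * pi_K, " eps =", eps)      # M * pi(K) should not exceed eps
```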
### Proof of Proposition 2.2
\(1^{o}\).Let \(\ell\leq L\) and \(k\leq K\) be fixed, let \(H=H_{\ell}\in{\bf R}^{m\times\nu}\) be a candidate contrast matrix, and let \(\lambda,\mu,\kappa,\varkappa\) be a feasible solution to (15). One has
\[{\bf E}_{\xi_{k}}\left\{\|R_{\ell}^{1/2}H^{T}\xi_{k}\|_{2}^{2} \right\}={\rm Tr}\left({\bf E}_{\xi_{k}}\left\{R_{\ell}^{1/2}H^{T}\xi_{k}\xi_{ k}^{T}HR_{\ell}^{1/2}\right\}\right)\leq\sigma^{2}{\rm Tr}(HR_{\ell}H^{T})= \sigma^{2}\|HR_{\ell}^{1/2}\|_{\rm Fro}^{2}. \tag{64}\]
Next, for any \(x\in{\cal X}\) fixed we have
\[{\bf E}_{\eta_{k}}\left\{\left\|R_{\ell}^{1/2}H^{T}[\sum\nolimits _{\alpha}[\eta_{k}]_{\alpha}A_{\alpha}]x\right\|_{2}^{2}\right\} ={\bf E}_{\eta_{k}}\left\{\left\|R_{\ell}^{1/2}H^{T}[A_{1}x,...,A_{ q}x]\eta_{k}\right\|_{2}^{2}\right\}=x^{T}\left[\sum\nolimits_{\alpha}A_{ \alpha}^{T}HR_{\ell}H^{T}A_{\alpha}\right]x\] \[=\|[R_{\ell}^{1/2}H^{T}A_{1};...;R_{\ell}^{1/2}H^{T}A_{q}]x\|_{2 }^{2}\leq(\lambda+\phi_{\cal T}(\mu))^{2} \tag{65}\]
where the concluding inequality follows from the constraints in (15) (cf. item \(2^{o}\) of the proof of Proposition 2.1). Next, similarly to item \(1^{o}\) of the proof of Proposition 2.1 we have
\[\|R_{\ell}^{1/2}(B-H^{T}A)x\|_{2}^{2}\leq(\kappa+\phi_{\cal T}( \varkappa))^{2}.\]
Put together, the latter bound along with (64) and (65) imply (17).
\(2^{o}\).By the Chebyshev inequality,
\[\forall\ell,k\quad{\rm Prob}\left\{\|R_{\ell}^{1/2}(w_{\ell}( \omega_{k})-Bx)\|_{2}\geq 2\widetilde{\mathfrak{R}}_{\ell}[H_{\ell}]\right\} \leq\tfrac{1}{4};\]
applying [54, Theorem 3.1] we conclude that
\[\forall\ell\quad{\rm Prob}\left\{\|R_{\ell}^{1/2}(z_{\ell}(\omega ^{K})-Bx)\|_{2}\geq 2C_{\alpha}\widetilde{\mathfrak{R}}_{\ell}[H_{\ell}] \right\}\leq e^{-K\psi(\alpha,\tfrac{1}{4})}\]
where
\[\psi(\alpha,\beta)=(1-\alpha)\ln\frac{1-\alpha}{1-\beta}+\alpha \ln\frac{\alpha}{\beta} \tag{66}\]
and \(C_{\alpha}=\frac{1-\alpha}{\sqrt{1-2\alpha}}\). When choosing \(\alpha=\frac{\sqrt{3}}{2+\sqrt{3}}\) which corresponds to \(C_{\alpha}=2\) we obtain \(\psi(\alpha,\tfrac{1}{4})=0.1070...\) so that for \(\ell\leq L\)
\[{\rm Prob}\left\{\|R_{\ell}^{1/2}(z_{\ell}(\omega^{K})-Bx)\|_{2} \geq 4\widetilde{\mathfrak{R}}_{\ell}[H_{\ell}]\right\}\leq e^{-0.1070K}\]
which is (18).
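The numerical constants used in this step can be reproduced directly (illustrative check of \(C_{\alpha}=2\) and \(\psi(\alpha,\tfrac{1}{4})=0.1070...\) for \(\alpha=\frac{\sqrt{3}}{2+\sqrt{3}}\)):

```python
import numpy as np

alpha = np.sqrt(3) / (2 + np.sqrt(3))
C_alpha = (1 - alpha) / np.sqrt(1 - 2 * alpha)                                 # should equal 2
psi = (1 - alpha) * np.log((1 - alpha) / 0.75) + alpha * np.log(alpha / 0.25)  # psi(alpha, 1/4)
print(C_alpha, psi)                                                            # 2.0... and 0.1070...
```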
\(3^{o}\).Now, let \(K\geq\ln(L/\epsilon)/0.1070\). In this case, for all \(\ell\leq L\)
\[{\rm Prob}\left\{\|R_{\ell}^{1/2}(z_{\ell}(\omega^{K})-Bx)\|_{2} \geq 4\widetilde{\mathfrak{R}}_{\ell}[H_{\ell}]\right\}\leq\epsilon/L,\]
so that with probability \(\geq 1-\epsilon\) the set \({\cal W}(\omega^{K})\) is not empty (it contains \(Bx\)), and for all \(v\in{\cal W}(\omega^{K})\) one has
\[\|R_{\ell}^{1/2}(v-Bx)\|_{2}\leq\|R_{\ell}^{1/2}(z_{\ell}(\omega^{K})-v)\|_{2}+\|R_{\ell}^{1/2}(z_{\ell}(\omega^{K})-Bx)\|_{2}\leq 8\widetilde{\mathfrak{R}}_{\ell}[H_{\ell}].\qed\]
## Appendix C Proofs for Section 3
### Proof of Proposition 3.3
The proof follows that of Proposition 2.3. All we need to prove is that if \(H\) satisfies the premise of the proposition and \(\lambda_{\ell},\Upsilon^{\ell},\upsilon^{\ell},\rho\) is a feasible solution to (48), then the inequality
\[\text{Risk}_{\epsilon}[\widehat{w}_{\text{poly}}^{H}|\mathcal{X}]\leq 2\rho \tag{67}\]
holds. Indeed, let us fix \(x\in\mathcal{X}\) and \(\eta\in\mathcal{U}\). Since the columns of \(H\) satisfy (47), the \(P_{x}\)-probability of the event
\[\mathcal{Z}_{x,\eta}=\{\xi:\|H^{T}[D[\eta]x+\xi]\|_{\infty}\leq 1\}\]
is at least \(1-ML\delta=1-\epsilon\). Let us fix observation \(\omega=Ax+D[\eta]x+\xi\) with \(\xi\in\mathcal{Z}_{x,\eta}\). Then
\[\|H^{T}[\omega-Ax]\|_{\infty}=\|H^{T}[D[\eta]x+\xi]\|_{\infty}\leq 1, \tag{68}\]
implying that the optimal value in the optimization problem \(\min_{u\in\mathcal{X}}\|H^{T}[Au-\omega]\|_{\infty}\) is at most \(1\). Consequently, setting \(\widehat{x}=\widehat{x}^{H}(\omega)\), we have \(\widehat{x}\in\mathcal{X}\) and \(\|H^{T}[A\widehat{x}-\omega]\|_{\infty}\leq 1\), see (22). These observations combine with (68) and the inclusion \(x\in\mathcal{X}\) to imply that for \(z=\frac{1}{2}[x-\widehat{x}]\) we have \(z\in\mathcal{X}\) and \(\|H^{T}Az\|_{\infty}\leq 1\). Recalling what \(\mathcal{X}\) is, we conclude that \(z=Py\) with \(T_{k}^{2}[y]\preceq t_{k}I_{f_{k}},k\leq K\) for some \(t\in\mathcal{T}\) and
\[\|H_{\ell}^{T}APy\|_{\infty}=\|H_{\ell}^{T}Az\|_{\infty}\leq 1,\;\ell\leq L. \tag{69}\]
Now let \(u\in\mathbf{R}^{\nu}\) with \(\|u\|_{2}\leq 1\). Semidefinite constraints in (48) imply that
\[u^{T}R_{\ell}^{1/2}Bz=u^{T}R_{\ell}^{1/2}BPy\leq u^{T}\lambda_{\ell}I_{\nu}u+y^{T}\left[P^{T}A^{T}H_{\ell}\text{Diag}\{\upsilon^{\ell}\}H_{\ell}^{T}AP+\sum\nolimits_{k}T_{k}^{+,*}[\Upsilon_{k}^{\ell}]\right]y\]
\[=\lambda_{\ell}u^{T}u+\sum\nolimits_{j}\upsilon_{j}^{\ell}\underbrace{[H_{\ell}^{T}APy]_{j}^{2}}_{\leq 1\text{ by (69)}}+\sum\nolimits_{k}y^{T}T_{k}^{+,*}[\Upsilon_{k}^{\ell}]y\]
\[\leq\lambda_{\ell}+\sum\nolimits_{j}\upsilon_{j}^{\ell}+\sum\nolimits_{k}\sum\nolimits_{i,j\leq N}y_{i}y_{j}\text{Tr}(\Upsilon_{k}^{\ell}T^{ik}T^{jk})\]
\[=\lambda_{\ell}+\sum\nolimits_{j}\upsilon_{j}^{\ell}+\sum\nolimits_{k}\text{Tr}(\Upsilon_{k}^{\ell}T_{k}^{2}[y])\]
\[\leq\lambda_{\ell}+\sum\nolimits_{j}\upsilon_{j}^{\ell}+\sum\nolimits_{k}t_{k}\text{Tr}(\Upsilon_{k}^{\ell})\quad\text{[due to $\Upsilon_{k}^{\ell}\succeq 0$ and $T_{k}^{2}[y]\preceq t_{k}I_{f_{k}}$]}\]
\[\leq\lambda_{\ell}+\sum\nolimits_{j}\upsilon_{j}^{\ell}+\phi_{\mathcal{T}}(\lambda[\Upsilon^{\ell}])\leq\rho \tag{70}\]
where the concluding inequality follows from the constraints of (48). (70) holds true for all \(u\) with \(\|u\|_{2}\leq 1\), and we conclude that for \(x\in\mathcal{X}\) and \(\eta\in\mathcal{U}\) and \(\xi\in\mathcal{Z}_{x,\eta}\) (recall that the latter inclusion takes place with \(P_{x}\)-probability \(\geq 1-\epsilon\)) we have
\[\|R_{\ell}^{1/2}B[\widehat{x}^{H}(Ax+D[\eta]x+\xi)-x]\|_{2}\leq 2\rho,\;\ell\leq L.\]
Recalling what \(\|\cdot\|\) is, we get
\[\forall(x\in\mathcal{X},\eta\in\mathcal{U}):\text{Prob}_{\xi\sim P_{x}}\{\|B[x-\widehat{x}^{H}(Ax+D[\eta]x+\xi)]\|>2\rho\}\leq\epsilon,\]
that is, \(\text{Risk}_{\epsilon}[\widehat{w}_{\text{poly}}^{H}|\mathcal{X}]\leq 2\rho\). The latter relation holds true whenever \(\rho\) can be extended to a feasible solution to (48), and (67) follows.
### Robust norm of uncertain matrix with structured norm-bounded uncertainty
#### C.2.1 Situation and goal
Let matrices \(A_{s}\in\mathbf{R}^{m\times n}\), \(s\leq S\), and \(L_{t}\in\mathbf{R}^{p_{t}\times m}\), \(R_{t}\in\mathbf{R}^{q_{t}\times n}\), \(t\leq T\), be given. These data specify uncertain \(m\times n\) matrix
\[\mathcal{A}=\{A=\sum\nolimits_{s}\!\delta_{s}A_{s}+\sum\nolimits_{t}\!L_{t}^{T }\Delta_{t}R_{t}:|\delta_{s}|\leq 1\,\forall s\leq S,\|\Delta_{t}\|_{2,2}\leq 1\, \forall t\leq T\}. \tag{71}\]
Given ellitopes
\[\begin{array}{rcl}\mathcal{X}&=&\{Py:y\in\mathcal{Y}\}\subset\mathbf{R}^{n},\,\mathcal{Y}=\{y\in\mathbf{R}^{N}:\exists t\in\mathcal{T}:y^{T}T_{k}y\leq t_{k},k\leq K\},\\ \mathcal{B}_{*}&=&\{Qz:z\in\mathcal{Z}\}\subset\mathbf{R}^{m},\,\mathcal{Z}=\{z\in\mathbf{R}^{M}:\exists s\in\mathcal{S}:z^{T}S_{\ell}z\leq s_{\ell},\,\ell\leq L\},\end{array} \tag{72}\]
we want to upper-bound the robust norm
\[\|\mathcal{A}\|_{\mathcal{X},\mathcal{B}}=\max_{A\in\mathcal{A}}\|A\|_{ \mathcal{X},\mathcal{B}},\]
of uncertain matrix \(\mathcal{A}\) induced by the norm \(\|\cdot\|_{\mathcal{X}}\) with the unit ball \(\mathcal{X}\) in the argument space and the norm \(\|\cdot\|_{\mathcal{B}}\) with the unit ball \(\mathcal{B}\) which is the polar of \(\mathcal{B}_{*}\) in the image space.
#### C.2.2 Main result
**Proposition C.1**: _Given uncertain matrix (71) and ellitopes (72), consider convex optimization problem_
\[\mathrm{Opt}=\min_{\begin{subarray}{c}\mu,v,\lambda,\\ U_{s},V_{s},U^{t},V^{t}\end{subarray}}\tfrac{1}{2}[\phi_{\mathcal{S}}(\mu)+ \phi_{\mathcal{T}}(v)]\] \[\mathrm{subject\ to}\] \[\mu\geq 0,\,\upsilon\geq 0,\,\lambda\geq 0\] \[\left[\begin{array}{c|c}U_{s}&-Q^{T}A_{s}P\\ \hline-P^{T}A_{s}^{T}Q&V_{s}\end{array}\right]\succeq 0 \tag{73a}\] \[\left[\begin{array}{c|c}U^{t}&-Q^{T}L_{t}^{T}\\ \hline-L_{t}Q&\lambda_{t}I_{p_{t}}\end{array}\right]\succeq 0,\;V^{t}-\lambda_{t}P^{T}R_{t}^ {T}R_{t}P\succeq 0\] (73b) \[\sum\nolimits_{\ell}\!\mu_{\ell}S_{\ell}-\sum\nolimits_{s}\!U_{s} -\sum\nolimits_{t}\!U^{t}\succeq 0\] (73c) \[\sum\nolimits_{k}\!v_{k}T_{k}-\sum\nolimits_{s}\!V_{s}-\sum \nolimits_{t}\!V^{t}\succeq 0 \tag{73d}\]
_The problem is strictly feasible and solvable, and_
\[\|\mathcal{A}\|_{\mathcal{X},\mathcal{B}}\leq\mathrm{Opt}\leq\varkappa(K) \varkappa(L)\max\left[\vartheta(2\kappa),\pi/2\right]\|\mathcal{A}\|_{\mathcal{ X},\mathcal{B}} \tag{74}\]
_where_
* _the function_ \(\vartheta(k)\) _of nonnegative integer_ \(k\) _is given by_ \(\vartheta(0)=0\) _and_ \[\vartheta(k)=\left[\min_{\alpha}\left\{(2\pi)^{-k/2}\int|\alpha_{1}u_{1}^{2}+...+\alpha_{k}u_{k}^{2}|\mathrm{e}^{-u^{T}u/2}du,\,\alpha\in\mathbf{R}^{k},\| \alpha\|_{1}=1\right\}\right]^{-1},\;\;k\geq 1;\] (75)
* \(\kappa=\max_{s\leq S}\mathrm{Rank}(A_{s})\) _when_ \(S\geq 1\)_, otherwise_ \(\kappa=0\)_;_
* \(\varkappa(\cdot)\) _is given by_ \[\varkappa(J)=\left\{\begin{array}{ll}1,&J=1,\\ \frac{5}{2}\sqrt{\ln(2J)},&J>1.\end{array}\right.\] (76)
Remarks. The rationale behind (73) is as follows. Checking that the \({\cal X},{\cal B}\)-norm of the uncertain \(m\times n\) matrix (71) is \(\leq a\in{\bf R}\) is the same as verifying that for all \(\delta_{s}\in[-1,1],\ \Delta_{t}:\|\Delta_{t}\|_{2,2}\leq 1\)
\[\sum\nolimits_{s}\delta_{s}u^{T}A_{s}v+\sum\nolimits_{t}u^{T}L_{t}^{T}\Delta_ {t}R_{t}v\leq a\|u\|_{{\cal B}_{*}}\|v\|_{{\cal X}}\quad\forall(u\in{\bf R}^{m },v\in{\bf R}^{n}),\]
or, which is the same due to what \({\cal B}_{*}\) and \({\cal X}\) are, that for all \(\delta_{s}\in[-1,1],\Delta_{t}:\|\Delta_{t}\|_{2,2}\leq 1\)
\[\sum\nolimits_{s}\delta_{s}z^{T}Q^{T}A_{s}Py+\sum\nolimits_{t}z^{T}Q^{T}L_{t }^{T}\Delta_{t}R_{t}Py\leq a\|z\|_{\mathcal{Z}}\|y\|_{{\cal Y}}\quad\forall(z \in{\bf R}^{M},y\in{\bf R}^{N}). \tag{77}\]
A simple certificate for (77) is a collection of positive semidefinite matrices \(U_{s},V_{s},U^{t},V^{t},U,V\) such that for all \(z\in{\bf R}^{M}\), \(y\in{\bf R}^{N}\) and all \(s\leq S\), \(t\leq T\) it holds
\[2z^{T}[Q^{T}A_{s}P]y \leq z^{T}U_{s}z+y^{T}V_{s}y, \tag{78a}\] \[2z^{T}Q^{T}L_{t}^{T}\Delta_{t}R_{t}Py \leq z^{T}U^{t}z+y^{T}V^{t}y\ \ \forall(\Delta_{t}:\|\Delta_{t}\|_{2,2}\leq 1),\] (78b) \[\sum\nolimits_{s}U_{s}+\sum\nolimits_{t}U^{t} \preceq U,\] (78c) \[\sum\nolimits_{s}V_{s}+\sum\nolimits_{t}V^{t} \preceq V,\] (78d) \[\max_{z\in{\cal Z}}z^{T}Uz+\max_{y\in{\cal Y}}y^{T}Vy \leq 2a. \tag{78e}\]
Now, (78a) clearly is the same as (73a). It is known (this fact originates from [7]) that (78b) is the same as existence of \(\lambda_{t}\geq 0\) such that (73b) holds. Finally, existence of \(\mu\geq 0\) such that \(\sum\nolimits_{\ell}\mu_{\ell}S_{\ell}\succeq U\) and \(v\geq 0\) such that \(\sum\nolimits_{k}v_{k}T_{k}\succeq V\) (see (73c) and (73d)) implies due to the structure of \({\cal Z}\) and \({\cal Y}\) that \(\max_{z\in{\cal Z}}z^{T}Uz\leq\phi_{{\cal S}}(\mu)\) and \(\max_{y\in{\cal Y}}y^{T}Vy\leq\phi_{{\cal T}}(v)\). The bottom line is that a feasible solution to (73) implies the existence of a certificate
\[\left\{U_{s},U^{t},V_{s},V^{t},s\leq S,t\leq T,U=\sum\nolimits_{\ell}\mu_{ \ell}S_{\ell},V=\sum\nolimits_{k}v_{k}T_{k}\right\}\]
for relation (77) with \(a=\frac{1}{2}[\phi_{{\cal S}}(\mu)+\phi_{{\cal T}}(v)]\).
**Proof of Proposition C.1. 1\({}^{o}\)**. Strict feasibility and solvability of the problem are immediate consequences of \(\sum\nolimits_{\ell}S_{\ell}\succ 0\) and \(\sum\nolimits_{k}T_{k}\succ 0\).
Let us prove the first inequality in (74). All we need to show is that if
[a] \(\mu,v,\lambda,U_{s},V_{s},U^{t},V^{t}\) is feasible for (73),
[b] \(x=Py\) with \(y^{T}T_{k}y\leq\tau_{k}\), \(k\leq K\), for some \(\tau\in{\cal T}\) and \(u=Qz\) for some \(z\) such that \(z^{T}S_{\ell}z\leq\varsigma_{\ell}\), \(\ell\leq L\), for some \(\varsigma\in{\cal S}\), and
[c] \(\delta_{s}\), \(\Delta_{t}\) satisfy \(|\delta_{s}|\leq 1\), \(\|\Delta_{t}\|_{2,2}\leq 1\),
then \(\gamma:=u^{T}[\sum\nolimits_{s}\delta_{s}A_{s}+\sum\nolimits_{t}L_{t}^{T} \Delta_{t}R_{t}]x\leq\frac{1}{2}[\phi_{{\cal S}}(\mu)+\phi_{{\cal T}}(v)]\). Assuming [a-c], we have
\[\gamma=\sum\nolimits_{s}\delta_{s}z^{T}Q^{T}A_{s}Py+\sum\nolimits_{t}z^{T}Q^{T}L_{t}^{T}\underbrace{\Delta_{t}R_{t}Py}_{\zeta_{t}}\] \[\leq\tfrac{1}{2}z^{T}\left[\sum\nolimits_{s}U_{s}\right]z+\tfrac{1}{2}y^{T}\left[\sum\nolimits_{s}V_{s}\right]y+\sum\nolimits_{t}\|L_{t}Qz\|_{2}\|\zeta_{t}\|_{2}\quad\mbox{ [by (73a) and due to $|\delta_{s}|\leq 1$]}\] \[\leq\tfrac{1}{2}z^{T}\left[\sum\nolimits_{s}U_{s}\right]z+\tfrac{1}{2}y^{T}\left[\sum\nolimits_{s}V_{s}\right]y+\sum\limits_{t}\sqrt{(\lambda_{t}z^{T}U^{t}z)(y^{T}P^{T}R_{t}^{T}R_{t}Py)}\] \[\quad\mbox{[due to the first LMI in (73b) and $\|\Delta_{t}\|_{2,2}\leq 1$]}\] \[=\tfrac{1}{2}z^{T}\left[\sum\nolimits_{s}U_{s}\right]z+\tfrac{1}{2}y^{T}\left[\sum\nolimits_{s}V_{s}\right]y+\sum\nolimits_{t}\sqrt{(z^{T}U^{t}z)(\lambda_{t}y^{T}P^{T}R_{t}^{T}R_{t}Py)}.\]
Thus, by the second inequality of (73b),
\[\gamma\leq\tfrac{1}{2}z^{T}\left[\sum\nolimits_{s}U_{s}\right]z+\tfrac{1}{2}y^{T}\left[\sum\nolimits_{s}V_{s}\right]y+\sum\nolimits_{t}\sqrt{(z^{T}U^{t}z)(y^{T}V^{t}y)}\] \[\leq\tfrac{1}{2}z^{T}\left[\sum\nolimits_{s}U_{s}\right]z+\tfrac{1}{2}y^{T}\left[\sum\nolimits_{s}V_{s}\right]y+\tfrac{1}{2}\sum\nolimits_{t}[z^{T}U^{t}z+y^{T}V^{t}y]\] \[=\tfrac{1}{2}\left[z^{T}\left[\sum\nolimits_{s}U_{s}+\sum\nolimits_{t}U^{t}\right]z+y^{T}\left[\sum\nolimits_{s}V_{s}+\sum\nolimits_{t}V^{t}\right]y\right]\] \[\leq\tfrac{1}{2}\left[\sum\nolimits_{\ell}\mu_{\ell}z^{T}S_{\ell}z+\sum\nolimits_{k}v_{k}y^{T}T_{k}y\right]\quad\text{[by (73c) and (73d)]}\] \[\leq\tfrac{1}{2}\left[\sum\nolimits_{\ell}\mu_{\ell}\varsigma_{\ell}+\sum\nolimits_{k}v_{k}\tau_{k}\right]\leq\tfrac{1}{2}\left[\phi_{\mathcal{S}}(\mu)+\phi_{\mathcal{T}}(v)\right],\]
as required.

2\({}^{o}\). To prove the second inequality in (74), we pass to the problem dual to (73). Since (73) is strictly feasible and solvable, conic duality yields
\[2{\rm Opt}=\max_{\overline{\sigma},\overline{\delta},\overline{\delta}, \overline{t},\overline{\tau},\overline{\delta},\overline{\lambda},\overline{ \overline{S}},\overline{T}\atop\overline{\sigma}_{s},\overline{\sigma}_{s}, \overline{A}_{s},\overline{U}^{t},\overline{L}_{t},\overline{L}_{t},\overline {V}^{t}}\ 2{\sum}_{s}{\rm Tr}(Q^{T}A_{s}P\overline{A}_{s}^{T})+2{\sum}_{t}{\rm Tr}(Q^{T }L_{t}^{T}\overline{L}_{t})\] (D) subject to \[[\overline{g};\overline{\alpha}]\in{\bf T},\,[\overline{h}; \overline{\beta}]\in{\bf S},\,\overline{\mu}\geq 0,\overline{v}\geq 0,\, \overline{\lambda}\geq 0,\,\overline{V}^{t}\succeq 0,\overline{S}\succeq 0, \overline{T}\succeq 0\] \[\left[\begin{array}{c|c}\overline{U}_{s}&\overline{A}_{s}\\ \overline{A}_{s}^{T}&\overline{V}_{s}\end{array}\right]\succeq 0,\,\left[ \begin{array}{c|c}\overline{U}^{t}&\overline{L}_{t}^{T}\\ \overline{L}_{t}&\overline{\Lambda}_{t}\end{array}\right]\succeq 0\] \[\overline{\alpha}=1,\,[\overline{g};\overline{\alpha}]\in{\bf S},\, \overline{\beta}=1,\,[\overline{h};\overline{\beta}]\in{\bf T},\,-\overline{ g}_{\ell}+{\rm Tr}(\overline{S}S_{\ell})+\overline{\mu}_{\ell}=0,\,-\overline{h}_{k}+{\rm Tr }(\overline{T}T_{k})+\overline{v}_{k}=0\] \[{\rm Tr}(\overline{\Lambda}_{t})-{\rm Tr}(\overline{V}_{t}P^{T}R ^{T}R_{t}P)+\overline{\lambda}_{t}=0\] \[\overline{U}_{s}=\overline{S},\,\overline{U}_{t}=\overline{S},\, \overline{V}_{s}=\overline{T},\,\overline{V}^{t}=\overline{T}\]
(here and in what follows the constraints should be satisfied for all values of "free indices" \(s\leq S\), \(t\leq T\), \(\ell\leq L\), \(k\leq K\)). Taking into account that the relation \(\left[\begin{array}{c|c}X&Y\\ Y^{T}&Z\end{array}\right]\succeq 0\) is equivalent to \(X\succeq 0,Z\succeq 0\), and \(Y=X^{1/2}\Delta Z^{1/2}\) with \(\|\Delta\|_{2,2}\leq 1\), and that \([\overline{g};1]\in{\bf S}\), \([\overline{h};1]\in{\bf T}\) is the same as \(\overline{g}\in{\cal S},\,\overline{h}\in{\cal T}\), \((D)\) boils down to
\[{\rm Opt}=\max_{\overline{g},\overline{h},\overline{S},\overline{T}\atop\overline{\Delta}_{s},\overline{\delta}_{t},\overline{\Lambda}_{t}}\left\{{\sum}_{s}{\rm Tr}(Q^{T}A_{s}P\overline{A}_{s}^{T})+{\sum}_{t}{\rm Tr}(Q^{T}L_{t}^{T}\overline{L}_{t}):\right.\] \[\left.\begin{array}{c}\overline{g}\in{\cal T},\,\overline{h}\in{\cal S},\,\overline{S}\succeq 0,\,\overline{T}\succeq 0,\,{\rm Tr}(\overline{S}S_{\ell})\leq\overline{g}_{\ell},\,{\rm Tr}(\overline{T}T_{k})\leq\overline{h}_{k}\\ \overline{A}_{s}=\overline{S}^{1/2}\overline{\Delta}_{s}\overline{T}^{1/2},\,\|\overline{\Delta}_{s}\|_{2,2}\leq 1,\,\overline{L}_{t}^{T}=\overline{S}^{1/2}\overline{\delta}_{t}\overline{\Lambda}_{t}^{1/2},\,\|\overline{\delta}_{t}\|_{2,2}\leq 1\\ {\rm Tr}(\overline{\Lambda}_{t})\leq{\rm Tr}(\overline{T}^{1/2}P^{T}R_{t}^{T}R_{t}P\overline{T}^{1/2})\end{array}\right\}\]
or, which is the same,
\[{\rm Opt}=\max_{\overline{g},\overline{h},\overline{S},\overline{T}\atop \overline{\Delta}_{s},\overline{\delta}_{t},\overline{\Lambda}_{t},\overline{L}_ {t}}\left\{{\sum}_{s}{\rm Tr}(\overline{S}^{1/2}Q^{T}A_{s}P\overline{T}^{1/2} \overline{\Delta}_{s}^{T})+2{\sum}_{t}{\rm Tr}(\overline{S}^{1/2}Q^{T}L_{t}^{T }\overline{\Lambda}_{t}^{1/2}\overline{\delta}_{t}^{T}):\right.\] (D \[{}^{\prime}\] \[\left.\begin{array}{c}\overline{g}\in{\cal T},\,\overline{h} \in{\cal S},\,\overline{S}\succeq 0,\,\overline{T}\succeq 0,\,{\rm Tr}( \overline{S}S_{\ell})\leq\overline{g}_{\ell},\,{\rm Tr}(\overline{T}T_{k})\leq \overline{h}_{k}\\ \|\overline{\Delta}_{s}\|_{2,2}\leq 1,\,\|\overline{\delta}_{t}\|_{2,2}\leq 1\\ \left.\begin{array}{c}\mathrm{Tr}(\overline{\Lambda}_{t})\leq{\rm Tr}( \overline{T}^{1/2}P^{T}R_{t}^{T}R_{t}P\overline{T}^{1/2}),\,\overline{\Lambda}_ {t}\succeq 0\end{array}\right\}\end{array}\right\}\]
Note that for \(\Delta\) and \(\delta\) such that \(\|\Delta\|_{2,2}\leq 1\) and \(\|\delta\|_{2,2}\leq 1\) one has
\[{\rm Tr}(A\Delta)\leq\|A\|_{\mbox{\tiny nuc}}=\|\lambda({\cal L}[A])\|_{1},\,\,{\cal L}[A]=\left[\begin{array}{c|c}0&\frac{1}{2}A\\ \hline\frac{1}{2}A^{T}&0\end{array}\right]\]
and
\[Tr(AB^{T}\delta)=\langle A,\delta^{T}B\rangle_{\rm Fro}\leq\|A\|_{\rm Fro}\| \delta^{T}B\|_{\rm Fro}\leq\|A\|_{\rm Fro}\|B\|_{\rm Fro}\]
(here \(\|A\|_{\mbox{\tiny nuc}}\) stands for the nuclear norm and \(\lambda(A)\) for the vector of eigenvalues of a symmetric matrix \(A\)). Consequently, for a feasible solution to (D\({}^{\prime}\)) it holds
\[{\rm Tr}(\overline{S}^{1/2}Q^{T}A_{s}P\overline{T}^{1/2}\overline{\Delta}_{s}^ {T})\leq\|\lambda({\cal L}[\overline{S}^{1/2}Q^{T}A_{s}P\overline{T}^{1/2}])\|_{1},\]
\[\mathrm{Tr}(\overline{S}^{1/2}Q^{T}L_{t}^{T}\overline{\Lambda}_{t}^{1/2}\overline{ \delta}_{t}^{T})\leq\|\overline{S}^{1/2}Q^{T}L_{t}^{T}\|_{\mathrm{Fro}}\| \overline{\Lambda}_{t}^{1/2}\|_{\mathrm{Fro}}.\]
The latter bound combines with the last constraint in (D\({}^{\prime}\)) to imply that
\[\mathrm{Tr}(\overline{S}^{1/2}Q^{T}L_{t}^{T}\overline{\Lambda}_{t}^{1/2} \overline{\delta}_{t}^{T})\leq\|\overline{S}^{1/2}Q^{T}L_{t}^{T}\|_{\mathrm{ Fro}}\|\overline{T}^{1/2}P^{T}R_{t}^{T}\|_{\mathrm{Fro}},\]
and we conclude that
\[\mathrm{Opt}\leq\max_{\overline{S},\overline{g},\overline{T}, \overline{h}}\left\{\sum\nolimits_{s}\left\|\lambda(\mathcal{L}[\overline{S}^{ 1/2}Q^{T}A_{s}P\overline{T}^{1/2}])\right\|_{1}+\sum\nolimits_{t}\left\| \overline{S}^{1/2}Q^{T}L_{t}^{T}\|_{\mathrm{Fro}}\|\overline{T}^{1/2}P^{T}R_{ t}^{T}\right\|_{\mathrm{Fro}}: \tag{81}\] \[\begin{array}{c}\overline{S}\succeq 0,\overline{g}\in\mathcal{S}, \,\mathrm{Tr}(\overline{S}S_{\ell})\leq\overline{g}_{\ell},\,\ell\leq L\\ \overline{T}\succeq 0,\overline{h}\in\mathcal{T},\,\mathrm{Tr}(\overline{T}T_{k}) \leq\overline{h}_{k},\,k\leq K\end{array}\right\}\]
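For completeness, here is a short verification of the nuclear-norm bound used above (a standard fact, added for the reader's convenience): writing the singular value decomposition \(A=\sum_{i}\sigma_{i}(A)u_{i}v_{i}^{T}\), for \(\|\Delta\|_{2,2}\leq 1\) one has

\[\mathrm{Tr}(A\Delta)=\sum_{i}\sigma_{i}(A)\,v_{i}^{T}\Delta u_{i}\leq\sum_{i}\sigma_{i}(A)=\|A\|_{\mbox{\tiny nuc}},\]

and the eigenvalues of \(\mathcal{L}[A]\) are \(\pm\sigma_{i}(A)/2\) together with zeros, so that \(\|\lambda(\mathcal{L}[A])\|_{1}=\sum_{i}\sigma_{i}(A)=\|A\|_{\mbox{\tiny nuc}}\).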
\(\mathbf{4}^{o}\).We need the following result:
**Lemma C.1**: _[_3_, Lemma 2.3]_ _(cf. also [2, Lemma 3.4.3]) If the ranks of all matrices \(A_{s}\) (and thus--matrices \(\overline{S}^{1/2}Q^{T}A_{s}P\overline{T}^{1/2}\)) do not exceed a given \(\kappa\geq 1\), then for \(\omega\sim\mathcal{N}(0,I_{M+N})\) one has_
\[\mathbf{E}\left\{|\omega^{T}\mathcal{L}[\overline{S}^{1/2}Q^{T}A_{s}P\overline {T}^{1/2}]\omega|\right\}\geq\|\lambda(\mathcal{L}[\overline{S}^{1/2}Q^{T}A_{s} P\overline{T}^{1/2}])\|_{1}/\vartheta(2\kappa),\]
_with \(\vartheta(\cdot)\) as described in Proposition C.1._
Our next result is as follows (cf. [1, Proposition B.4.12])
**Lemma C.2**: _Let \(A\in\mathbf{R}^{p\times q}\), \(B\in\mathbf{R}^{r\times q}\) and \(\xi\sim\mathcal{N}(0,I_{q})\). Then_

\[\mathbf{E}_{\xi}\left\{\|A\xi\|_{2}\|B\xi\|_{2}\right\}\geq\frac{2}{\pi}\|A\|_{\mathrm{Fro}}\|B\|_{\mathrm{Fro}}.\]
**Proof.** Setting \(A^{T}A=U\mathrm{Diag}\{\lambda\}U^{T}\) with orthogonal \(U\) and \(\zeta=U^{T}\xi\), we have
\[\mathbf{E}\left\{\|A\xi\|_{2}\|B\xi\|_{2}\right\}=\mathbf{E}\left\{\sqrt{ \sum\nolimits_{i=1}^{q}\lambda_{i}[U^{T}\xi]_{i}^{2}}\|B\xi\|_{2}\right\}.\]
The right hand side is concave in \(\lambda\), so that the infimum of this function in \(\lambda\) varying in the simplex \(\sum\nolimits_{i}\lambda_{i}=\mathrm{Tr}(A^{T}A)\) is attained at an extreme point. In other words, there exists vector \(a\in\mathbf{R}^{q}\) with \(a^{T}a=\|A\|_{\mathrm{Fro}}^{2}\) such that
\[\mathbf{E}\left\{\|A\xi\|_{2}\|B\xi\|_{2}\right\}\geq\mathbf{E}_{\xi}\left\{| a^{T}\xi|\,\|B\xi\|_{2}\right\}.\]
Applying the same argument to \(\|B\xi\|_{2}\)-factor, we can now find a vector \(b\in\mathbf{R}^{q}\), \(b^{T}b=\|B\|_{\mathrm{Fro}}^{2}\), such that
\[\mathbf{E}_{\xi}\left\{|a^{T}\xi|\,\|B\xi\|_{2}\right\}\geq\mathbf{E}_{\xi} \left\{|a^{T}\xi|\,|b^{T}\xi|\right\}.\]
It suffices to prove that the concluding quantity is \(\geq 2\|a\|_{2}\|b\|_{2}/\pi\). By homogeneity, this is the same as to prove that if \([s;t]\sim\mathcal{N}(0,I_{2})\), then \(\mathbf{E}\{|t|\,|\cos(\phi)t+\sin(\phi)s|\}\geq\frac{2}{\pi}\) for all \(\phi\in[0,2\pi)\), which is straightforward (for the justification, see the proof of Proposition 2.3 of [4]). \(\Box\)
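As a sanity check of the constant (an added illustration, not part of the original argument), the two extreme angles give

\[\phi=0:\ \mathbf{E}\{t^{2}\}=1\geq\tfrac{2}{\pi},\qquad\phi=\tfrac{\pi}{2}:\ \mathbf{E}\{|t|\,|s|\}=\big(\mathbf{E}\{|t|\}\big)^{2}=\tfrac{2}{\pi},\]

so the bound \(2/\pi\) is attained at \(\phi=\pi/2\) and cannot be improved.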
The last building block is the following
**Lemma C.3**: _[_33_, Lemma 6]_ _Let_
\[\mathcal{V}=\{v\in\mathbf{R}^{d}:\exists r\in\mathcal{R}:v^{T}R_{j}v\leq r_{j},1 \leq j\leq J\}\subset\mathbf{R}^{d}\]
_be a basic ellitope, \(W\succeq 0\) be a symmetric \(d\times d\) matrix such that_
\[\exists r\in\mathcal{R}:\mathrm{Tr}(WR_{j})\leq r_{j},j\leq J,\]
_and \(\omega\sim\mathcal{N}(0,W)\). Denoting by \(\rho(\cdot)\) the norm on \(\mathbf{R}^{d}\) with the unit ball \(\mathcal{V}\), we have_
\[\mathbf{E}\{\rho(\omega)\}\leq\varkappa(J).\]
_with \(\varkappa(\cdot)\) given by (76)._
\(5^{o}\). Now we can complete the proof of the second inequality in (74). Let \(\kappa\geq 1\), and let \(\overline{g},\overline{S},\overline{h},\overline{T}\) be feasible for the optimization problem in (81). Denoting by \(\|\cdot\|_{\mathcal{Q}}\) the norm with the unit ball \(\mathcal{Q}\), for all \(A\in\mathbf{R}^{m\times n}\), \(u\in\mathbf{R}^{m}\), and \(v\in\mathbf{R}^{n}\) we have
\[u^{T}Av\leq\|u\|_{\mathcal{B}_{*}}\|Av\|_{\mathcal{B}}\leq\|u\|_{\mathcal{B}_ {*}}\|A\|_{\mathcal{X},\mathcal{B}}\|v\|_{\mathcal{X}},\]
so that for all \(u\in\mathbf{R}^{m}\) and \(v\in\mathbf{R}^{n}\)
\[\|u\|_{\mathcal{B}_{*}}\|v\|_{\mathcal{X}}\|\mathcal{A}\|_{\mathcal{X},\mathcal{B}} \geq\max_{\begin{subarray}{c}\epsilon_{s}:|\epsilon_{s}|\leq 1\\ \delta_{t}:\|\delta_{t}\|_{2,2}\leq 1\end{subarray}}\left[\sum\nolimits_{s}\epsilon_{s}u^{T}A_{s}v+\sum\nolimits_{t}u^{T}L_{t}^{T}\delta_{t}R_{t}v\right]\] \[=\sum\nolimits_{s}\lvert u^{T}A_{s}v\rvert+\sum\nolimits_{t}\|L_{t}u\|_{2}\|R_{t}v\|_{2}.\]
Thus, for all \(\overline{g},\overline{S},\overline{h},\overline{T}\) which are feasible for (81) and \(\xi\in\mathbf{R}^{M}\), \(\eta\in\mathbf{R}^{N}\),
\[\|\overline{S}^{1/2}\xi\|_{\mathcal{Z}}\|\overline{T}^{1/2}\eta \|_{\mathcal{Y}}\|\mathcal{A}\|_{\mathcal{X},\mathcal{B}} \geq\|Q\overline{S}^{1/2}\xi\|_{\mathcal{B}_{*}}\|P\overline{T}^{1/ 2}\eta\|_{\mathcal{X}}\|\mathcal{A}\|_{\mathcal{X},\mathcal{B}}\ \mathrm{[due to}\ \mathcal{B}_{*}=Q \mathcal{Z},\mathcal{X}=P\mathcal{Y}]\] \[\geq\sum\nolimits_{s}\lvert\xi^{T}\overline{S}^{1/2}Q^{T}A_{s}P \overline{T}^{1/2}\eta\rvert+\sum\nolimits_{t}\|L_{t}Q\overline{S}^{1/2}\xi\| _{2}\|R_{t}P\overline{T}^{1/2}\eta\|_{2}\] \[=\sum\nolimits_{s}\lvert[\xi;\eta]^{T}\mathcal{L}[\overline{S}^{1 /2}Q^{T}A_{s}P\overline{T}^{1/2}][\xi;\eta]\rvert\] \[+\sum\nolimits_{t}\lVert[L_{t}Q\overline{S}^{1/2},0_{p_{t}\times N }][\xi;\eta]\rVert_{2}\lVert[0_{q\times M},R_{t}P\overline{T}^{1/2}][\xi;\eta ]\rVert_{2}. \tag{82}\]
As a result, for \([\xi;\eta]\sim\mathcal{N}(0,I_{M+N})\), applying the bounds of Lemmas C.1 and C.2,
\[\mathbf{E}\left\{\left\lVert\overline{S}^{1/2}\xi\right\rVert_{ \mathcal{Z}}\right\}\mathbf{E}\left\{\left\lVert\overline{T}^{1/2}\eta\right\rVert _{\mathcal{Y}}\right\}\|\mathcal{A}\|_{\mathcal{X},\mathcal{B}}=\mathbf{E} \left\{\left\lVert\overline{S}^{1/2}\xi\right\rVert_{\mathcal{Z}}\|\overline {T}^{1/2}\eta\|_{\mathcal{Y}}\|\mathcal{A}\|_{\mathcal{X},\mathcal{B}}\right\}\] \[\geq\sum\nolimits_{s}\mathbf{E}\left\{\left\lvert[\xi;\eta]^{T} \mathcal{L}[\overline{S}^{1/2}Q^{T}A_{s}P\overline{T}^{1/2}][\xi;\eta]\right\rvert\right\}\] \[\quad+\sum\nolimits_{t}\mathbf{E}\left\{\left\lVert[L_{t}Q \overline{S}^{1/2},0_{p_{t}\times N}][\xi;\eta]\right\rVert_{2}\left\lVert[0 _{q_{t}\times M},R_{t}P\overline{T}^{1/2}][\xi;\eta]\right\rVert_{2}\right\}\] \[\geq\vartheta(2\kappa)^{-1}\sum\limits_{s}\left\lVert\lambda \left(\mathcal{L}[\overline{S}^{1/2}Q^{T}A_{s}P\overline{T}^{1/2}]\right) \right\rVert_{1}+\tfrac{2}{\pi}\sum\limits_{t}\|L_{t}Q\overline{S}^{1/2}\|_{ \mathrm{Fro}}\|R_{t}P\overline{T}^{1/2}\|_{\mathrm{Fro}}.\]
Besides this, by Lemma C.3 we have
\[\mathbf{E}\left\{\left\lVert\overline{S}^{1/2}\xi\right\rVert_{\mathcal{Z}} \right\}\leq\varkappa(L),\ \ \mathbf{E}\left\{\|\overline{T}^{1/2}\eta\|_{\mathcal{Y}}\right\}\leq\varkappa(K)\]
due to the fact that \(\overline{g},\overline{S},\overline{h}\) and \(\overline{T}\) are feasible for (81). This combines with (82) to imply that the value \(\varkappa(L)\varkappa(K)\|\mathcal{A}\|_{\mathcal{X},\mathcal{B}}\) is lower bounded by the quantity

\[\frac{1}{\max[\vartheta(2\kappa),\pi/2]}\left[\sum\nolimits_{s}\left\|\lambda\left(\mathcal{L}[\overline{S}^{1/2}Q^{T}A_{s}P\overline{T}^{1/2}]\right)\right\|_{1}+\sum\nolimits_{t}\|L_{t}Q\overline{S}^{1/2}\|_{\mathrm{Fro}}\|R_{t}P\overline{T}^{1/2}\|_{\mathrm{Fro}}\right].\]

Invoking the inequality in (81), we arrive at the second inequality in (74). The above reasoning assumed that \(\kappa\geq 1\); with evident simplifications, it is applicable to the case of \(\kappa=0\) as well. \(\Box\)
### Proof of Proposition 3.1
We put \(S=q_{\mathrm{s}}\) and \(T=q-q_{\mathrm{s}}\). In the situation of Proposition 3.1 we want to tightly upper-bound quantity
\[\mathfrak{s}(H) = \max_{x\in\mathcal{X},\eta\in\mathcal{U}}\left\|H^{T}D[\eta]x\right\|\] \[= \max_{\ell\leq L}\max_{x\in\mathcal{X},\eta\in\mathcal{U}}\left\{ \sqrt{[H^{T}D[\eta]x]^{T}R_{\ell}[H^{T}D[\eta]x]}\right\}\] \[= \max_{\ell\leq L}\|\mathcal{A}_{\ell}[H]\|_{\mathcal{X},2},\]
where \(\|\cdot\|_{\mathcal{X},2}\) is the operator norm induced by \(\|\cdot\|_{\mathcal{X}}\) on the argument and \(\|\cdot\|_{2}\) on the image space and the uncertain matrix \(\mathcal{A}_{\ell}[H]\) is given by
\[\mathcal{A}_{\ell}=\left\{\sum_{s=1}^{S}\delta_{s}\underbrace{R_{\ell}^{1/2}H^{T}P_{s}^{T}Q_{s}}_{=:A_{s\ell}[H]}+\sum_{t=1}^{T}\underbrace{R_{\ell}^{1/2}H^{T}P_{S+t}^{T}}_{=:L_{t\ell}^{T}[H]}\Delta_{t}\underbrace{Q_{S+t}}_{=:R_{t}}:\right.\] \[\left.\begin{array}{ccc}|\delta_{s}|\leq 1&,1\leq s\leq S\\ \|\Delta_{t}\|_{2,2}\leq 1&,1\leq t\leq T\end{array}\right\}\]
It follows that
\[\mathfrak{s}(H)=\max_{\ell\leq L}\|\mathcal{A}_{\ell}[H]\|_{\mathcal{X},2},\]
and Proposition C.1 provides us with the efficiently computable convex in \(H\) upper bound \(\overline{\mathfrak{s}}(H)\) on \(\mathfrak{s}(H)\):
\[\overline{\mathfrak{s}}(H) =\max_{\ell\leq L}\mathrm{Opt}_{\ell}(H),\] \[\mathrm{Opt}_{\ell}(H) =\min_{\mu,v,\lambda,U_{s},V_{s},U^{t},V^{t}}\left\{\tfrac{1}{2}[\mu+\phi_{\mathcal{T}}(v)]:\ \mu\geq 0,\,v\geq 0,\,\lambda\geq 0,\right.\] \[\left[\begin{array}{c|c}U_{s}&-A_{s\ell}[H]P\\ \hline-P^{T}A_{s\ell}^{T}[H]&V_{s}\end{array}\right]\succeq 0,\ \left[\begin{array}{c|c}U^{t}&-L_{t\ell}^{T}[H]\\ \hline-L_{t\ell}[H]&\lambda_{t}I_{p_{S+t}}\end{array}\right]\succeq 0,\] \[V^{t}-\lambda_{t}P^{T}R_{t}^{T}R_{t}P\succeq 0,\ \mu I_{\nu}-\sum_{s}U_{s}-\sum_{t}U^{t}\succeq 0,\] \[\left.\sum_{k}v_{k}T_{k}-\sum_{s}V_{s}-\sum_{t}V^{t}\succeq 0\right\}\]
and tightness factor of this bound does not exceed \(\max[\vartheta(2\kappa),\pi/2]\) where \(\kappa=\max_{\alpha\leq q_{\mathrm{s}}}\min[p_{\alpha},q_{\alpha}]\). \(\Box\)
### Spectratopic version of Proposition C.1
Proposition C.1 admits a "spectratopic version," in which ellipotes \(\mathcal{X}\) and \(\mathcal{B}_{*}\) given by (72) are replaced by the pair of _spectratopes_
\[\mathcal{X} = \{Py:y\in\mathcal{Y}\}\subset\mathbf{R}^{n},\mathcal{Y}=\{y\in \mathbf{R}^{N}\ \&\ \exists t\in\mathcal{T}:\,T_{k}[y]^{2}\preceq t_{k}I_{f_{k}},k\leq K\}, \tag{83a}\] \[T_{k}[y]=\sum_{j=1}^{N}y_{j}T^{jk},\,T^{jk}\in\mathbf{S}^{f_{k}},\sum_{k}T_{k}^{2}[y]\succ 0\ \forall y\neq 0\] \[\mathcal{B}_{*} = \{Qz:z\in\mathcal{Z}\}\subset\mathbf{R}^{m},\,\mathcal{Z}=\{z\in \mathbf{R}^{M}:\exists s\in\mathcal{S}:\,S_{\ell}^{2}[z]\preceq s_{\ell}I_{d_{ \ell}},\,\ell\leq L\},\] (83b) \[S_{\ell}[z]=\sum_{j=1}^{M}z_{j}S^{jk\ell},\,S^{jk\ell}\in\mathbf{S }^{d_{\ell}},\sum_{\ell}S_{\ell}^{2}[z]\succ 0\ \forall z\neq 0\]
The spectratopic version of the statement reads as follows:
**Proposition C.2**: _Given uncertain matrix (71) and spectratopes (83a) and (83b), consider convex optimization problem_
\[\mathrm{Opt}=\min_{\mu,\upsilon,\lambda,U_{s},V_{s},U^{t},V^{t}} \left\{\frac{1}{2}[\phi_{\mathcal{S}}(\lambda[\mu])+\phi_{\mathcal{T}}(\lambda [\upsilon])]:\right.\] \[\qquad\text{subject to}\] \[\mu=\{M_{\ell}\in\mathbf{S}^{d_{\ell}}_{+},\ell\leq L\},\,\upsilon =\{\Upsilon_{k}\in\mathbf{S}^{f_{k}}_{+},k\leq K\},\,\lambda\geq 0\] \[\left[\begin{array}{c|c}U_{s}&-Q^{T}A_{s}P\\ \hline-P^{T}A_{s}^{T}Q&V_{s}\end{array}\right]\succeq 0 \tag{84a}\] \[\left[\begin{array}{c|c}U^{t}&-Q^{T}L_{t}^{T}\\ \hline-L_{t}Q&\lambda_{t}I_{p_{t}}\end{array}\right]\succeq 0,\,V^{t}-\lambda_{t}P^{T}R _{t}^{T}R_{t}P\succeq 0\] (84b) \[\sum_{\ell}S_{\ell}^{+,*}[M_{\ell}]-\sum_{s}U_{s}-\sum_{t}U^{t}\succeq 0\] (84c) \[\sum_{k}T_{k}^{+,*}[\Upsilon_{k}]-\sum_{s}V_{s}-\sum_{t}V^{t}\succeq 0 \tag{84d}\]
_where_
\[\lambda[\zeta]=[\mathrm{Tr}(Z_{1});...;\mathrm{Tr}(Z_{I})]\text{ for }\zeta=\{Z_{i}\in\mathbf{S}^{k_{i}},i\leq I\}\]
_and_
\[S_{\ell}^{+,*}[V]=\left[\mathrm{Tr}(VS^{i\ell}S^{j\ell})\right]_{i,j\leq M} \text{ for }V\in\mathbf{S}^{d_{\ell}},\,T_{k}^{+,*}[U]=\left[\mathrm{Tr}( UT^{ik}T^{jk})\right]_{i,j\leq N}\text{ for }U\in\mathbf{S}^{f_{k}}.\]
_Problem (84) is strictly feasible and solvable, and_
\[\|\mathcal{A}\|_{\mathcal{X},\mathcal{B}}\leq\mathrm{Opt}\leq\varsigma\left( \sum\nolimits_{k}f_{k}\right)\varsigma\left(\sum\nolimits_{\ell}d_{\ell} \right)\max\left[\vartheta(2\kappa),\pi/2\right]\|\mathcal{A}\|_{\mathcal{X}, \mathcal{B}}\]
_where \(\vartheta\) and \(\kappa\) are the same as in Proposition C.1 and_
\[\varsigma(J)=2\sqrt{2\ln(2J)}.\]
**Proof.** For \(Y\in\mathbf{S}^{M}\) and \(X\in\mathbf{S}^{N}\) let us set
\[S_{\ell}^{+}[Y]=\sum_{i,j=1}^{M}Y_{ij}S^{i\ell}S^{j\ell},\ T_{k}^{+}[X]=\sum_{ i,j=1}^{N}X_{ij}T^{ik}T^{jk},\]
so that
\[S_{\ell}^{+}[zz^{T}]=S_{\ell}^{2}[z],\,T_{k}^{+}[yy^{T}]=T_{k}^{2}[y] \tag{85}\]
and
\[\begin{array}{l}\mbox{Tr}(VS_{\ell}^{+}[Y])=\mbox{Tr}(S_{\ell}^{+,*}[V]Y)\mbox { for }V\in{\bf S}^{d_{\ell}},Y\in{\bf R}^{M},\\ \mbox{Tr}(UT_{k}^{+}[X])=\mbox{Tr}(T_{k}^{+,*}[U]X)\mbox{ for }U\in{\bf S}^{f_{k}},X \in{\bf R}^{N}.\end{array} \tag{86}\]
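For completeness (a one-line check, immediate from the definitions): for symmetric \(Y\) and \(V\in{\bf S}^{d_{\ell}}\),

\[\mbox{Tr}(VS_{\ell}^{+}[Y])=\sum_{i,j=1}^{M}Y_{ij}\,\mbox{Tr}(VS^{i\ell}S^{j\ell})=\sum_{i,j=1}^{M}\big[S_{\ell}^{+,*}[V]\big]_{ij}Y_{ij}=\mbox{Tr}(S_{\ell}^{+,*}[V]Y),\]

and similarly for \(T_{k}^{+,*}\), i.e. \(S_{\ell}^{+,*}\) and \(T_{k}^{+,*}\) are the adjoints of \(S_{\ell}^{+}\) and \(T_{k}^{+}\).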
The proof of Proposition C.2 is obtained from that (below referred to as "the proof") of Proposition C.1 by the following modifications:
1. All references to (73) should be replaced with references to (84). Item [b] in \(1^{\circ}\) of the proof now reads \[\begin{array}{l}\mbox{[b${}^{\prime}$] }x=Py\mbox{ with }T_{k}^{2}[y]\preceq\tau_{k}I_{f_{k}},\,k\leq K,\mbox{ for some }\tau\in{\cal T}\mbox{ and }u=Qz\mbox{ for some }z\\ \mbox{ such that }S_{\ell}^{2}[z]\preceq\varsigma_{\ell}I_{d_{\ell}},\,\ell\leq L,\mbox{ for some }\varsigma\in{\cal S}.\end{array}\] The last three lines in the chain (79) are replaced with \[\begin{array}{l}\gamma\leq\frac{1}{2}\left[\sum_{\ell}\mbox{Tr}([zz^{T}]S_{\ell}^{+,*}[M_{\ell}])+\sum_{k}\mbox{Tr}([yy^{T}]T_{k}^{+,*}[\Upsilon_{k}])\right]\mbox{ [by (84c) and (84d)]}\\ =\frac{1}{2}\left[\sum_{\ell}\mbox{Tr}(S_{\ell}^{2}[z]M_{\ell})+\sum_{k}\mbox{Tr}(T_{k}^{2}[y]\Upsilon_{k})\right]\mbox{ [by (85) and (86)]}\\ \leq\frac{1}{2}\left[\sum_{\ell}\varsigma_{\ell}\mbox{Tr}(M_{\ell})+\sum_{k}\tau_{k}\mbox{Tr}(\Upsilon_{k})\right]\mbox{ [due to [b${}^{\prime}$] and $M_{\ell}\succeq 0,\Upsilon_{k}\succeq 0$]}\\ \leq\frac{1}{2}[\phi_{\cal S}(\lambda[\mu])+\phi_{\cal T}(\lambda[v])]\mbox{ [since }\varsigma\in{\cal S},\tau\in{\cal T}].\end{array}\]
2. Constraints (80) in (P) now read \[\left[\sum_{\ell}S_{\ell}^{+,*}[M_{\ell}]-\sum_{s}U_{s}-\sum_{t}U^{t}\right] ^{\overline{S}}\succeq 0,\ \ \left[\sum_{k}T_{k}^{+,*}[\Upsilon_{k}]-\sum_{s}V_{s}- \sum_{t}V^{t}\right]^{\overline{T}}\succeq 0.\] As a result, (81) becomes \[\begin{array}{l}\mbox{Opt}\leq\max_{\overline{S},\overline{S}, \overline{T},\overline{h}}\bigg{\{}\sum_{s}\left\|\lambda({\cal L}[\overline{S} ^{1/2}Q^{T}A_{s}P\overline{T}^{1/2}])\right\|_{1}+\sum_{t}\|\overline{S}^{1/2}Q ^{T}L_{t}^{T}\|_{\rm Fro}\|\overline{T}^{1/2}P^{T}R_{t}^{T}\|_{\rm Fro}:\\ \frac{\overline{S}}{\succeq 0,\overline{g}\in{\cal S},S_{\ell}^{+}[ \overline{S}]\preceq\overline{g}_{\ell}I_{d_{\ell}},\,\ell\leq L}{\overline{T }\succeq 0,\overline{h}\in{\cal T},T_{k}^{+}[\overline{T}]\preceq\overline{h}_{k}I_{ f_{k}},\,k\leq K\end{array}\right\}\end{array}\] (87)
3. The role of Lemma C.3 in the proof is now played by the following fact. **Lemma C.4**: _[_33_, Lemma 8]_ _Let_ \[{\cal V}=\{v\in{\bf R}^{d}:\exists r\in{\cal R}:R_{j}^{2}[v]\preceq r_{j}I_{ \nu_{j}},1\leq j\leq J\}\subset{\bf R}^{d}\] _be a basic spectratope,_ \(W\succeq 0\) _be symmetric_ \(d\times d\) _matrix such that_ \[\exists r\in{\cal R}:R_{j}^{+}[W]\preceq r_{j}I_{\nu_{j}},j\leq J,\] _and_ \(\omega\sim{\cal N}(0,W)\)_. Denoting by_ \(\rho(\cdot)\) _the norm on_ \({\bf R}^{d}\) _with the unit ball_ \({\cal V}\)_, we have_ \[{\bf E}\{\rho(\omega)\}\leq\varsigma\left(\sum\nolimits_{j}\nu_{j}\right),\, \varsigma(F)=2\sqrt{2\ln(2F)}.\] |
2309.00112 | DFT+DMFT study of the magnetic susceptibility and the correlated
electronic structure in transition-metal intercalated NbS$_2$ | The Co-intercalated NbS$_2$ (Co$_{1/3}$NbS$_2$) compound exhibits large
anomalous Hall conductance, likely due to the non-coplanar magnetic ordering of
Co spins. In this work, we study the relation between this novel magnetism and
the correlated electronic structure of Co$_{1/3}$NbS$_2$ by adopting dynamical
mean field theory (DMFT) to treat the correlation effect of Co $d$ orbitals. We
find that the hole doping of Co$_{1/3}$NbS$_2$ can tune the size of the Nb hole
pocket at the DMFT Fermi surface, producing features consistent with those
observed in angle resolved photoemission spectra [Phys. Rev. B 105, L121102
(2022)]. We also compute the momentum-resolved spin susceptibility, and
correlate it with the Fermi surface shape. We find that the magnetic ordering
wavevector of Co$_{1/3}$NbS$_2$ obtained from the peak in spin susceptibility
agrees with the one observed experimentally by neutron scattering; it is
compatible with commensurate non-coplanar $3q$ spin structure. We also discuss
how results change if some other than Co transition metal intercalations are
used. | Hyowon Park, Ivar Martin | 2023-08-31T20:10:27Z | http://arxiv.org/abs/2309.00112v1 | DFT+DMFT study of the magnetic susceptibility and the correlated electronic structure in transition-metal intercalated NbS\({}_{2}\)
###### Abstract
The Co-intercalated NbS\({}_{2}\) (Co\({}_{1/3}\)NbS\({}_{2}\)) compound exhibits large anomalous Hall conductance, likely due to the non-coplanar magnetic ordering of Co spins. In this work, we study the relation between this novel magnetism and the correlated electronic structure of Co\({}_{1/3}\)NbS\({}_{2}\) by adopting dynamical mean field theory (DMFT) to treat the correlation effect of Co \(d\) orbitals. We find that the hole doping of Co\({}_{1/3}\)NbS\({}_{2}\) can tune the size of the Nb hole pocket at the DMFT Fermi surface, producing features consistent with those observed in angle resolved photoemission spectra [Phys. Rev. B **105**, L121102 (2022)]. We also compute the momentum-resolved spin susceptibility, and correlate it with the Fermi surface shape. We find that the magnetic ordering wavevector of Co\({}_{1/3}\)NbS\({}_{2}\) obtained from the peak in spin susceptibility agrees with the one observed experimentally by neutron scattering; it is compatible with commensurate non-coplanar \(3q\) spin structure. We also discuss how results change if some other than Co transition metal intercalations are used.
## I Introduction
Understanding the relation between a novel electronic transport and the correlated electronic structure of complex materials has been a grand challenge in condensed matter physics. For instance, cobalt-intercalated NbS\({}_{2}\) (Co\({}_{1/3}\)NbS\({}_{2}\)) shows a very large anomalous Hall effect [1; 2]; however, the origin of this phenomenon has remained a subject of debate. While the 'standard' anomalous Hall effect stems from a finite uniform magnetization in ferromagnets [3], the antiferromagnetic (AFM) ground state of Co\({}_{1/3}\)NbS\({}_{2}\) implies a different, more exotic origin. One promising scenario is that topologically non-trivial magnetism such as the non-coplanar spin state of Co \(d\) orbitals can generate a strong fictitious magnetic field, thus inducing a "topological" Hall effect [4]. Equivalently, in the band language, the Co spin moments couple to the itinerant Nb bands and produce a topologically non-trivial band structure with a large Berry curvature, leading to anomalously large Hall currents.
Our previous first-principles calculations based on density functional theory (DFT) support this scenario: the energy of the non-coplanar \(3q\) magnetic structure is the lowest, compared to other \(1q\) or \(2q\) states [5]. Moreover, the Berry phase calculation based on the magnetic band structure supports a large anomalous Hall conductivity (AHC), comparable to \(e^{2}/h\) per crystalline layer [5]. Unfortunately, the DFT band structure of Co\({}_{1/3}\)NbS\({}_{2}\) fails to capture the angle-resolved photoemission spectra (ARPES), which requires going beyond the rigid-band shift picture due to the Co intercalation [6; 7]. One important feature observed in ARPES measurements on Co\({}_{1/3}\)NbS\({}_{2}\) is the appearance of a broad electron pocket around the high-symmetry \(K\) point, which is not captured by DFT [6; 7; 8]. Moreover, the effective electron mass at the electron pocket of Co\({}_{1/3}\)NbS\({}_{2}\) is twice as large as that of NbS\({}_{2}\)[7]. These observations suggest that, for a more accurate picture, one needs to treat the strong correlation effect of Co \(d\) orbitals beyond DFT.
Until recently, experimental support for exotic magnetic states in Co\({}_{1/3}\)NbS\({}_{2}\) had been lacking. In fact, an early neutron scattering measurement on Co\({}_{1/3}\)NbS\({}_{2}\) argued that the scattering peak data fit well to the standard commensurate (\(1q\)) AFM structure, but with multiple magnetic domains [9]. (One should note that it is quite difficult to distinguish between a multi-domain \(1q\) AFM state and a mono-domain \(3q\) state in neutron scattering.) Recently, however, polarized neutron scattering measurements on the related material Co\({}_{1/3}\)TaS\({}_{2}\), which has the same structure but with Nb replaced by Ta, convincingly demonstrated the presence of non-coplanar Co magnetism. Moreover, they demonstrated a connection between the appearance of this magnetic state and a large spontaneous topological Hall effect [10].
What is the physical origin of noncoplanar magnetism? In pure-spin models it typically requires multi-spin interactions (either four or six spin). When interactions are mediated by itinerant electrons, such higher-order terms are generated naturally [11]. From the weak-coupling perspective, having multiple \(q\) orders present simultaneously makes it possible to gap out larger total sections of the Fermi surface. The key ingredients for noncoplanar order are (1) a large susceptibility with respect to simple collinear single-\(q\) order (e.g. if \(q\) connects nearly flat opposite sides of the Fermi surface - the nesting effect), and (2) at least approximate commensuration of the magnetic order and the crystal lattice. The latter criterion allows several \(q\) orderings to coexist with each other without suppressing the local amplitude of the magnetic order.
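For the triangular Co lattice the natural candidates are the three inequivalent \(M\)-point wavevectors (a standard geometric fact, recalled here for concreteness): with reciprocal vectors \(\mathbf{b}_{1},\mathbf{b}_{2}\),

\[\mathbf{q}_{1}=\tfrac{1}{2}\mathbf{b}_{1},\quad\mathbf{q}_{2}=\tfrac{1}{2}\mathbf{b}_{2},\quad\mathbf{q}_{3}=\tfrac{1}{2}(\mathbf{b}_{1}+\mathbf{b}_{2}),\qquad 2\mathbf{q}_{i}\equiv 0,\quad\mathbf{q}_{1}+\mathbf{q}_{2}+\mathbf{q}_{3}\equiv 0\ (\mathrm{mod\ reciprocal\ lattice}),\]

so all three orders are simultaneously commensurate with a \(2\times 2\) magnetic supercell and can coexist without beating against one another.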
In this work, we focus on the first element, the analysis of magnetic susceptibility. We go beyond the standard DFT by adopting dynamical mean field theory (DMFT), which makes it possible to treat the strong correlation effect on the transition metal (\(M\)) \(d\) orbitals in \(M_{1/3}\)NbS\({}_{2}\). As the first step, we match the main features of the DMFT Fermi surface calculations to the ARPES data. We then calculate the momentum-dependent magnetic susceptibility \(\chi\) from first principles and investigate the momentum \(q\) vector showing the leading instability. We note that DFT alone is not sufficient to study the correlated electronic structure of Co\({}_{1/3}\)NbS\({}_{2}\) as it fails to capture the essential features of the ARPES measurement. Also, compared to our previous DFT study on Co\({}_{1/3}\)NbS\({}_{2}\), which showed the \(3q\) non-coplanar structure to be the lowest in energy compared to possible \(1q\) or \(2q\) states, we now allow for the possibility of an instability at wavevectors incommensurate with the lattice.
netic susceptibility \(\chi\) from first-principles and investigate the momentum \(q\) vector showing the leading instability. We note that DFT alone is not sufficient to study the correlated electronic structure of Co\({}_{1/3}\)NbS\({}_{2}\) as it fails to capture the essential features of the APRES measurement. Also, compared to our previous DFT study on the Co\({}_{1/3}\)NbS\({}_{2}\) that showed \(3q\) non-coplanar structure to be the lowest in energy compared to possible \(1q\) or \(2q\) states, now we are allowing for the possibility of instability at a wavevectors incommensurate with the lattice.
## II Methods
In this section, we explain computational methods used in the band structure and the magnetic susceptibility calculations of \(M\)Nb\({}_{3}\)S\({}_{6}\) (\(M\)= Co, Fe, and Ni). We also provide parameters used in the calculations.
### DMFT calculation
To study the band structure and the Fermi surface of \(M\)Nb\({}_{3}\)S\({}_{6}\) (\(M\)= Co, Fe, and Ni), we adopt DFT+DMFT treating the strong correlation effect of \(M\) ions. The procedure of the DMFT calculation is as follows. First, we obtain the non-spin-polarized (nsp) band structure from the experimental \(M\)Nb\({}_{3}\)S\({}_{6}\) crystal structures. We adopted the Vienna Ab-initio Simulation Package (VASP) [12; 13] code to compute the nsp band structure using a \(14\times 14\times 4\)\(k-\)mesh along with the energy cutoff of 400eV for the plane-wave basis. We used the Perdew-Burke-Ernzerhof (PBE) functional for the exchange and correlation energy of DFT. Using the nsp band structure, we construct the following tight-binding Hamiltonian by adopting the maximally localized Wannier function [14] as the basis,
\[\hat{H} = \sum_{\alpha\beta,ij\sigma}t_{\alpha\beta,ij}\hat{c}^{\dagger}_{i\alpha\sigma}\hat{c}_{j\beta\sigma}+\sum_{\alpha\beta,i}\sum_{\sigma\sigma^{\prime}}U^{\sigma\sigma^{\prime}}_{\alpha\beta}\hat{n}_{i\alpha\sigma}\hat{n}_{i\beta\sigma^{\prime}}\] \[+\sum_{ij\sigma}t_{ij}\hat{d}^{\dagger}_{i\sigma}\hat{d}_{j\sigma}+\sum_{\alpha,ij\sigma}(t_{\alpha,ij}\hat{c}^{\dagger}_{i\alpha\sigma}\hat{d}_{j\sigma}+t^{*}_{\alpha,ij}\hat{d}^{\dagger}_{j\sigma}\hat{c}_{i\alpha\sigma}), \tag{1}\]
where \(t\) is the inter-orbital and inter-site hopping matrix elements including the whole manifold of the Co \(d\) orbitals (\(\hat{c}^{\dagger},\hat{c}\)) and the Nb \(d_{z^{2}}\) orbital (\(\hat{d}^{\dagger},\hat{d}\)). \(\hat{n}_{i\alpha\sigma}(=\hat{c}^{\dagger}_{i\alpha\sigma}\hat{c}_{i\alpha\sigma})\) is the density operator for the orbital \(\alpha\) and the spin \(\sigma\) at the site \(i\). \(U^{\sigma\sigma^{\prime}}_{\alpha\beta}\) is the local Coulomb interaction matrix for the on-site Co \(d\) orbitals and approximated as the density-density interaction type.
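To make the structure of such a density-density interaction matrix concrete, the sketch below builds the simpler Kanamori-type density-density matrix for five \(d\) orbitals from \(U\) and \(J\); this is an illustrative assumption (the calculations in this work use the Slater-integral parameterization, and the function name is hypothetical).

```python
import numpy as np

def kanamori_density_density(U, J, n_orb=5):
    """Density-density interaction matrix in the simplified Kanamori form
    (illustrative only; the actual runs use Slater integrals):
      same orbital, opposite spin      : U
      different orbital, opposite spin : U - 2J
      different orbital, same spin     : U - 3J
    Spin-orbitals are indexed as i = 2*alpha + sigma (sigma = 0 or 1)."""
    n_so = 2 * n_orb
    umat = np.zeros((n_so, n_so))
    for i in range(n_so):
        for j in range(n_so):
            if i == j:
                continue                      # no self-interaction
            a, s = divmod(i, 2)
            b, sp = divmod(j, 2)
            if a == b:                         # same orbital (opposite spin)
                umat[i, j] = U
            elif s != sp:                      # different orbital, opposite spin
                umat[i, j] = U - 2 * J
            else:                              # different orbital, same spin
                umat[i, j] = U - 3 * J
    return umat

umat = kanamori_density_density(U=5.0, J=0.7)  # interaction values quoted in the text
```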
Using the Hamiltonian in Eq. 1, we solve the DMFT self-consistent equations [15] using the continuous-time quantum Monte Carlo (CTQMC) method [16] as the impurity solver, then obtain the local self-energy \(\Sigma(i\nu_{n})\) for the Co \(d\) orbitals. Here, we parameterize the \(U_{\alpha\beta}\) matrix elements by the Slater integrals using the local Hubbard interaction \(U\)=5 eV and the Hund's coupling \(J\)=0.7 eV. The temperature \(T\) is set to be 116 K. In DMFT, we use a fine \(k-\)mesh of 30\(\times\)30\(\times\)10. For all compounds, the fixed double counting potential scheme is adopted using the following formula to subtract the double-counting correction from the DFT potential
\[V_{DC}=U(n^{0}_{d}-\frac{1}{2})-\frac{J}{2}(n^{0}_{d}-1) \tag{2}\]
where \(n^{0}_{d}\) is the nominal occupancy of the transition metal \(d\) orbitals, i.e. \(n^{0}_{d}=7.0\) for Co\({}^{2+}\), \(n^{0}_{d}=6.0\) for Fe\({}^{2+}\), and \(n^{0}_{d}=8.0\) for Ni\({}^{2+}\).
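As a quick numerical illustration (an added example, not taken from the paper), evaluating Eq. (2) with \(U=5\) eV, \(J=0.7\) eV and the nominal occupancies above gives the double-counting potentials subtracted for each ion:

```python
# Double-counting potential of Eq. (2) for the nominal d occupancies in the text.
U, J = 5.0, 0.7          # eV, values used in the DMFT calculations

def v_dc(n_d0):
    return U * (n_d0 - 0.5) - 0.5 * J * (n_d0 - 1.0)

for ion, n_d0 in [("Co2+", 7.0), ("Fe2+", 6.0), ("Ni2+", 8.0)]:
    print(f"{ion}: V_DC = {v_dc(n_d0):.2f} eV")
# -> Co2+: 30.40 eV, Fe2+: 25.75 eV, Ni2+: 35.05 eV
```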
### Spin susceptibility \(\chi^{sp}(\mathbf{q})\) calculation
A general form of the spin susceptibility \(\chi^{sp}\) can be given by the retarded two-particle Green's function of two spin operators:
\[\chi^{sp}_{mn}(\mathbf{r}^{\prime}-\mathbf{r},t^{\prime}-t)=\Theta(t^{\prime} -t)\langle[\hat{S}^{m}(\mathbf{r}^{\prime},t^{\prime}),\,\hat{S}^{n}(\mathbf{ r},t)]_{-}\rangle, \tag{3}\]
where \(\hat{S}^{n}(\mathbf{r},t)\) is the \(n\)-th component of spin density and \(\Theta(t^{\prime}-t)\) is the step function that imposes causality. The spin operator can be expanded using the localized orbital basis set and the fermionic creation/annihilation operators:
\[\hat{S}^{n}(\mathbf{r},t)=\frac{1}{N}\sum_{\sigma\sigma^{\prime}}\hat{s}^{n}_{ \sigma\sigma^{\prime}}\sum_{nn^{\prime}}\phi^{*}_{n}(\mathbf{r})\phi_{n^{ \prime}}(\mathbf{r})\,\hat{c}^{\dagger}_{n\sigma}(t)\hat{c}_{n^{\prime} \sigma^{\prime}}(t), \tag{4}\]
where \(\hat{s}^{i}=g\mu_{B}\hat{\sigma}^{i}/2\) with \(\hat{\sigma}^{i}\) being the \(i\)-th Pauli matrix and \(\phi_{n}(\mathbf{r})\) is the basis function with the index \(n\,(=\{\mathbf{k},\alpha,\tau_{\alpha}\})\) for the orbital character \(\alpha\) located at the position \(\tau_{\alpha}\) with the momentum \(\mathbf{k}\). One can note that the spin operator can be diagonal for both spin and orbital basis sets if the spin arrangement is collinear and the spin-orbit coupling is neglected. Here, we consider the paramagnetic spin symmetry without the spin-orbit coupling.
For the longitudinal and paramagnetic spin symmetries (\(m=n=z\)), the spin susceptibility \(\chi^{sp}_{zz}\) can be obtained from
\[\chi^{sp}_{zz}(\mathbf{r}^{\prime}-\mathbf{r},t^{\prime}-t) = \frac{\Theta(t^{\prime}-t)}{N^{2}}\cdot\sum_{\sigma\sigma^{\prime}}\sum_{\bar{\sigma}\bar{\sigma}^{\prime}}\hat{s}^{z}_{\sigma\sigma^{\prime}}\hat{s}^{z}_{\bar{\sigma}\bar{\sigma}^{\prime}}\] \[\sum_{nn^{\prime}}\sum_{\bar{n}\bar{n}^{\prime}}\phi^{*}_{n}(\mathbf{r}^{\prime})\phi_{n^{\prime}}(\mathbf{r}^{\prime})\phi^{*}_{\bar{n}}(\mathbf{r})\phi_{\bar{n}^{\prime}}(\mathbf{r})\] \[\cdot\langle[\hat{c}^{\dagger}_{n\sigma}(t^{\prime})\hat{c}_{n^{\prime}\sigma^{\prime}}(t^{\prime}),\,\hat{c}^{\dagger}_{\bar{n}\bar{\sigma}}(t)\hat{c}_{\bar{n}^{\prime}\bar{\sigma}^{\prime}}(t)]_{-}\rangle. \tag{5}\]
Here, the paramagnetic symmetry imposes that the two-particle response function should be invariant upon the spin flip, i.e. \(\sigma\rightarrow-\sigma\).
### Form factor \(F\) calculation
Using the paramagnetic symmetry, the momentum and frequency dependent susceptibility, \(\chi^{sp}_{zz}(\mathbf{q},\omega)\) can be simplified from the Fourier transform of Eq. 5:
\[\chi^{sp}_{zz}(\mathbf{q},\omega)=\frac{(g\mu_{B})^{2}}{2N^{2}}\sum_{nn^{\prime}} \sum_{\bar{n}\bar{n}^{\prime}}F_{nn^{\prime}\bar{n}\bar{n}^{\prime}}(\mathbf{q}) \cdot\chi_{nn^{\prime}\bar{n}\bar{n}^{\prime}}(\omega)\,, \tag{6}\]
where \(F({\bf q})\) is the atomic form factor describing the modulation of the charge density and \(\chi(\omega)\) is the orbital-dependent two-particle response function where the spin dependence is simplified due to the paramagnetic symmetry. In the susceptibility calculation based on DFT, the matrix element for \(F({\bf q})\) can be typically computed using the Kohn-Sham wavefunction, i.e. \(\psi^{\bf k}_{n_{\alpha}}\left({\bf r}\right)\) (the orbital index \(\{\alpha,\tau_{\alpha}\}\) changes to the band index \(n_{\alpha}\)), and it can be expanded in a continuum basis set, such as plane waves [17]. Here, we adopt the maximally localized Wannier function for the form factor \(F({\bf q})\) calculation, which is the same basis function used for the DMFT calculations:
\[\phi_{n}({\bf r})=\phi^{\bf k}_{n_{\alpha}}({\bf r})=\frac{1}{\sqrt{N}}\sum_{ \bf R}e^{-i{\bf k}\cdot{\bf R}}\phi^{\tau_{\alpha}}_{\alpha}({\bf r}-{\bf R}), \tag{7}\]
where \(\phi^{\bf k}_{n_{\alpha}}\left({\bf r}\right)\) is the Wannier function with the index \(n_{\alpha}\) (\(=\{\alpha,\tau_{\alpha}\}\)) defined in the \({\bf k}-\)space for the primitive unit cell. The index \(n_{\alpha}\) runs over both the orbital character \(\alpha\) and the internal atomic position \(\tau_{\alpha}\).
If the complete and orthonormalized basis set is used for the \(F({\bf q})\) calculation, one can obtain the product of delta functions imposing the momentum and orbital conservation:
\[F_{nn^{\prime}\bar{n}\bar{n}^{\prime}}({\bf q}) = \delta_{{\bf k}+{\bf q},{\bf k}^{\prime}}\cdot\delta_{\bar{{\bf k}}^{\prime}+{\bf q},\bar{{\bf k}}}\cdot\delta_{n_{\alpha},n_{\alpha^{\prime}}}\cdot\delta_{n_{\bar{\alpha}},n_{\bar{\alpha}^{\prime}}}. \tag{8}\]
However, in general, \(F({\bf q})\) needs to be modified if one uses the incomplete basis set. In the case of Co\({}_{1/3}\)NbS\({}_{2}\), the magnetic moments primarily reside on Co \(d\) orbitals, which are the subset of the complete band structure. As a result, the \(F({\bf q})\) expression for only Co \(d\) orbitals can be modified as follows:
\[F_{nn^{\prime}\bar{n}\bar{n}^{\prime}}({\bf q}) = \frac{1}{N_{\tau}^{2}}\bigg{(}\sum_{\tilde{{\bf k}},\tilde{{\bf k}}^{\prime}}\sum_{\tau_{\alpha},\tau_{\alpha^{\prime}}}e^{i(\tilde{{\bf k}}^{\prime}\cdot\vec{\tau}_{\alpha^{\prime}}-\tilde{{\bf k}}\cdot\vec{\tau}_{\alpha})} \tag{9}\] \[\cdot\delta_{\alpha\alpha^{\prime}}\cdot\delta_{\{{\bf k}\},\tilde{{\bf k}}}\cdot\delta_{\{{\bf k}^{\prime}\},\tilde{{\bf k}}^{\prime}}\cdot\delta_{{\bf k}+\{{\bf q}\},\tilde{{\bf k}}^{\prime}}\bigg{)}\] \[\cdot\bigg{(}\sum_{\tilde{{\bf k}},\tilde{{\bf k}}^{\prime}}\sum_{\tau_{\bar{\alpha}},\tau_{\bar{\alpha}^{\prime}}}e^{i(\tilde{{\bf k}}\cdot\vec{\tau}_{\bar{\alpha}}-\tilde{{\bf k}}^{\prime}\cdot\vec{\tau}_{\bar{\alpha}^{\prime}})}\] \[\cdot\delta_{\bar{\alpha}\bar{\alpha}^{\prime}}\cdot\delta_{\{\bar{{\bf k}}\},\tilde{{\bf k}}}\cdot\delta_{\{\bar{{\bf k}}^{\prime}\},\tilde{{\bf k}}^{\prime}}\cdot\delta_{\tilde{{\bf k}}^{\prime}+\{{\bf q}\},\tilde{{\bf k}}}\bigg{)},\]
where the momentum \(\tilde{{\bf k}}\) is defined in an extended Brillouin zone (BZ) obtained for a single Co ion in triangular lattice, \(\{{\bf k}\}\) represents the set of the momentum \({\bf k}\) vectors shifted by the reciprocal vectors, \(\vec{\tau}_{\alpha}\) is the atomic position of correlated atoms, and \(N_{\tau}\) is the number of correlated atoms in the primitive unit cell. More detailed derivation of Eq. 9 is given in the Appendix.
### The Bethe-Salpeter equation
The orbital-dependent susceptibility \(\chi_{nn^{\prime}\bar{n}\bar{n}^{\prime}}(\omega)\) in Eq. 6 can be computed using the following Bethe-Salpeter equation:
\[\chi_{nn^{\prime}\bar{n}\bar{n}^{\prime}}(\omega)=\chi^{0}_{n_{\alpha}n_{ \alpha}}({\bf k},{\bf k}^{\prime},\omega)+\chi^{0}*\Gamma^{irr}*\chi, \tag{10}\]
where \(\chi^{0}\) is the polarizability obtained from the interacting Green's function and \(\Gamma^{irr}\) is the irreducible vertex function. One should note that the orbital indices for \(\chi^{0}_{n_{\alpha}n_{\alpha}}\) run over all Co \(d\) and Nb \(d_{z^{2}}\) orbitals in a unit cell while those on the \(F({\bf q})\) in Eq. 9 account for only the correlated Co \(d\) orbitals. In general, \(\Gamma^{irr}\) is a complex function depending on momentum, frequency, spin, site, and orbital degrees of freedom. While the effect of \(\Gamma^{irr}\) is crucial for comparing both the momentum and frequency dependence of the susceptibility to the experimental neutron scattering data [18; 19], we adopt the static interaction type based on the random phase approximation (RPA), namely assuming the interaction matrix to be independent of momentum and frequency. Here, we further approximate it as orbital-independent, accounting for the average interaction effect, and impose spin-rotation symmetry so that only interactions in the spin channel are considered. As a result, we consider both the on-site interaction \(\bar{U}\) within Co ions and the inter-site interaction \(\bar{V}\) between Co and Nb ions, then study the effects of \(\bar{U}\) and \(\bar{V}\) on the susceptibility calculations. For the on-site \(\bar{U}\) value, we used \(\bar{U}\)=1.5 eV, which is smaller than the DMFT \(U\) value. This is because the static two-particle interaction is further renormalized within the RPA diagrams while the diagrams within DMFT take the orbital and dynamical fluctuations into account explicitly. We also used different inter-site \(\bar{V}\) values (\(\bar{V}\)=0, 0.2, and 0.3 eV) to explore the effect of \(\bar{V}\) on the susceptibility. Our results show that the inter-site \(\bar{V}\) can enhance the momentum dependence of the susceptibility significantly.
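To illustrate how a static vertex enhances and momentum-selects the response, the sketch below solves a schematic two-component (Co, Nb) version of the RPA enhancement; the toy \(\chi^{0}\) numbers and the \(2\times 2\) form of the kernel are assumptions for illustration, not the full orbital-resolved kernel of Eq. 10.

```python
import numpy as np

def rpa_chi(chi0, Ubar=1.5, Vbar=0.3):
    """Schematic RPA enhancement chi = (1 - chi0 @ Gamma)^(-1) chi0 for a 2x2
    bare susceptibility chi0 in a (Co, Nb) basis at one momentum q.
    Ubar couples Co to itself, Vbar couples Co and Nb (all in eV)."""
    gamma = np.array([[Ubar, Vbar],
                      [Vbar, 0.0]])
    return np.linalg.solve(np.eye(2) - chi0 @ gamma, chi0)

# Toy bare values (1/eV): the Nb block is assumed larger at the nesting vector,
# which pulls up the Co-Co component of the RPA susceptibility there.
chi0_peak = np.array([[0.55, 0.10], [0.10, 0.60]])   # near the nesting q
chi0_away = np.array([[0.55, 0.10], [0.10, 0.30]])   # away from it
print(rpa_chi(chi0_peak)[0, 0], rpa_chi(chi0_away)[0, 0])
```

With \(\bar{V}=0\) the Co-Co component comes out the same at both momenta in this toy example, echoing the role the inter-site interaction plays in the full calculation.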
The polarizability \(\chi^{0}\) can be given by the product of two Green's functions using the Wick's theorem. Our \(\chi^{0}\) is different from the bare susceptibility since it is obtained from the interacting Green's function. In DMFT, the Green's function is dressed with the local self-energy and the Matsubara frequency sum over \(i\nu\) can be evaluated by performing the contour integral to obtain the polarizability at \(\omega=0^{+}\):
\[\chi^{0\,{\bf k},{\bf k}^{\prime}}_{n_{\alpha}n_{\alpha}} = -T\sum_{i\nu}G_{n_{\alpha}n_{\alpha}}({\bf k},i\nu)\cdot G_{n_{ \alpha}n_{\alpha}}({\bf k}^{\prime},i\nu+i0^{+})\] \[= -\frac{T}{2\pi i}\oint dz\;\;G_{n_{\alpha}n_{\alpha}}({\bf k},z) \cdot G_{n_{\alpha}n_{\alpha}}({\bf k}^{\prime},z+i0^{+})\] \[= \frac{1}{\pi}\int d\nu\;[ImG_{n_{\alpha}n_{\alpha}}({\bf k},\nu) \cdot G_{n_{\alpha}n_{\alpha}}({\bf k}^{\prime},\nu+i0^{+})\] \[+G_{n_{\alpha}n_{\alpha}}({\bf k},\nu-i0^{+})\cdot ImG_{n_{\alpha}n _{\alpha}}({\bf k}^{\prime},\nu)]f(\nu),\]
where \(f(\nu)\) is the Fermi function. The real part of the susceptibility at \(\omega=0^{+}\) is given by
\[Re\chi^{0\,{\bf k},{\bf k}^{\prime}}_{n_{\alpha}n_{\alpha}} = \frac{1}{\pi}\int d\nu f(\nu)[ImG_{n_{\alpha}n_{\alpha}}({\bf k}, \nu)\cdot ReG_{n_{\alpha}n_{\alpha}}({\bf k}^{\prime},\nu) \tag{12}\] \[+ReG_{n_{\alpha}n_{\alpha}}({\bf k},\nu)\cdot ImG_{n_{\alpha}n_{ \alpha}}({\bf k}^{\prime},\nu)],\]
where both \(ReG\) and \(ImG\) are the real and imaginary parts of interacting Green's functions defined on the fine \({\bf k}\) mesh and the real frequency which are obtained from the analytic continuation of the DMFT self-energy using
the maximum entropy method. For the \(\chi^{0}\) susceptibility calculation in Eq. 12, we performed the summation over the dense \(\mathbf{k}-\)grid using 60\(\times\)60\(\times\)10 \(\mathbf{k}-\)points at each \(\mathbf{q}-\)point chosen along the high-symmetry path.
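A minimal numerical sketch of the frequency integral in Eq. 12 is given below; the single-pole (Lorentzian) Green's functions and the temperature are illustrative assumptions, not the production DMFT inputs.

```python
import numpy as np

def re_chi0(nu, ReG_k, ImG_k, ReG_kp, ImG_kp, T=0.05):
    """Real part of the polarizability of Eq. (12) for one (k, k') pair,
    as a real-frequency integral over dressed Green's functions."""
    f = 0.5 * (1.0 - np.tanh(nu / (2.0 * T)))           # Fermi function
    integrand = f * (ImG_k * ReG_kp + ReG_k * ImG_kp)
    return np.trapz(integrand, nu) / np.pi

# Toy check with two Lorentzian Green's functions G(nu) = 1/(nu - eps + i*gamma).
nu = np.linspace(-10.0, 10.0, 4001)
def G(eps, gamma=0.1):
    g = 1.0 / (nu - eps + 1j * gamma)
    return g.real, g.imag

ReG1, ImG1 = G(-0.3)     # mostly occupied pole
ReG2, ImG2 = G(+0.4)     # mostly empty pole
print(re_chi0(nu, ReG1, ImG1, ReG2, ImG2))
```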
Within static theories such as DFT, equivalent to setting the DMFT self-energy to zero, \(\chi^{0}(\mathbf{q},\omega)\) from Eq. 6 is given as the bare susceptibility and its evaluation using Eq. 12 and Eq. 8 reduces to the Lindhard formula for the susceptibility:
\[\chi^{0}(\mathbf{q},\omega) = \frac{(g\mu_{B})^{2}}{2N}\sum_{\mathbf{k},nm}\sum_{\alpha\beta}\frac{f_{\mathbf{k}+\mathbf{q}}^{m}-f_{\mathbf{k}}^{n}}{\omega+\epsilon_{\mathbf{k}}^{n}-\epsilon_{\mathbf{k}+\mathbf{q}}^{m}+i0^{+}}\] \[\cdot\langle\phi_{\alpha}^{\mathbf{k}}|\psi_{n}^{\mathbf{k}}\rangle\langle\psi_{n}^{\mathbf{k}}|\phi_{\beta}^{\mathbf{k}}\rangle\langle\phi_{\beta}^{\mathbf{k}+\mathbf{q}}|\psi_{m}^{\mathbf{k}+\mathbf{q}}\rangle\langle\psi_{m}^{\mathbf{k}+\mathbf{q}}|\phi_{\alpha}^{\mathbf{k}+\mathbf{q}}\rangle,\]
where \(\epsilon_{\mathbf{k}}^{n}\) is the eigenvalue of the DFT band at momentum \(\mathbf{k}\) and the band index \(n\). Therefore, it is expected that the \(\chi^{0}(\mathbf{q})\) will be enhanced near the Fermi surface nesting \(\mathbf{q}\) vector. This susceptibility calculation for multi-orbital systems based on DFT has been applied for various real materials [17; 20; 21].
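As an illustration of this nesting argument, the sketch below evaluates the static single-band Lindhard function on a nearest-neighbor triangular-lattice dispersion at a few high-symmetry momenta; the dispersion, filling, and broadening are toy assumptions, and the orbital form factors of Eq. 13 are dropped.

```python
import numpy as np

t, mu, T, eta = 1.0, 0.8, 0.02, 0.02     # hopping, chem. potential, temperature, broadening (toy)
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3.0) / 2.0])   # triangular lattice vectors
b1 = 2.0 * np.pi * np.array([1.0, -1.0 / np.sqrt(3.0)])              # reciprocal vectors
b2 = 2.0 * np.pi * np.array([0.0,  2.0 / np.sqrt(3.0)])

def eps(k):
    """Nearest-neighbor tight-binding band on the triangular lattice, measured from mu."""
    return -2.0 * t * (np.cos(k @ a1) + np.cos(k @ a2) + np.cos(k @ (a1 - a2))) - mu

def fermi(e):
    return 0.5 * (1.0 - np.tanh(e / (2.0 * T)))

N = 60                                   # uniform k mesh over the Brillouin zone
frac = np.stack(np.meshgrid(np.arange(N) / N, np.arange(N) / N), -1).reshape(-1, 2)
kmesh = frac @ np.array([b1, b2])

def chi0(q):
    """Static single-band Lindhard function (Eq. 13 without matrix elements)."""
    ek, ekq = eps(kmesh), eps(kmesh + q)
    return np.real(np.sum((fermi(ekq) - fermi(ek)) / (ek - ekq + 1j * eta))) / len(kmesh)

for label, q in [("M", 0.5 * b1), ("K", (2.0 * b1 + b2) / 3.0)]:
    print(label, chi0(q))
```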
## III Results and Discussions
We now present the results for the correlated electronic band structure and the Fermi surface, then relate them to the momentum dependent magnetic susceptibility of \(M_{1/3}\)NbS\({}_{2}\) (\(M\)=Co, Ni, and Fe) computed using DMFT. In particular, we study the effect of the hole doping on the electronic structure and magnetism.
### Correlated electronic structure of Co\({}_{1/3}\)NbS\({}_{2}\)
The Co\({}_{1/3}\)NbS\({}_{2}\) structure is formed by stacking NbS\({}_{2}\) layers with the intercalation of Co ions at two distinct Nb sites between the layers along the \(c-\)axis (see Fig. 1(a)). To study the strong correlation effect of the Co \(d\) orbitals on the band structure, we compare the DMFT spectral function \(A(\mathbf{k},\omega)\) (Fig. 2) to the DFT band structure in Fig. 1(b). Here, we impose the paramagnetic spin symmetry (nonmagnetic state). The DFT band structure (Fig. 1(b), the green thin lines in Fig. 2) shows that the hole pockets near the \(\Gamma\) and \(A\) points are quite small and the band crossings near the \(K\) and \(H\) points occur above the Fermi energy. Fig. 1(b) shows the orbital characters of the DFT band structure; the hole pockets are mostly of Nb \(d_{z^{2}}\) character. The Co \(d\) bands (blue color) are mostly located at the \(K\) and \(H\) points above the Fermi energy with multiple degeneracy.
The DMFT bands in Fig. 2(a) show a strong modification of the Co \(d\) bands near the \(K\) and \(H\) points as they are dressed by the DMFT self-energy. The Co \(d\) bands are pushed below the Fermi energy and the spectra become much broader due to the large self-energy effect. The Nb \(d_{z^{2}}\) bands still show a quasi-particle dispersion with a well-defined Fermi surface. We also study the doping effect by changing the total number of valence electrons within the DMFT calculation and computing the corresponding band structure. The hole-doping effect on the DMFT bands in Fig. 2(b) shows the crossing of Co \(d\) bands near the Fermi energy close to the \(K\) and \(H\) points due to the upward shift of the bands. Upon the hole doping, the size of the hole pocket near the \(\Gamma\) point increases as the Nb \(d_{z^{2}}\) band shifts upward. An important feature of the hole-doping on the Co \(3d\) band structure is the appearance of the small broad electron pocket at the \(K\) point, which is consistent with the ARPES measurement [8].
In Co\({}_{1/3}\)NbS\({}_{2}\), our DMFT calculation shows that the occupancy of the Co \(d\) orbital is close to 7.0, meaning that the Co ion has the valence state of 2+. Since the S ion is in the \(2-\) state, the valence state of Nb is close to \((10/3)+\). Therefore, the Nb ion has the occupancy of \(4d^{1.67}\), which is larger than the \(4d^{1}\) one (Nb\({}^{4+}\)) of the pure NbS\({}_{2}\) layer. In other words, the hole pocket at the \(\Gamma\) point of the pure NbS\({}_{2}\) is expected to be larger than that of Co\({}_{1/3}\)NbS\({}_{2}\) (Fig. 2(a)). Our hole doping of \(\delta\)=2.0 (two holes per CoNb\({}_{3}\)S\({}_{6}\)) means that the holes (2/3 per Nb) are mostly doped to the Nb ions since the DMFT occupancy of the Co \(d\) orbitals still remains close to 7.0. As a result, Co\({}_{1/3}\)NbS\({}_{2}\) has the strong hybridization between Co \(3d\) and Nb \(4d_{z^{2}}\) orbitals, resulting in the doped holes residing
Figure 1: (a) The crystal structure of Co\({}_{1/3}\)NbS\({}_{2}\), (b) The DFT band structure of Co\({}_{1/3}\)NbS\({}_{2}\) projected to different orbital characters (red: Nb \(d_{z^{2}}\) band, blue: Co \(d\) band)
mostly on the Nb \(d_{z^{2}}\) orbital. Therefore, this hole doping effect mainly affects the size of the hole pocket near the \(\Gamma\) point.
Fig. 3 shows the Fermi surface of Co\({}_{1/3}\)NbS\({}_{2}\) computed using DFT+DMFT at different hole doping \(\delta\) values (\(\delta\)= the number of holes per Co ion). At \(\delta\)=0, the DMFT hole-pocket centered at the \(\Gamma\) point has a circular shape similar to that measured in ARPES, although its size is smaller than the ARPES one. The outer larger pocket has mostly Co \(d\) orbital character and exhibits much weaker intensity due to the large scattering rate (\(Im\Sigma(\omega)\)). Upon hole-doping, the smaller hole-pocket gets larger in size, comparable to the ARPES measurement, and the Co \(d\) states move closer to the \(K\) point in the BZ. This broad Co \(d\) spectral weight near the \(K\) point is also captured in ARPES. Our DMFT Fermi surface calculation shows that the hole doping of \(\delta\)=2.0 makes the size of the Nb hole pocket in the Fermi surface similar to the ARPES measurement.
Since the ARPES measurement is sensitive to the surface state, possibly of NbS\({}_{2}\) termination layers, the measured electronic structure will have a hole-doping effect due to the Co ion deficiency. In the bulk, Co or Nb vacancies can induce a similar hole-doping effect as on the surface. Our DMFT calculation shows that this doping effect can tune the size of the hole pocket significantly, possibly affecting the magnetic properties. A previous experimental study also shows that the AHE of Co\({}_{1/3}\)NbS\({}_{2-x}\) can be dramatically changed by the S deficiency, which can lead to a similar doping effect [8].
### Co\({}_{1/3}\)NbS\({}_{2}\) magnetic susceptibility
Our previous DFT calculation on Co\({}_{1/3}\)NbS\({}_{2}\) shows that a \(3q-\)type magnetic structure is energetically stable and may be responsible for the large observed anomalous Hall current in this material. This particular \(3q-\)type spin structure corresponds to a non-coplanar spin arrangement with the four spins within the magnetic unit cell of one Co layer pointing towards the four vertices of a tetrahedron in spin space. Such an unusual spin configuration can be stabilized in a triangular lattice due to the Fermi surface nesting of itinerant electrons [4]. For every triangular plaquette of the spin lattice, the scalar spin chirality, \(\chi_{ijk}=\mathbf{S}_{i}\cdot[\mathbf{S}_{j}\times\mathbf{S}_{k}]\), is constant, corresponding to a uniform Berry flux per plaquette. The modulation vectors \(\mathbf{q}\) of this \(3q-\)type spin structure are half of the reciprocal lattice vectors of the primitive unit cell, i.e., the high-symmetry \(M\) points in the Brillouin zone. An early neutron scattering experiment [9] indeed showed a scattering peak at \(\mathbf{q}=(1/2,0,0)\) (\(M\) point). However, confirming the existence of the non-coplanar \(3q\) state requires more complex polarized neutron scattering experiments, as has been successfully realized in some cases [10; 22].
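A small numerical check of the chirality statement (the specific tetrahedral spin directions below are a standard choice assumed for illustration): taking the four sublattice moments along the vertices of a regular tetrahedron, every triangle carries the same nonzero scalar spin chirality up to a sign.

```python
import numpy as np

# Four sublattice spins of the 3q state pointing to the vertices of a regular tetrahedron.
S = np.array([[ 1.0,  1.0,  1.0],
              [ 1.0, -1.0, -1.0],
              [-1.0,  1.0, -1.0],
              [-1.0, -1.0,  1.0]]) / np.sqrt(3.0)

# Scalar spin chirality chi_ijk = S_i . (S_j x S_k) for one triangle
chi = np.dot(S[0], np.cross(S[1], S[2]))
print(chi)       # 4/(3*sqrt(3)) ~ 0.77: nonzero, i.e. a uniform fictitious flux
```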
Neutron scattering measures magnetic susceptibility. We here compute the momentum-dependent magnetic susceptibility, \(\chi(\mathbf{q},\omega=0)\) of Co\({}_{1/3}\)NbS\({}_{2}\) at different hole-doping \(\delta\) values to understand the spin modulation vector \(\mathbf{q}\) of the leading magnetic instability and its relation to the correlated electronic structure. We first compute the real part of the bare susceptibility \(\chi^{0}\) using DMFT (Eq. 12) at \(\delta=0\) and 2.0, as shown in Fig. 4. In both doping levels, the Co bare spin susceptibility (\(\chi^{0}\)) obtained from DMFT shows very weak momentum dependence due to the strongly localized nature of Co \(d\) orbitals. This can be understood from the one-particle DMFT spectra of Co \(d\) bands showing no clear evidence of quasi-particle peaks near the Fermi energy, but rather a very broad band dispersion without much dependence on momenta. Unlike the Co \(d\) orbitals, the Nb \(d_{z^{2}}\) orbitals show a rather strong momentum dependence of the susceptibility at \(\delta=2.0\) due to their itinerant nature near the Fermi energy. The contribution of the Nb \(d_{z^{2}}\) orbitals to the susceptibility \(\chi^{0}\) is larger than that of the Co \(d\) orbitals and depends sensitively on momentum and the doping level (see Fig. 4).
We argue that the leading modulation vector \(\mathbf{q}\) of \(\chi^{0}\) in Co\({}_{1/3}\)NbS\({}_{2}\) can be mostly determined by the Fermi surface momentum (\(2k_{F}\)) of the hole pocket centered at
Figure 2: (a) The DMFT band structure of Co\({}_{1/3}\)NbS\({}_{2}\), (b) The DMFT band structure upon the hole doping \(\delta\)=2.0 (two holes per CoNb\({}_{3}\)S\({}_{6}\). The green lines represent the DFT band from Fig. 1(b).
the \(\Gamma\) point. As shown in Fig. 3, the size of the hole pocket can be sensitively dependent on the hole doping levels and it is consistent with the ARPES measurement at \(\delta\) = 2.0. At \(\delta\)=0, the Nb \(d_{z^{2}}\) orbital contribution to \(\chi^{0}\) has no clear momentum dependence although the spectra near the \(\Gamma\) point are slightly larger than those at other momenta. This is because the smallest hole-pocket near the \(\Gamma\) point has the largest spectral weight while the other Fermi surfaces have much smaller spectral weights. As the hole doping increases (\(\delta\)=2.0), the size of the hole pocket near the \(\Gamma\) point also increases and the contribution of the Nb \(d_{z^{2}}\) orbital to the susceptibility favors the modulation \(\mathbf{q}\) vector at the high-symmetry \(M\) point, which is close to the Nb Fermi surface momentum (\(2k_{F}\)). The susceptibility near the \(K\) point also shows the enhanced peak height at \(|\mathbf{q}|\simeq 2k_{F}\) although the \(M\) point shows the maximum peak height. We note that the peak heights of the susceptibility depend on the effect of the form factor in Eq. 9 - the peak height at the \(M\) point becomes much closer to that at the \(K\) point if the form factor is simplified using Eq. 8 (see Appendix).
While the bare magnetic susceptibility \(\chi^{0}\) of Co \(d\) orbitals does not have any significant momentum dependence due to the strongly localized nature of the band structure, the full magnetic susceptibility including the interaction effect shows the preference for a particular momentum suggesting the long-range spin ordering of Co \(d\) spins coupled via the RKKY interaction mediated by the itinerant Nb \(d_{z^{2}}\) bands. It turns out that the intersite interaction \(\bar{V}\) plays an important role in mediating the RKKY interaction. Our RPA susceptibility calculation shows that the local interaction \(\bar{U}\) enhances the absolute value of \(\chi(\mathbf{q})\) while retaining the weak momentum dependence. The increase of \(\bar{V}\) results in the momentum dependence of \(\chi(\mathbf{q})\), which is peaked at the \(M\) point (the same peak position as \(\chi^{0}\)) for the RPA susceptibility (see Fig. 5).
We find that \(\chi\) diverges at the \(M\) point near \(\bar{U}\)=1.7 eV and \(\bar{V}\)=0.3 eV, supporting the occurrence of the magnetic instability. While this is a direct way to study the instability from the susceptibility, one can also further analyze the different band contributions to the
Figure 3: The DMFT Fermi surface for Co\({}_{1/3}\)NbS\({}_{2}\) as a function of hole-doping (a) \(\delta\)=0, (b) 1.0, and (c) 2.0. The hole-doping effect increases the size of the hole pocket centered at the \(\Gamma\) point. The calculated Fermi surface is consistent with the (d) experimental ARPES measurement [8] when \(\delta\)=2.0. The dashed line represents the BZ of Co\({}_{1/3}\)NbS\({}_{2}\) and the solid line shows the BZ of NbS\({}_{2}\). Note that the \(M\) point in (d) is defined in the solid-line BZ, while our \(M\) points are defined in the dashed-line BZ.
magnetic instability by decomposing the product of the \(\chi^{0}\) and the \(\Gamma^{irr}\) matrices while solving the Bethe-Salpeter equation in Eq. 10[21]. Again, the reasonable range of the RPA \(\bar{U}\) should be much smaller than the Hubbard \(U\) used in DMFT since it does not account for the orbital and dynamical screening process. The screened inter-site \(\bar{V}\) also should be much smaller than the on-site \(\bar{U}\) value. While determining \(\bar{U}\) and \(\bar{V}\) quantitatively will be a complicated task, we find that the qualitative feature of the magnetic susceptibility (i.e., the momentum dependence) does not vary depending on \(\bar{U}\) and \(\bar{V}\) values.
### Electronic structure of Fe\({}_{1/3}\)NbS\({}_{2}\) and Ni\({}_{1/3}\)NbS\({}_{2}\)
While Co\({}_{1/3}\)NbS\({}_{2}\) shows a large anomalous Hall effect likely originating from the non-coplanar spin structure, such effects have not been seen experimentally for Fe\({}_{1/3}\)NbS\({}_{2}\) and Ni\({}_{1/3}\)NbS\({}_{2}\). Although our previous DFT calculation showed that both Fe\({}_{1/3}\)NbS\({}_{2}\) and Ni\({}_{1/3}\)NbS\({}_{2}\) can favor the non-coplanar \(3q\) spin structure energetically, the ground-state magnetic state has been studied only for a small number of possible \(\mathbf{q}\) vectors allowed within a supercell. Therefore, it is plausible that the leading magnetic \(\mathbf{q}-\)vector can vary depending on the intercalated transition metal ions due to the change in electronic structure. Here, we compute the magnetic susceptibility and the Fermi surface of Fe\({}_{1/3}\)NbS\({}_{2}\) and Ni\({}_{1/3}\)NbS\({}_{2}\) at \(\delta=2.0\), similarly to the Co\({}_{1/3}\)NbS\({}_{2}\) case.
We find that the hole doping effects in Fe\({}_{1/3}\)NbS\({}_{2}\) and Ni\({}_{1/3}\)NbS\({}_{2}\) can be quite different. Fig. 6 shows that the DMFT Fermi surface of Fe\({}_{1/3}\)NbS\({}_{2}\) has a slightly smaller hole pocket compared to the Co\({}_{1/3}\)NbS\({}_{2}\) one at the same hole doping (\(\delta\)=2.0). Our DMFT calculation shows that the hole doping induces a change of the Fe \(d\) occupancy as the Fe valence state becomes close to Fe\({}^{2.3+}\). In Fe\({}_{1/3}\)NbS\({}_{2}\), the hole doping mostly affects the Fe states near the Fermi energy since the hybridization between the Fe \(3d\) and Nb \(4d\) orbitals is rather weak. As a result, the size of the Nb hole pocket in Fe\({}_{1/3}\)NbS\({}_{2}\) is less sensitive to doping while it becomes larger upon hole doping for the Co\({}_{1/3}\)NbS\({}_{2}\) case. Moreover, the Fe \(d\) character becomes much weaker and is not visible near the \(K\) point, as also seen in the ARPES data [8]. In Ni\({}_{1/3}\)NbS\({}_{2}\), the DMFT valence of the Ni ion is still close to Ni\({}^{2+}\) and the Ni band is located lower than the Co or Fe bands in the other materials. This means that the Ni band has the negative charge-transfer effect, similarly to what is observed in some other rare-earth nickelates [23]. As a
Figure 5: The magnetic susceptibility \(\chi\) for Co\({}_{1/3}\)NbS\({}_{2}\) including the RPA-type interaction at different hole-dopings \(\delta=0\) (top panel) and \(\delta=2.0\) (bottom panel).
Figure 6: The DMFT Fermi surfaces (left panels) for \(M_{1/3}\)NbS\({}_{2}\) (\(M\)= (a) Fe (top panel) and (c) Ni (bottom panel)) at the hole doping of \(\delta\)= 2.0. The experimental ARPES [8] measurements are also compared (right panels).
result, the hole doping mostly affects the Nb hole states and the Nb hole pocket in Ni\({}_{1/3}\)NbS\({}_{2}\) becomes the largest among three materials.
### The magnetic susceptibility of Fe\({}_{1/3}\)NbS\({}_{2}\) and Ni\({}_{1/3}\)NbS\({}_{2}\)
These notable variations of the DMFT Fermi surface due to the intercalation by different transition metals can also change the leading magnetic susceptibility momentum \(\mathbf{q}\). In Fe\({}_{1/3}\)NbS\({}_{2}\), the susceptibility \(\mathbf{q}\) peaks near the \(M\) and \(K\) points are almost degenerate as the Fermi momentum \(2k_{F}\) of the hole pocket in the Fermi surface gets closer to both the \(M\) and \(K\) points. Similar to the Co\({}_{1/3}\)NbS\({}_{2}\) case, the inter-site interaction \(V\) strongly enhances the momentum dependence of the Fe \(d\) susceptibility. In Ni\({}_{1/3}\)NbS\({}_{2}\), the susceptibility peak is slightly higher at the \(K\) point as the leading modulation vector, while the momentum dependence of the susceptibility is the weakest among the three materials. This is because the \(2k_{F}\) of the hole pocket in Ni\({}_{1/3}\)NbS\({}_{2}\) is much larger than the high symmetry points and, as a result, the susceptibility does not show a dominant momentum peak.
In both Fe\({}_{1/3}\)NbS\({}_{2}\) and Ni\({}_{1/3}\)NbS\({}_{2}\), the modulation factor (Eq. 14) in the susceptibility can change the \(\chi(\mathbf{q})\) profile. Without the factor, both susceptibilities enhance the peak near the \(K\) point (see Appendix). It is possible that both Fe\({}_{1/3}\)NbS\({}_{2}\) and Ni\({}_{1/3}\)NbS\({}_{2}\) have a magnetic \(\mathbf{q}\) instability different from that of the Co\({}_{1/3}\)NbS\({}_{2}\) case. Finally, the spectral weight of the susceptibility is the smallest for Ni\({}_{1/3}\)NbS\({}_{2}\), as the static magnetic moment of the Ni ion is the smallest among the three materials [24].
## IV Conclusion
We studied the magnetic susceptibility and the correlated electronic structure of \(M_{1/3}\)NbS\({}_{2}\) (\(M\)= Co, Fe, and Ni) using DMFT to treat the strong correlation effect of transition metal ions. Our DMFT band structure
Figure 7: The magnetic susceptibility for Fe\({}_{1/3}\)NbS\({}_{2}\), the polarizability \(\chi^{0}\) (top panel) and the RPA \(\chi\) (bottom panel)
Figure 8: The magnetic susceptibility for Ni\({}_{1/3}\)NbS\({}_{2}\), the polarizability \(\chi^{0}\) (top panel) and the RPA \(\chi\) (bottom panel)
and the Fermi surface calculations upon hole-doping are consistent with the ARPES measurements [8] of these compounds. The size of the hole pocket centered at the \(\Gamma\) point is comparable to the ARPES data for all compounds and the appearance of the electron pocket at the \(K\) point in Co\({}_{1/3}\)NbS\({}_{2}\) is correctly captured. Due to the strong hybridization between Co/Ni \(3d\) orbitals and Nb \(4d\) orbitals, the doped holes mostly change the Nb valence states and the size of the hole pocket centered at the \(\Gamma\) point can be tuned upon the hole doping. In Fe\({}_{1/3}\)NbS\({}_{2}\), the hole-doping effect changes mostly the Fe valence state.
We also show that the spin susceptibility \(\chi(\mathbf{q})\) calculation using DMFT can help identify the momentum \(\mathbf{q}\) of the leading magnetic instability in strongly correlated materials. This method avoids the need to construct a large supercell to study the magnetic instability at an arbitrary momentum \(\mathbf{q}\). While the spin susceptibility of Co\({}_{1/3}\)NbS\({}_{2}\) is peaked at \(\mathbf{q}=(1/2,0,0)\) (the \(M\) point), which is consistent with the \(3q\)-type non-coplanar spin structure, the maximum peak positions change to the \(K\) point for Fe\({}_{1/3}\)NbS\({}_{2}\) and Ni\({}_{1/3}\)NbS\({}_{2}\). This suggests that the magnetic ground state of these two compounds will be distinct from that of Co\({}_{1/3}\)NbS\({}_{2}\).
## Acknowledgement
We would like to thank Mike Norman and Chris Lane for useful discussions. This work was supported by the Materials Sciences and Engineering Division, Basic Energy Sciences, Office of Science, US Department of Energy. We gratefully acknowledge the computing resources provided on Bebop, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory.
|
2309.10301 | Prominent Roles of Conditionally Invariant Components in Domain
Adaptation: Theory and Algorithms | Domain adaptation (DA) is a statistical learning problem that arises when the
distribution of the source data used to train a model differs from that of the
target data used to evaluate the model. While many DA algorithms have
demonstrated considerable empirical success, blindly applying these algorithms
can often lead to worse performance on new datasets. To address this, it is
crucial to clarify the assumptions under which a DA algorithm has good target
performance. In this work, we focus on the assumption of the presence of
conditionally invariant components (CICs), which are relevant for prediction
and remain conditionally invariant across the source and target data. We
demonstrate that CICs, which can be estimated through conditional invariant
penalty (CIP), play three prominent roles in providing target risk guarantees
in DA. First, we propose a new algorithm based on CICs, importance-weighted
conditional invariant penalty (IW-CIP), which has target risk guarantees beyond
simple settings such as covariate shift and label shift. Second, we show that
CICs help identify large discrepancies between source and target risks of other
DA algorithms. Finally, we demonstrate that incorporating CICs into the domain
invariant projection (DIP) algorithm can address its failure scenario caused by
label-flipping features. We support our new algorithms and theoretical findings
via numerical experiments on synthetic data, MNIST, CelebA, Camelyon17, and
DomainNet datasets. | Keru Wu, Yuansi Chen, Wooseok Ha, Bin Yu | 2023-09-19T04:04:59Z | http://arxiv.org/abs/2309.10301v2 | # Prominent Roles of Conditionally Invariant Components
###### Abstract
Domain adaptation (DA) is a statistical learning problem that arises when the distribution of the source data used to train a model differs from that of the target data used to evaluate the model. While many DA algorithms have demonstrated considerable empirical success, blindly applying these algorithms can often lead to worse performance on new datasets. To address this, it is crucial to clarify the assumptions under which a DA algorithm has good target performance. In this work, we focus on the assumption of the presence of conditionally invariant components (CICs), which are relevant for prediction and remain conditionally invariant across the source and target data. We demonstrate that CICs, which can be estimated through conditional invariant penalty (CIP), play three prominent roles in providing target risk guarantees in DA. First, we propose a new algorithm based on CICs, importance-weighted conditional invariant penalty (IW-CIP), which has target risk guarantees beyond simple settings such as covariate shift and label shift. Second, we show that CICs help identify large discrepancies between source and target risks of other DA algorithms. Finally, we demonstrate that incorporating CICs into the domain invariant projection (DIP) algorithm can address its failure scenario caused by label-flipping features. We support our new algorithms and theoretical findings via numerical experiments on synthetic data, MNIST, CelebA, and Camelyon17 datasets.
## 1 Introduction
The classical statistical learning problem assumes that the data used for training and those used for testing are drawn from the same data distribution. While this assumption is often valid, distribution shifts are prevalent in real-world data problems. Distribution shifts happen when the distribution of training (or source) data differs from that of test (or target) data (Koh et al., 2021). For example, when a machine learning model is trained on labeled source data from a few hospitals and then deployed in a new hospital, often there are distributional shifts because the data collection and pre-processing in different hospitals can be different (Veta et al., 2016; Komura and Ishikawa, 2018; Zech et al., 2018). The statistical learning problem that tackles distributional shifts with labeled source data and unlabeled target data is called a _domain adaptation_ (DA) problem. A solution to DA is desired especially in situations where obtaining labeled data from the target domain is difficult and expensive while unlabeled data is easily available. In this case, without collecting new labeled target data, one may attempt to pre-train the model on related large labeled datasets such as ImageNet (Deng et al., 2009) and adapt the model to the unlabeled target data such as CT scan images (Cadrin-Chencevert, 2022). However, due to distributional shifts between the large labeled datasets and the target dataset, performance improvement is not always guaranteed (He et al., 2019). Without careful consideration, the presence of distributional shifts can result in a decrease in the performance of many classical statistical learning algorithms.
While DA is an important learning problem, a generic cure is hopeless if there is no useful relation between the source and target data that can aid in prediction. In particular, the DA problem is ill-posed in general because for any given algorithm and source data, there will always be some arbitrarily chosen target data such that the algorithm trained on the source data will not perform well. Establishing reasonable assumptions relating the source and target data is critical, and depending on these assumptions, many ways to formulate a DA problem exist.
One common way to formulate a DA problem is to assume that the conditional distribution of label given the covariate, \(Y\mid X\), remains the same in both source and target data. When \(Y\mid X\) is invariant, it is implied that the covariate distribution changes. This formulation is known as _covariate shift_(Shimodaira, 2000; Quinonero-Candela et al., 2008). Successful approaches to tackle the covariate shift assumption include estimating the likelihood ratio between source and target covariates to correct for this shift (Shimodaira, 2000; Sugiyama et al., 2007; Sugiyama and Kawanabe, 2012). A related but different way of relating the source and target distributions is to assume that the conditional distribution of covariates, given the label, \(X\mid Y\), is invariant. In this case, the marginal distributions of the label can differ. For this reason, this formulation is named _label shift_(Lipton et al., 2018). DA solutions typically involve correcting the likelihood ratio of labels using the conditional invariance of \(X\mid Y\)(Azizzadenesheli et al., 2019; Tachet des Combes et al., 2020; Garg et al., 2020). Although covariate shift and label shift assumptions have been widely studied and successfully applied in some cases (Sugiyama et al., 2007; Wu et al., 2021), their applicability is often limited in more practical scenarios.
Moving beyond the scope of covariate shift and label shift, another popular way of formulating DA is to assume the presence of invariant feature mappings (i.e., transformations of covariates \(X\)). Then the DA problem is reduced to identifying features that are important for the underlying task and are invariant across the source and target domains. Domain Invariant Projection (DIP) (Baktashmotlagh et al., 2013) was proposed as an attempt to identify these invariant features through projecting the source and target covariates into a common subspace. Subsequent works (Ganin et al., 2016; Tzeng et al., 2017; Hoffman et al., 2018) have advanced the common subspace approach by incorporating neural network implementation, demonstrating empirical success across many datasets. Despite its empirical success, however, recent work by Johansson et al. (2019); Zhao et al. (2019) revealed that DIP may have a target risk much larger than its source risk, caused by the so-called label-flipping issue. Specifically, in the absence of target labels, in general there is no guarantee that DIP will find the true invariant representations. If DIP fails to do so, its target performance may deviate significantly from its source performance. What is worse is that currently there are no practical ways to check whether DIP has found the true invariant features, which presents a challenge for its practical use.
In this work, we make the assumption on the existence of conditionally invariant components (CICs) (Gong et al., 2016; Heinze-Deml and Meinshausen, 2017)--feature representations which are useful for prediction and whose distribution is invariant _conditioned_ on the labels across source and target domains (see Definition 1 for the formal definition of CICs). With access to multiple source domain data that are related to the target domain, it becomes practically plausible to estimate CICs. In this setting, the existence of CICs can be well-justified because any features that are invariant, conditioned on the labels across these heterogeneous source domains, are likely to remain conditionally invariant in the target domain. The idea of taking advantage of the heterogeneity in multiple datasets has its origins in causality, robust statistics (Peters et al., 2016; Buhlmann, 2020) as well as stability-driven statistical analysis in the PCS framework (Yu and Kumbier, 2020). Moreover, in anticausal learning scenarios (Scholkopf et al., 2012) or when datasets are generated through structural causal models (Pearl, 2009; Chen and Buhlmann, 2020), CICs naturally emerge when unperturbed covariates are descendants of the labels.
Under the assumption on the existence of CICs, Conditional Invariant Penalty (CIP) is a widely used algorithm to identify CICs (Gong et al., 2016; Heinze-Deml and Meinshausen, 2017), through enforcing the invariance of conditional feature distributions across multiple source domains. CIP is shown to achieve good target performance under several theoretical settings (Heinze-Deml and Meinshausen, 2017; Chen and Buhlmann, 2020) and empirically (Li et al., 2018, 2018; Jiang et al., 2020).
Despite the rapid development on CIP to identify CICs, the understanding of DA algorithms based on CICs beyond simple structural equation models is still limited, and their ability to handle DA problems with label shifts is unclear in the previous literature. Additionally, while the generalized label shift has been proposed in Tachet des Combes et al. (2020), it is not known how to reliably identify CICs via their proposed algorithms. Note that in the DA setting, target labels are unavailable, making it impossible to estimate target performance through validation or cross-validation. It is crucial to quantify target risk guarantees of DA algorithms under assumptions made.
Additionally, while DA algorithms based on CICs exhibit target performance comparable to that on the source data, often they are not the top performers (Heinze-Deml and Meinshausen, 2017; Chen and Buhlmann, 2020). This is mainly due to the fact that these algorithms only use source data, and the requirement for CICs to maintain invariance across multiple source domains may discard some features useful for target prediction. In contrast, DIP leverages data from a single source together with target covariates for enhanced target performance. DIP is preferred by many practitioners but it can lead to severe failure in several settings, with its performance much worse than Empirical Risk Minimization (ERM) (Wu et al., 2019; Zhao et al., 2019; Chen and Buhlmann, 2020). Given the conservative nature of CICs-based methods and the potential advantage of DIP, it is natural to ask whether DA methods based on CICs can help detect the failure of DIP or be combined with DIP to address the shortcomings of both algorithms.
### Our contributions
To address the aforementioned challenges in DA algorithms based on CICs and the potential risk of DIP, in this work we focus on highlighting the significant roles that CICs can play in DA. Under the assumption on the existence of CICs and the availability of multiple source datasets, our main contributions are three-fold.
First, we introduce the importance-weighted conditional invariant penalty (IW-CIP) algorithm and analyze its target risk guarantees under the existence of CICs. Under structural equation models, we show that CICs can be correctly identified via both CIP and IW-CIP using labeled data from multiple source domains. Consequently, it is only the finite-sample error gap that accounts for the difference between the target risk of the IW-CIP classifier and that of the optimal conditionally invariant classifier.
Second, we demonstrate how CICs can be used to provide target risk lower bounds for other DA algorithms without requiring access to target labels. Provided that CICs are accurately identified, this lower bound allows for assessing the target performance of any other DA algorithms, making it possible to detect their failures using only source data and unlabeled target data.
Lastly, we introduce JointDIP, a new DA algorithm that extends the domain invariant projection (DIP). Under structural equation models, we prove that JointDIP reduces the possibility of label-flipping after incorporating CICs. Our findings are supported by numerical experiments on synthetic and real datasets, including MNIST, CelebA, and Camelyon17.
The rest of the paper is organized as follows. Section 2 provides the necessary technical background and formally sets up the domain adaptation problem. In Section 3, we present the first role of CICs by introducing the IW-CIP algorithm and establish finite-sample target risk bounds to characterize its target risk performance (cf. Theorem 1.A, 1.B). Section 4 describes the other two roles of CICs in DA, demonstrating how they can be used to detect the failure of other DA algorithms (cf. Theorem 2), and introducing the JointDIP algorithm to address the label-flipping issues of DIP (cf. Theorem 3). In Section 5, we complement our theoretical arguments with extensive numerical experiments on synthetic and real datasets, emphasizing the importance of learning CICs as an essential part of domain adaptation
pipeline. Finally, Section 6 provides a more complete review of related work in the literature than in the introduction.
## 2 Background and problem setup
In this section, we begin by defining the domain adaptation problem and introducing the concept of conditionally invariant components (CICs). We then outline two baseline DA algorithms: the conditional invariant penalty (CIP) algorithm, which finds conditionally invariant representation across multiple source domains, and the domain invariant projection (DIP) algorithm, which works with a single source domain and takes advantage of additional unlabeled target data.
### Domain adaptation problem setup
We consider the domain adaptation problem with \(M\) (\(M\geq 1\)) labeled source environments and one unlabeled target environment. By an _environment_ or a _domain_, we mean a dataset \(\mathcal{D}\) with i.i.d. samples drawn from a common distribution \(\mathcal{P}\). Specifically, for \(m\in\{1,\ldots,M\}\), in the \(m\)-th source environment, we observe \(n^{(m)}\) i.i.d. samples
\[\mathcal{D}^{(m)}=\{(X_{k}^{(m)},Y_{k}^{(m)})\}_{k=1}^{n^{(m)}},\]
drawn from the \(m\)-th source data distribution \(\mathcal{P}^{(m)}\). Independently of the source data, there are \(n^{(\mathfrak{T})}\) i.i.d. samples
\[\mathcal{D}^{(\mathfrak{T})}=\{(X_{k}^{(\mathfrak{T})},Y_{k}^{(\mathfrak{T})} )\}_{k=1}^{n^{(\mathfrak{T})}},\]
drawn from the target distribution \(\mathcal{P}^{(\mathfrak{T})}\). We denote general random variables drawn from \(\mathcal{P}^{(m)}\) and \(\mathcal{P}^{(\mathfrak{T})}\) as \((X^{(m)},Y^{(m)})\) and \((X^{(\mathfrak{T})},Y^{(\mathfrak{T})})\), respectively. In the domain adaptation setting, all the source data are observed, while only the target covariates \(\mathcal{D}^{(\mathfrak{T})}_{X}=\{X_{k}^{(\mathfrak{T})}\}_{k=1}^{n^{( \mathfrak{T})}}\) are observed in the target domain. For simplicity, throughout the paper, we assume that each covariate lies in a \(p\)-dimensional Euclidean space \(\mathbb{R}^{p}\), and the labels belong to the set \(\mathcal{Y}=\{1,\ldots,L\}\) where \(L\) represents the total number of classes.
To measure the performance of a DA algorithm, we define the _target population risk_ of a classifier \(h:\mathbb{R}^{p}\to\{1,\ldots,L\}\), mapping covariates to labels, via the 0-1 loss as
\[\mathcal{R}^{(\mathfrak{T})}(h)=\mathbb{E}\left[\mathbf{1}_{h(X^{(\mathfrak{ T})})\neq Y^{(\mathfrak{T})}}\right]=\mathbb{P}\left\{h(X^{(\mathfrak{T})}) \neq Y^{(\mathfrak{T})}\right\}. \tag{1}\]
Consequently, \(1-\mathcal{R}^{(\mathfrak{T})}(h)\) is the target population classification accuracy. Similarly, we define the \(m\)-th source population risk as
\[\mathcal{R}^{(m)}(h)=\mathbb{E}\left[\mathbf{1}_{h(X^{(m)})\neq Y^{(m)}} \right]=\mathbb{P}\left\{h(X^{(m)})\neq Y^{(m)}\right\}. \tag{2}\]
The main goal of the domain adaptation problem is to use source and unlabeled target data to estimate a classifier \(h:\mathbb{R}^{p}\to\{1,\ldots,L\}\), from a set of functions called the hypothesis class \(\mathcal{H}\), such that the target population risk is small. To quantify this discrepancy, we
compare the target population risk with the _oracle target population risk_\(\mathcal{R}^{(\mathfrak{T})}(h_{\text{oracle}})\) that we may aspire to achieve, where
\[h_{\text{oracle}}=\operatorname*{arg\,min}_{h\in\mathcal{H}}\mathcal{R}^{( \mathfrak{T})}(h). \tag{3}\]
Without specifying any relationship between the source distribution \(\mathcal{P}^{(m)}\) and the target distribution \(\mathcal{P}^{(\mathfrak{T})}\), there is no hope that the target population risk of a classifier learned from the source and unlabeled target data is close to the oracle target population risk. Throughout the paper, we focus on DA problems where conditionally invariant components (CICs) across all source and target environments are present and correlated with the labels. The existence of CICs was first assumed in Gong et al. (2016) and Heinze-Deml and Meinshausen (2017). Under assumptions of arbitrarily large interventions and infinite data, Heinze-Deml and Meinshausen (2017) established that their classifier built on CICs achieves distributional robustness. In this paper, instead of discussing distributional robustness of a classifier, we construct classifiers that have target population risks close to the oracle target risk. Before that, we introduce CICs and the best possible classifier built upon CICs.
Definition 1 (Conditionally invariant components (CICs)): Suppose that there exist \(M\) source distributions \(\{\mathcal{P}^{(m)}\}_{1\leq m\leq M}\) and a target distribution \(\mathcal{P}^{(\mathfrak{T})}\) on \(\mathbb{R}^{p}\times\{1,2,\ldots,L\}\). We say that a function \(\phi:\mathbb{R}^{p}\to\mathbb{R}^{q}\) is a conditionally invariant feature mapping, if
\[\mathcal{P}^{(m)}_{\phi(X)|Y=y}=\mathcal{P}^{(\mathfrak{T})}_{\phi(X)|Y=y},~{ }~{}\forall m\in\{1,\ldots,M\},~{}y\in\{1,\ldots,L\}. \tag{4}\]
The corresponding feature representation \(\phi(X)\) is called a conditionally invariant component (CIC) if it has a single dimension (\(q=1\)), and CICs if it is multidimensional (\(q>1\)). When \(\phi\) maps \(\mathbb{R}^{p}\) to \(\{1,2,\ldots,L\}\) and satisfies Eq. (4), we refer to it as a conditionally invariant classifier.
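As a rough empirical diagnostic of the conditional invariance in Eq. (4), one can compare the per-class distributions of \(\phi(X)\) across a source domain and the target domain with a two-sample statistic. Below is a minimal sketch using a Gaussian-kernel MMD on synthetic data; the kernel, its bandwidth, and the toy data are illustrative assumptions rather than choices made in this paper.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(a, b, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy.
    return (gaussian_kernel(a, a, sigma).mean()
            + gaussian_kernel(b, b, sigma).mean()
            - 2 * gaussian_kernel(a, b, sigma).mean())

def conditional_invariance_gap(phi, Xs, Ys, Xt, Yt, labels):
    # Largest per-class MMD between phi(X) | Y = y in the two domains;
    # values near zero are consistent with Eq. (4) for this pair of domains.
    return max(mmd2(phi(Xs[Ys == y]), phi(Xt[Yt == y])) for y in labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy binary data: the first coordinate is conditionally invariant,
    # the second is shifted (given the label) in the "target" domain.
    Ys, Yt = rng.integers(0, 2, 500), rng.integers(0, 2, 500)
    Xs = np.c_[Ys + rng.normal(0, 1, 500), Ys + rng.normal(0, 1, 500)]
    Xt = np.c_[Yt + rng.normal(0, 1, 500), Yt + 3 + rng.normal(0, 1, 500)]
    phi_inv = lambda X: X[:, :1]   # keeps only the invariant coordinate
    phi_all = lambda X: X          # keeps both coordinates
    print(conditional_invariance_gap(phi_inv, Xs, Ys, Xt, Yt, [0, 1]))  # small
    print(conditional_invariance_gap(phi_all, Xs, Ys, Xt, Yt, [0, 1]))  # large
```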
We will use the term "CICs across source distributions" instead if the feature representation is conditionally invariant on all \(\mathcal{P}^{(m)}\) but not necessarily on \(\mathcal{P}^{(\mathfrak{T})}\). With this definition, our assumption on the "existence of CICs" can be stated as: there exists a conditionally invariant mapping \(\phi\) such that \(\phi(X)\) is CIC(s) across \(M\) source distributions \(\{\mathcal{P}^{(m)}\}_{1\leq m\leq M}\) and a target distribution \(\mathcal{P}^{(\mathfrak{T})}\). In Definition 1, constant representations can be viewed as a trivial case of CICs. However, only CICs that are useful for prediction are beneficial for DA problems. Thus, our assumption on the existence of CICs refers to the existence of CICs useful for prediction. The best possible classifier built upon such CICs is the following optimal classifier.
Definition 2 (Optimal conditionally invariant classifier): Let \(\Phi\) and \(\mathcal{G}\) be classes of functions, where each function \(\phi\in\Phi\) maps \(\mathbb{R}^{p}\) to \(\mathbb{R}^{q}\), and each function \(g\in\mathcal{G}\) maps \(\mathbb{R}^{q}\) to \(\{1,2,\ldots,L\}\). Under the assumption on the existence of CICs, we define the optimal conditionally invariant classifier \(h^{\star}\) as
\[h^{\star} =g^{\star}\circ\phi^{\star}, \tag{5}\] \[g^{\star},\phi^{\star} =\operatorname*{arg\,min}_{g\in\mathcal{G},\phi\in\Phi}~{}~{} \mathcal{R}^{(\mathfrak{T})}(g\circ\phi)\] \[\text{subject to}~{}\mathcal{P}^{(m)}_{\phi(X)|Y=y}=\mathcal{P}^{( \mathfrak{T})}_{\phi(X)|Y=y},~{}~{}\forall m\in\{1,\ldots,M\},~{}y\in\{1, \ldots,L\}.\]
The conditionally invariant classifier above is optimal in the sense that it minimizes the target population risk while the learned representation is conditionally invariant across \(\mathcal{P}^{(m)}\) (\(1\leq m\leq M\)) and \(\mathcal{P}^{(\mathfrak{T})}\). When evaluating the target performance of a CICs-based classifier \(h=g\circ\phi\), instead of directly comparing it with \(h_{\text{oracle}}\), we consider comparing it with \(h^{\star}\) first, and then relating \(h^{\star}\) to \(h_{\text{oracle}}\). Intuitively, the target risk difference between \(h^{\star}\) and \(h_{\text{oracle}}\) will not be significant when the dimension of CICs is sufficiently large (c.f. Proposition 4). In this case, to build a CICs-based classifier with a guaranteed target risk bound compared to \(h_{\text{oracle}}\), it suffices to find a classifier which achieves a low target risk compared to \(h^{\star}\).
Another widely used class of DA algorithms, known as domain invariant projection (DIP) (cf. Section 2.2.2), seeks to find a feature mapping \(\phi\) which matches the source and target marginal distribution of \(\phi(X)\). Although it has been shown to be successful in some practical scenarios (Ganin et al., 2016; Mao et al., 2017; Hoffman et al., 2018; Peng et al., 2019), in general there is no guarantee of a low target risk. Both Johansson et al. (2019) and Zhao et al. (2019) provide simple examples where DIP can even perform worse than a random guess, as if features learned by DIP "flip" the labels. We formulate the rationale behind their examples by defining label-flipping features as follows.
**Definition 3** (Label-flipping feature): _Without loss of generality, consider the first source distribution \(\mathcal{P}^{(1)}\) and the target distribution \(\mathcal{P}^{(\mathfrak{T})}\) on \(\mathbb{R}^{p}\times\{1,2,\ldots,L\}\). We say that a function \(f:\mathbb{R}^{p}\rightarrow\mathbb{R}\) is a label-flipping feature mapping, if there exists \(y\in\{1,2,\ldots,L\}\) such that1_
Footnote 1: When \(Y\) is binary, the definition is equivalent to \(\rho\left(f(X^{(1)}),Y^{(1)}\right)\cdot\rho\left(f(X^{(\mathfrak{T})}),Y^{( \mathfrak{T})}\right)<0\).
\[\rho\left(f(X^{(1)}),\mathbf{1}_{Y^{(1)}=y}\right)\cdot\rho\left(f(X^{( \mathfrak{T})}),\mathbf{1}_{Y^{(\mathfrak{T})}=y}\right)<0, \tag{6}\]
_where \(\rho(\cdot,\cdot)\) denotes the correlation between random variables. The corresponding feature \(f(X)\) is called a label-flipping feature._
If the label-flipping features exist between a source distribution and the target distribution, they can be inadvertently learned by DIP as part of its learning algorithm for domain invariant representation. This can lead to degraded prediction performance on the target domain, as the sign of the correlation between these features and the labels changes under source and target distributions. We refer to it as the label-flipping issue of DIP.
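For intuition, the sign condition in Eq. (6) can be checked empirically by estimating the two correlations from samples (target labels are used here purely for illustration; they are of course unavailable in the actual DA setting). A minimal sketch under these assumptions:

```python
import numpy as np

def is_label_flipping(f_src, y_src, f_tgt, y_tgt, labels):
    # True if, for some class y, the correlation between the feature and the
    # indicator 1{Y = y} changes sign between source and target (Eq. (6)).
    for y in labels:
        rho_s = np.corrcoef(f_src, (y_src == y).astype(float))[0, 1]
        rho_t = np.corrcoef(f_tgt, (y_tgt == y).astype(float))[0, 1]
        if rho_s * rho_t < 0:
            return True
    return False

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y_src, y_tgt = rng.integers(0, 2, 2000), rng.integers(0, 2, 2000)
    # A feature positively correlated with the class indicator on the source
    # but negatively correlated with it on the target.
    f_src = y_src + rng.normal(0, 0.5, 2000)
    f_tgt = -y_tgt + rng.normal(0, 0.5, 2000)
    print(is_label_flipping(f_src, y_src, f_tgt, y_tgt, labels=[0, 1]))  # True
```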
Next we define an anticausal data generation model that serves as a concrete working example for validating our assumptions and establishing new results. While not all of our theoretical results depend on this model, it nevertheless aids in illustrating how our methods work.
**Definition 4** (General anticausal model): _We say that the data generation model is a general anticausal model, if the source and target distributions are specified as follows. Under the \(m\)-th source distribution \(\mathcal{P}^{(m)}\), source covariates and label are generated by_
\[Y^{(m)}\sim\ \text{Categorical}\left(p_{1}^{(m)},p_{2}^{(m)},\ldots,p_{L}^{(m )}\right),\]
\[X^{(m)}=f^{(m)}(Y^{(m)})+\epsilon^{(m)},\ \epsilon^{(m)}\perp Y^{(m)}\text{,}\]
where \(p_{y}^{(m)}\in(0,1),\sum_{y=1}^{L}p_{y}^{(m)}=1\), and \(f^{(m)}:\mathcal{Y}=\{1,\ldots,L\}\to\mathbb{R}^{p}\) is a deterministic function defining the mechanism between the \(m\)-th source covariates \(X^{(m)}\) and label \(Y^{(m)}\). Under the target distribution \(\mathcal{P}^{(\mathfrak{T})}\), target covariates and label are generated independently of the source data by
\[Y^{(\mathfrak{T})} \sim\ \text{Categorical}\left(p_{1}^{(\mathfrak{T})},p_{2}^{( \mathfrak{T})},\ldots,p_{L}^{(\mathfrak{T})}\right)\text{,}\] \[X^{(\mathfrak{T})} =f^{(\mathfrak{T})}(Y^{(\mathfrak{T})})+\epsilon^{(\mathfrak{T} )},\ \epsilon^{(\mathfrak{T})}\perp Y^{(\mathfrak{T})}\text{,}\]
where \(p_{y}^{(\mathfrak{T})}\in(0,1),\sum_{y=1}^{L}p_{y}^{(\mathfrak{T})}=1\), and \(f^{(\mathfrak{T})}:\mathcal{Y}=\{1,\ldots,L\}\to\mathbb{R}^{p}\) is a deterministic function defining the mechanism between the target covariates \(X^{(\mathfrak{T})}\) and label \(Y^{(\mathfrak{T})}\). The noise terms \(\epsilon^{(m)},\epsilon^{(\mathfrak{T})}\in\mathbb{R}^{p}\), are generated i.i.d. from a zero-mean distribution \(\mathcal{P}_{\epsilon}\).
Under the general anticausal model, conditioned on the labels, the mechanism functions \(f^{(m)}\), \(f^{(\mathfrak{T})}\), \(m=1,\ldots,M\), determine the difference between the source and target conditional distributions \(X\mid Y\) because the noise terms share the same distribution \(\mathcal{P}_{\epsilon}\). This generative model generalizes various perturbations that can occur in an anticausal model (Pearl, 2009). For example, label shift might occur if the marginal distributions of \(Y\) differ. Covariate shift, conditioned on the labels, can occur when the deterministic functions \(f^{(m)},f^{(\mathfrak{T})}\) vary. We present an explicit example under this model, including the presence of both CICs and label-flipping features in Appendix A.
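For concreteness, the sketch below draws samples from a toy instance of the general anticausal model; the particular mechanisms \(f^{(m)}\) and \(f^{(\mathfrak{T})}\), the class probabilities, and the Gaussian noise are illustrative assumptions and are not the example constructed in Appendix A.

```python
import numpy as np

def sample_anticausal(n, class_probs, mechanism, noise_std=1.0, seed=0):
    # Draws (X, Y) with Y ~ Categorical(class_probs) and X = f(Y) + eps,
    # where eps ~ N(0, noise_std^2 I) is independent of Y (Definition 4).
    rng = np.random.default_rng(seed)
    y = rng.choice(len(class_probs), size=n, p=class_probs)
    f_y = np.stack([mechanism(label) for label in y])
    x = f_y + rng.normal(0.0, noise_std, size=f_y.shape)
    return x, y

# Illustrative mechanisms: the first coordinate of f is shared by both domains
# (giving a CIC), while the second coordinate is perturbed in the target.
f_src = lambda y: np.array([2.0 * y, 1.0 * y])     # a source domain m
f_tgt = lambda y: np.array([2.0 * y, -1.0 * y])    # target: flipped 2nd coordinate

X_src, Y_src = sample_anticausal(1000, [0.7, 0.3], f_src, seed=1)
X_tgt, Y_tgt = sample_anticausal(1000, [0.3, 0.7], f_tgt, seed=2)  # label shift as well
print(X_src.mean(axis=0), X_tgt.mean(axis=0))
```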
NotationTo distinguish subscripts from coordinates, we represent the \(j\)-th coordinate of a constant vector \(x\) as \(x_{[j]}\), and similarly, for a random vector \(X\), we use \(X_{[j]}\). Next, we introduce several notations for the empirical equivalents of population quantities. For the \(m\)-th source and target datasets \(\mathcal{D}^{(m)}\), \(\mathcal{D}^{(\mathfrak{T})}\), \(m=1,\ldots,M\), we use \(\widehat{\mathcal{P}}^{(m)}\) and \(\widehat{\mathcal{P}}^{(\mathfrak{T})}\) to denote the empirical data distributions, respectively. We define the \(m\)-th source and target empirical risk of a classifier \(h\in\mathcal{H}\) as
\[\widehat{\mathcal{R}}^{(m)}(h) =\mathbb{E}_{(X,Y)\sim\widehat{\mathcal{P}}^{(m)}}\left[\mathbf{ 1}_{h(X)\neq Y}\right]=\frac{1}{n^{(m)}}\sum_{k=1}^{n^{(m)}}\mathbf{1}_{h(X_{ k}^{(m)})\neq Y_{k}^{(m)}}\text{,}\quad\text{and}\] \[\widehat{\mathcal{R}}^{(\mathfrak{T})}(h) =\mathbb{E}_{(X,Y)\sim\widehat{\mathcal{P}}^{(\mathfrak{T})}} \left[\mathbf{1}_{h(X)\neq Y}\right]=\frac{1}{n^{(\mathfrak{T})}}\sum_{k=1}^{ n^{(\mathfrak{T})}}\mathbf{1}_{h(X_{k}^{(\mathfrak{T})})\neq Y_{k}^{( \mathfrak{T})}}\text{.}\]
For any mapping \(\phi\) defined on \(\mathbb{R}^{p}\), we use \(\mathcal{P}^{(m)}_{\phi(X)}\) and \(\mathcal{P}^{(m)}_{\phi(X)|Y=y}\) to denote the \(m\)-th source marginal distribution of \(\phi(X)\) and the \(m\)-th source conditional distribution of \(\phi(X)\) given its label \(Y=y\). Similarly, we use \(\mathcal{P}^{(\mathfrak{T})}_{\phi(X)}\) and \(\mathcal{P}^{(\mathfrak{T})}_{\phi(X)|Y=y}\) to denote the target marginal distribution of \(\phi(X)\) and the target conditional distribution of \(\phi(X)\) given its label \(Y=y\). The corresponding empirical quantities are denoted by \(\widehat{\mathcal{P}}^{(m)}_{\phi(X)}\), \(\widehat{\mathcal{P}}^{(m)}_{\phi(X)|Y=y}\), \(\widehat{\mathcal{P}}^{(\mathfrak{T})}_{\phi(X)}\), and \(\widehat{\mathcal{P}}^{(\mathfrak{T})}_{\phi(X)|Y=y}\), respectively.
Letting \(\mathcal{P}\) be a distribution on \(\mathbb{R}^{q}\) and \(\mathcal{G}\) be a function class where each function \(g\in\mathcal{G}\) maps \(\mathbb{R}^{q}\) to \(\mathbb{R}\), we recall the Rademacher complexity as
\[\mathfrak{R}_{n,\mathcal{P}}\left(\mathcal{G}\right)\coloneqq\mathbb{E}_{Z_{k} \overset{\text{i.i.d.}}{\sim}\mathcal{P},\sigma_{k}\overset{\text{i.i.d.}}{ \sim}\sigma}\left[\sup_{g\in\mathcal{G}}\left|\frac{1}{n}\sum_{k=1}^{n}\sigma_ {k}g(Z_{k})\right|\right], \tag{7}\]
where \(\sigma_{k}\)'s are random variables drawn independently from the Rademacher distribution, i.e., \(\mathbb{P}\left\{\sigma_{k}=1\right\}=\mathbb{P}\left\{\sigma_{k}=-1\right\}=1/2\). Additionally, for any two distributions \(\mathcal{P}\) and \(\mathcal{Q}\) on \(\mathbb{R}^{q}\) and a class of classifiers \(\mathcal{G}\) where each function \(g\in\mathcal{G}\) maps \(\mathbb{R}^{q}\) to \(\mathcal{Y}=\{1,2,\cdots,L\}\), we define the \(\mathcal{G}\)-divergence between these two distributions as
\[D_{\mathcal{G}}\left(\mathcal{P},\mathcal{Q}\right)\coloneqq\sup_{g\in \mathcal{G}}\max_{y=1,2,\ldots,L}\left|\mathbb{E}_{Z\sim\mathcal{P}}\left[ \mathbf{1}_{g(Z)=y}\right]-\mathbb{E}_{Z\sim\mathcal{Q}}\left[\mathbf{1}_{g(Z )=y}\right]\right|. \tag{8}\]
Note that the \(\mathcal{G}\)-divergence defined in Eq. (8) can be seen as an extension of the \(\mathcal{H}\)-divergence introduced in Ben-David et al. (2010) to multiclass classification.
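The \(\mathcal{G}\)-divergence in Eq. (8) can be approximated from samples by replacing the expectations with empirical frequencies and taking the supremum over a tractable class of classifiers. A minimal sketch, assuming (for illustration only) a small finite class of one-dimensional threshold classifiers:

```python
import numpy as np

def g_divergence(samples_p, samples_q, classifiers, labels):
    # Empirical version of Eq. (8): sup over g, max over y, of the gap in
    # predicted-class frequencies under the two sample sets.
    worst = 0.0
    for g in classifiers:
        pred_p, pred_q = g(samples_p), g(samples_q)
        for y in labels:
            gap = abs((pred_p == y).mean() - (pred_q == y).mean())
            worst = max(worst, gap)
    return worst

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.normal(0.0, 1.0, size=(2000, 1))
    Q = rng.normal(0.5, 1.0, size=(2000, 1))
    # A finite class of threshold classifiers g_t(z) = 1{z > t}.
    thresholds = np.linspace(-2, 2, 41)
    classifiers = [lambda z, t=t: (z[:, 0] > t).astype(int) for t in thresholds]
    print(g_divergence(P, Q, classifiers, labels=[0, 1]))
```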
### Two baseline DA algorithms
With the background of domain adaptation in place, we now proceed to introduce two DA algorithms in this subsection. The first algorithm is the conditional invariant penalty (CIP) that finds conditionally invariant representation across multiple source domains. The second algorithm is domain invariant projection (DIP), which works with a single source domain but also requires target covariates. These two DA algorithms serve as baselines for evaluating the methods we introduce in the subsequent sections.
#### 2.2.1 Conditional invariant penalty (CIP)
The _conditional invariant penalty_ (CIP) algorithm uses the multiple labeled source environments to learn a feature representation that is conditionally invariant across all source domains (Gong et al., 2016; Heinze-Deml and Meinshausen, 2017). More precisely, the CIP algorithm is a two-stage algorithm which minimizes the average source risk across domains while enforcing the first-stage features to be conditionally invariant.
Population CIP:The population CIP classifier is formulated as a constrained optimization problem with a matching constraint on the conditional distributions:
\[h_{\text{CIP}} =g_{\text{CIP}}\circ\phi_{\text{CIP}}, \tag{9}\] \[g_{\text{CIP}},\phi_{\text{CIP}} =\operatorname*{arg\,min}_{g\in\mathcal{G},\phi\in\Phi}\ \ \frac{1}{M}\sum_{m=1}^{M}\mathcal{R}^{(m)}(g\circ\phi)\] \[\text{subject to}\ \ \ \mathfrak{D}\left(\mathcal{P}_{\phi(X)|Y}^{(m)}, \mathcal{P}_{\phi(X)|Y}^{(m^{\prime})}\right)=0\text{ for all }m\neq m^{\prime}\in\{1,\ldots,M\},\]
where \(\mathcal{R}^{(m)}(\cdot)\) is the \(m\)-th population source risk, and \(\mathfrak{D}\left(\cdot,\cdot\right)\) is a distributional distance between two distributions such as the maximum mean discrepancy (MMD) Gretton et al. (2012) or generative adversarial networks (GAN) based distance (Ganin et al., 2016). The optimization is over the set of all two-stage functions where the first stage function belongs
to \(\Phi\) and the second stage to \(\mathcal{G}\). The constraint on the conditional \(\phi(X)\mid Y\) enforces CIP to use CICs across all \(\mathcal{P}^{(m)}\) (\(1\leq m\leq M\)) to build the final classifier. Here the hope is that if feature mappings are conditionally invariant across the heterogeneous source distributions \(\mathcal{P}^{(m)}\), they are likely to be also conditionally invariant under the target distribution \(\mathcal{P}^{(\mathfrak{T})}\). As a result, a classifier built on these features would generalize to the target distribution.
Finite-sample CIP:In the finite-sample case, instead of putting a strict constraint in the optimization, CIP adds the conditional invariant penalty on the empirical distributions, using a pre-specified parameter \(\lambda_{\text{CIP}}>0\) to control the strength of regularization as follows:
\[\widehat{h}_{\text{CIP}} =\widehat{g}_{\text{CIP}}\circ\widehat{\phi}_{\text{CIP}}, \tag{10}\] \[\widehat{g}_{\text{CIP}},\widehat{\phi}_{\text{CIP}} =\operatorname*{arg\,min}_{g\in\mathcal{G},\phi\in\Phi}\ \ \frac{1}{M}\sum_{m=1}^{M}\widehat{\mathcal{R}}^{(m)}(g\circ\phi)+\frac{ \lambda_{\text{CIP}}}{LM^{2}}\cdot\sum_{y=1}^{L}\sum_{m\neq m^{\prime}} \mathfrak{D}\left(\widehat{\mathcal{P}}^{(m)}_{\phi(X)|Y=y},\widehat{ \mathcal{P}}^{(m^{\prime})}_{\phi(X)|Y=y}\right).\]
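To make the finite-sample objective in Eq. (10) concrete, the sketch below evaluates it for a fixed pair \((\phi,g)\) on toy data, using the 0-1 loss and a per-class squared mean difference as a simple stand-in for the distributional distance \(\mathfrak{D}\); in practice \(\phi\) and \(g\) would be neural networks trained with a differentiable surrogate loss and an MMD- or GAN-based distance. The toy domains and the specific maps below are assumptions made only for illustration.

```python
import numpy as np

def cip_objective(phi, g, sources, lam, n_classes):
    # Average empirical source 0-1 risk plus a conditional invariance penalty,
    # in the spirit of Eq. (10). The penalty compares per-class means of
    # phi(X) between every ordered pair of source domains.
    M = len(sources)
    risk = np.mean([(g(phi(X)) != Y).mean() for X, Y in sources])
    penalty = 0.0
    for y in range(n_classes):
        for m in range(M):
            for mp in range(M):
                if m == mp:
                    continue
                Xm, Ym = sources[m]
                Xp, Yp = sources[mp]
                mu_m = phi(Xm[Ym == y]).mean(axis=0)
                mu_p = phi(Xp[Yp == y]).mean(axis=0)
                penalty += np.sum((mu_m - mu_p) ** 2)
    return risk + lam / (n_classes * M ** 2) * penalty

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    def make_domain(shift, n=500):
        y = rng.integers(0, 2, n)
        x = np.c_[y + rng.normal(0, 1, n), shift * y + rng.normal(0, 1, n)]
        return x, y
    sources = [make_domain(1.0), make_domain(-1.0)]    # two heterogeneous sources
    phi = lambda X: X[:, :1]                            # candidate feature map
    g = lambda Z: (Z[:, 0] > 0.5).astype(int)           # classifier on top of phi
    print(cip_objective(phi, g, sources, lam=1.0, n_classes=2))
```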
While CIP makes use of CICs across multiple source distributions to construct the classifier, it does not exploit the target covariates that are also available in our DA setting. Next we introduce another class of DA algorithm which takes advantage of target covariates.
#### 2.2.2 Domain invariant projection (DIP)
In contrast to CIP which utilizes multiple source data, _domain invariant projection_ (DIP) (Baktashmotlagh et al., 2013; Ganin et al., 2016) uses labeled data from a single source as well as unlabeled data from the target domain to seek a common representation that is discriminative about the source labels. The idea of finding a common representation is realized via matching feature representations across source and target domains. Without loss of generality, we formulate DIP using the first source distribution \(\mathcal{P}^{(1)}\), but in principle it can be formulated with any source distribution \(\mathcal{P}^{(m)}\).
Population DIP:We define the population DIP as a minimizer of the source risk under the constraint of marginal distribution matching in feature representations:
\[h_{\text{DIP}} =g_{\text{DIP}}\circ\phi_{\text{DIP}}, \tag{11}\] \[g_{\text{DIP}},\phi_{\text{DIP}} =\operatorname*{arg\,min}_{g\in\mathcal{G},\phi\in\Phi}\ \ \mathcal{R}^{(1)}(g\circ\phi)\] \[\text{subject to}\ \ \ \mathfrak{D}\left(\mathcal{P}^{(1)}_{\phi(X)}, \mathcal{P}^{(\mathfrak{T})}_{\phi(X)}\right)=0,\]
where \(\mathfrak{D}\left(\cdot,\cdot\right)\) is a distributional distance between two distributions as in Eq. (9). The constraint ensures that the source marginal distribution in the feature representation space is well aligned with the target marginal distribution. While our formulation only utilizes the single source, DIP also has a multi-source pooled version (Peng et al., 2019) where marginal distributions in the representation space are matched across all \(\mathcal{P}^{(m)}\) (\(1\leq m\leq M\)) and \(\mathcal{P}^{(\mathfrak{T})}\).
Finite-sample DIP:In the finite sample setting, the hard constraint utilized in population DIP is relaxed to take a regularization form, and therefore
\[\begin{split}\widehat{h}_{\text{DIP}}&=\widehat{g}_{ \text{DIP}}\circ\widehat{\phi}_{\text{DIP}},\\ \widehat{g}_{\text{DIP}},\widehat{\phi}_{\text{DIP}}& =\operatorname*{arg\,min}_{g\in\mathcal{G},\phi\in\Phi}\quad \widehat{\mathcal{R}}^{(1)}(g\circ\phi)+\lambda_{\text{DIP}}\cdot\mathfrak{D} \left(\widehat{\mathcal{P}}^{(1)}_{\phi(X)},\widehat{\mathcal{P}}^{(\mathfrak{ T})}_{\phi(X)}\right),\end{split} \tag{12}\]
where \(\lambda_{\text{DIP}}>0\) is a regularization parameter that balances between the source risk and the matching penalty across the empirical source and target marginal distributions for the feature representation.
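Analogously, a minimal sketch of the finite-sample DIP objective in Eq. (12), again with the 0-1 loss and a squared mean difference standing in for the marginal matching penalty (both simplifying assumptions made for illustration):

```python
import numpy as np

def dip_objective(phi, g, X_src, Y_src, X_tgt, lam):
    # Source 0-1 risk plus a marginal matching penalty between phi(X) on the
    # single labeled source and on the unlabeled target, in the spirit of Eq. (12).
    risk = (g(phi(X_src)) != Y_src).mean()
    penalty = np.sum((phi(X_src).mean(axis=0) - phi(X_tgt).mean(axis=0)) ** 2)
    return risk + lam * penalty

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y_src = rng.integers(0, 2, 500)
    X_src = np.c_[Y_src + rng.normal(0, 1, 500), rng.normal(0, 1, 500)]
    X_tgt = np.c_[rng.integers(0, 2, 500) + rng.normal(0, 1, 500), rng.normal(0, 1, 500)]
    phi = lambda X: X[:, :1]
    g = lambda Z: (Z[:, 0] > 0.5).astype(int)
    print(dip_objective(phi, g, X_src, Y_src, X_tgt, lam=1.0))
```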
While DIP makes use of the target covariates to learn domain-invariant representations, in general it does not have target risk guarantees. In particular, the matching penalty can force DIP to learn label-flipping features (see Definition 3) because unlike CIP, it aligns features in the marginal representation space. DIP leverages information about target covariates to learn invariant features, but this comes at the cost of a potential label-flipping issue (cf. Figure 1 and Theorem 3). Furthermore, DIP can fail when the marginal distribution of \(Y\) is perturbed under an anticausal data generation model. For example, Chen and Buhlmann (2020) demonstrates that DIP can perform worse than CIP in the presence of label shift.
## 3 Importance-weighted CIP with target risk guarantees
In the previous section, we introduced two baseline DA algorithms, namely CIP and DIP. However, it is crucial to acknowledge the limitations of these algorithms, as they rely on specific assumptions that may not hold in more complex DA scenarios. For instance, while CIP identifies CICs to build the classifier, its ability to generalize to the target domain is limited when the marginal label distributions shift across source and target. DIP faces similar limitations and is also subject to the additional uncertainty of learning label-flipping features.
In this section, we present our first contribution: the importance-weighted conditional invariant penalty (IW-CIP) algorithm. IW-CIP is an extension of CIP and is designed to address more general DA problems including but not limited to situations where neither the covariate shift nor the label shift assumptions are valid. The intuition behind IW-CIP is as follows. It is known that one can correct for label distribution shift if the conditional \(X\mid Y\) is invariant and only the label \(Y\) distribution changes across source and target (Lipton et al., 2018). However, the assumption on invariance of \(X\mid Y\) can be too rigid. Here we assume the existence of datasets from multiple source domains and a conditionally invariant feature mapping \(\phi_{\text{inv}}\) such that \(\phi_{\text{inv}}(X)\mid Y\) remains invariant across source and target distributions. By identifying a conditionally invariant \(\phi_{\text{inv}}(X)\) via CIP, we can apply the label shift correction algorithm to correct the label shift. Once the label shift is corrected, the joint distribution of \((\phi_{\text{inv}}(X),Y)\) becomes invariant under source and target distributions, allowing us to control the target risk of any algorithms built upon the source \((\phi_{\text{inv}}(X),Y)\).
The rest of the section is structured as follows. In Section 3.1, we offer a review on label shift correction and its application in our context. Then, we introduce our new algorithm, IW-CIP, in Section 3.2. In Section 3.3, we establish target risk guarantees for IW-CIP.
### Importance weights estimation
When the \(m\)-th source distribution and the target distribution share the same conditional \(X\mid Y\) but have different label distributions, the main idea of label shift correction in Lipton et al. (2018) is that the true importance weights vector \(w^{(m)}\in\mathbb{R}_{+}^{L}\), defined as
\[w^{(m)}_{[j]}\coloneqq\frac{\mathbb{P}\left\{Y^{(\mathfrak{T})}=j\right\}}{ \mathbb{P}\left\{Y^{(m)}=j\right\}}, \tag{13}\]
can be estimated by exploiting the invariance of the conditional \(X\mid Y\). In our setting, while the source and target distributions do not share the same conditional \(X\mid Y\), according to our assumption, we have a feature mapping \(\phi_{\text{inv}}\) whose corresponding feature representation is a CIC by Definition 1, i.e., \(\phi_{\text{inv}}(X)\mid Y\) is invariant under source and target distributions. Then by treating \(\phi_{\text{inv}}(X)\) as the new features, we can still correct for the label shift.
More precisely, for \(\mathcal{Y}=\{1,\ldots,L\}\), we have the following distribution matching equation between the \(m\)-th source domain and the target domain: for any \(g\in\mathcal{G}\) mapping to \(\mathcal{Y}\), and for any \(i,j\in\mathcal{Y}\),
\[\mathbb{P}\left\{g\circ\phi_{\text{inv}}(X^{(\mathfrak{T})})=i \right\} =\sum_{j=1}^{L}\mathbb{P}\left\{g\circ\phi_{\text{inv}}(X^{( \mathfrak{T})})=i\mid Y^{(\mathfrak{T})}=j\right\}\mathbb{P}\left\{Y^{( \mathfrak{T})}=j\right\}\] \[\overset{(*)}{=}\sum_{j=1}^{L}\mathbb{P}\left\{g\circ\phi_{\text {inv}}(X^{(m)})=i\mid Y^{(m)}=j\right\}\mathbb{P}\left\{Y^{(\mathfrak{T})}=j\right\}\] \[=\sum_{j=1}^{L}\mathbb{P}\left\{g\circ\phi_{\text{inv}}(X^{(m)})=i,Y^{(m)}=j\right\}w^{(m)}_{[j]}, \tag{14}\]
where step \((*)\) follows from the invariance of \(\phi_{\text{inv}}(X)\mid Y\). In the matrix-vector form, we can write
\[\mu_{g\circ\phi_{\text{inv}}}=C^{(m)}_{g\circ\phi_{\text{inv}}}w^{(m)}, \tag{15}\]
where \(\mu_{h}\) denotes the predicted probability of \(h\) under the target covariate distribution, and \(C^{(m)}_{h}\) is the confusion matrix on the \(\mathcal{P}^{(m)}\) given by
\[C^{(m)}_{h}[i,j]=\mathbb{P}\left\{h(X^{(m)})=i,Y^{(m)}=j\right\}. \tag{16}\]
It is then sufficient to solve the linear system (15) to obtain \(w^{(m)}\). In practice, with finite-sample source data, we use \(\widehat{h}_{\text{CIP}}=\widehat{g}_{\text{CIP}}\circ\widehat{\phi}_{ \text{CIP}}\) in place of \(g\circ\phi_{\text{inv}}\). To obtain the estimated importance weights \(\widehat{w}^{(m)}\), we replace \(\mu_{\widehat{h}_{\text{CIP}}}\) and \(C^{(m)}_{\widehat{h}_{\text{CIP}}}\) with their empirical estimates. In our multiple source environments scenario, we write \(w=(w^{(1)},w^{(2)},\ldots,w^{(M)})\in\mathbb{R}^{L\times M}\) and \(\widehat{w}=(\widehat{w}^{(1)},\widehat{w}^{(2)},\ldots,\widehat{w}^{(M)})\in \mathbb{R}^{L\times M}\) to denote the true and estimated importance weights for all source distributions, respectively.
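The linear system in Eq. (15) can be solved directly from empirical frequencies. The sketch below estimates \(\widehat{w}^{(m)}\) for a single source domain under label shift, with a simple threshold classifier standing in for \(\widehat{h}_{\text{CIP}}\); the toy data, the least-squares solver, and the non-negativity clipping are illustrative assumptions (the paper does not prescribe a particular solver).

```python
import numpy as np

def estimate_importance_weights(h, X_src, Y_src, X_tgt, n_classes):
    # Empirical version of Eq. (15): C[i, j] estimates P{h(X_src) = i, Y_src = j}
    # and mu[i] estimates P{h(X_tgt) = i}; the weights solve mu = C w.
    pred_src, pred_tgt = h(X_src), h(X_tgt)
    C = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        for j in range(n_classes):
            C[i, j] = np.mean((pred_src == i) & (Y_src == j))
    mu = np.array([(pred_tgt == i).mean() for i in range(n_classes)])
    w, *_ = np.linalg.lstsq(C, mu, rcond=None)
    return np.clip(w, 0.0, None)   # negative estimates are clipped to zero

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Source: balanced labels; target: 80% of samples from class 1 (label shift).
    Y_src = rng.integers(0, 2, 5000)
    Y_tgt = (rng.random(5000) < 0.8).astype(int)
    X_src = Y_src[:, None] + rng.normal(0, 0.7, (5000, 1))
    X_tgt = Y_tgt[:, None] + rng.normal(0, 0.7, (5000, 1))
    h = lambda X: (X[:, 0] > 0.5).astype(int)   # stands in for the CIP classifier
    print(estimate_importance_weights(h, X_src, Y_src, X_tgt, 2))
    # Expected to be roughly [0.2/0.5, 0.8/0.5] = [0.4, 1.6].
```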
### Our proposed IW-CIP algorithm
When the target label distribution remains unchanged, the population CIP in Eq. (9) is capable of generalizing to target data because of the invariance of the joint distribution \((\phi_{\text{CIP}}(X),Y)\). However, although CIP ensures invariance in the conditional distribution \(\phi_{\text{CIP}}(X)\mid Y\), the joint distribution \((\phi_{\text{CIP}}(X),Y)\) may not be invariant when label shift is present. Indeed, CIP may perform poorly on the target data if the target label distribution substantially deviates from the source label distribution (see experiments on synthetic data and rotated MNIST in Section 5). To address such distributional shift, we propose the IW-CIP algorithm, which combines importance weighting for label shift correction with CIP.
We define the \(m\)-th weighted source risk for a hypothesis \(h\in\mathcal{H}\) and a weight vector \(\text{w}^{(m)}\in\mathbb{R}^{L}\) as follows:
\[\mathcal{R}^{(m)}(h;\text{w}^{(m)})=\mathbb{E}\left[\text{w}^{(m)}_{[Y^{(m)}]} \cdot\mathbf{1}_{h(X^{(m)})\neq Y^{(m)}}\right].\]
In particular, if \(\text{w}^{(m)}=w^{(m)}\), i.e., \(\text{w}^{(m)}_{[j]}=\frac{\mathbb{P}\left\{Y^{(\mathfrak{T})}=j\right\}}{\mathbb{P}\left\{Y^{(m)}=j\right\}}\) for all \(j=1,\ldots,L\), then it is easy to see that \(\mathcal{R}^{(m)}(h;\text{w}^{(m)})=\mathcal{R}^{(\mathfrak{T})}(h)\) as long as the conditional distributions \(h(X)\mid Y=y\) are invariant across \(\mathcal{P}^{(m)}\) and \(\mathcal{P}^{(\mathfrak{T})}\). Hence, with an appropriate choice of the weight vector \(\text{w}^{(m)}\), the weighted source risk can serve as a proxy for the target risk. We are ready to introduce the importance-weighted CIP (IW-CIP) algorithm.
Population IW-CIP:The population IW-CIP is obtained in three steps.
1. Obtain a conditionally invariant feature mapping \(\phi_{\text{CIP}}\) and the corresponding CIP classifier \(h_{\text{CIP}}\) via the CIP algorithm in Eq. (9).
2. Use \(h_{\text{CIP}}\) in place of \(g\circ\phi_{\text{inv}}\) in Eq. (15) to obtain importance weights \(w^{(m)}\).
3. Compute a function \(g\in\mathcal{G}\) as well as a new conditionally invariant feature mapping \(\phi\in\Phi\) to minimize the importance-weighted source risks as follows. \[h_{\text{IW-CIP}} =g_{\text{IW-CIP}}\circ\phi_{\text{IW-CIP}},\] \[g_{\text{IW-CIP}},\phi_{\text{IW-CIP}} =\operatorname*{arg\,min}_{g\in\mathcal{G},\phi\in\Phi} \frac{1}{M}\sum_{m=1}^{M}\mathcal{R}^{(m)}(g\circ\phi;w^{(m)})\] (17) \[\text{subject to}\quad\mathfrak{D}\left(\mathcal{P}^{(m)}_{\phi(X )|Y},\mathcal{P}^{(m^{\prime})}_{\phi(X)|Y}\right)=0,\ \forall m\neq m^{\prime}\in\{1,\ldots,M\}.\]
IW-CIP enforces the same constraint on the data representation \(\phi\) as in Eq. (9). However, unlike CIP, the objective of IW-CIP is the importance-weighted source risk, which can serve as a proxy for the target risk under the label distribution shifts. Therefore, IW-CIP can generalize better on the target environment in the presence of label shifts.
Finite-sample IW-CIP:The finite-sample IW-CIP is obtained from the population IW-CIP after replacing all the population quantities by the corresponding empirical estimates
and turning constraints into a penalty form. We first solve finite-sample CIP from Eq. (10), then estimate importance weights and solve
\[\widehat{h}_{\text{IW-CIP}} =\widehat{g}_{\text{IW-CIP}}\circ\widehat{\phi}_{\text{IW-CIP}},\] \[\widehat{g}_{\text{IW-CIP}},\widehat{\phi}_{\text{IW-CIP}} =\operatorname*{arg\,min}_{g\in\mathcal{G},\phi\in\Phi} \frac{1}{M}\sum_{m=1}^{M}\widehat{\mathcal{R}}^{(m)}(g\circ\phi;\widehat{w }^{(m)}) \tag{18}\] \[\qquad\qquad\qquad+\frac{\lambda_{\text{IW-CIP}}}{LM^{2}}\cdot \sum_{y=1}^{L}\sum_{m\neq m^{\prime}}\mathfrak{D}\left(\widehat{\mathcal{P}}^{ (m)}_{\phi(X)|Y=y};\widehat{\mathcal{P}}^{(m^{\prime})}_{\phi(X)|Y=y}\right),\]
where \(\widehat{w}^{(m)}\) is an estimate of \(w^{(m)}\), obtained by solving \(\widehat{\mu}_{\widehat{h}_{\text{CIP}}}=\widehat{C}^{(m)}_{\widehat{h}_{ \text{CIP}}}\,\widehat{w}^{(m)}\). Here \(\widehat{\mu}_{\widehat{h}_{\text{CIP}}}\) and \(\widehat{C}^{(m)}_{\widehat{h}_{\text{CIP}}}\) are empirical estimates of \(\mathbb{P}\left\{\widehat{h}_{\text{CIP}}(X^{(\mathfrak{T})})=i\right\}\) and \(\mathbb{P}\left\{\widehat{h}_{\text{CIP}}(X^{(m)})=i,Y^{(m)}=j\right\}\), respectively. For any \(\phi\in\Phi\), we define a shorthand for the empirical conditional invariant penalty used in the finite-sample IW-CIP by
\[\widehat{\Lambda}_{\phi}\coloneqq\frac{\lambda_{\text{IW-CIP}}}{LM^{2}}\sum_ {y=1}^{L}\sum_{m\neq m^{\prime}}\mathfrak{D}\left(\widehat{\mathcal{P}}^{(m)}_ {\phi(X)|Y=y},\widehat{\mathcal{P}}^{(m^{\prime})}_{\phi(X)|Y=y}\right). \tag{19}\]
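Putting the pieces together, the sketch below evaluates an IW-CIP-style objective in the spirit of Eq. (18), reweighting the per-example losses by estimated importance weights and adding the same per-class mean-matching penalty used in the CIP sketch; the 0-1 loss, the mean-based penalty, the toy data, and the hard-coded weights are all illustrative assumptions.

```python
import numpy as np

def iw_cip_objective(phi, g, sources, w_hat, lam, n_classes):
    # Average importance-weighted source 0-1 risk plus a conditional
    # invariance penalty, in the spirit of Eq. (18). w_hat has shape (L, M),
    # matching the paper's convention for the estimated weights.
    M = len(sources)
    weighted_risk = 0.0
    for m, (X, Y) in enumerate(sources):
        errors = (g(phi(X)) != Y).astype(float)
        weighted_risk += np.mean(w_hat[Y, m] * errors) / M
    penalty = 0.0
    for y in range(n_classes):
        for m in range(M):
            for mp in range(M):
                if m != mp:
                    mu_m = phi(sources[m][0][sources[m][1] == y]).mean(axis=0)
                    mu_p = phi(sources[mp][0][sources[mp][1] == y]).mean(axis=0)
                    penalty += np.sum((mu_m - mu_p) ** 2)
    return weighted_risk + lam / (n_classes * M ** 2) * penalty

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    def make_domain(p1, n=500):
        y = (rng.random(n) < p1).astype(int)
        x = np.c_[y + rng.normal(0, 1, n), rng.normal(0, 1, n)]
        return x, y
    sources = [make_domain(0.5), make_domain(0.4)]
    # Weights consistent with a target where P(Y=1) = 0.4 (rows: classes, columns: domains).
    w_hat = np.array([[1.2, 1.0],
                      [0.8, 1.0]])
    phi = lambda X: X[:, :1]
    g = lambda Z: (Z[:, 0] > 0.5).astype(int)
    print(iw_cip_objective(phi, g, sources, w_hat, lam=1.0, n_classes=2))
```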
### Target risk upper bounds of IW-CIP
In this subsection, we state our main theorems on the target risk upper bounds for IW-CIP. In a nutshell, the target risk bound can be decomposed into multiple terms involving the source risk or the optimal target risk, error in importance weights estimation, and the deviation from conditional invariance of the finite-sample CIP or IW-CIP feature mapping. Consequently, when the importance weights are accurately estimated and the identified features are near conditional invariance, IW-CIP achieves high target accuracy.
To simplify the theoretical analysis that follows, our finite-sample results are stated by considering the case where the whole dataset is split into three parts of equal size. That is, the \(\ell\)-th (\(\ell=1,2,3\)) dataset \(\mathcal{D}_{\ell}=\{\mathcal{D}_{\ell}^{(1)},\mathcal{D}_{\ell}^{(2)},\ldots,\mathcal{D}_{\ell}^{(M)},\mathcal{D}_{\ell,X}^{(\mathfrak{T})}\}\) is denoted by
\[\mathcal{D}_{\ell}^{(m)}=\{(X_{\ell,k}^{(m)},Y_{\ell,k}^{(m)})\}_{k=1}^{n^{(m)}}\text{ for }m\in\{1,\ldots,M\}\quad\text{ and }\quad\mathcal{D}_{\ell,X}^{(\mathfrak{T})}=\{X_{\ell,k}^{(\mathfrak{T})}\}_{k=1}^{n^{(\mathfrak{T})}}. \tag{20}\]
The number of samples for each source and target dataset is given by \(n^{(m)}\) and \(n^{(\mathfrak{T})}\). When it is clear which dataset we are referring to, we simply omit the dataset subscript \(\ell\) in covariates and labels by writing \(X_{k}^{(m)},Y_{k}^{(m)},X_{k}^{(\mathfrak{T})},Y_{k}^{(\mathfrak{T})}\). In our finite-sample theory, the first \(\mathcal{D}_{1}\) is used for solving the finite-sample CIP, the second \(\mathcal{D}_{2}\) is used for estimating importance weights and correcting the label shift, and the last \(\mathcal{D}_{3}\) is used for solving the finite-sample IW-CIP.
Given importance weights \(\mathrm{w}^{(m)}\in\mathbb{R}^{L}\) (\(1\leq m\leq M\)) for each source distribution, we write \(\mathrm{w}=(\mathrm{w}^{(1)},\ldots,\mathrm{w}^{(M)})\in\mathbb{R}^{L\times M}\) and introduce the following shorthand for the average weighted source risk across \(M\) environments,
\[\overline{\mathcal{R}}(h;\mathrm{w})\coloneqq\frac{1}{M}\sum_{m=1}^{M} \mathcal{R}^{(m)}(h;\mathrm{w}^{(m)}).\]
Before we introduce our main theorems, we define a key quantity called _deviation from conditional invariance_ of a feature mapping, which measures how conditionally invariant it is across source and target distributions.
**Definition 5**: **(Deviation from Conditional Invariance)** _Recall the \(\mathcal{G}\)-divergence given in Eq. (8). For any feature mapping \(\phi:\mathbb{R}^{p}\to\mathbb{R}^{q}\), we define its deviation from conditional invariance as_
\[\Psi_{\mathcal{G},\phi}\coloneqq\max_{\begin{subarray}{c}m=1,\ldots,M,\\ y=1,\ldots,L\end{subarray}}D_{\mathcal{G}}\left(\mathcal{P}^{(\mathfrak{T})}_{ \phi(X)|Y=y},\mathcal{P}^{(m)}_{\phi(X)|Y=y}\right). \tag{21}\]
The deviation from conditional invariance is defined via the maximal \(\mathcal{G}\)-divergence between any pair of conditionals \(\phi(X)\mid Y=y\) in the source and target environments. When the feature representation is exactly conditionally invariant, this quantity attains its minimum value of zero. Our first theorem bounds the difference between target population risk and the average source population risk of a classifier, where \(w^{(m)}\), \(m=1,\ldots,M\), are the true importance weights introduced in Eq. (13).
**Theorem 1.A**: _For any classifier \(h=g\circ\phi\) where \(g\in\mathcal{G},\phi\in\Phi\) and any estimated importance weights \(\widehat{w}=(\widehat{w}^{(1)},\ldots,\widehat{w}^{(M)})\in\mathbb{R}^{L \times M}\), the following target risk bound holds:_
\[\mathcal{R}^{(\mathfrak{T})}(h)\leq\overline{\mathcal{R}}(h;\widehat{w})+ \max_{m=1,\ldots,M}\left\|w^{(m)}-\widehat{w}^{(m)}\right\|_{\infty}+\Psi_{ \mathcal{G},\phi}.\]
The proof of this theorem is given in Appendix B.1. We observe that IW-CIP is designed to minimize the upper bounds in Theorem 1.A. Specifically, it is expected that for the case of IW-CIP, the estimated importance weights are close to the true importance weights and \(\phi\) is close to the conditionally invariant feature mapping. Then the average weighted source risk can closely approximate the target risk, and Theorem 1.A allows us to establish an upper bound for the target risk of IW-CIP via the average weighted source risk.
To provide more specific risk guarantees for the finite-sample IW-CIP, we establish the following bound on the target risk of the finite-sample IW-CIP via the target risk of the optimal conditionally invariant classifier \(h^{\star}\), as defined in Definition 2. Here, \(\widehat{\Lambda}_{\phi}\) denotes the empirical IW-CIP penalty given in Eq. (19) and \(w^{(m)}\)'s are the true importance weights given in Eq. (13).
**Theorem 1.B**: _Let \(\widehat{w}=(\widehat{w}^{(1)},\ldots,\widehat{w}^{(M)})\in\mathbb{R}^{L \times M}\) be the estimated importance weights. Then, for any \(\delta\in(0,1)\), with probability at least \(1-\delta\), the following target risk bound holds for the finite-sample IW-CIP,2_
Footnote 2: The probability is with respect to the randomness of source samples \((X^{(m)}_{k},Y^{(m)}_{k})\stackrel{{\text{i.i.d.}}}{{\sim}} \mathcal{P}^{(m)}\) in \(\mathcal{D}_{3}\)\((1\leq m\leq M,1\leq k\leq n^{(m)})\); see Eq. (20).
\[\mathcal{R}^{(\mathfrak{T})}(\widehat{h}_{\text{\scriptsize IW-CIP}})\leq \mathcal{R}^{(\mathfrak{T})}(h^{\star})+\widehat{\Lambda}_{\phi^{\star}}+2 \max_{m=1,\ldots,M}\left\|w^{(m)}-\widehat{w}^{(m)}\right\|_{\infty}+\Psi_{ \mathcal{G},\widehat{\phi}_{\text{\scriptsize IW-CIP}}}+\gamma,\]
_where3_
Footnote 3: The sample complexity term \(\gamma\) depends on \(\delta,\mathcal{G},\Phi,w^{(m)},n^{(m)}\), and \(\mathcal{P}^{(m)}\) for \(m=1,\ldots,M\).
\[\gamma=\max_{m=1,\ldots,M}\left[4\mathfrak{R}_{n^{(m)},\mathcal{P}^{(m)}} \left(\mathcal{H}^{(m)}\right)+2\left\|w^{(m)}\right\|_{\infty}\sqrt{\frac{2 \log(M/\delta)}{n^{(m)}}}\right],\]
_and \(\mathcal{H}^{(m)}\coloneqq\Big{\{}f(x,y)=w^{(m)}_{[y]}\mathbf{1}_{g(\phi(x))\neq y}: g\in\mathcal{G},\phi\in\Phi\Big{\}}\)._
The proof proceeds by decomposing the target risk of IW-CIP into multiple components and bounding each term individually, which is given in Appendix B.2. According to Theorem 1.B, the target risk of \(\widehat{h}_{\text{IW-CIP}}\) is bounded by that of the optimal conditionally invariant target classifier \(h^{\star}\) with additional error terms: the empirical IW-CIP penalty of \(\phi^{\star}\), the importance weight estimation error, the deviation from conditional invariance of \(\widehat{\phi}_{\text{IW-CIP}}\), and the sample complexity term \(\gamma\). The empirical IW-CIP penalty term \(\widehat{\Lambda}_{\phi^{\star}}\) measures the conditional invariance of \(\phi^{\star}(X)\) across empirical source environments. Because \(\phi^{\star}\) is a conditionally invariant feature mapping, this term is expected to decrease in large sample scenarios. Similarly, the sample complexity term \(\gamma\), which is based on Rademacher complexity, also diminishes as the sample size \(n^{(m)}\) increases in the source environments. In this case, the theorem shows that accurate estimation of importance weights and minimal deviation from conditional invariance of feature representations of IW-CIP can guarantee IW-CIP to achieve a target risk similar to that of \(h^{\star}\).
Theorem 1.B provides the target risk bound of the finite-sample IW-CIP in the most generic settings. In the following subsections, we demonstrate how the remaining terms can be controlled with additional assumptions. Specifically, in Section 3.3.1, we establish refined bounds for the empirical IW-CIP penalty by constraining the choice of IW-CIP penalty (see Proposition 1). Then we bound the weight estimation error using the deviation from conditional invariance of \(\widehat{\phi}_{\text{CIP}}\) (see Proposition 2). In Section 3.3.2, under the general anticausal model in Definition 4, we bound the deviation from conditional invariance of feature mappings via a form of conditional invariant penalty (see Proposition 3), and bound the target risk of \(h^{\star}\) relative to the oracle classifier \(h_{\text{oracle}}\) (see Proposition 4).
#### 3.3.1 Upper bounds on the empirical IW-CIP penalty and weight estimation error
To refine the target risk upper bounds established in Theorem 1.B, we present two propositions. The first proposition shows that the empirical IW-CIP penalty diminishes to zero as the source sample size grows to infinity. The second proposition shows that the weight estimation error can be controlled through the deviation from conditional invariance \(\Psi\)--this result is intuitive because Eq. (15) for weight estimation relies on the conditional invariance of CICs. Recalling that \(\phi^{\star}\) is the feature mapping for the optimal conditionally invariant target classifier given in Definition 2, we present the following proposition regarding the empirical IW-CIP penalty term \(\widehat{\Lambda}_{\phi^{\star}}\) in Eq. (19).
**Proposition 1**: _Assume that the \(\mathcal{G}\)-divergence in Eq. (8) is used as the distributional distance in the empirical IW-CIP penalty. Then for any \(\delta\in(0,1)\), with probability at least
\(1-\delta\), the following bound holds, 4
Footnote 4: The probability is with respect to the randomness of source samples \((X_{k}^{(m)},Y_{k}^{(m)})\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{P} ^{(m)}\) in \(\mathcal{D}_{3}\) (\(1\leq m\leq M,1\leq k\leq n^{(m)}\)); see Eq. (20).
\[\widehat{\Lambda}_{\phi^{\star}}\leq 2\lambda_{\text{IW-CIP}}\left(2\max_{ \begin{subarray}{c}m\in\{1,\ldots,M\}\\ y,y^{\prime}\in\{1,\ldots,L\}\end{subarray}}\mathfrak{R}_{n^{(m)},\mathcal{P} ^{(m)}_{\phi^{\star}(X)|Y=y}}\left(\mathcal{G}_{y^{\prime}}\right)+\max_{m=1,2,\ldots,M}\sqrt{\frac{\log\left(2LM/\delta\right)}{2n^{(m)}}}\right),\]
_where \(\mathcal{G}_{y}\coloneqq\left\{f(z)=\mathbf{1}_{g(z)=y},g\in\mathcal{G}\right\}\)._
This proposition is proved in Appendix B.3. The bound on \(\widehat{\Lambda}_{\phi^{\star}}\) is given by the sum of a Rademacher complexity term and a finite-sample error term, both of which diminish to zero as the sample size grows to infinity--this result is expected given that \(\phi^{\star}\) is conditionally invariant across the population source distributions. While calculating the exact \(\mathcal{G}\)-divergence may be challenging in practice, this result offers a vanishing bound without additional assumptions on the underlying data generation model or on the class of feature mappings \(\Phi\). A more practical and simpler choice of the distributional distance is the squared mean distance, which penalizes the squared difference of conditional means of \(\phi(X)\mid Y\) between source distributions. In this case, a refined bound of \(\widehat{\Lambda}_{\phi^{\star}}\) can be obtained with additional assumptions about the data generation model and the class of feature mappings \(\Phi\). For more details on the calculation of this bound, see Appendix B.4.
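For concreteness, the following minimal sketch shows how the squared mean distance penalty can be computed from labeled source samples; the function name, array shapes, and the assumption that labels are encoded as \(0,\ldots,L-1\) are illustrative choices of ours rather than part of the method description.

```python
import numpy as np

def squared_mean_distance_penalty(feats, labels, num_classes):
    """Empirical squared-mean-distance penalty: for every class y and every
    environment m >= 2, accumulate || mean_1(phi(X)|Y=y) - mean_m(phi(X)|Y=y) ||^2.

    feats  : list of arrays, feats[m] has shape (n_m, q) holding phi(X) in environment m
    labels : list of arrays, labels[m] has shape (n_m,), labels in {0, ..., L-1}
    """
    penalty = 0.0
    for y in range(num_classes):
        ref = feats[0][labels[0] == y].mean(axis=0)       # conditional mean in environment 1
        for m in range(1, len(feats)):
            mu_m = feats[m][labels[m] == y].mean(axis=0)  # conditional mean in environment m
            penalty += np.sum((ref - mu_m) ** 2)
    return penalty
```

The same routine can serve as the distributional distance term inside a CIP or IW-CIP training loop when the squared mean distance is the chosen penalty.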
Next, we show that when the true importance weights \(w^{(m)}\) in Eq. (13) are estimated using the finite-sample CIP, the weight estimation error can be upper bounded via the deviation from conditional invariance of the feature mapping \(\widehat{\phi}_{\text{CIP}}\) learned through CIP. Let \(\widehat{w}^{(m)}\) denote the estimated importance weight obtained by solving the linear system in Eq. (15), where the population quantities are replaced by their empirical estimates and the finite-sample CIP is used. Then the following proposition provides the upper bound on the estimation error \(\left\|w^{(m)}-\widehat{w}^{(m)}\right\|_{2}\).
**Proposition 2**: _Assume that the confusion matrix in the \(m\)-th source distribution \(C^{(m)}_{\widehat{h}_{\text{CIP}}}\), given in Eq. (16), is invertible with condition number \(\kappa_{m}\). Then, for any \(\delta\in(0,1)\), with probability at least \(1-\delta\), the estimation error of the importance weights is bounded by5_
Footnote 5: The probability is with respect to the randomness of \((X_{k}^{(m)},Y_{k}^{(m)})\stackrel{{\text{i.i.d.}}}{{\sim}} \mathcal{P}^{(m)}\) and \((X_{k}^{(\mathbb{T})},Y_{k}^{(\mathbb{T})})\stackrel{{\text{i.i.d. }}}{{\sim}}\mathcal{P}^{(\mathbb{T})}\) in \(\mathcal{D}_{2}\); see Eq. (20).
\[\left\|w^{(m)}-\widehat{w}^{(m)}\right\|_{2}\leq 2\kappa_{m}\left(\sqrt{L} \Psi_{\mathcal{G},\widehat{\phi}_{\text{CIP}}}+\sqrt{\frac{L\log(4L/\delta)} {2n^{(\mathbb{T})}}}+\sqrt{\frac{3\log(4L/\delta)}{n^{(m)}}}\left\|w^{(m)} \right\|_{2}\right),\]
_as long as \(n^{(m)}\geq 12\kappa_{m}^{2}\log(4L/\delta)\)._
The proof of this proposition is given in Appendix B.5. Proposition 2 reveals that the finite-sample estimation error of importance weights is determined by the condition number of the confusion matrix of \(\widehat{h}_{\text{CIP}}\), the deviation from conditional invariance of \(\widehat{\phi}_{\text{CIP}}\), and sample error terms decaying roughly on the order of \(1/\sqrt{n^{(m)}}\) or \(1/\sqrt{n^{(\mathbb{T})}}\). The condition
number \(\kappa_{m}\) reflects the performance of the CIP classifier--if \(\widehat{h}_{\text{CIP}}\) achieves perfect classification accuracy on the \(m\)-th source distribution, the condition number \(\kappa_{m}\) takes the value of \(1/\min_{y=1,2,\ldots,L}\mathbb{P}\left\{Y^{(m)}=y\right\}\); however, if \(\widehat{h}_{\text{CIP}}\) performs poorly on the \(m\)-th source distribution, the condition number \(\kappa_{m}\) can be large, which can lead to inaccurate estimation of the importance weights. The bound also introduces an additional deviation-from-invariance term \(\Psi_{\mathcal{G},\widehat{\phi}_{\text{CIP}}}\), similar to the term \(\Psi_{\mathcal{G},\widehat{\phi}_{\text{IW-CIP}}}\) that appears in Theorem 1.B.
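To illustrate the weight estimation step, the sketch below assumes that Eq. (15) takes the standard confusion-matrix form \(C^{(m)}_{\widehat{h}_{\text{CIP}}}\widehat{w}^{(m)}=\widehat{q}\), where \(\widehat{q}\) collects the empirical frequencies of \(\widehat{h}_{\text{CIP}}\)'s predictions on the target covariates; the function name and the \(0\)-indexed labels are assumptions made for illustration.

```python
import numpy as np

def estimate_importance_weights(src_preds, src_labels, tar_preds, num_classes):
    """Estimate w^(m) by solving the empirical linear system C_hat w = q_hat
    (a sketch of the confusion-matrix approach assumed for Eq. (15)).

    src_preds, src_labels : predictions / labels of h_CIP on the m-th source environment
    tar_preds             : predictions of h_CIP on the unlabeled target covariates
    """
    # joint confusion matrix: C[i, j] = empirical P{ h(X^(m)) = i, Y^(m) = j }
    C = np.zeros((num_classes, num_classes))
    for i, j in zip(src_preds, src_labels):
        C[i, j] += 1.0 / len(src_labels)
    # empirical distribution of target predictions: q[i] = empirical P{ h(X^(T)) = i }
    q = np.bincount(tar_preds, minlength=num_classes) / len(tar_preds)
    # Proposition 2 assumes C is invertible; clip to keep the estimated weights nonnegative
    w_hat = np.linalg.solve(C, q)
    return np.clip(w_hat, 0.0, None)
```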
#### 3.3.2 Deviation from conditional invariance and target risk bound on the optimal conditionally invariant classifier
With Propositions 1 and 2 now established, it remains to control the deviation from conditional invariance of both \(\widehat{\phi}_{\text{CIP}}\) and \(\widehat{\phi}_{\text{IW-CIP}}\), as well as the target risk of the optimal conditionally invariant target classifier \(h^{\star}\) in Theorem 1.B. In this subsection, we quantify both of these terms under additional assumptions about the data generation model. By quantifying these terms, we gain a comprehensive understanding of the upper bounds presented in Theorem 1.B.
To facilitate our analysis, we focus on the general anticausal model as defined in Definition 4 and introduce the following assumption regarding the type of perturbations on the mechanism functions.
**Assumption 1**: _Suppose that source and target data are generated under the general anticausal model in Definition 4. Assume that \(f^{(m)},f^{(\mathfrak{T})}\) are perturbed linearly as follows. For each \(y\in\{1,\ldots,L\}\), there exist an orthogonal matrix \(P_{y}\in\mathbb{R}^{p\times d_{y}}\), \(0\leq d_{y}\leq p\), and vectors \(v_{y}^{(m)},v_{y}^{(\mathfrak{T})}\in\mathbb{R}^{d_{y}}\), \(m=1,\ldots,M\) with \(v_{y}^{(1)}=0\), such that_
\[\begin{split} f^{(m)}(y)&=f^{(1)}(y)+P_{y}v_{y}^{(m )},\text{ for all }m\in\{1,2,\ldots,M\}\,,\text{ and }\\ f^{(\mathfrak{T})}(y)&=f^{(1)}(y)+P_{y}v_{y}^{( \mathfrak{T})}.\end{split} \tag{22}\]
_In addition, the noise terms follow a normal distribution \(\epsilon^{(m)},\epsilon^{(\mathfrak{T})}\overset{\text{i.i.d.}}{\sim}\mathcal{ P}_{\epsilon}=\mathcal{N}(0,\Sigma)\)._
Note that the perturbation matrix \(P_{y}\) is dependent only on the label \(y\), and remains fixed across source and target data generation processes, whereas the vectors \(v_{y}^{(m)},v_{y}^{(\mathfrak{T})}\in\mathbb{R}^{d_{y}}\) are allowed to vary with both the source index \(m\) and the labels \(y\). The assumption states that, conditional on the labels, the perturbation in each environment lies within the low-dimensional space spanned by the columns of \(P_{y}\). Consequently, for a classifier to generalize to the target data, it is important that the classifier only utilizes the covariates that remain invariant, i.e., those which are orthogonal to the columns of \(P_{y}\).
Before we state our result, we introduce several population terms related to a feature mapping \(\phi\). Let \(\Delta_{\phi}^{(m)}(y)\) denote the difference of expected mean of \(\phi(X)\) conditional on \(Y=y\) between the first and the \(m\)-th source distributions, and let \(\Sigma_{\phi}^{(m)}(y)\) denote the conditional covariance matrix of \(\phi\) under the \(m\)-th source distribution, i.e.,
\[\begin{split}&\Delta_{\phi}^{(m)}(y)\coloneqq\mathbb{E}\left[ \phi(X^{(m)})\ \Big{|}\ Y^{(m)}=y\right]-\mathbb{E}\left[\phi(X^{(1)})\ \Big{|}\ Y^{(1)}=y\right],\text{ and }\\ &\Sigma_{\phi}^{(m)}(y)\coloneqq\text{Var}\left[\phi(X^{(m)})\ \Big{|}\ Y^{(m)}=y\right].\end{split} \tag{23}\]
Assuming that \(\Sigma_{\phi}^{(m)}(y)\) is invertible, we further define
\[\Pi_{\phi}(y)\coloneqq\frac{1}{M-1}\sum_{m=2}^{M}\Delta_{\phi}^{(m)}(y)^{\top} \Sigma_{\phi}^{(m)}(y)^{-1}\Delta_{\phi}^{(m)}(y). \tag{24}\]
\(\Pi_{\phi}(y)\) measures the discrepancy of the conditional means across source distributions, weighted by the inverse conditional covariance matrices (a Mahalanobis-type distance). With these notions in hand, we can now establish the following deterministic bound for \(\Psi_{\mathcal{G},\phi}\).
**Proposition 3**: _Suppose that source and target data are generated under Assumption 1. Let \(\Phi=\{\phi(x)=Ax+b,A\in\mathbb{R}^{q\times p},b\in\mathbb{R}^{q}\}\) be the linear class of feature mappings from \(\mathbb{R}^{p}\) to \(\mathbb{R}^{q}\) (\(q\leq p\)), and let \(\mathcal{G}\) be any hypothesis class mapping \(\mathbb{R}^{q}\) to \(\mathcal{Y}\). If for each \(y\in\{1,2,\ldots,L\}\), there exists \(\zeta_{y}>0\) such that for all \(m\in\{1,2,\cdots,M\}\),_
\[\left(v_{y}^{(\mathfrak{I})}-v_{y}^{(m)}\right)\left(v_{y}^{( \mathfrak{I})}-v_{y}^{(m)}\right)^{\top}\preceq\frac{\zeta_{y}}{M-1}\sum_{m=2 }^{M}v_{y}^{(m)}(v_{y}^{(m)})^{\top}, \tag{25}\]
_then for any \(\phi\in\Phi\) such that \(A\Sigma A^{\top}\) is non-singular, its deviation from conditional invariance (21) satisfies_
\[\Psi_{\mathcal{G},\phi}^{2}\leq 2\max_{y=1,2,\ldots,L}\bigg{\{}\zeta_{y}\Pi_{ \phi}(y)\bigg{\}}.\]
The proof of this proposition is given in Appendix B.6. The proof proceeds by connecting the \(\mathcal{G}\)-divergence with the total variation distance and applying Pinsker's inequality and data processing inequality. Proposition 3 shows that under a general anticausal model with a specific form of linear perturbations, the deviation from conditional invariance \(\Psi_{\mathcal{G},\phi}\) is governed by \(\zeta_{y}\) and \(\Pi_{\phi}(y)\). The term \(\zeta_{y}\) captures perturbations in the underlying target data generation model, which are beyond our control. Therefore, to obtain good conditionally invariant representations, it is important to make the term \(\Pi_{\phi}(y)\) small. Because \(\Pi_{\phi}(y)\) measures the discrepancy in conditional means across source distributions, under Assumption 1, the squared mean distance can be utilized as a penalty in both CIP and IW-CIP.
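As an illustration, an empirical counterpart of \(\Pi_{\phi}(y)\) in Eq. (24) can be computed from source samples as follows; the ridge term added for numerical invertibility and the function name are our own choices for this sketch.

```python
import numpy as np

def empirical_pi(feats, labels, y, ridge=1e-6):
    """Empirical counterpart of Pi_phi(y) in Eq. (24): average Mahalanobis-type
    discrepancy of conditional means between environment 1 and environments 2..M.
    A small ridge term (our choice) keeps the conditional covariance invertible.

    feats[m]  : array of shape (n_m, q) holding phi(X) in environment m
    labels[m] : array of shape (n_m,)
    """
    mu_ref = feats[0][labels[0] == y].mean(axis=0)
    total, M = 0.0, len(feats)
    for m in range(1, M):
        Zm = feats[m][labels[m] == y]
        delta = Zm.mean(axis=0) - mu_ref                 # Delta_phi^(m)(y), up to sign
        Sigma = np.cov(Zm, rowvar=False) + ridge * np.eye(Zm.shape[1])
        total += delta @ np.linalg.solve(Sigma, delta)   # Delta^T Sigma^{-1} Delta
    return total / (M - 1)
```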
Condition (25) is required for Proposition 3 to hold. In practice, verifying this condition may be challenging. However, we can show that it is satisfied with high probability when the perturbations follow a Gaussian distribution and a sufficient number of source distributions are available. Specifically, in Lemma 1, we establish that approximately \(M=\mathcal{O}\left(\max_{y}\{d_{y}\}\right)\) source domains are required to ensure the validity of condition (25) with high probability (see Appendix B.7 for the precise statement of Lemma 1 and its proof). This indicates that the number of source domains needs to grow at least linearly with the dimension of the perturbations.
Applying Propositions 1, 2, and 3 to Theorem 1.B, we conclude that the target risk of IW-CIP is controlled relative to that of the optimal conditionally invariant classifier \(h^{\star}\). The last goal of this subsection is to connect \(h^{\star}\) with the oracle target classifier \(h_{\text{oracle}}\) defined in Eq. (3). Again, we consider the general anticausal model under Assumption 1, but in a simpler binary classification setting given as follows.
**Assumption 2**: _Suppose that source and target data are generated under Assumption 1 with binary labels \(\mathcal{Y}=\{1,2\}\) and \(p_{1}^{(\mathbb{T})}=p_{2}^{(\mathbb{T})}=1/2\). Assume that \(\Sigma=\sigma^{2}\mathbb{I}_{p},P_{1}=P_{2}=\left(\mathbb{I}_{d},\mathbf{0}_{d\times(p-d)}\right)^{\top}\), and \(f^{(1)}(1)=-f^{(1)}(2)=\xi\cdot\sigma\mathds{1}_{p}\) for some \(\xi>0\), where \(d_{1}=d_{2}=d\). Assume further that the source perturbations \(\{v_{t}^{(m)}\}_{1\leq t\leq 2,1\leq m\leq M}\) span the whole space of \(\mathbb{R}^{d}\) and the target perturbations follow \(v_{1}^{(\mathbb{T})},v_{2}^{(\mathbb{T})}\stackrel{{\mathrm{i.i.d }}}{{\sim}}\mathcal{N}(0,\tau^{2}\cdot\sigma^{2}\mathbb{I}_{d})\) for some \(\tau>0\)._
The assumption illustrates a specific binary classification problem in domain adaptation, where only the first \(d\) coordinates of the covariates are perturbed, with the remaining \(p-d\) coordinates remaining invariant conditioned on the label. It also assumes that we have observed a sufficient number of perturbations in source domains, which span the entire space of the first \(d\) dimensions, while perturbations in the target domains are allowed to vary according to a normal distribution. Under this data generation model, we prove that the risk of \(h^{\star}\) is close to that of \(h_{\mathrm{oracle}}\) when the dimension of CICs is substantial.
**Proposition 4**: _Consider the domain adaptation problem under Assumption 2. Let the hypothesis class of classifiers \(\mathcal{H}\) be \(\mathcal{G}\circ\Phi\), where \(\mathcal{G}=\{g:\mathbb{R}\rightarrow\{1,2\},g(x)=1\cdot\mathbf{1}_{x\leq 0}+2 \cdot\mathbf{1}_{x>0}\}\) consists of one fixed function, and \(\Phi=\{\phi:\mathbb{R}^{p}\rightarrow\mathbb{R},\phi(x)=\beta^{\top}x+\beta_{ 0},\|\beta\|_{2}=1,\beta\in\mathbb{R}^{p},\beta_{0}\in\mathbb{R}\}\) consists of linear feature mappings. There exists a constant \(c_{\xi,\tau}>0\) such that for any \(\delta\in(0,1)\), the risk difference between \(h_{\mathrm{oracle}}\) and \(h^{\star}\) is bounded by_
\[\mathcal{R}^{(\mathbb{T})}(h^{\star})-\mathcal{R}^{(\mathbb{T})}(h_{\mathrm{ oracle}})\leq c_{\xi,\tau}\left(\sqrt{d}+\sqrt{\log{(1/\delta)}}\right)\exp\left(- \frac{\xi^{2}(p-d)}{8}\right), \tag{26}\]
_with probability at least \(1-\delta\).6_
Footnote 6: The probability is with respect to the randomness of target perturbations \(v_{1}^{(\mathbb{T})},v_{2}^{(\mathbb{T})}\stackrel{{\mathrm{i.i.d }}}{{\sim}}\mathcal{N}(0,\tau^{2}\sigma^{2}\mathbb{I}_{d})\).
The proof of this proposition is given in Appendix B.8. As the dimension of covariates \(p\) goes to infinity, we can see that the risk difference between \(h^{\star}\) and \(h_{\mathrm{oracle}}\) converges to zero at an exponential rate, provided that the dimension of perturbations \(d\) is smaller than \(\alpha p\) for some constant \(\alpha<1\). This result shares similarities with those obtained in a regression setting in (Chen and Buhlmann, 2020, Corollary 6), where the authors demonstrate a polynomial decay rate for the risk difference.
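For instance, substituting \(d=p/2\) (an arbitrary illustrative choice) into Eq. (26) gives

\[\mathcal{R}^{(\mathbb{T})}(h^{\star})-\mathcal{R}^{(\mathbb{T})}(h_{\mathrm{oracle}})\leq c_{\xi,\tau}\left(\sqrt{p/2}+\sqrt{\log(1/\delta)}\right)\exp\left(-\frac{\xi^{2}p}{16}\right)\xrightarrow[p\to\infty]{}0,\]

so the gap vanishes exponentially fast in \(p\) whenever the perturbation dimension stays a fixed fraction of \(p\).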
**Remark 1**: _We have established Propositions 1, 2, 3, and 4 to control the terms appearing in Theorems 1.A and 1.B. These propositions respectively bound the empirical IW-CIP penalty term, the estimation error of the importance weights, the deviation from conditional invariance, and the risk of the optimal conditionally invariant classifier. As a result, Theorem 1.B guarantees that the finite-sample IW-CIP has a target risk close to that of the oracle classifier \(h_{\mathrm{oracle}}\)._
## 4 Enhancing DA with CICs: risk detection and improved DIP
In this section, we explore two additional roles of CICs in DA. First, we investigate how CICs can be used to detect large target risks for other DA algorithms. Second, we examine how CICs enhance the reliability of DIP by addressing its label-flipping issue. The empirical
study for the second role is presented in Section 5.1.2 and Section 5.2.2, while the third role is demonstrated throughout Section 5. For the purpose of this section, we assume the absence of label shift, meaning that the label distributions are invariant across both the source and target domains. If label shift is present, the procedure outlined in Section 3 can be employed to correct it by applying CIP and adjusting for the importance weights.
Assessing the success or failure of a DA algorithm is challenging in practice due to the unavailability of target labels. While much research has been devoted to establishing target risk guarantees of DIP, it is still considered a "risky" algorithm. DIP seeks invariant feature representations across source and target domains, enforcing only the marginal distribution of these feature representations to be invariant across these domains. However, having the marginal distribution be invariant is not sufficient to ensure conditional invariance. Consequently, DIP may only learn representations that maintain marginal invariance across source and target domains but fail to be conditionally invariant given the labels. Because DIP merely minimizes the source risk, those representations may entirely flip the predicted labels when applied to the target data (Wu et al., 2019; Zhao et al., 2019; Wu et al., 2020); see Figure 1 (a)(b) for the illustrative examples. Furthermore, without target labels, it is difficult to detect this potential label flipping of DIP.
The label-flipping issue in DIP raises concerns about its reliability when applied blindly without additional validation. In the absence of target labels, previous works often rely on assumptions about the data generation process in order for DIP-type of algorithms to achieve low target risk, e.g. linear structural equation models (Chen and Buhlmann, 2020). In this paper, given the availability of multiple source domains, we propose the use of CICs to enhance the reliability of DIP algorithms.
Figure 1: A binary classification example illustrating the difference between DIP and JointDIP. (a) DIP correctly matches the source and target covariates by projecting onto the feature \(X_{[2]}\), which can generalize to the target distribution. (b) DIP matches the source and target covariates by projecting onto the label-flipping feature \(X_{[1]}\), which cannot generalize to the target distribution. (c) JointDIP finds the correct feature \(X_{[2]}\) by jointly matching the covariates with a conditionally invariant component (CIC) \(\phi_{\text{inv}}(X)\). (d) JointDIP discards the label-flipping feature \(X_{[1]}\) because the joint distribution of \((X_{[1]},\phi_{\text{inv}}(X))\) cannot be matched across source and target distributions.
Suppose that we have obtained the conditionally invariant feature mapping \(\phi_{\rm inv}\), and the classifier \(h_{\rm inv}\) built upon CICs denoted by
\[h_{\rm inv}=g_{\rm inv}\circ\phi_{\rm inv}. \tag{27}\]
As seen in the previous sections, both \(\phi_{\rm inv}\) and \(h_{\rm inv}\) can be approximately obtained by solving CIP--in this case, while the finite-sample feature mapping \(\widehat{\phi}_{\rm CIP}\) may not be perfectly conditionally invariant, we expect it to exhibit a small deviation from conditional invariance under appropriate assumptions, as implied by Proposition 3.7 Therefore, we assume the exact conditional invariance of \(\phi_{\rm inv}\) for simplicity throughout this section. By leveraging \(\phi_{\rm inv}\) and \(h_{\rm inv}\), we demonstrate that CICs can guide other DA algorithms in two significant ways:
Footnote 7: Because labeled data is available for multiple source domains, one can verify the conditional invariance of finite-sample CIP by comparing the conditional distributions across these source distributions. If one is willing to assume that CICs exist across source and target, then conditional invariance also holds for the target domain.
1. Large risk detection: when the classifier \(h_{\rm inv}\) built on CICs has low target risk, it can be used to lower-bound the risk of other DA algorithms, even in the absence of target labels. This enables the identification of label-flipping issues in algorithms like DIP.
2. Joint matching: we propose the JointDIP algorithm which uses CICs \(\phi_{\rm inv}(X)\) to learn invariant features between source and target covariates. JointDIP, as a new DA algorithm, enhances the reliability of DIP by addressing the label-flipping issue often encountered in DIP when the CICs are generic enough.
For the remainder of the section, when a single source is considered, we assume without loss of generality that the first source domain is used.
### Detect failed DA algorithms using CICs
We present the second role of CICs in DA, specifically, in detecting whether another DA classifier has a large target risk. Given the conditionally invariant classifier \(h_{\rm inv}\) as defined in Eq. (27), we prove the following theorem which controls the difference in source and target risks for any classifier.
**Theorem 2**: _Let \(h_{\rm inv}\) be a conditionally invariant classifier and assume that there is no label shift between source and target distributions. For any classifier \(h\), its risk difference in source and target is controlled as follows, 8_
Footnote 8: In practice, it is difficult to find \(h_{\rm inv}\) that is conditionally invariant across source and target distributions, for instance, if we approximate it via CIP, i.e., \(h_{\rm inv}=\widehat{h}_{\rm CIP}\). In this case, the upper bound has an additional term \(L\Psi_{\mathcal{G},\phi_{\rm inv}}\); see the proof for a generalized version of the theorem.
\[\left|\mathcal{R}^{(\mathfrak{T})}(h)-\mathcal{R}^{(1)}(h)\right| \leq 2\mathcal{R}^{(1)}(h_{\rm inv})\] \[+\left|\mathbb{P}\left\{h(X^{(1)})\neq h_{\rm inv}(X^{(1)}) \right\}-\mathbb{P}\left\{h(X^{(\mathfrak{T})})\neq h_{\rm inv}(X^{(\mathfrak{ T})})\right\}\right|. \tag{28}\]
This result is proved in Appendix C.1. It shows that a large discrepancy between source and target risks of any classifier \(h\) can be detected by examining the source risk of \(h_{\mathrm{inv}}\) and the alignments between the predictions of \(h\) and \(h_{\mathrm{inv}}\) across source and target, which are always available. In practice, we may take \(\widehat{h}_{\mathrm{CIP}}\) as a proxy for \(h_{\mathrm{inv}}\). By rearranging terms, an empirical version of Eq. (28) can be derived to establish a lower bound for the target risk
\[\mathcal{R}^{(\mathfrak{T})}(h)\geq\widehat{\mathcal{R}}^{(1)}(h )-2\widehat{\mathcal{R}}^{(1)}(\widehat{h}_{\mathrm{CIP}})\\ -\left|\frac{1}{n^{(1)}}\sum_{k=1}^{n^{(1)}}\mathbf{1}_{h(X^{(1)} _{k})\neq\widehat{h}_{\mathrm{CIP}}(X^{(1)}_{k})}-\frac{1}{n^{(\mathfrak{T})} }\sum_{k=1}^{n^{(\mathfrak{T})}}\mathbf{1}_{h(X^{(\mathfrak{T})}_{k})\neq \widehat{h}_{\mathrm{CIP}}(X^{(\mathfrak{T})}_{k})}\right|. \tag{29}\]
This lower bound on the target risk can be directly translated to an upper bound on accuracy, which can serve as a certificate to detect the failure of any DA classifier \(h\). In Section 5.1.2 and Section 5.2.2, we demonstrate this with numerical examples using CIP to detect the failure of DIP, where we test on both synthetic data and the MNIST dataset. We observe that the accuracy upper bound derived from Eq. (29) for detecting the failures of DIP becomes more accurate when CIP has higher predictive accuracy. If CIP does not perform well across the entire input space, we can obtain similar results by restricting to a subset of the space where CIP performs well (e.g., a region far from the decision boundary of a CIP classifier), as we discuss in the next subsection.
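A minimal sketch of this certificate, computing the right-hand side of Eq. (29) from predictions of \(h\) and \(\widehat{h}_{\text{CIP}}\) on source and target samples, is given below; the function and variable names are illustrative.

```python
import numpy as np

def target_risk_lower_bound(h_src_preds, src_labels, cip_src_preds,
                            h_tar_preds, cip_tar_preds):
    """Compute the right-hand side of Eq. (29): a lower bound on the target risk
    of a classifier h, using h_CIP as a proxy for the unobserved target labels."""
    src_risk_h = np.mean(h_src_preds != src_labels)       # empirical source risk of h
    src_risk_cip = np.mean(cip_src_preds != src_labels)   # empirical source risk of h_CIP
    disagree_src = np.mean(h_src_preds != cip_src_preds)  # source disagreement with h_CIP
    disagree_tar = np.mean(h_tar_preds != cip_tar_preds)  # target disagreement with h_CIP
    return src_risk_h - 2.0 * src_risk_cip - abs(disagree_src - disagree_tar)
```

One minus the returned value gives the accuracy upper bound used as the failure certificate in the experiments of Section 5.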
#### 4.1.1 Detect large target risk by restricting to a subset
The result of Theorem 2 can be extended in a straightforward manner to the case where we condition on the event \(X^{(1)}\in\mathcal{A}\) for any subset \(\mathcal{A}\subseteq\mathcal{X}\). Specifically, we write \(\mathcal{R}^{(1)}_{\mathcal{A}}(h)\) to denote the source risk conditioned on \(\mathcal{A}\), i.e.,
\[\mathcal{R}^{(1)}_{\mathcal{A}}(h)\coloneqq\mathbb{P}\left\{h(X^{(1)})\neq Y^ {(1)}\ \Big{|}\ X^{(1)}\in\mathcal{A}\right\},\]
and similarly for the target risk \(\mathcal{R}^{(\mathfrak{T})}_{\mathcal{A}}(h)\). Analogous to Theorem 2, we can obtain the following bound on the risk difference conditioned on set \(\mathcal{A}\).
**Corollary 1**: _Let \(h_{\mathrm{inv}}\) be a conditionally invariant classifier and assume that there is no label shift between source and target distributions. For any classifier \(h\) and any subset \(\mathcal{A}\subseteq\mathcal{X}\), we have_
\[\left|\mathcal{R}^{(\mathfrak{T})}_{\mathcal{A}}(h)-\mathcal{R}^{(1 )}_{\mathcal{A}}(h)\right| \leq 2\mathcal{R}^{(1)}_{\mathcal{A}}(h_{\mathrm{inv}})\] \[+\left|\mathbb{P}_{\mathcal{A}}\left\{h(X^{(1)})\neq h_{\mathrm{ inv}}(X^{(1)})\right\}-\mathbb{P}_{\mathcal{A}}\left\{h(X^{(\mathfrak{T})})\neq h _{\mathrm{inv}}(X^{(\mathfrak{T})})\right\}\right|, \tag{30}\]
_where for a random vector \(X\), we write_
\[\mathbb{P}_{\mathcal{A}}\left\{h(X)\neq h_{\mathrm{inv}}(X)\right\}\coloneqq \mathbb{P}\left\{h(X)\neq h_{\mathrm{inv}}(X)\ \middle|\ X\in\mathcal{A}\right\}.\]
From Eq. (30), we can obtain a target risk lower bound for \(h\), conditioned on set \(\mathcal{A}\), analogous to Eq. (29). If one can identify a region \(\mathcal{A}\subseteq\mathcal{X}\) where the classifier \(h_{\mathrm{inv}}\) has near perfect
source accuracy, due to its conditional invariance, the output of \(h_{\mathrm{inv}}\) can serve as a good proxy of the unobserved target labels. One example of such a choice is to choose \(\mathcal{A}\) as the region where \(h_{\mathrm{inv}}\) gives a high predicted probability. Indeed, our experiments in Section 5.1.2 and Section 5.2.2 compare the tightness of the bounds across the entire space and on a subset \(\mathcal{A}\) where \(h_{\mathrm{inv}}\) exhibits a high predicted probability. We find that Corollary 1 provides a more accurate estimation of the target risk because \(h_{\mathrm{inv}}\) has higher accuracy when constrained to \(\mathcal{A}\). In this case, a desired property for \(h\) to have a low target risk is to align its output with the output of \(h_{\mathrm{inv}}\) on \(\mathcal{A}\). We formalize this idea which is a direct consequence of Corollary 1.
**Corollary 2**: _Let \(h_{\mathrm{inv}}\) be a conditionally invariant classifier and assume that there is no label shift between source and target distributions. Suppose that \(h\in\mathcal{H}\) satisfies_
\[h(x)=h_{\mathrm{inv}}(x)\text{ for all }x\in\mathcal{A}. \tag{31}\]
_Then the target risk of \(h\) on \(\mathcal{A}\) is bounded by_
\[\mathcal{R}^{(\mathfrak{T})}_{\mathcal{A}}(h)\leq 3\mathcal{R}^{(1)}_{ \mathcal{A}}(h_{\mathrm{inv}}).\]
_In particular, if \(h_{\mathrm{inv}}\) has a near perfect accuracy on \(\mathcal{A}\) under the source distribution, i.e., \(\mathcal{R}^{(1)}_{\mathcal{A}}(h_{\mathrm{inv}})\leq C\) for a small \(C\geq 0\), then \(\mathcal{R}^{(\mathfrak{T})}_{\mathcal{A}}(h)\leq 3C\)._
Corollary 2 guarantees the low target population risk of the classifier on the region where the invariant classifier \(h_{\mathrm{inv}}\) is confident about its prediction and can serve as a good proxy for the target labels \(Y\). This does not necessarily imply that the representation learned via Eq. (31) can generalize to target data outside the region \(\mathcal{A}\). Nevertheless, if domain experts all recognize the importance of the region \(\mathcal{A}\), Eq. (31) becomes a natural requirement for assessing the quality of any DA classifier.
### JointDIP by matching DIP features jointly with CICs
In this subsection, we demonstrate the third role of CICs in enhancing DIP. According to Theorem 2, if the predictions of \(h\) and \(h_{\mathrm{inv}}\) align well across source and target, the target risk of \(h\) will not be too far from its source risk. It turns out that we can enforce this alignment by incorporating the CICs into the DIP matching penalty. This leads to our new algorithm, joint domain invariant projection (JointDIP).
**Population JointDIP.** The population JointDIP minimizes the source risk while matching the joint distributions of \(\phi\) and \(\phi_{\mathrm{inv}}\) in the representation space across source and target:9
Footnote 9: \((\phi,\phi_{\mathrm{inv}})(X)\) denotes the vector concatenated by \(\phi(X)\) and \(\phi_{\mathrm{inv}}(X)\).
\[\begin{split} h_{\text{j-DIP}}&=g_{\text{j-DIP}}\circ\phi_{\text{j-DIP}},\\ g_{\text{j-DIP}},\phi_{\text{j-DIP}}&=\operatorname*{arg\,min}_{g\in\mathcal{G},\phi\in\Phi}\ \mathcal{R}^{(1)}(g\circ\phi)\quad\text{subject to}\quad\mathcal{P}^{(1)}_{(\phi,\phi_{\text{inv}})(X)}=\mathcal{P}^{(\mathfrak{T})}_{(\phi,\phi_{\text{inv}})(X)}.\end{split} \tag{32}\]
Unlike DIP, which only matches the marginal distribution of \(\phi\), JointDIP takes advantage of CICs to extract invariant feature representations. Intuitively, if \(\phi_{\text{inv}}\) is highly correlated with the labels \(Y\) and the joint feature mappings \((\phi,\phi_{\text{inv}})\) are marginally invariant, \(\phi\) is not likely to learn label-flipping features due to the conditional invariance of \(\phi_{\text{inv}}\). Note that if label shift is present, it is not hard to extend the current formulation to an importance-weighted JointDIP (IW-JointDIP) by applying CIP to correct label shift prior to applying JointDIP.
**Finite-sample JointDIP.** The finite-sample formulation of JointDIP enforces the joint invariance of the representations via a regularization term,
\[\begin{split}\widehat{h}_{\text{j-DIP}}&=\widehat{g }_{\text{j-DIP}}\circ\widehat{\phi}_{\text{j-DIP}},\\ \widehat{g}_{\text{j-DIP}},\widehat{\phi}_{\text{j-DIP}}& =\operatorname*{arg\,min}_{g\in\mathcal{G},\phi\in\Phi}\ \widehat{\mathcal{R}}^{(1)}(g\circ\phi)+\lambda_{\text{j-DIP}}\cdot\mathfrak{D }\left(\widehat{\mathcal{P}}^{(1)}_{(\phi,\phi_{\text{inv}})(X)},\widehat{ \mathcal{P}}^{(\mathfrak{T})}_{(\phi,\phi_{\text{inv}})(X)}\right),\end{split} \tag{33}\]
where \(\lambda_{\text{j-DIP}}>0\) is a regularization parameter that controls the strength of the joint matching penalty.
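The sketch below shows one way to evaluate the objective in Eq. (33) in PyTorch, using the squared mean distance in place of the distributional distance \(\mathfrak{D}\); the module names and the choice of distance are illustrative assumptions (an MMD penalty could be substituted).

```python
import torch
import torch.nn.functional as F

def jointdip_loss(phi, g, phi_inv, x_src, y_src, x_tar, lam):
    """One evaluation of the finite-sample JointDIP objective in Eq. (33),
    with the squared mean distance standing in for the distributional distance D.
    phi, g  : trainable feature mapping and classifier head (torch modules)
    phi_inv : fixed conditionally invariant feature mapping (e.g. taken from CIP)
    """
    # empirical source risk, here the plain cross-entropy on the source batch
    src_risk = F.cross_entropy(g(phi(x_src)), y_src)
    # phi_inv is kept fixed; its outputs carry no gradient
    with torch.no_grad():
        inv_src, inv_tar = phi_inv(x_src), phi_inv(x_tar)
    # joint features (phi, phi_inv)(X) on source and target batches
    joint_src = torch.cat([phi(x_src), inv_src], dim=1)
    joint_tar = torch.cat([phi(x_tar), inv_tar], dim=1)
    # squared mean distance between the joint feature distributions
    match = ((joint_src.mean(dim=0) - joint_tar.mean(dim=0)) ** 2).sum()
    return src_risk + lam * match
```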
To illustrate the advantage of JointDIP over the ordinary DIP, consider a binary classification example under the general anticausal model in Definition 4 where the data is generated as in Figure 1 (a)(b), similar to the example in Johansson et al. (2019). Suppose that \(\Phi=\{\phi_{1}(x)=x_{[1]},\phi_{2}(x)=x_{[2]}\}\) and \(\mathcal{G}=\{g_{a}(z)=\mathbf{1}_{z>a}+1,a\in\mathbb{R}\}\). Since DIP only matches the marginal distributions in the representation space, it could choose either \(\phi_{1}\) or \(\phi_{2}\) as the feature mapping to perfectly match the marginal distributions of the features and achieve zero loss under the source distribution. However, choosing \(\phi_{1}\) leads to zero accuracy under the target distribution, as \(\phi_{1}(X)=X_{[1]}\) is the label-flipping feature. Only \(\phi_{2}(X)=X_{[2]}\) is the conditionally invariant feature that can generalize to the target distribution. Now if we have access to CICs through the conditionally invariant feature mapping \(\phi_{\text{inv}}\), and we match these jointly with DIP, we would only obtain the correct feature \(\phi_{2}(X)=X_{[2]}\), as illustrated in Figure 1 (c)(d). JointDIP would never select \(\phi_{1}(X)=X_{[1]}\) because the joint distribution of \((X_{[1]},\phi_{\text{inv}}(X))\) is different between the source and target distributions.
#### 4.2.1 Theoretical comparison of CIP, DIP, and JointDIP under general anticausal model
To quantitatively compare the target risks of DA classifiers, we focus on data generated from the general anticausal model in Definition 4. Additionally, we introduce Assumption 3, which requires the marginal distribution of \(Y\) to be uniform under the source and target distributions, as follows.
**Assumption 3**: _Suppose that data is generated according to the general anticausal model defined in Definition 4. Further, assume that the label distribution is uniform under source and target distributions, i.e.,\(\forall m\in\{1,2,\ldots,M\}\) and \(\forall y\in\{1,2,\ldots,L\}\), we have \(p_{y}^{(m)}=p_{y}^{(\mathfrak{T})}=1/L\)._
With the above assumptions on the data generation process in place, the following theorem compares the target risk of population CIP, DIP, and JointDIP.
**Theorem 3**: _Suppose that source and target data are generated under Assumption 3. Let \(\Phi=\{\phi(x)=Ax+b,A\in\mathbb{R}^{q\times p},b\in\mathbb{R}^{q}\}\) be the class of linear feature mapping from \(\mathbb{R}^{p}\) to \(\mathbb{R}^{q}\), which is used in the optimization of population DIP (11) and population JointDIP (32). Assume that both algorithms match feature distributions exactly, i.e.,_
\[\mathcal{P}^{(1)}_{\phi_{\text{DIP}}(X)}=\mathcal{P}^{(\mathfrak{T})}_{\phi_{\text{DIP}}(X)}\text{ and }\mathcal{P}^{(1)}_{(\phi_{\text{j-DIP}},\phi_{\text{inv}})(X)}=\mathcal{P}^{(\mathfrak{T})}_{(\phi_{\text{j-DIP}},\phi_{\text{inv}})(X)}.\]
_Then the following statements hold:_
* _There exist distributions_ \(\mathcal{P}^{(1)},\mathcal{P}^{(\mathfrak{T})}\) _such that the feature mapping of population DIP,_ \(\phi_{\text{\rm DIP}}\)_, flips the labels after matching the marginal distributions in the representation space, i.e.,_ \[\mathcal{P}^{(1)}_{\phi_{\text{\rm DIP}}(X)|Y=y}=\mathcal{P}^{(\mathfrak{T})}_ {\phi_{\text{\rm DIP}}(X)|Y=\pi(y)},\ \ \forall y\in\{1,\ldots,L\},\] (34) _for some permutation_ \(\pi\neq\mathbf{I}\) _over_ \(\mathcal{Y}\)_._
* _Suppose_ \(\phi_{\text{\rm inv}}:\mathbb{R}^{p}\to\mathbb{R}^{r}\) _is a linear conditionally invariant feature mapping such that_ \[\mathbb{E}\left[\phi_{\text{\rm inv}}(X^{(1)})\ \Big{|}\ Y^{(1)}=i\right]\neq\mathbb{E}\left[\phi_{\text{\rm inv}}(X^{(1)})\ \Big{|}\ Y^{(1)}=j\right],\ \ \forall i\neq j\in\mathcal{Y}.\] (35) _Then the feature mapping of population JointDIP,_ \(\phi_{\text{j-DIP}}\)_, is conditionally invariant across_ \(\mathcal{P}^{(1)}\) _and_ \(\mathcal{P}^{(\mathfrak{T})}\)_. If additionally_ \(r\leq q\)_, then the target risk of JointDIP is no greater than that of the optimal classifier built on_ \(\phi_{\text{\rm inv}}^{0}\coloneqq\left(\phi_{\text{\rm inv}},\mathbf{0}_{q-r}\right)\)_, i.e.,_ \[\mathcal{R}^{(\mathfrak{T})}(h_{\text{j-DIP}})\leq\min_{g\in\mathcal{G}}\mathcal{R}^{(\mathfrak{T})}(g\circ\phi_{\text{\rm inv}}^{0}),\] _and when_ \(\phi_{\text{\rm inv}}=\phi^{\star}\)_, we have_ \(\mathcal{R}^{(\mathfrak{T})}(h_{\text{j-DIP}})\leq\mathcal{R}^{(\mathfrak{T})}(h^{\star})\)_._
* _Suppose_ \(\phi_{\text{\rm inv}}:\mathbb{R}^{p}\to\mathbb{R}^{r}\) _is a conditionally invariant feature mapping such that the matrix_ \[C_{\phi_{\text{\rm inv}}}(a)=\begin{pmatrix}1&1&\ldots&1\\ m_{1}^{1}(a)&m_{2}^{1}(a)&\cdots&m_{L}^{1}(a)\\ \vdots&\vdots&\ddots&\vdots\\ m_{1}^{L-1}(a)&m_{2}^{L-1}(a)&\cdots&m_{L}^{L-1}(a)\end{pmatrix}\in\mathbb{R}^{L\times L},\] (36) _is full rank for some vector_ \(a\in\mathbb{R}^{r}\)_, where_ \(m_{j}^{\ell}(a)=\mathbb{E}\left[\left(a^{\top}\phi_{\text{\rm inv}}(X^{(1)})\right)^{\ell}\Big{|}\ Y^{(1)}=j\right]\)_. Then the feature mapping of population JointDIP,_ \(\phi_{\text{j-DIP}}\)_, is conditionally invariant across_ \(\mathcal{P}^{(1)}\) _and_ \(\mathcal{P}^{(\mathfrak{T})}\)_._
The proof of this result is given in Appendix C.2. The proofs for Theorem 3(a) and Theorem 3(b) rely on the matching property of the two mixing distributions, whereas the proof for Theorem 3(c) analyzes distribution matching in the space of characteristic functions. Theorem 3(a) shows that although DIP uses the additional target covariate information, the corresponding representations can fail to satisfy conditional invariance across source and target distributions; in fact, if the learned features correspond to label-flipping features, as illustrated in Figure 1, they can hurt the target prediction performance.
By contrast, Theorem 3(b) shows that when \(\phi_{\text{inv}}\) is a linear conditionally invariant feature mapping, the features obtained by JointDIP are conditionally invariant across \(\mathcal{P}^{(1)}\) and \(\mathcal{P}^{(\mathfrak{T})}\), thereby avoiding the label-flipping issue of DIP. The condition (35) requires that the conditional means of \(\phi_{\text{inv}}\) are different for any pair of labels \(i\) and \(j\). This condition is a reasonable expectation for any \(\phi_{\text{inv}}\) with good prediction performance, since otherwise there is no way for classifiers built on \(\phi_{\text{inv}}\) to distinguish between labels \(i\) and \(j\). See Appendix A.2 for an example of the general anticausal model that satisfies the condition (35). Additionally, Theorem 3(b) assures that JointDIP cannot be worse than the optimal classifier built upon \(\phi_{\text{inv}}\). This result is expected because CICs tend to be conservative. For instance, CICs identified via CIP are forced to be conditionally invariant across all \(M\) source distributions (and ideally the target distribution), which can potentially eliminate features that are useful for predicting target labels. JointDIP addresses this issue by seeking conditionally invariant representations across a single source domain and the target domain, so as to construct a more effective classifier than one based solely on CICs.
Finally, for any conditionally invariant feature mapping \(\phi_{\text{inv}}\) which may not necessarily be linear, Theorem 3(c) guarantees that the features obtained by JointDIP are conditionally invariant across \(\mathcal{P}^{(1)}\) and \(\mathcal{P}^{(\mathfrak{T})}\), as long as the matrix \(C_{\phi_{\text{inv}}}(a)\) given in Eq. (36) is full rank. The matrix \(C_{\phi_{\text{inv}}}(a)\) is full rank if the CICs under different labels are in a generic position. Comparing with the condition (35) in the linear case, this condition may be more difficult to verify generally as it requires computation of higher order moments. We provide concrete examples in Appendix A.3 where this condition is satisfied. In practice, we can either use \(\widehat{\phi}_{\text{CIP}}\) as \(\phi_{\text{inv}}\), or refer to domain experts for suggestions on reasonable CICs.
## 5 Numerical experiments
In this section, we investigate the target performance of our proposed DA algorithms and compare them with existing methods. In particular, we demonstrate through our experiments the effectiveness of the importance-weights correction in the presence of both covariate and label distribution shifts, the capability of detecting DIP's failure using estimated CICs, and the superior performance of JointDIP over DIP when label-flipping features are present. We consider DA classification tasks across four datasets: synthetic data generated from linear Structural Causal Models (SCMs), the MNIST data (LeCun, 1998) under rotation intervention, the CelebA data (Liu et al., 2015) under color intervention, and the Camelyon17 data from WILDS (Koh et al., 2021; Sagawa et al., 2021). Except for the Camelyon17 data, the domain shifts in other datasets are synthetically created.
We conduct a thorough comparative analysis where our methods are benchmarked against various DA algorithms. In addition to DIP, CIP, IW-CIP, and JointDIP which have been introduced in previous sections, we additionally explore several variants of these algorithms, including _IW-DIP_ and _IW-JointDIP_. IW-DIP is the algorithm that applies CIP for importance weighting to correct label shift prior to DIP, while IW-JointDIP applies this importance weighting step before JointDIP. Moreover, for ERM, DIP, and JointDIP, we consider both their single-source versions (e.g. DIP) and their multi-source versions (e.g. DIP-Pool). In terms of distributional distances, both squared mean distance and Maximum Mean Discrepancy (MMD) are considered. Lastly, we compare these methods with existing
well-known DA algorithms such as Invariant Risk Minimization (IRM) (Arjovsky et al., 2019), V-REx (Krueger et al., 2021), and groupDRO (Sagawa et al., 2019). A detailed description of each DA algorithm, the model architectures, and the training setup is presented in Appendix D.
Certain DA algorithms, such as DIP or IW-DIP, rely on a single source domain to learn invariant feature representations and construct the final classifiers (see Appendix D). In this work, we do not focus on how to choose the best single source domain for these algorithms, but instead simply select the last source domain. By design, this single source domain is usually similar to the target domain in terms of the considered covariate shifts, such as mean shift (linear SCMs), rotation angle (MNIST), and color balance (CelebA). However, this domain may have label-flipping features compared to the target domain. We deliberately design these settings to demonstrate that, under such settings, DIP might induce label flipping, while JointDIP avoids this issue.
### Linear structural causal models (SCMs)
We perform experiments on synthetic datasets generated according to linear SCMs under the general anticausal model defined in Definition 4. We first compare the performance of various DA methods, and then show how to use CICs to detect the failure of DIP without access to target labels.
#### 5.1.1 Linear SCMs under different interventions
We consider three different types of domain shifts: mean shift, label shift, and a shift to introduce label-flipping features. In all SCMs, we introduce the mean shift across domains. Depending on the presence or absence of label shift and the shift that introduces label-flipping features, we obtain four combinations and their corresponding SCMs. We use \(M^{\prime}=M+1\) to denote the total number of source and target domains. The last source domain (the \(M\)-th domain) is always set as the single source domain for DA algorithms which utilize only one source domain (e.g. DIP). The last domain (the \(M^{\prime}\)-th domain) serves as the target domain, and we generate 1000 samples per domain. Instead of specifying each \(f^{(m)}(\cdot)\) and \(f^{(\mathfrak{T})}(\cdot)\) in Definition 4, we provide explicit representations of the data generation model for each SCM. For simplicity, we only present the data generation model for source data, and the target data generation model can be obtained by replacing the superscript \((m)\) with \((\mathfrak{T})\).
* SCM I: mean shift exists; no CICs; no label shift; no label-flipping features; \(M^{\prime}=4\) and \(p=10\). The data generation model is \[Y^{(m)} \sim\text{Bernoulli}(0.5),\] \[X^{(m)} =0.2Y\cdot\mathds{1}_{10}+0.25\cdot\mathcal{N}(0,\mathbb{I}_{10 })+A^{(m)},\] where \(\mathds{1}_{10}\in\mathbb{R}^{10}\) denotes a vector consisting of ones and \(\mathbb{I}_{10}\in\mathbb{R}^{10\times 10}\) denotes the identity matrix. For all source domains, the mean shift \(A^{(m)}\), \(1\leq m\leq M\), is generated by \(0.2\cdot\mathcal{N}(0,\mathbb{I}_{10})\), while the target domain suffers a large intervention
\(\operatorname{sign}\left(\mathcal{N}(0,\mathbb{I}_{10})\right)\). Note that we do not introduce any CICs in SCM I. This allows us to examine the most extreme case of mean shift where all coordinates of \(X\) are perturbed. However, in the following three SCMs (SCM II, III, and IV), we ensure the presence of CICs and expect that certain DA algorithms can exploit these CICs.
* SCM II: mean shift exists; CICs exist; label shift exists; no label-flipping features; \(M^{\prime}=12\) and \(p=9\). The data generation model is \[Y^{(m)} \sim\text{Bernoulli}(p^{(m)}),\] \[X^{(m)}_{[1:6]} =0.2(Y^{(m)}-0.5)\cdot\mathds{1}_{6}+0.25\cdot\mathcal{N}(0, \mathbb{I}_{6})+A^{(m)},\] \[X^{(m)}_{[7:9]} =0.2(Y^{(m)}-0.5)\cdot\mathds{1}_{3}+0.25\cdot\mathcal{N}(0, \mathbb{I}_{3}),\] where the label distribution \(p^{(m)}=0.5\), for \(1\leq m\leq M\), is balanced in source domains but perturbed in target domain with \(p^{(\mathfrak{T})}=0.1\). The mean shift only exists in the first six coordinates of \(X\), where \(A^{(m)}\), \(1\leq m\leq M\) is generated by \(\mathcal{N}(0,\mathbb{I}_{6})\) in the source domains, while the target domain suffers a large intervention \(A^{(\mathfrak{T})}\sim 2\cdot\operatorname{sign}\left(\mathcal{N}(0,\mathbb{I}_{6})\right)\). The last three coordinates of \(X\) remain unperturbed and they serve as CICs.
* SCM III: mean shift exists; CICs exist; no label shift; label-flipping features exist; \(M^{\prime}=12\) and \(p=18\). The data generation model is \[Y^{(m)} \sim\text{Bernoulli}(0.5),\] \[X^{(m)}_{[1:6]} =0.3(Y^{(m)}-0.5)\cdot\mathds{1}_{6}+0.4\cdot\mathcal{N}(0,\mathbb{I}_{6})+A^{(m)},\] \[X^{(m)}_{[7:12]} =\begin{cases}0.3(0.5-Y^{(m)})\cdot\mathds{1}_{6}+0.1\cdot\mathcal{N}(0,\mathbb{I}_{6}),&\text{m is odd},\\ 0.3(Y^{(m)}-0.5)\cdot\mathds{1}_{6}+0.1\cdot\mathcal{N}(0,\mathbb{I}_{6}),&\text{m is even},\end{cases}\] \[X^{(m)}_{[13:18]} =0.3(Y^{(m)}-0.5)\cdot\mathds{1}_{6}+0.4\cdot\mathcal{N}(0,\mathbb{I}_{6}),\] The first six coordinates of \(X\) suffer mean shift, with \(A^{(m)}\sim\mathcal{N}(0,\mathbb{I}_{6})\), \(1\leq m\leq M\), and \(A^{(\mathfrak{T})}\sim\mathcal{N}(0,\mathbb{I}_{6})\). In particular, we make the last source domain and target domain share similar mean shift interventions: \(A^{(11)}=a_{0}+a_{1}\) and \(A^{(12)}=a_{0}+a_{2}\), where \(a_{0}\sim 0.8\cdot\mathcal{N}(0,\mathbb{I}_{6})\) and \(a_{1},a_{2}\sim 0.6\cdot\mathcal{N}(0,\mathbb{I}_{6})\). This setting potentially enables DIP-based methods to better rely on the last source domain because of the similarity of \(X_{[1:6]}\) between this domain and the target domain. However, the \(7^{\text{th}}\) to \(12^{\text{th}}\) coordinates of \(X\) are label-flipping features according to Definition 3: the sign of the correlation between \(X_{[7:12]}\) and \(Y\) alternates with the parity of the domain index, so the last source domain (the \(11^{\text{th}}\)) and the target domain (the \(12^{\text{th}}\)) carry opposite correlations. There are no interventions on the last six coordinates of \(X\), and they serve as CICs (a generation sketch for this SCM follows the list).
* SCM IV: mean shift exists; CICs exist; label shift exists; label-flipping features exist, \(M^{\prime}=12\) and \(p=18\). The data generation model of \(X\mid Y\) is the same as SCM III, but we additionally perturb the marginal distribution of \(Y\) across source and target distributions. Specifically, the label distribution \(Y^{(m)}\sim\text{Bernoulli}(0.5)\) is balanced in source domains but perturbed in target domain with \(Y^{(\mathfrak{T})}\sim\text{Bernoulli}(0.3)\).
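As referenced in the SCM III description, a minimal sketch of its data generation is given below; for brevity it draws a fresh mean shift per domain and omits the coupled construction of \(A^{(11)}\) and \(A^{(12)}\), so those details are simplifications of ours.

```python
import numpy as np

def generate_scm3_domain(m, n=1000, rng=None):
    """Generate one domain of SCM III (mean shift + label-flipping features + CICs).
    m is the 1-based domain index; the target domain corresponds to m = 12."""
    rng = rng or np.random.default_rng(m)
    y = rng.binomial(1, 0.5, size=n)                        # labels in {0, 1}
    s = (y - 0.5).reshape(-1, 1)                            # +-0.5 class signal
    a_m = rng.normal(0.0, 1.0, size=6)                      # mean shift of this domain
    x_mean = 0.3 * s + 0.4 * rng.normal(size=(n, 6)) + a_m  # X_[1:6], mean-shifted
    sign = -1.0 if m % 2 == 1 else 1.0                      # correlation sign alternates with parity
    x_flip = 0.3 * sign * s + 0.1 * rng.normal(size=(n, 6)) # X_[7:12], label-flipping
    x_cic = 0.3 * s + 0.4 * rng.normal(size=(n, 6))         # X_[13:18], CICs
    return np.hstack([x_mean, x_flip, x_cic]), y
```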
We use a linear model in all SCM experiments to predict the labels; see Appendix D.2 for details. Table 1 compares the target performance of different DA algorithms. In SCM I where only mean shift exists and no CICs exist, DIP gives the best performance, with JointDIP showing comparable accuracy. ERM and other DA algorithms which aim to find invariance across all source domains (e.g. CIP, IRM, V-REx) fail to generalize to the target domain due to the lack of CICs and the substantial mean shift in the target domain. In SCM II where mean shift exists and label shift is added as another intervention, IW-DIP achieves the highest accuracy. Both IW-CIP and IW-JointDIP also achieve over 90% correct predictions. However, IW-ERM which directly applies importance weighting without using CICs completely fails, indicating that identifying conditionally invariant features before applying label correction is necessary in this scenario.
In SCM III where label-flipping features exist, we observe that DIP results in an accuracy lower than a random guess. By contrast, JointDIP achieves the highest accuracy. To gain a deeper understanding of the classifiers obtained by each algorithm, in Figure 2 we illustrate the \(L^{1}\) norm of the coefficients of three categories of coordinates: mean-shifted \(X_{[1:6]}\), label-flipping \(X_{[7:12]}\), and CICs \(X_{[13:18]}\). Three key observations can be made from the figure: First, DIP shows large coefficients on the label-flipping features \(X_{[7:12]}\), which explains its suboptimal performance. Second, DA methods that seek invariant features across all source domains, such as CIP, IRM, and V-REx, show small coefficients on both the mean-shifted features and label-flipping features, but large coefficients on the CICs. This result aligns with their fundamental objective of identifying invariant representations across source domains. Lastly, JointDIP demonstrates small coefficients on label-flipping features,
\begin{table}
\begin{tabular}{l|cc|cc|cc|cc}
\hline \hline
 & \multicolumn{2}{c|}{SCM I} & \multicolumn{2}{c|}{SCM II} & \multicolumn{2}{c|}{SCM III} & \multicolumn{2}{c}{SCM IV} \\ \hline
Mean shift & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c}{Y} \\
CICs & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c}{Y} \\
Label shift & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{N} & \multicolumn{2}{c}{Y} \\
Label-flipping features & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c}{Y} \\ \hline
DA Algorithm & src\_acc & tar\_acc & src\_acc & tar\_acc & src\_acc & tar\_acc & src\_acc & tar\_acc \\ \hline
Tar & 70.6\(\pm\)13.8 & **89.0\(\pm\)0.7** & 69.4\(\pm\)3.9 & **92.9\(\pm\)1.0** & 70.1\(\pm\)2.9 & **97.9\(\pm\)0.4** & 69.8\(\pm\)3.0 & **96.9\(\pm\)0.7** \\
ERM & 89.6\(\pm\)1.0 & 56.1\(\pm\)12.0 & 87.5\(\pm\)1.0 & 57.4\(\pm\)33.3 & 97.9\(\pm\)0.3 & 58.9\(\pm\)0.5 & 98.0\(\pm\)0.3 & 58.7\(\pm\)10.7 \\
ERM-Pool & 88.3\(\pm\)2.3 & 54.4\(\pm\)10.2 & 78.5\(\pm\)1.3 & 58.6\(\pm\)29.3 & 83.3\(\pm\)0.8 & 75.3\(\pm\)7.7 & 83.3\(\pm\)0.7 & 78.9\(\pm\)6.8 \\
DIP & 88.3\(\pm\)1.0 & **87.6\(\pm\)1.5** & 84.5\(\pm\)3.1 & 62.0\(\pm\)2.9 & 93.5\(\pm\)3.4 & 34.5\(\pm\)14.9 & 94.4\(\pm\)2.7 & 35.3\(\pm\)14.6 \\
DIP-Pool & 86.7\(\pm\)2.8 & **86.4\(\pm\)2.2** & 75.8\(\pm\)0.7 & 60.1\(\pm\)3.1 & 84.1\(\pm\)0.5 & 82.0\(\pm\)1.1 & 84.4\(\pm\)0.5 & 82.3\(\pm\)3.8 \\
CIP & 87.4\(\pm\)3.3 & 55.9\(\pm\)12.0 & 75.2\(\pm\)0.7 & 75.7\(\pm\)6.5 & 82.2\(\pm\)0.4 & **81.8\(\pm\)1.3** & 82.2\(\pm\)0.4 & 82.1\(\pm\)1.2 \\
IW-ERM & 54.8\(\pm\)11.6 & 52.4\(\pm\)10.4 & 59.3\(\pm\)9.6 & 54.1\(\pm\)37.7 & 80.3\(\pm\)9.1 & 75.1\(\pm\)8.9 & 77.2\(\pm\)9.0 & 79.2\(\pm\)4.8 \\
IW-CIP & 53.7\(\pm\)9.9 & 54.0\(\pm\)11.8 & 50.3\(\pm\)0.7 & **90.4\(\pm\)0.8** & 82.5\(\pm\)0.4 & 81.2\(\pm\)4.1 & 81.0\(\pm\)0.9 & **83.8\(\pm\)2.2** \\
IW-DIP & 56.2\(\pm\)12.8 & 54.3\(\pm\)11.7 & 71.1\(\pm\)10.8 & **92.1\(\pm\)2.7** & 88.4\(\pm\)13.0 & 37.2\(\pm\)14.9 & 83.7\(\pm\)8.6 & 64.2\(\pm\)7.3 \\
JointDIP & 87.1\(\pm\)1.3 & **86.8\(\pm\)1.9** & 82.8\(\pm\)2.7 & 70.6\(\pm\)6.2 & 88.8\(\pm\)1.3 & **85.4\(\pm\)2.1** & 88.6\(\pm\)1.9 & 52.8\(\pm\)1.9 \\
IW-JointDIP & 68.4\(\pm\)19.1 & 68.0\(\pm\)19.4 & 51.7\(\pm\)3.9 & **90.0\(\pm\)1.3** & 87.8\(\pm\)1.5 & **82.9\(\pm\)8.1** & 84.1\(\pm\)3.4 & **85.1\(\pm\)3.7** \\
IRM & 87.6\(\pm\)2.2 & 56.7\(\pm\)10.4 & 70.9\(\pm\)2.5 & 71.9\(\pm\)17.9 & 80.1\(\pm\)1.8 & 80.2\(\pm\)1.8 & 84.2\(\pm\)0.6 & 83.7\(\pm\)3.3 \\
V-REx & 87.3\(\pm\)2.5 & 55.6\(\pm\)11.5 & 77.9\(\pm\)1.2 & 62.9\(\pm\)25.7 & 83.8\(\pm\)0.8 & 80.4\(\pm\)7.6 & 84.3\(\pm\)0.6 & 83.8\(\pm\)3.7 \\
groupDRO & 88.3\(\pm\)2.3 & 54.4\(\pm\)10.0 & 77.8\(\pm\)1.1 & 64.2\(\pm\)25.7 & 84.0\(\pm\)0.7 & 81.3\(\pm\)7.4 & 83.9\(\pm\)0.6 & **84.3\(\pm\)3.2** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Source and target accuracy in linear SCMs under different interventions. For each setting of SCM, DA algorithms are run on 10 different datasets, each generated using 10 different random seeds. Tar represents the oracle method where the model is trained on the labeled target data. The top four methods are highlighted in bold. The best method (excluding Tar) is colored in red.
but relatively larger coefficients on the mean-shifted coordinates, confirming that JointDIP effectively discards the label-flipping features \(X_{[7:12]}\), retains some of the invariant features \(X_{[13:18]}\) identified by CIP, and exploits the useful mean-shifted features \(X_{[1:6]}\).
Lastly, in SCM IV where mean shift, label shift, and label-flipping features exist simultaneously, we find that IW-JointDIP outperforms the other methods in target accuracy. This can be attributed to its three-step approach--first finding CICs, then correcting label shift via CICs, and finally aligning features via JointDIP--which effectively addresses the combination of these three shifts.
Overall, our experiments across various SCM settings reveal that while traditional DA methods like CIP and DIP may struggle with certain types of shifts, JointDIP excels in handling label-flipping features, and importance-weighted variants of the DA methods perform well in scenarios with label shift.
#### 5.1.2 Detecting failure of DIP in linear SCMs
Previously in Section 4.1, we discussed the second role of CICs, namely detecting the failure of DA algorithms without requiring access to target labels. To demonstrate this second role in practice, here we apply Theorem 2 and Corollary 1 to DIP and JointDIP within the context of SCM III. We take the finite-sample CIP as \(\phi_{\text{inv}}\), and compare the accuracy upper bound derived from Theorem 2 against the actual accuracy of DIP and JointDIP as shown in Figure 3 (a)(b). We vary the DIP penalty parameter in DIP and the JointDIP penalty parameter in JointDIP (both represented by \(\lambda\)). The CIP penalty parameter utilized in CIP and JointDIP is set to the optimal value found via hyperparameter search. The figure
Figure 2: \(L^{1}\) norm of coefficients in SCM III. Coefficients are grouped into three categories, depending on how the corresponding coordinates are perturbed. In contrast to DIP, JointDIP does not have large coefficients on label-flipping features, demonstrating that JointDIP can avoid the label-flipping issue observed in DIP.
illustrates that while the accuracy upper bound from Theorem 2 is valid, it exceeds the true accuracy by a wide margin, which is undesirable.10 Consequently, it is difficult to directly apply Theorem 2 to test the failure of DIP in SCM III.
Footnote 10: This issue might result from the relatively low accuracy of CIP (around 80%) in SCM III. In our subsequent experiments with MNIST, such issue does not appear because CIP has much higher accuracy (over 90%). See Figure 3 (c)(d) for the comparison.
We then turn to apply Corollary 1 to compare the accuracy upper bound with the actual accuracy of DIP within region \(\mathcal{A}\), where we define \(\mathcal{A}\) as the region such that the CIP predicted probability exceeds a threshold \(q_{\alpha}\). This threshold \(q_{\alpha}\) is defined as the \(\alpha\times 100\)-th percentile of CIP predicted probability for the target covariates, i.e., \((1-\alpha)\times 100\%\) of target covariates have a CIP predicted probability greater than \(q_{\alpha}\). As shown in Figure 4,
Figure 4: DIP in SCM III: target accuracy upper bound obtained via Corollary 1 vs. actual target accuracy in region \(\mathcal{A}\). We define region \(\mathcal{A}\) as \(\mathcal{A}=Q_{\alpha}\coloneqq\{X\in\mathcal{X}\mid\text{CIP predicted probability }\geq q_{\alpha}\}\), where \(q_{\alpha}\) is the threshold such that \((1-\alpha)\times 100\%\) of target covariates have a CIP predicted probability greater than \(q_{\alpha}\). The larger \(\alpha\) is, the more confident CIP is about the prediction on the covariate samples in \(Q_{\alpha}\). The red line indicates \(y=x\).
Figure 3: SCM III and MNIST III: target accuracy upper bound obtained via Theorem 2 vs. actual target accuracy. For SCM III, the accuracy of CIP is low, resulting in an upper bound that does not provide a tight estimation of the true target accuracy for both DIP and JointDIP. Conversely, in MNIST III, CIP achieves high accuracy and the accuracy upper bound precisely reflects the true target accuracy. The red line indicates \(y=x\).
by increasing \(\alpha\), we observe that our upper bound becomes increasingly precise within region \(\mathcal{A}\). For example, DIP with certain values of \(\lambda\) yields an upper bound lower than 0.5, suggesting that DIP flips the label. Although a low accuracy within region \(\mathcal{A}\) does not necessarily translate to low accuracy across the entire target domain, in practice we can refer to domain experts and ask them whether such suboptimal performance of DIP within region \(\mathcal{A}\) is reasonable or not, allowing us to avoid the need to acquire and validate all target labels.
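The construction of the region \(\mathcal{A}=Q_{\alpha}\) used above reduces to a percentile computation; a short sketch (names are illustrative) is given below, and the resulting masks can be used to subset the prediction arrays before evaluating the bound from Section 4.1 (e.g., the `target_risk_lower_bound` sketch shown there).

```python
import numpy as np

def restrict_to_confident_region(cip_probs_src, cip_probs_tar, alpha):
    """Build masks for A = Q_alpha: samples whose CIP predicted probability
    (max class probability) exceeds q_alpha, the alpha-percentile computed on
    the target covariates, so (1 - alpha) * 100% of target samples are kept."""
    q_alpha = np.percentile(cip_probs_tar, 100 * alpha)
    return cip_probs_src >= q_alpha, cip_probs_tar >= q_alpha
```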
### MNIST under rotation intervention
In this section, we consider binary classification of the MNIST dataset (LeCun, 1998) under rotation interventions, where digits 0-4 are categorized as label 0 and digits 5-9 are categorized as label 1. Similar to our experiments in SCMs, we first evaluate the prediction performance of various DA methods, then discuss how to detect potential failure of DIP without requiring access to target labels.
#### 5.2.1 Rotated MNIST under different interventions
We create five source domains and one target domain (the \(6^{\text{th}}\) domain). The \(5^{\text{th}}\) domain is fixed as the single source domain for DA algorithms that rely on only one source domain. Each source domain consists of 20% of the images from the MNIST training set, and the target domain includes all images from the MNIST test set. We introduce three different types of interventions as follows:
* Rotation shift: Each image in the \(m\)-th domain is rotated clockwise by \((m\times 15-30)^{\circ}\).
* Label shift: In the target domain, we remove 50% of the images labelled as 0.
* Label-flipping features: In the \(1^{\text{st}},3^{\text{rd}}\), and \(5^{\text{th}}\) domains, 90% of the images with label 1 and 10% of the images with label 0 are patched by a white bar of \(6\times 16\) pixels at the bottom-left corner. This creates a correlation of 0.8 between the label and the
Figure 5: Sample images from MNIST III. In this dataset, two different types of interventions have been applied: a rotation shift and the inclusion of label-flipping features. For rotation shift, each image in the \(m\)-th domain is rotated clockwise by \((m\times 15-30)^{\circ}\). As for the label-flipping features, a white bar of \(6\times 16\) pixels is added at the bottom-left corner of the images. The correlation between the label and the white bar patch changes across the domains, with positive correlation in the \(1^{\text{st}},3^{\text{rd}}\), and \(5^{\text{th}}\) domains, and negative correlation in the \(2^{\text{nd}},4^{\text{th}}\), and \(6^{\text{th}}\) (target) domains.
patch. Conversely, in the \(2^{\text{nd}},4^{\text{th}}\), and \(6^{\text{th}}\) domains, we add this white bar to 90% of the images with label 0 and 10% of the images with label 1. This creates a correlation of \(-0.8\) between the label and the patch.
Similar to the settings of SCMs, we consider four DA problems under different combinations of these interventions. The rotation shift intervention is applied to all MNIST experiments, and four different cases are constructed based on whether the label shift or label-flipping features exist: MNIST I includes rotation shift, but does not have label shift and label-flipping features; MNIST II includes rotation shift and label shift, but does not have label-flipping features; MNIST III includes rotation shift and label-flipping features, but does not have label shift; and MNIST IV includes all three interventions--rotation shift, label shift, and the presence of label-flipping features. Image samples under MNIST III are displayed in Figure 5 for illustration.
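As a rough illustration, the sketch below applies the rotation shift and the white-bar label-flipping feature of MNIST III to a batch of MNIST-style arrays; the function name and the use of scipy for rotation are illustrative choices, not the exact pipeline used in our experiments.

```python
import numpy as np
from scipy.ndimage import rotate

def make_mnist3_domain(images, labels, m, rng):
    """Apply the MNIST III interventions to domain m (1-indexed).
    `images` is an (n, 28, 28) array with values in [0, 1]; `labels` is
    binary (0 for digits 0-4, 1 for digits 5-9)."""
    # Rotation shift: domain m is rotated clockwise by (m * 15 - 30) degrees
    # (a negative angle in scipy corresponds to a clockwise rotation).
    out = np.stack([rotate(img, -(m * 15 - 30), reshape=False) for img in images])

    # Label-flipping feature: a 6 x 16 white bar at the bottom-left corner.
    # Domains 1, 3, 5 patch 90% of label-1 and 10% of label-0 images; the
    # even domains (including the 6th, target, domain) reverse the proportions.
    p1, p0 = (0.9, 0.1) if m % 2 == 1 else (0.1, 0.9)
    patched = rng.random(len(labels)) < np.where(labels == 1, p1, p0)
    out[patched, -6:, :16] = 1.0
    return out, labels
```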
We train a Convolutional Neural Network (CNN) model similar to LeNet5 (LeCun et al., 1998) for all MNIST experiments; see Appendix D.2 for more details. Table 2 summarizes the performance of various DA methods on MNIST. Similar to our findings in linear SCM experiments, we conclude that DIP effectively addresses rotation shift in the absence of label-flipping features (MNIST I), while label correction addresses label shift (MNIST II and IV), and JointDIP mitigates the issue of learning label-flipping features for DIP (MNIST III and IV). Consequently, DA methods which incorporate steps specifically designed to address each shift have the best performance or are among the top-performing methods.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline & \multicolumn{2}{c|}{MNIST I} & \multicolumn{2}{c|}{MNIST II} & \multicolumn{2}{c|}{MNIST III} & \multicolumn{2}{c}{MNIST IV} \\ \hline Rotation shift & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c}{Y} \\ Label shift & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{N} & \multicolumn{2}{c}{Y} \\ Label-flipping features & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c}{Y} \\ \hline DA Algorithm & src\_acc & tar\_acc & src\_acc & tar\_acc & src\_acc & tar\_acc & src\_acc & tar\_acc \\ \hline Tar & 71.2\(\pm\)0.9 & **100.0\(\pm\)0.0** & 69.3\(\pm\)1.5 & **99.7\(\pm\)0.3** & 67.4\(\pm\)1.1 & **100.0\(\pm\)0.1** & 66.2\(\pm\)1.9 & **99.9\(\pm\)0.1** \\ ERM & 99.9\(\pm\)0.1 & 96.9\(\pm\)0.4 & **99.9\(\pm\)0.2** & **96.9\(\pm\)0.5** & 100.0\(\pm\)0.1 & 87.8\(\pm\)1.9 & 100.0\(\pm\)0.1 & 86.8\(\pm\)1.8 \\ ERM-Pool & 99.5\(\pm\)0.4 & 94.6\(\pm\)0.8 & 99.5\(\pm\)0.3 & 94.6\(\pm\)1.3 & 99.4\(\pm\)0.5 & 88.7\(\pm\)2.0 & 99.6\(\pm\)0.2 & 89.1\(\pm\)2.5 \\ DIP & 100.0\(\pm\)0.1 & **97.5\(\pm\)0.3** & 99.9\(\pm\)0.2 & 96.7\(\pm\)0.5 & 100.0\(\pm\)0.0 & 91.0\(\pm\)0.1 & 100.0\(\pm\)0.0 & 90.1\(\pm\)1.6 \\ DIP-Pool & 99.6\(\pm\)0.2 & 95.7\(\pm\)0.4 & 99.6\(\pm\)0.2 & 94.6\(\pm\)1.5 & 99.4\(\pm\)0.5 & **92.8\(\pm\)1.2** & 99.4\(\pm\)0.4 & 91.2\(\pm\)2.1 \\ CIP & 99.6\(\pm\)0.3 & 94.9\(\pm\)0.9 & 99.6\(\pm\)0.3 & 94.9\(\pm\)2.1 & 99.4\(\pm\)0.4 & 89.1\(\pm\)1.7 & 99.1\(\pm\)0.9 & 90.0\(\pm\)1.3 \\ IW-ERM & 99.4\(\pm\)0.4 & 94.8\(\pm\)0.8 & 99.4\(\pm\)0.2 & 94.8\(\pm\)0.5 & 99.4\(\pm\)0.5 & 88.0\(\pm\)1.7 & 98.8\(\pm\)0.8 & 88.3\(\pm\)3.3 \\ IW-CIP & 99.7\(\pm\)0.2 & 94.8\(\pm\)0.8 & 99.2\(\pm\)0.4 & 94.9\(\pm\)1.2 & 99.6\(\pm\)0.2 & 98.9\(\pm\)1.7 & 99.4\(\pm\)0.4 & 90.1\(\pm\)1.0 \\ IW-DIP & 99.9\(\pm\)0.3 & **97.3\(\pm\)0.4** & 99.8\(\pm\)0.2 & **97.4\(\pm\)0.2** & 100.0\(\pm\)0.0 & **92.6\(\pm\)1.1** & 100.0\(\pm\)0.0 & **90.9\(\pm\)1.5** \\ JointDIP & 99.9\(\pm\)0.1 & **97.3\(\pm\)0.3** & 99.9\(\pm\)0.2 & **96.9\(\pm\)0.6** & 99.9\(\pm\)0.2 & **93.5\(\pm\)1.2** & 99.7\(\pm\)0.2 & **91.5\(\pm\)1.1** \\ IW-JointDIP & 99.6\(\pm\)0.6 & 97.2\(\pm\)0.5 & 99.5\(\pm\)0.2 & 96.5\(\pm\)1.3 & 99.9\(\pm\)0.2 & **93.1\(\pm\)1.1** & 99.6\(\pm\)0.5 & **93.4\(\pm\)0.9** \\ IRM & 99.5\(\pm\)0.5 & 94.3\(\pm\)1.4 & 99.6\(\pm\)0.1 & 94.3\(\pm\)1.4 & 99.3\(\pm\)0.7 & 88.4\(\pm\)1.9 & 99.1\(\pm\)0.3 & 89.3\(\pm\)1.4 \\ V-REx & 99.4\(\pm\)0.2 & 94.6\(\pm\)0.8 & 99.6\(\pm\)0.1 & 94.5\(\pm\)0.9 & 99.1\(\pm\)0.3 & 89.5\(\pm\)1.4 & 99.4\(\pm\)0.6 & 89.8\(\pm\)1.3 \\ groupDRO & 99.6\(\pm\)0.3 & 94.9\(\pm\)1.1 & 99.5\(\pm\)0.3 & 95.1\(\pm\)1.2 & 99.0\(\pm\)0.2 & 90.7\(\pm\)1.1 & 99.0\(\pm\)0.2 & 90.8\(\pm\)1.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Source and target accuracy in rotated MNIST under different interventions. For each setting of the rotated MNIST, DA algorithms are run on 10 different datasets, each generated using 10 different random seeds. Tar represents the oracle method where the model is trained on the labeled target data. The top four methods are highlighted in bold. The best method (excluding Tar) is colored in red.
#### 5.2.2 Detecting failure of DIP in rotated MNIST
Analogous to Section 5.1.2, we apply Theorem 2 and Corollary 1 to MNIST III to detect potential failures of DIP, demonstrating the second role of CICs. Figure 3 (c)(d) illustrate that the theoretical upper bound on target accuracy obtained via Theorem 2 matches closely with the actual target accuracy, and that large values of \(\lambda\) such as \(10\) and \(100\) may lead DIP to learn label-flipping features, while no such issues appear in JointDIP. Figure 6 presents the result of applying Corollary 1 with different choices of region \(\mathcal{A}=Q_{\alpha}\) for \(\alpha\in\{0,0.25,0.5,0.75\}\). Similar to our findings in Figure 4, we observe that as CIP becomes more confident in region \(\mathcal{A}\) (i.e., as \(\alpha\) grows), the upper bound becomes more accurate. In Figure 7, we visualize images from the target domain sampled within the region \(Q_{\alpha}\) for different values of \(\alpha\). As \(\alpha\) increases, the hand-written digits become more clear and distinguishable. In practice where we suspect label flipping by DIP but cannot access or afford to obtain many target labels, we can investigate target samples where CIP and DIP disagree in the region \(Q_{\alpha}\) with a large value of \(\alpha\), and refer to domain experts to evaluate whether these distinguishable images should be correctly classified by DIP. If DIP fails to do so, our procedure based on Corollary 1 can serve as a diagnostic tool to accurately estimate DIP's target performance and evaluate its validity in region \(\mathcal{A}\), especially when Theorem 2 provides an uninformative upper bound.
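In code, the inspection set described above could be selected as in the short sketch below; all inputs are illustrative placeholders.

```python
import numpy as np

def inspection_set(images, cip_probs, cip_pred, dip_pred, alpha):
    """Target images inside Q_alpha on which CIP and DIP disagree, to be
    shown to domain experts (cf. Figure 7).  `cip_probs` holds CIP's
    predicted probability for its predicted class on each target image."""
    q_alpha = np.quantile(cip_probs, alpha)
    keep = (cip_probs >= q_alpha) & (cip_pred != dip_pred)
    return images[keep]
```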
### CelebA under color shift
The CelebA dataset introduced by Liu et al. (2015) is a large-scale face image dataset with multiple face attributes. It includes over 200K celebrity images, each annotated with 40 binary attributes such as Gender, Eyeglasses, and Smiling. In our experiments, we take Smiling as the label and create six different settings for DA problems. The first three CelebA settings are designed to study the color shift together with label-flipping features, similar to the approach taken with SCM III and MNIST III. The remaining three CelebA
Figure 6: DIP in MNIST III: target accuracy upper bound obtained via Corollary 1 vs. actual target accuracy in region \(\mathcal{A}\). We define region \(\mathcal{A}\) as \(\mathcal{A}=Q_{\alpha}\coloneqq\{X\in\mathcal{X}\mid\text{CIP predicted probability }\geq q_{\alpha}\}\), where \(q_{\alpha}\) is the threshold such that \((1-\alpha)\times 100\%\) of target covariates have a CIP predicted probability greater than \(q_{\alpha}\). The larger \(\alpha\) is, the more confident CIP is about the prediction on the covariate samples in \(Q_{\alpha}\). The red line indicates \(y=x\).
settings are developed to examine color shift in conjunction with label shift, similar to the approach taken with SCM II and MNIST II. For each setting, we construct three source domains and one target domain, each consisting of 20K images randomly sampled from CelebA. The last source domain is used as the single source domain for DA algorithms that rely on a single source domain. To predict the labels, we use the same CNN model architecture across all problem settings (see Appendix D.2 for more details).
#### 5.3.1 Color shift with label-flipping features
The settings of CelebA I, CelebA II, and CelebA III are presented in Table 3. In these settings, we manipulate two features, Color_Balance and Mouth_Slightly_Open, to generate distribution shift across domains. The Color_Balance feature is a synthetic attribute, which takes a value of 1 for a full-color image and 0 for a black-and-white image. The Mouth_Slightly_Open feature is an original binary attribute annotated in the CelebA dataset, with the value 1 if the mouth in the image is slightly open, and 0 otherwise. We vary these two features so that they are conditionally independent given the label Smiling, and
Figure 8: Sample images from CelebA III. In this dataset, we synthetically create a feature Color_Balance, which takes 1 for a full-color image and 0 for a black-and-white image. We also consider the original attribute Mouth_Slightly_Open, which takes 1 if the mouth in the image is slightly open, and 0 otherwise. The correlations between these two features and the label Smiling are varied across domains, as shown in Table 3.
Figure 7: Sample images from MNIST III target domain within region \(Q_{\alpha}\) where CIP disagrees with DIP. The images are shown for varying \(\alpha\in\{0,0.25,0.5,0.75,0.9,0.95\}\). As \(\alpha\) increases, the selected hand-written digits become more distinguishable.
their correlations with this label match the values shown in Table 3. In all three settings, we align the Color\_Balance feature of the last source domain and the target domain most closely, while reversing the correlation between Smiling and Mouth\_Slightly\_Open between these two domains. As a result, the Color\_Balance feature can somewhat generalize to the target domain for label prediction, whereas the Mouth\_Slightly\_Open feature serves as a label-flipping feature between the last source domain and the target domain, as defined in Definition 3. Figure 8 illustrates image examples from each domain in CelebA III. An ideal DA algorithm that utilizes the last source domain should be able to partially make use of the Color\_Balance feature, but should avoid the Mouth\_Slightly\_Open feature when predicting the labels.
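The construction of such correlated binary features can be sketched as follows; the helper names, the balanced-label assumption, and the grayscale conversion are illustrative simplifications of the actual preprocessing.

```python
import numpy as np

def inject_binary_feature(images, labels, rho, apply_feature, rng):
    """Turn a binary feature on/off per image so that its correlation with
    the binary label is approximately `rho`, assuming balanced labels (which
    gives a 0.5 marginal for the feature).  `apply_feature(img, on)` returns
    the image with the feature switched on or off."""
    p_on_given_1 = (1.0 + rho) / 2.0                       # P(feature = 1 | Y = 1)
    p_on = np.where(labels == 1, p_on_given_1, 1.0 - p_on_given_1)
    feature_on = rng.random(len(labels)) < p_on
    out = np.stack([apply_feature(img, on) for img, on in zip(images, feature_on)])
    return out, feature_on

def color_balance(img, on):
    """Full-color image when `on`, otherwise a black-and-white version."""
    if on:
        return img
    gray = img.mean(axis=-1, keepdims=True)                # average over RGB channels
    return np.repeat(gray, 3, axis=-1)
```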
Given that there is no label shift in these settings, we evaluate DA methods without the importance-weighted label shift correction (IW) step, and present the results in Table 4. We observe that in CelebA I, where the correlations between Smiling and both the Color\_Balance and Mouth\_Slightly\_Open features vary significantly across domains, JointDIP achieves the best performance. In CelebA II and CelebA III, where we reduce the variation in correlations between Smiling and one of the features, JointDIP continues to demonstrate comparable accuracy. GroupDRO also shows good performance, likely due
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{CelebA I} & \multicolumn{2}{c|}{CelebA II} & \multicolumn{2}{c}{CelebA III} \\ \hline DA Algorithm & src\_acc & tar\_acc & src\_acc & tar\_acc & src\_acc & tar\_acc \\ \hline ERM & 97.0\(\pm\)0.2 & 77.7\(\pm\)1.2 & 94.4\(\pm\)0.4 & **90.3\(\pm\)0.9** & 97.0\(\pm\)0.2 & 77.7\(\pm\)1.2 \\ ERM-Pool & 93.3\(\pm\)0.3 & 77.2\(\pm\)1.4 & 92.8\(\pm\)0.4 & 83.6\(\pm\)1.0 & 93.1\(\pm\)0.2 & 89.6\(\pm\)0.4 \\ DIP & 96.2\(\pm\)0.4 & **82.0\(\pm\)2.2** & 93.1\(\pm\)0.3 & **90.6\(\pm\)1.0** & 96.2\(\pm\)0.4 & 82.0\(\pm\)2.2 \\ DIP-Pool & 92.3\(\pm\)0.5 & 80.4\(\pm\)1.3 & 90.9\(\pm\)0.4 & 85.5\(\pm\)0.6 & 91.6\(\pm\)0.3 & **90.4\(\pm\)0.4** \\ CIP & 89.1\(\pm\)0.3 & 76.6\(\pm\)1.4 & 92.7\(\pm\)0.6 & 83.7\(\pm\)1.4 & 93.0\(\pm\)0.4 & **89.7\(\pm\)0.7** \\ JointDIP & 93.0\(\pm\)0.8 & **87.3\(\pm\)0.6** & 93.1\(\pm\)0.3 & **90.4\(\pm\)1.2** & 91.4\(\pm\)1.1 & 88.7\(\pm\)1.0 \\ IRM & 90.5\(\pm\)0.7 & 76.3\(\pm\)3.2 & 92.6\(\pm\)0.5 & 83.6\(\pm\)1.4 & 90.3\(\pm\)0.9 & 88.8\(\pm\)1.3 \\ V-REx & 86.9\(\pm\)1.2 & **83.3\(\pm\)1.2** & 81.8\(\pm\)0.2 & 85.1\(\pm\)0.7 & 91.8\(\pm\)0.3 & **91.1\(\pm\)0.8** \\ groupDRO & 91.7\(\pm\)1.0 & **84.3\(\pm\)1.0** & 91.5\(\pm\)0.5 & **86.4\(\pm\)1.5** & 92.4\(\pm\)0.4 & **91.6\(\pm\)0.5** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Source and target accuracy in CelebA I, II, III with color shift and label-flipping features. For each setting of the CelebA, DA algorithms are run on 10 different datasets, each generated using 10 different random seeds. The top four methods are highlighted in bold. The best method is colored in red.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{\(\rho\)(Smiling, Color\_Balance)} & \multicolumn{3}{c}{\(\rho\)(Smiling, Mouth\_Slightly\_Open)} \\ \hline Domain & CelebA I & CelebA II & CelebA III & CelebA I & CelebA II & CelebA III \\ \hline Source Domain 1 & -0.8 & -0.8 & 0.2 & -0.9 & -0.5 & -0.9 \\ Source Domain 2 & -0.6 & -0.6 & 0.4 & 0.9 & 0.5 & 0.9 \\ Source Domain 3 & 0.6 & 0.6 & 0.6 & 0.9 & 0.5 & 0.9 \\ Target Domain & 0.8 & 0.8 & 0.8 & -0.9 & -0.5 & -0.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Settings of CelebA I, II, III under color shift with label-flipping features. Smiling is the label attribute for prediction. We vary the Color_Balance and the Mouth_Slightly_Open features across domains. The correlation between Color_Balance and Smiling remains relatively stable between the last source domain and the target domain, while Mouth_Slightly_Open serves as the label-flipping feature.
to the substantial shift across domains. Overall, the results show that JointDIP maintains strong and robust performance in the presence of label-flipping features.
#### 5.3.2 Color shift with label shift
The settings of CelebA IV, CelebA V, and CelebA VI are shown in Table 5. In these settings, we manipulate the Color_Balance feature and additionally introduce interventions on the distribution of Smiling to create label shift between the source and target domains. Specifically, the correlation between Smiling and the Color_Balance feature remains consistent with Table 3, but we no longer vary the Mouth_Slightly_Open feature, thereby eliminating label-flipping features between the source and target domains. Instead, we change the label distribution across domains: the distribution of Smiling is balanced (50% each) in all three source domains, but becomes unbalanced (either 75% and 25%, or 25% and 75%) in the target domain. We evaluate various DA methods, including those that incorporate the importance-weighted label shift correction (IW) step, as shown in Table 6. Given the absence of label-flipping features in these settings, we do not evaluate JointDIP. We observe that IW-DIP outperforms all other methods. Note that ERM also achieves a high accuracy and outperforms ERM-Pool, indicating that training on all source domains may not be beneficial when substantial distribution shifts, in particular the label shift, exist across domains. Overall, our experimental results highlight the importance of the label shift correction (IW) step in the presence of substantial label shift.
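For reference, a minimal sketch of a confusion-matrix based importance weight estimator in the spirit of Lipton et al. (2018) is given below; the exact IW step used by IW-ERM, IW-CIP, and IW-DIP in our experiments may be implemented differently.

```python
import numpy as np

def estimate_label_shift_weights(src_pred, src_labels, tgt_pred, n_classes):
    """Estimate w_y = P_target(Y = y) / P_source(Y = y) from hard predictions
    of a fixed classifier, via the moment matching equation C w = mu."""
    # C[i, j] = empirical P_source(prediction = i, Y = j)
    C = np.zeros((n_classes, n_classes))
    for p, y in zip(src_pred, src_labels):
        C[p, y] += 1.0
    C /= len(src_labels)
    # mu[i] = empirical P_target(prediction = i)
    mu = np.bincount(tgt_pred, minlength=n_classes) / len(tgt_pred)
    w = np.linalg.solve(C, mu)
    return np.clip(w, 0.0, None)      # importance weights are nonnegative
```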
### Camelyon17 from WILDS
In this subsection, we conduct numerical experiments on Camelyon17 to demonstrate the effectiveness of JointDIP on the real data. The Camelyon17 image dataset is one of the WILDS benchmarks (Koh et al., 2021; Sagawa et al., 2021) for domain adaptation, which comprises whole-slide images (WSIs) of breast cancer metastases in lymph node sections from five hospitals. The task involves predicting whether a \(96\times 96\) image patch extracted from a WSI contains any tumor pixels within its central \(32\times 32\) area. The labels are obtained through manual pixel-level annotations of each WSI. Among the five different hospitals (domains) from which the WSIs are collected, the first three hospitals are treated
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{\(\rho\)(Smiling, Color\_Balance)} & \multicolumn{3}{c}{\(P\)(Smiling = 1)} \\ \hline Domain & CelebA IV & CelebA V & CelebA VI & CelebA IV & CelebA V & CelebA VI \\ \hline Source Domain 1 & -0.8 & -0.8 & 0.2 & 0.5 & 0.5 & 0.5 \\ Source Domain 2 & -0.6 & -0.6 & 0.4 & 0.5 & 0.5 & 0.5 \\ Source Domain 3 & 0.6 & 0.6 & 0.6 & 0.5 & 0.5 & 0.5 \\ Target Domain & 0.8 & 0.8 & 0.8 & 0.75 & 0.25 & 0.75 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Settings of CelebA IV, V, VI under color shift and label shift. Smiling is the label attribute for prediction. We vary the Color_Balance feature and the Smiling label distribution across domains. The correlation between Smiling and the Color_Balance feature remains consistent with Table 3. The distribution of Smiling is balanced (50% each) in all three source domains, but becomes unbalanced (either 75% and 25%, or 25% and 75%) in the target domain.
as source domains (with 302.4K patches) and the remaining two serve as validation domain (with 34.9K patches) and target domain (with 85K patches), respectively. In addition to these labeled examples, WILDS also provides additional unlabeled WSIs from the same five hospitals and patches extracted from them, including 600K unlabeled patches from the target domain. See (Koh et al., 2021; Sagawa et al., 2021) for a detailed description of the Camelyon17 dataset.
Although the 600K unlabeled target patches are collected from the same hospital as the 85K labeled target patches, there is no overlap in the WSIs from which these patches are extracted (Sagawa et al., 2021). Furthermore, the label distribution of the unlabeled target patches is heavily skewed towards negative, while the labeled target patches are sampled in a class-balanced manner (Sagawa et al., 2021). Consequently, this could create distribution shifts between the "unlabeled target data" and the "labeled target data" in the Camelyon17 dataset. In particular, this can lead DA algorithms which utilize target covariate information (e.g. DIP or its variants) to identify different latent features depending on the source of target image patches used by these algorithms. In the WILDS leaderboard, methods that leverage data from the target domain utilize these 600K unlabeled patches, rather than treating the 85K labeled target patches as if they were unlabeled. However, if a distribution shift exists between the "unlabeled target data" and the "labeled target data", there is no guarantee that our proposed DA algorithms will improve target performance. Therefore, our experiments examine two approaches to utilize target data for DA methods: (1) using the 600K unlabeled target patches, and (2) leveraging the 85K labeled target patches without label access.
We use the standard DenseNet-121 model architecture (Huang et al., 2017) and follow the training protocol described in Koh et al. (2021) and Sagawa et al. (2021). We implement CIP, DIP-Pool, and JointDIP-Pool, where CIP only uses labeled source data, and DIP-Pool and JointDIP-Pool leverage target covariates (image patches) from the target domain. We find that the pooled versions of DIP and JointDIP increase the reliability of the models by
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{CelebA IV} & \multicolumn{2}{c|}{CelebA V} & \multicolumn{2}{c}{CelebA VI} \\ \hline DA Algorithm & src\_acc & tar\_acc & src\_acc & tar\_acc & src\_acc & tar\_acc \\ \hline ERM & 94.4\(\pm\)0.6 & **94.1\(\pm\)0.5** & 94.4\(\pm\)0.6 & **93.9\(\pm\)0.9** & 94.5\(\pm\)0.6 & **94.2\(\pm\)0.5** \\ ERM-Pool & 93.6\(\pm\)0.2 & 88.7\(\pm\)0.9 & 93.6\(\pm\)0.2 & 89.6\(\pm\)0.9 & 93.8\(\pm\)0.3 & 93.1\(\pm\)0.5 \\ DIP & 94.4\(\pm\)0.4 & **93.5\(\pm\)1.2** & 94.4\(\pm\)0.5 & **94.4\(\pm\)0.9** & 94.4\(\pm\)0.4 & 93.4\(\pm\)1.2 \\ DIP-Pool & 93.7\(\pm\)0.6 & 88.0\(\pm\)0.6 & 93.6\(\pm\)0.5 & 89.5\(\pm\)1.2 & 93.7\(\pm\)0.5 & 92.9\(\pm\)0.9 \\ CIP & 93.4\(\pm\)0.4 & 88.3\(\pm\)1.0 & 93.4\(\pm\)0.4 & 89.7\(\pm\)1.1 & 93.6\(\pm\)0.2 & 93.3\(\pm\)0.5 \\ IW-ERM & 92.2\(\pm\)0.9 & 90.1\(\pm\)1.0 & 93.2\(\pm\)0.4 & 90.9\(\pm\)0.8 & 92.1\(\pm\)0.7 & **94.4\(\pm\)0.2** \\ IW-CIP & 91.8\(\pm\)0.9 & **90.3\(\pm\)0.5** & 92.7\(\pm\)1.1 & 90.6\(\pm\)1.1 & 91.7\(\pm\)1.3 & **94.3\(\pm\)0.5** \\ IW-DIP & 93.0\(\pm\)0.7 & **94.8\(\pm\)0.3** & 93.5\(\pm\)0.8 & **95.1\(\pm\)0.5** & 92.4\(\pm\)1.4 & **94.7\(\pm\)0.4** \\ IRM & 93.8\(\pm\)0.3 & 88.7\(\pm\)0.9 & 93.5\(\pm\)0.4 & 89.6\(\pm\)0.9 & 93.7\(\pm\)0.2 & 93.1\(\pm\)0.6 \\ V-REx & 92.5\(\pm\)0.2 & 89.5\(\pm\)0.9 & 92.6\(\pm\)0.3 & 90.2\(\pm\)1.2 & 93.8\(\pm\)0.2 & 93.4\(\pm\)0.6 \\ groupDRO & 93.3\(\pm\)0.3 & 90.0\(\pm\)0.8 & 93.3\(\pm\)0.3 & **91.1\(\pm\)1.0** & 93.7\(\pm\)0.3 & 93.0\(\pm\)0.6 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Source and target accuracy in CelebA IV, V, VI under color shift and label shift. For each setting of the CelebA, DA algorithms are run on 10 different datasets, each generated using 10 different random seeds. The top four methods are highlighted in bold. The best method is colored in red.
utilizing more samples from multiple source domains, compared to their single-source versions. Therefore, we choose to implement the pooled versions of DIP and JointDIP in our experiments. Similarly, DANN (Ganin et al., 2016) and CORAL (Sun and Saenko, 2016), which were initially developed for the single-source setting, have been extended in (Sagawa et al., 2021) to take advantage of multiple source domains. Following the standard submission guidelines in WILDS (Koh et al., 2021; Sagawa et al., 2021), we do not use any data augmentation in CIP, but allow DIP-Pool and JointDIP-Pool to utilize the same color augmentation as described in Koh et al. (2021); Sagawa et al. (2021). Hyperparameters are chosen via a grid search based on the validation accuracy, where we allow a larger range of grids compared to previous experiments (see Appendix D.3 for further details).
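A minimal sketch of a JointDIP-style matching penalty is given below; it assumes PyTorch, a frozen CIP network providing the CIC features, and a simple mean-and-covariance discrepancy as the distributional distance, any of which may differ from the implementation used in our experiments.

```python
import torch

def joint_dip_penalty(phi_src, phi_tgt, cic_src, cic_tgt):
    """Concatenate learned DIP features with (frozen) CIC features from a
    pretrained CIP network, then match the joint feature distribution between
    source and target with a mean-and-covariance (CORAL-like) discrepancy."""
    src = torch.cat([phi_src, cic_src.detach()], dim=1)   # CIC features stay fixed
    tgt = torch.cat([phi_tgt, cic_tgt.detach()], dim=1)
    mean_term = (src.mean(dim=0) - tgt.mean(dim=0)).pow(2).sum()
    cov_term = (torch.cov(src.T) - torch.cov(tgt.T)).pow(2).sum()
    return mean_term + cov_term

# Illustrative use inside a training step:
#   loss = cross_entropy(classifier(phi_src), y_src) \
#          + lam * joint_dip_penalty(phi_src, phi_tgt, cic_src, cic_tgt)
```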
The results are presented in Table 7, where we compare CIP with other DA algorithms that enforce different invariances across all source domains, such as IRM (Arjovsky et al., 2019) and Fish (Shi et al., 2021); and we also compare DIP-Pool and JointDIP-Pool with other domain-invariant algorithms using different distributional distances, like DANN (Ganin et al., 2016) and CORAL (Sun and Saenko, 2016). We observe that CIP has a similar performance compared to ERM-Pool, albeit with a slightly larger variance in test accuracy. When covariates of 600K unlabeled target patches are utilized in the DIP matching penalty, DIP-Pool results in an accuracy exceeding 80%. Remarkably, JointDIP-Pool further improves this accuracy by 1.5% while substantially reducing the variance at the same time. In scenarios where covariates of 85K labeled target patches are used in the DIP matching penalty, the accuracy of both methods increases significantly, with JointDIP-Pool improving DIP-Pool by over 3%. This result indicates that jointly matching the covariates with CICs between source and target can help DIP-based algorithms to identify the appropriate features to match across domains. In addition, when comparing the accuracy of two different
\begin{table}
\begin{tabular}{c|c|c c} \hline \hline DA Algorithm & Target covariates & val accuracy & test accuracy \\ \hline IRM\({}^{*}\) & - & 86.2\(\pm\)1.4 & 64.2\(\pm\)8.1 \\ groupDRO\({}^{*}\) & - & 85.5\(\pm\)2.2 & 68.4\(\pm\)7.3 \\ ERM-Pool\({}^{*}\) & - & 84.9\(\pm\)3.1 & 70.3\(\pm\)6.4 \\ CIP & - & 87.0\(\pm\)1.2 & 71.1\(\pm\)12.5 \\ Fish\({}^{*}\) & - & 83.9\(\pm\)1.2 & **74.7\(\pm\)7.1** \\ \hline DANN\({}^{*}\) & 600K unlabeled target patches & 86.9\(\pm\)2.2 & 68.4\(\pm\)9.2 \\ CORAL\({}^{*}\) & & 90.4\(\pm\)0.9 & 77.9\(\pm\)6.6 \\ DIP-Pool & & 91.6\(\pm\)0.8 & 81.2\(\pm\)8.2 \\ JointDIP-Pool & & 91.5\(\pm\)0.6 & **82.7\(\pm\)5.2** \\ \hline DIP-Pool & 85K labeled target patches & 91.0\(\pm\)0.5 & 88.7\(\pm\)5.7 \\ JointDIP-Pool & (target labels are not used) & 91.7\(\pm\)0.6 & **91.9\(\pm\)3.1** \\ \hline \hline \end{tabular}
\end{table}
Table 7: Accuracy of DA algorithms on Camelyon17 dataset. The first five DA methods (IRM, GroupDRO, ERM-Pool, CIP, and Fish) only use labeled data from multiple source domains, whereas DANN, CORAL, DIP-Pool, and JointDIP-Pool additionally use target image patches from the target domain, but they do not have access to the target labels. The algorithms are run over 10 trials. Results for algorithms marked with \({}^{*}\) are obtained from the WILDS benchmark.
approaches of using target covariates in the DIP matching penalty, our results demonstrate that distribution shifts exist between the unlabeled and labeled target data in Camelyon17, and leveraging the true target covariates from the 85K labeled target patches proves more effective than using the target covariates from the 600K unlabeled target patches.
## 6 Related work
As a subfield of transfer learning, _domain adaptation_ (DA), also known as transductive transfer learning (Redko et al., 2020), aims to develop effective algorithms when distribution shifts exist across training data (source domains) and test data (target domain). In DA, it is usually assumed that we have access to some unlabeled target data, and we aim to build a model specifically for the target domain. Another related term, _domain generalization_ (DG), or out-of-distribution generalization (OOD) (Ben-Tal et al., 2009), instead assumes that unlabeled target data is unobtainable, and seeks a model that generalizes appropriately to all possible test domains. In this paper, we do not draw a sharp distinction between these two terms, and simply consider the DA setting, i.e., we have access to an unlabeled target dataset, and our goal is to find a classifier that performs well on the target domain.
One line of work in DA focuses on relating source and target domains by learning a common feature representation across the domains. Pan et al. (2008, 2010) first proposed to find transferable components across domains in a reproducing kernel Hilbert space, such that data distributions in different domains are close. Similar designs of creating intermediate representations across domains were investigated in (Gopalan et al., 2011; Gong et al., 2012), and later the formal idea of matching probability distributions and extracting invariant information was introduced in (Baktashmotlagh et al., 2013). They named their approach domain invariant projection (DIP), which projects data to a low-dimensional latent space where source and target covariates distributions are matched. This DIP type of approach was then widely employed in research on neural networks for DA (Sun and Saenko, 2016; Ganin et al., 2016; Hoffman et al., 2018). Specifically, Ganin et al. (2016) first proposed to use Generative Adversarial Network (GAN) based distributional distances to learn feature representations. Since then, subsequent research has emerged that applies different neural-network-based distances (Long et al., 2017; Courty et al., 2017; Long et al., 2018; Hoffman et al., 2018; Peng et al., 2019). While these studies have effectively demonstrated the empirical success of their methodologies across various image and text datasets, there is still a lack of understanding about the specific conditions under which these methods can achieve good target performance.
From a theoretical point of view, the rationale for DIP has been rigorously established by Ben-David et al. (2006, 2010), where they prove a target risk bound via Vapnik-Chervonenkis (VC) theory. Further studies have analyzed the source and target risk difference using different divergence measures (Mansour et al., 2009; Cortes et al., 2010; Cortes and Mohri, 2011, 2014; Hoffman et al., 2018); see the survey by Redko et al. (2020) for a complete review. In all these works, the target risk bound typically includes three components: the source risk of the classifier, a divergence term measuring invariance of the representation, and an optimal joint error term. The DIP objective can then be viewed as minimizing the sum of the first two terms in order to achieve a low target risk. However,
Johansson et al. (2019); Zhao et al. (2019) argued that DIP can completely fail in certain scenarios because the joint error term is not observable and cannot be controlled by DIP. For instance, there can be label-flipping features that achieve perfect source accuracy and invariance of representation, but result in poor performance on the target domain. Addressing how to avoid such cases in DIP has not been adequately explored.
The second class of DA methods arises from exploring invariance solely from multiple labeled source domains, without using target covariates. One such category of invariance is known as conditional invariance, which aims to discover feature representations \(\phi(X)\) that are invariant conditional on the label \(Y\) across source distributions. It was first introduced in Gong et al. (2016), where the authors proposed to find conditional transferable components after proper location-scale transformations. Later Heinze-Deml and Meinshausen (2017) applied a related approach by classifying features into "core" ones, which are conditionally invariant across domains, and "style" ones, whose distribution may shift significantly. They sought to construct a classifier built upon only the core features by imposing the conditional invariant penalty (CIP). Chen and Buhlmann (2020) further developed a theoretical framework under structural equation models to analyze the effect of CIP when the data generation process is anticausal. The emergence of conditionally invariant features, or CICs, is a natural consequence under anticausal data generation where the covariates, which are descendants of the labels, remain unperturbed. Consequently, most tasks tackled by DA methods utilizing conditional invariance typically involve anticausal problems.
It is also worth noting that under conditional invariance, it is convenient to study the label shift problem, where the marginal distributions of the label \(Y\) are shifted across domains. This shift in label distribution is common in many scenarios; for example, the distribution of ArXiv paper categories can be influenced by changes in research topic trends over time (Wu et al., 2021). Lipton et al. (2018) first proposed the label shift correction algorithm, which estimates the amount of label shift by using the conditional invariance of the covariates and the moment matching equation. To ensure numerical stability, several variants of the algorithm have been further introduced. For instance, Azizzadenesheli et al. (2019) formulated an \(\ell_{2}\)-norm regularized least squares optimization problem, while Tachet des Combes et al. (2020) considered a constrained least squares problem and the generalized label shift assumption. Garg et al. (2020) introduced a maximum likelihood estimation approach and provided a unified view of previous label shift correction algorithms. More recently, Chen et al. (2022) considered a distributional shift named Sparse Joint Shift (SJS), which allows for both labels and a few covariates to shift; however, the generalized label shift considered in Tachet des Combes et al. (2020) and this paper permits distribution shifts of more general latent feature representations.
Besides conditional invariance, other types of invariance have also attracted increasing attention. For example, Arjovsky et al. (2019) proposed the Invariant Risk Minimization (IRM) method to identify causal features via enforcing the invariance of the label \(Y\) given the features \(\phi(X)\). Following IRM, analogous invariances have been explored, such as risk invariance enforced by the variance of risks across domains (Krueger et al., 2021; Xie et al., 2020) and gradient invariance (Koyama and Yamaguchi, 2020; Shi et al., 2021). While these approaches were initially motivated to identify invariant features in causal data models, they have also been applied to anticausal problems. However, as pointed out by
Rosenfeld et al. (2020) and Kamath et al. (2021), IRM may fail to capture the correct invariance, especially when the model is non-linear. Counterfactual invariance is another type of invariance introduced in (Veitch et al., 2021; Jiang and Veitch, 2022), where the aim is to seek representations that are counterfactually invariant to a spurious factor of variation. Wang and Veitch (2022) show that the formulation of learning counterfactually invariant representations is closely related to different formulations of invariant representation learning algorithms, depending on the underlying causal structure of the data. Aside from the robustness achieved by enforcing specific invariance, a more general framework is distributionally robust optimization (DRO) (Ben-Tal et al., 2013; Duchi et al., 2016). For instance, groupDRO (Sagawa et al., 2019) aims to improve worst-group performance using an online algorithm to update group weights. It should be noted that related studies of invariance and robustness have also appeared in works from a causal inference point of view (Peters et al., 2016; Meinshausen, 2018; Rothenhausler et al., 2018; Magliacane et al., 2018).
Other DA algorithms have also been introduced based on different assumptions relating source and target data, which do not fall into the above classes of methods relying on invariance. For instance, self training, which originated from the semi-supervised learning literature (Chapelle et al., 2009), recursively adapts a classifier to fit pseudolabels predicted by a previous model using unlabeled data in a new domain (Amini and Gallinari, 2003). Theoretical properties of self training have been studied recently in (Kumar et al., 2020; Wei et al., 2020). Data augmentation serves as another beneficial way of obtaining better performance (Simard et al., 1998; Zhang et al., 2017; Yao et al., 2022), especially when possible perturbations across domains are well understood. In addition, a different perspective on DA stems from meta-learning (Thrun and Pratt, 2012; Finn et al., 2017). This "learning-to-learn" approach aims to distill knowledge of multiple learning episodes to improve future learning performance.
## 7 Discussion
In this paper, we highlight three prominent roles of CICs in DA. First, we propose the IW-CIP algorithm based on CICs, which can solve DA problems beyond simple covariate shift and label shift with theoretical guarantees. Specifically, we establish an explicit target risk bound for IW-CIP and further provide fine-grained analysis for each term in the bound. Second, we demonstrate the advantage of using CICs to measure the performance of other DA algorithms. For example, a conditionally invariant classifier built upon CICs can be applied as an approximate proxy for target labels to refute DA algorithms with poor target performance. Finally, to solve the label-flipping issue of DIP, we introduce the JointDIP algorithm which jointly matches the distribution of DIP features and CICs. Both our theoretical results and numerical experiments support the benefits of using CICs in DA.
While CICs across multiple source domains can be beneficial in DA, in practice it may not always be advantageous to use all available source domains for identifying CICs. In particular, if not all source domains are related to the target domain, or if there are no features invariant across all source domains, it would be more efficient to select a subset of source domains for learning CICs. However, choosing the source domains that are related to the target domain or share common features is difficult without access to target
labels. In such settings, one feasible approach is to learn invariant features across possible combinations of source domains and refer to domain experts to investigate those features, which would help determine the selection of source domains. We leave the development of a framework for best source domain selection for future research.
There are other promising directions to pursue following our work. While we provide bounds in Section 4.1 to detect the failure of DA algorithms, these bounds may not be tight and can potentially be improved based on other structural assumptions on the data generation model. Moreover, although in this paper we mainly focus on combining CICs with other DA algorithms, it is possible that other forms of invariance can also be applied to improve DA algorithms. For instance, can we use features obtained by IRM (Arjovsky et al., 2019), V-REx (Krueger et al., 2021), or other domain generalization algorithms to aid and enhance existing DA methods such as DIP with target risk guarantees? Finally, this paper mainly considers the unsupervised domain adaptation scenario where target labels are unavailable; however, datasets such as Camelyon17 from WILDS (Koh et al., 2021) may have a small portion of target labels available. Developing DA algorithms with target risk guarantees in such settings is therefore of practical interest.
## Appendix A Illustrative examples of general anticausal model
In this section, we provide several examples for the general anticausal model and demonstrate how the conditions in Theorem 3(b) and Theorem 3(c) are satisfied under these models.
### Example with CICs and label-flipping features
We present an example of the general anticausal model, as defined in Definition 4, which incorporates both CICs and label-flipping features.
**Example 1** Let \(Y\in\mathcal{Y}=\{1,2\}\) be a binary random variable that is balanced in both domains, i.e., \(p_{1}^{(m)}=p_{2}^{(m)}=p_{1}^{(\mathfrak{T})}=p_{2}^{(\mathfrak{T})}=1/2\). Let \(X\in\mathbb{R}^{3}\) be a three-dimensional covariate vector with mechanism functions \(f^{(1)}\) and \(f^{(\mathfrak{T})}\) given by
\[f^{(1)}(1) =\begin{pmatrix}-1\\ -1\\ -1\end{pmatrix},\;f^{(1)}(2)=\begin{pmatrix}1\\ 1\\ 1\end{pmatrix};\] \[f^{(\mathfrak{T})}(1) =\begin{pmatrix}-1\\ 0\\ 1\end{pmatrix},\;f^{(\mathfrak{T})}(2)=\begin{pmatrix}1\\ 2\\ -1\end{pmatrix},\]
and where the noise terms \(\epsilon^{(1)}\) and \(\epsilon^{(\mathfrak{T})}\) are drawn from a Gaussian distribution \(\mathcal{P}_{\epsilon}=\mathcal{N}(0,\mathbb{I}_{3})\). Examining each coordinate, we can see that the first coordinate \(X_{[1]}\) maintains its conditional invariance across source and target, and thus it is a CIC. The second coordinate \(X_{[2]}\) is perturbed, but the correlations \(\rho(X_{[2]}^{(1)},Y^{(1)})=\rho(X_{[2]}^{(\mathfrak{T})},Y^{(\mathfrak{T})})= \sqrt{2}/2\) remain unchanged. The third coordinate \(X_{[3]}\) is also perturbed, and by Definition 3, it is a label-flipping feature, as \(\rho(X_{[3]}^{(1)},Y^{(1)})=-\rho(X_{[3]}^{(\mathfrak{T})},Y^{(\mathfrak{T})}) =\sqrt{2}/2\).
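A quick Monte Carlo check of these correlations is sketched below (\(\sqrt{2}/2\approx 0.707\)); the sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
y = rng.integers(1, 3, size=n)                       # balanced labels in {1, 2}

f_src = {1: np.array([-1., -1., -1.]), 2: np.array([1., 1., 1.])}
f_tgt = {1: np.array([-1., 0., 1.]),   2: np.array([1., 2., -1.])}

def sample(f):
    return np.stack([f[label] for label in y]) + rng.normal(size=(n, 3))

x_src, x_tgt = sample(f_src), sample(f_tgt)
corr = lambda a, b: np.corrcoef(a, b)[0, 1]

print(corr(x_src[:, 1], y), corr(x_tgt[:, 1], y))    # both close to +0.707
print(corr(x_src[:, 2], y), corr(x_tgt[:, 2], y))    # +0.707 vs -0.707 (label-flipping)
```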
### Demonstration of condition in Theorem 3(b)
The condition (35) in Theorem 3(b) requires that, for a linear conditionally invariant feature mapping \(\phi_{\text{inv}}\), the CICs must have different conditional means for different labels. In other words, the first moments of the CICs should suffice to distinguish between labels. We show that under the anticausal model given in the example above, this condition is indeed satisfied.
**Example 1 Cont.** In Example 1, one possible linear conditionally invariant feature mapping is \(\phi_{\text{inv}}(x)=x_{[1]}\), i.e., it projects to the first coordinate of the covariates. Simple algebra shows that
\[\mathbb{E}\left[\phi_{\text{inv}}(X^{(1)})\ \Big{|}\ Y^{(1)}=1\right]=-1,\text{ and }\mathbb{E}\left[\phi_{\text{inv}}(X^{(1)})\ \Big{|}\ Y^{(1)}=2\right]=1,\]
which confirms the condition (35).
### Demonstration of condition in Theorem 3(c)
While the condition (35) for linear \(\phi_{\rm inv}\) is relatively straightforward, verifying the full-rank condition of the matrix \(C_{\phi_{\rm inv}}(a)\) for general \(\phi_{\rm inv}\) in Theorem 3(c) can be more challenging due to the computation of higher moments. Here we provide several examples where a nonlinear conditionally invariant feature mapping \(\phi_{\rm inv}\) meets the full rank condition.
**Example 1 Cont.** Consider the general anticausal model introduced in Example 1 (\(L=2\)) and let
\[\begin{split}&\phi_{\rm inv}^{k}(x)=x_{[1]}^{k},\ \ k=0,1,2,\ldots,\ \text{and}\\ &\phi_{\rm inv}^{e}(x)=\exp(x_{[1]}).\end{split} \tag{37}\]
Since \(X_{[1]}\) is a CIC, we know that \(\phi_{\rm inv}^{k}(X_{[1]})\) and \(\phi_{\rm inv}^{e}(X_{[1]})\) are also CICs. Take \(a=1\) in the definition of the matrix \(C_{\phi_{\rm inv}}(a)\). From the standard calculation of moments for Gaussian random variables, we have
\[C_{\phi_{\rm inv}^{k}}(a)=\begin{pmatrix}1&1\\ \mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[(u-1)^{k}\right]&\mathbb{E}_{u\sim \mathcal{N}(0,1)}\left[(u+1)^{k}\right]\end{pmatrix},\ \ C_{\phi_{\rm inv}^{e}}(a)=\begin{pmatrix}1&1\\ e^{-\frac{1}{2}}&e^{\frac{3}{2}}\end{pmatrix}.\]
We conclude that \(C_{\phi_{\rm inv}^{k}}(a)\) is full rank if and only if \(k\) is odd, and \(C_{\phi_{\rm inv}^{e}}(a)\) is a full rank matrix. Therefore, \(\phi_{\rm inv}^{2k+1}(x)=x_{[1]}^{2k+1}(k=0,1,2,\ldots)\) and \(\phi_{\rm inv}^{e}(x)=\exp(x_{[1]})\) satisfy the full rank condition.
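These rank statements are easy to verify numerically; a minimal sketch is shown below, computing the Gaussian moments exactly via the binomial expansion (the helper names are illustrative).

```python
import numpy as np
from math import comb

def gaussian_moment(mean, k):
    """E[(Z + mean)^k] for Z ~ N(0, 1), via the binomial expansion and the
    central moments E[Z^j] = (j - 1)!! for even j (zero for odd j)."""
    total = 0.0
    for j in range(0, k + 1, 2):                              # only even j contribute
        double_fact = float(np.prod(np.arange(j - 1, 0, -2))) if j > 0 else 1.0
        total += comb(k, j) * double_fact * mean ** (k - j)
    return total

def C_phi_k(k):
    """C_{phi_inv^k}(a) for Example 1 with a = 1: the second row holds the k-th
    moments of the CIC X_[1] | Y, whose conditional law is N(-1, 1) or N(+1, 1)."""
    return np.array([[1.0, 1.0],
                     [gaussian_moment(-1.0, k), gaussian_moment(1.0, k)]])

for k in range(1, 6):
    print(k, np.linalg.matrix_rank(C_phi_k(k)))               # rank 2 exactly when k is odd

# The exponential feature map yields entries e^{-1/2} and e^{3/2}, hence rank 2.
C_phi_e = np.array([[1.0, 1.0], [np.exp(-0.5), np.exp(1.5)]])
print("exp:", np.linalg.matrix_rank(C_phi_e))
```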
**Example 2** Consider the general anticausal model from Assumption 3 where \(Y\in\mathcal{Y}=\{1,2,3\}\) with \(L=3\). Let \(X\in\mathbb{R}^{2}\) be a two-dimensional covariate vector with mechanism functions \(f^{(1)}\) and \(f^{(\mathfrak{T})}\) defined as follows:
\[f^{(1)}(1) =\begin{pmatrix}1\\ -1\end{pmatrix},\ f^{(1)}(2)=\begin{pmatrix}2\\ 0\end{pmatrix},\ f^{(1)}(3)=\begin{pmatrix}3\\ 1\end{pmatrix};\] \[f^{(\mathfrak{T})}(1) =\begin{pmatrix}1\\ 1\end{pmatrix},\ f^{(\mathfrak{T})}(2)=\begin{pmatrix}2\\ 0\end{pmatrix},\ f^{(\mathfrak{T})}(3)=\begin{pmatrix}3\\ -1\end{pmatrix}.\]
That is, the first coordinate \(X_{[1]}\) is a CIC, while the second coordinate \(X_{[2]}\) is perturbed. Assume that the noise distribution is Gaussian, \(\mathcal{P}_{\epsilon}=\mathcal{N}(0,\mathbb{I}_{2})\). Consider the feature mappings \(\phi_{\rm inv}^{2}\) and \(\phi_{\rm inv}^{e}\) from Eq. (37) as the conditionally invariant feature mappings, and take \(a=1\). Basic algebra shows that
\[C_{\phi_{\rm inv}^{2}}(a)=\begin{pmatrix}1&1&1\\ \mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[(u+1)^{2}\right]&\mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[(u+2)^{2}\right]&\mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[(u+3)^{2}\right]\\ \mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[(u+1)^{4}\right]&\mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[(u+2)^{4}\right]&\mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[(u+3)^{4}\right]\end{pmatrix}=\begin{pmatrix}1&1&1\\ 2&5&10\\ 10&43&138\end{pmatrix},\] \[C_{\phi_{\rm inv}^{e}}(a)=\begin{pmatrix}1&1&1\\ \mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[e^{u+1}\right]&\mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[e^{u+2}\right]&\mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[e^{u+3}\right]\\ \mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[e^{2(u+1)}\right]&\mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[e^{2(u+2)}\right]&\mathbb{E}_{u\sim\mathcal{N}(0,1)}\left[e^{2(u+3)}\right]\end{pmatrix}=\begin{pmatrix}1&1&1\\ e^{\frac{3}{2}}&e^{\frac{5}{2}}&e^{\frac{7}{2}}\\ e^{4}&e^{6}&e^{8}\end{pmatrix},\]
both of which are full rank and therefore satisfy the full rank condition in Theorem 3(c).
**Example 3** Consider the general anticausal model from Assumption 3 with an arbitrary number of classes \(L\geq 2\). Let
\[f^{(m)}(y)=\begin{pmatrix}e_{y}\\ a_{y}^{(m)}\end{pmatrix},f^{(\mathfrak{T})}(y)=\begin{pmatrix}e_{y}\\ a_{y}^{(\mathfrak{T})}\end{pmatrix},\]
where \(e_{i}\in\mathbb{R}^{L}\) denotes the vector whose \(i\)-th element is \(1\) and all other elements are \(0\), and \(a_{y}^{(m)},a_{y}^{(\mathfrak{T})}\in\mathbb{R}^{p-L}\) (\(p>L\)) are arbitrary vectors. That is, the first \(L\) covariates are CICs while the last \(p-L\) coordinates can be arbitrarily perturbed. Let
\[\phi_{\text{inv}}^{k}(X)=\left(X_{[1]}^{k},X_{[2]}^{k},\cdots,X_{[L]}^{k} \right)^{\top},\ \ k=1,2,\ldots.\]
For a fixed vector \(a\in\mathbb{R}^{L}\), verifying that \(C_{\phi_{\text{inv}}^{k}}(a)\) is full rank can be technical, depending on the distribution \(\mathcal{P}_{\epsilon}\). Instead of a rigorous proof, we give _an illustrative explanation_. First, consider \(\mathcal{P}_{\epsilon}=\delta_{0}(\cdot)\), the Dirac distribution centered at \(0\). Then we can compute
\[C_{\phi_{\text{inv}}^{k}}(a)=\begin{pmatrix}1&1&\cdots&1\\ a_{[1]}&a_{[2]}&\cdots&a_{[L]}\\ \vdots&\vdots&\ddots&\vdots\\ a_{[1]}^{L-1}&a_{[2]}^{L-1}&\cdots&a_{[L]}^{L-1}\end{pmatrix},\]
which is a Vandermonde matrix and is full rank as long as we take \(a_{[i]}\neq a_{[j]},i\neq j\). Next we consider a case where \(\mathcal{P}_{\epsilon}\) is a distribution concentrated around \(0\) (e.g. a Gaussian distribution with very small variance). In this case, the matrix will be slightly perturbed but is still full rank by the continuity of high-order moments and matrix determinants.
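For the Dirac case, the full-rank claim follows directly from the Vandermonde determinant formula,

\[\det C_{\phi_{\text{inv}}^{k}}(a)=\prod_{1\leq i<j\leq L}\left(a_{[j]}-a_{[i]}\right),\]

which is nonzero whenever the entries of \(a\) are pairwise distinct.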
## Appendix B Proof of Section 3 and additional results on IW-CIP
In this section we prove our main theorems and propositions given in Section 3, as well as the additional results given in Proposition 5 and Lemma 1. Proposition 5 provides the finite-sample bound for the IW-CIP penalty when it is defined by the squared mean distance, and Lemma 1 establishes the number of source domains required to validate the conditions in Proposition 3 when the perturbations are randomly generated.
### Proof of Theorem 1.A
Let \(w=(w^{(1)},\ldots,w^{(M)})\) where \(w^{(m)}\) is the true importance weight for the \(m\)-th source domain. For any classifier \(h=g\circ\phi\), we have
\[\mathcal{R}^{(\mathfrak{T})}(h)-\overline{\mathcal{R}}(h;w)\] \[=\frac{1}{M}\sum_{m=1}^{M}\sum_{y=1}^{L}\left(\mathbb{P}\left\{h( X^{(\mathfrak{T})})\neq y\ \Big{|}\ Y^{(\mathfrak{T})}=y\right\}-\mathbb{P}\left\{h(X^{(m)})\neq y\ \Big{|}\ Y^{(m)}=y\right\}\right)\mathbb{P}\left\{Y^{(\mathfrak{T})}=y\right\}\]
\[=\frac{1}{M}\sum_{m=1}^{M}\sum_{y=1}^{L}\left(\mathbb{P}\left\{h(X^{(m)})=y\ \Big{|}\ Y^{(m)}=y\right\}-\mathbb{P}\left\{h(X^{(\mathfrak{T})})=y\ \Big{|}\ Y^{(\mathfrak{T})}=y\right\}\right)\mathbb{P}\left\{Y^{(\mathfrak{T})}=y\right\},\]
where the second step applies \(w_{[y]}^{(m)}=\frac{\mathbb{P}\left\{Y^{(\mathfrak{T})}=y\right\}}{\mathbb{P} \left\{Y^{(m)}=y\right\}}\) and the law of total expectation. By definition of \(D_{\mathcal{G}}\left(\cdot,\cdot\right)\), it is then upper bounded by
\[\left|\mathcal{R}^{(\mathfrak{T})}(h)-\overline{\mathcal{R}}(h; w)\right| \leq\frac{1}{M}\sum_{m=1}^{M}\sum_{y=1}^{L}D_{\mathcal{G}}\left( \mathcal{P}_{\phi(X)|Y=y}^{(\mathfrak{T})},\mathcal{P}_{\phi(X)|Y=y}^{(m)} \right)\mathbb{P}\left\{Y^{(\mathfrak{T})}=y\right\}\] \[\leq\max_{\begin{subarray}{c}m=1,\ldots,M,\\ y=1,\ldots,L\end{subarray}}D_{\mathcal{G}}\left(\mathcal{P}_{\phi(X)|Y=y}^{( \mathfrak{T})},\mathcal{P}_{\phi(X)|Y=y}^{(m)}\right)\sum_{m=1}^{M}\frac{1}{M }\sum_{y=1}^{L}\mathbb{P}\left\{Y^{(\mathfrak{T})}=y\right\}\] \[=\max_{\begin{subarray}{c}m=1,\ldots,M,\\ y=1,\ldots,L\end{subarray}}D_{\mathcal{G}}\left(\mathcal{P}_{\phi(X)|Y=y}^{( \mathfrak{T})},\mathcal{P}_{\phi(X)|Y=y}^{(m)}\right)\] \[=\Psi_{\mathcal{G},\phi}, \tag{38}\]
where we use \(\sum_{y=1}^{L}\mathbb{P}\left\{Y^{(\mathfrak{T})}=y\right\}=1\) and \(\sum_{m=1}^{M}1/M=1\) in the second step. Furthermore, we can bound
\[\left|\mathcal{R}^{(m)}(h;w^{(m)})-\mathcal{R}^{(m)}(h;\widehat{w }^{(m)})\right| =\left|\mathbb{E}\left[\left(w_{[Y^{(m)}]}^{(m)}-\widehat{w}_{[Y^ {(m)}]}^{(m)}\right)\cdot\mathbf{1}_{h(X^{(m)})\neq Y^{(m)}}\right]\right|\] \[\leq\|w^{(m)}-\widehat{w}^{(m)}\|_{\infty}.\]
It follows that
\[\left|\overline{\mathcal{R}}(h;w)-\overline{\mathcal{R}}(h; \widehat{w})\right| \leq\frac{1}{M}\sum_{m=1}^{M}\left|\mathcal{R}^{(m)}(h;w^{(m)})- \mathcal{R}^{(m)}(h;\widehat{w}^{(m)})\right|\] \[\leq\max_{m=1,\ldots,M}\|w^{(m)}-\widehat{w}^{(m)}\|_{\infty}. \tag{39}\]
The result now follows from combining Eq. (38) and Eq. (39).
### Proof of Theorem 1.B
Denote the empirical average source risk across \(M\) environments by
\[\widehat{\overline{\mathcal{R}}}(h;w)=\frac{1}{M}\sum_{m=1}^{M}\widehat{\mathcal{R}}^{(m)}(h;w^{(m)}).\]
Then we can write
\[\mathcal{R}^{(\mathfrak{T})}(\widehat{h}_{\text{IW-CIP}})-\mathcal{R}^{(\mathfrak{T})}(h^{\star})\] \[=\underbrace{\mathcal{R}^{(\mathfrak{T})}(\widehat{h}_{\text{IW-CIP}})-\overline{\mathcal{R}}(\widehat{h}_{\text{IW-CIP}};w)}_{(A_{1})}+\underbrace{\overline{\mathcal{R}}(\widehat{h}_{\text{IW-CIP}};w)-\widehat{\overline{\mathcal{R}}}(\widehat{h}_{\text{IW-CIP}};w)}_{(B_{1})}+\underbrace{\widehat{\overline{\mathcal{R}}}(\widehat{h}_{\text{IW-CIP}};w)-\widehat{\overline{\mathcal{R}}}(\widehat{h}_{\text{IW-CIP}};\widehat{w})}_{(C_{1})}\] \[+\underbrace{\widehat{\overline{\mathcal{R}}}(\widehat{h}_{\text{IW-CIP}};\widehat{w})-\widehat{\overline{\mathcal{R}}}(h^{\star};\widehat{w})}_{(D)}+\underbrace{\widehat{\overline{\mathcal{R}}}(h^{\star};\widehat{w})-\widehat{\overline{\mathcal{R}}}(h^{\star};w)}_{(C_{2})}+\underbrace{\widehat{\overline{\mathcal{R}}}(h^{\star};w)-\overline{\mathcal{R}}(h^{\star};w)}_{(B_{2})}+\underbrace{\overline{\mathcal{R}}(h^{\star};w)-\mathcal{R}^{(\mathfrak{T})}(h^{\star})}_{(A_{2})}.\]
**Terms \(A_{1}\) and \(A_{2}\).** From Eq. (38) in the proof of Theorem 1.A, we have
\[A_{1}=\mathcal{R}^{(\mathfrak{T})}(\widehat{h}_{\text{IW-CIP}} )-\overline{\mathcal{R}}(\widehat{h}_{\text{IW-CIP}};w)\leq\Psi_{\mathcal{G}, \widehat{\phi}_{\text{IW-CIP}}}, \tag{40}\] \[A_{2}=\overline{\mathcal{R}}(h^{\star};w)-\mathcal{R}^{( \mathfrak{T})}(h^{\star})=0.\]
The second equation follows from the definition of \(h^{\star}=g^{\star}\circ\phi^{\star}\), i.e., \(\phi^{\star}\) is a conditionally invariant feature mapping and satisfies \(\Psi_{\mathcal{G},\phi^{\star}}=0\).
**Terms \(B_{1}\) and \(B_{2}\).** For any \(h=g\circ\phi\), we have \(\left|w^{(m)}_{[Y^{(m)}]}\mathbf{1}_{h(X^{(m)})\neq Y^{(m)}}\right|\leq\left\|w^{(m)}\right\|_{\infty}\). Then the generalization bound based on Rademacher complexity (see e.g. (Wainwright, 2019, Theorem 4.10)) shows that, for the \(m\)-th source domain, the bound
\[\sup_{\begin{subarray}{c}h=g\circ\phi\\ g\in\mathcal{G},\phi\in\Phi\end{subarray}}\left|\mathcal{R}^{(m)}(h;w^{(m)})- \widehat{\mathcal{R}}^{(m)}(h;w^{(m)})\right|\leq 2\mathfrak{R}_{n^{(m)}, \mathcal{P}^{(m)}}\left(\mathcal{H}(w^{(m)},\mathcal{G},\Phi)\right)\] \[+\left\|w^{(m)}\right\|_{\infty}\sqrt{\frac{2\log(M/\delta)}{n^{( m)}}},\]
holds with probability at least \(1-\delta/M\). Using the union bound over each \(m\in\{1,\ldots,M\}\), we obtain that with probability at least \(1-\delta\),
\[|B_{1}|+|B_{2}| \leq 2\sup_{\begin{subarray}{c}h=g\circ\phi,\\ g\in\mathcal{G},\phi\in\Phi\end{subarray}}\frac{1}{M}\sum_{m=1}^{M}\left| \mathcal{R}^{(m)}(h;w^{(m)})-\widehat{\mathcal{R}}^{(m)}(h;w^{(m)})\right|\] \[\leq\max_{m=1,\ldots,M}\left[4\mathfrak{R}_{n^{(m)},\mathcal{P}^ {(m)}}\left(\mathcal{H}(w^{(m)},\mathcal{G},\Phi)\right)+2\left\|w^{(m)} \right\|_{\infty}\sqrt{\frac{2\log(M/\delta)}{n^{(m)}}}\right]. \tag{41}\]
**Terms \(C_{1}\) and \(C_{2}\).** For the \(m\)-th source domain, let us define
\[\widehat{\ell}_{j}^{(m)}=\sum_{k=1}^{n}\mathbf{1}_{Y_{k}^{(m)}=j}\mathbf{1}_{ h(X_{k}^{(m)})\neq Y_{k}^{(m)}}.\]
Then \(\|\widehat{\ell}^{(m)}\|_{1}\leq n\) and so for any \(h=g\circ\phi\),
\[\left|\widehat{\mathcal{R}}^{(m)}(h;w^{(m)})-\widehat{\mathcal{R} }^{(m)}(h;\widehat{w}^{(m)})\right| =\left|\frac{1}{n}\sum_{k=1}^{n}\left(w^{(m)}_{[Y_{k}^{(m)}]}- \widehat{w}^{(m)}_{[Y_{k}^{(m)}]}\right)\mathbf{1}_{h(X_{k}^{(m)})\neq Y_{k}^ {(m)}}\right|\] \[=\left|\frac{1}{n}\sum_{j=1}^{L}\left(w^{(m)}_{[j]}-\widehat{w}^{ (m)}_{[j]}\right)\widehat{\ell}_{j}^{(m)}\right|\]
\[\leq\frac{1}{n}\left\|w^{(m)}-\widehat{w}^{(m)}\right\|_{\infty} \left\|\widehat{\ell}^{(m)}\right\|_{1}\leq\left\|w^{(m)}-\widehat{w}^{(m)} \right\|_{\infty}.\]
It follows that
\[|C_{1}|+|C_{2}| \leq 2\sup_{\begin{subarray}{c}h=g\circ\phi,\\ g\in\mathcal{G},\phi\in\Phi\end{subarray}}\frac{1}{M}\sum_{m=1}^{M}\left| \widehat{\mathcal{R}}^{(m)}(h;w^{(m)})-\widehat{\mathcal{R}}^{(m)}(h; \widehat{w}^{(m)})\right|\] \[\leq 2\max_{m=1,\ldots,M}\left\|w^{(m)}-\widehat{w}^{(m)}\right\|_{ \infty}. \tag{42}\]
**Term \(D\).** By definition of \(\widehat{h}_{\text{IW-CIP}}\) from Eq. (18), we know that
\[\widehat{\overline{\mathcal{R}}}(\widehat{h}_{\text{IW-CIP}};\widehat{w})+\widehat{\Lambda}_{\widehat{\phi}_{\text{IW-CIP}}}\leq\widehat{\overline{\mathcal{R}}}(h^{\star};\widehat{w})+\widehat{\Lambda}_{\phi^{\star}},\]
and so rearranging terms,
\[D=\widehat{\overline{\mathcal{R}}}(\widehat{h}_{\text{IW-CIP}};\widehat{w})-\widehat{\overline{\mathcal{R}}}(h^{\star};\widehat{w})\leq\widehat{\Lambda}_{\phi^{\star}}-\widehat{\Lambda}_{\widehat{\phi}_{\text{IW-CIP}}}\leq\widehat{\Lambda}_{\phi^{\star}}. \tag{43}\]
The theorem then follows from combining Eq. (40), (41), (42), and (43).
### Proof of Proposition 1
For a family of functions \(\mathcal{G}\) mapping \(\mathcal{Z}\) to \([0,1]\) and a distribution \(\mathcal{P}\) on \(\mathcal{Z}\), let \(\widehat{\mathcal{P}}_{n}\) denote the empirical distribution of \(n\) i.i.d. samples \((Z_{1},Z_{2},\ldots,Z_{n})\) from \(\mathcal{P}\). The standard Rademacher complexity theory (see e.g. (Wainwright, 2019, Theorem 4.10)) states that for any \(\delta>0\), with probability at least \(1-\delta\), we have
\[\sup_{g\in\mathcal{G}}\left|\mathbb{E}_{Z\sim\mathcal{P}}\left[g(Z)\right]- \mathbb{E}_{Z\sim\widehat{\mathcal{P}}_{n}}\left[g(Z)\right]\right|\leq 2 \mathfrak{R}_{n,\mathcal{P}}\left(\mathcal{G}\right)+\sqrt{\frac{\log(2/ \delta)}{2n}}.\]
By the definition of the \(\mathcal{G}\)-divergence, with probability at least \(1-\delta\) we have
\[D_{\mathcal{G}}\left(\mathcal{P},\widehat{\mathcal{P}}_{n}\right) =\sup_{g\in\mathcal{G}}\max_{y=1,2,\ldots,L}\left|\mathbb{E}_{Z \sim\mathcal{P}}\left[\mathbf{1}_{g(Z)=y}\right]-\mathbb{E}_{Z\sim\widehat{ \mathcal{P}}_{n}}\left[\mathbf{1}_{g(Z)=y}\right]\right|\] \[=\max_{y=1,2,\ldots,L}\sup_{g\in\mathcal{G}}\left|\mathbb{E}_{Z \sim\mathcal{P}}\left[\mathbf{1}_{g(Z)=y}\right]-\mathbb{E}_{Z\sim\widehat{ \mathcal{P}}_{n}}\left[\mathbf{1}_{g(Z)=y}\right]\right|\] \[\leq\max_{y=1,2,\ldots,L}\left(2\mathfrak{R}_{n,\mathcal{P}} \left(\mathcal{G}_{y}\right)+\sqrt{\frac{\log(2L/\delta)}{2n}}\right), \tag{44}\]
where \(\mathcal{G}_{y}\coloneqq\left\{\mathbf{1}_{g(x)=y},g\in\mathcal{G}\right\}\). Then we can bound
\[D_{\mathcal{G}}\left(\widehat{\mathcal{P}}_{\phi^{\star}(X)|Y=y}^{(m)},\widehat{\mathcal{P}}_{\phi^{\star}(X)|Y=y}^{(m^{\prime})}\right)\] \[\stackrel{(i)}{\leq}D_{\mathcal{G}}\left(\widehat{\mathcal{P}}_{\phi^{\star}(X)|Y=y}^{(m)},\mathcal{P}_{\phi^{\star}(X)|Y=y}^{(m)}\right)+D_{\mathcal{G}}\left(\mathcal{P}_{\phi^{\star}(X)|Y=y}^{(m)},\mathcal{P}_{\phi^{\star}(X)|Y=y}^{(m^{\prime})}\right)\] \[\qquad+D_{\mathcal{G}}\left(\mathcal{P}_{\phi^{\star}(X)|Y=y}^{(m^{\prime})},\widehat{\mathcal{P}}_{\phi^{\star}(X)|Y=y}^{(m^{\prime})}\right)\]
\[\stackrel{{(ii)}}{{=}}D_{\mathcal{G}}\left(\widehat{\mathcal{P}}^{(m)}_{ \phi^{\star}(X)|Y=y},\mathcal{P}^{(m)}_{\phi^{\star}(X)|Y=y}\right)+D_{\mathcal{G }}\left(\widehat{\mathcal{P}}^{(m^{\prime})}_{\phi^{\star}(X)|Y=y},\mathcal{P}^ {(m^{\prime})}_{\phi^{\star}(X)|Y=y}\right), \tag{45}\]
where in step \((i)\) we apply the triangle inequality, and in step \((ii)\) we use the fact that \(\phi^{\star}\) is a conditionally invariant feature mapping across source domains to cancel the middle term. Applying the union bound, we get
\[\widehat{\Lambda}_{\phi^{\star}} =\frac{\lambda_{\text{IW-CIP}}}{LM^{2}}\sum_{y=1}^{L}\sum_{m \neq m^{\prime}}D_{\mathcal{G}}\left(\widehat{\mathcal{P}}^{(m)}_{\phi^{\star }(X)|Y=y},\widehat{\mathcal{P}}^{(m^{\prime})}_{\phi^{\star}(X)|Y=y}\right)\] \[\stackrel{{(i)}}{{\leq}}\frac{2\lambda_{\text{IW-CIP }}}{LM}\sum_{y=1}^{L}\sum_{m=1}^{M}D_{\mathcal{G}}\left(\mathcal{P}^{(m)}_{ \phi^{\star}(X)|Y=y},\widehat{\mathcal{P}}^{(m)}_{\phi^{\star}(X)|Y=y}\right)\] \[\stackrel{{(ii)}}{{\leq}}\frac{4\lambda_{\text{IW-CIP }}}{LM}\cdot\sum_{y=1}^{L}\sum_{m=1}^{M}\max_{y^{\prime}=1,2,\ldots,L}\mathfrak{ R}_{n^{(m)},\mathcal{P}^{(m)}_{\phi^{\star}(X)|Y=y}}\left(\mathcal{G}_{y^{ \prime}}\right)+\frac{2\lambda_{\text{IW-CIP}}}{M}\sum_{m=1}^{M}\sqrt{\frac{ \log(2LM/\delta)}{2n^{(m)}}}\] \[\leq 2\lambda_{\text{IW-CIP}}\left(2\max_{\begin{subarray}{c}m=1, 2,\ldots,M\\ y=1,2,\ldots,L\\ y^{\prime}=1,2,\ldots,L\end{subarray}}\mathfrak{R}_{n^{(m)},\mathcal{P}^{(m)}_{ \phi^{\star}(X)|Y=y}}\left(\mathcal{G}_{y^{\prime}}\right)+\max_{m=1,2,\ldots,M }\sqrt{\frac{\log(2LM/\delta)}{2n^{(m)}}}\right),\]
with probability at least \(1-\delta\). Here step \((i)\) uses Eq. (45) and step \((ii)\) uses Eq. (44). This completes the proof of the proposition.
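The \(\mathcal{G}\)-divergence terms appearing in this bound can also be estimated directly from samples. The following Python sketch (purely illustrative and not part of the paper's code release; the threshold classifiers, sample sizes, and distributions are assumptions) computes the empirical \(\mathcal{G}\)-divergence of Eq. (44) for a small finite class \(\mathcal{G}\).

```python
import numpy as np

def g_divergence(sample_p, sample_q, classifiers, num_labels):
    """Empirical G-divergence as in Eq. (44): the supremum over classifiers g
    and labels y of |P_hat{g(Z) = y} - Q_hat{g(Z) = y}|."""
    best = 0.0
    for g in classifiers:
        preds_p, preds_q = g(sample_p), g(sample_q)
        for y in range(num_labels):
            gap = abs(np.mean(preds_p == y) - np.mean(preds_q == y))
            best = max(best, gap)
    return best

# Illustrative use: a few threshold classifiers on 1-d features.
rng = np.random.default_rng(0)
sample_p = rng.normal(0.0, 1.0, size=500)
sample_q = rng.normal(0.5, 1.0, size=500)
classifiers = [lambda z, t=t: (z > t).astype(int) for t in (-0.5, 0.0, 0.5)]
print(g_divergence(sample_p, sample_q, classifiers, num_labels=2))
```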
### Another bound on \(\widehat{\Lambda}_{\phi^{\star}}\) under mean squared distance
In Proposition 1 we established a finite-sample bound for \(\widehat{\Lambda}_{\phi^{\star}}\) when \(\mathcal{G}\)-divergence is used. If the IW-CIP penalty is defined by the squared mean distance instead, we provide another explicit bound for \(\widehat{\Lambda}_{\phi^{\star}}\) that converges to zero as sample size goes to infinity. To show this, we require additional assumptions on the data generation model and the class of feature mappings \(\Phi\).
**Proposition 5**: _Suppose that source and target data are generated under Assumption 1. Let \(\Phi=\{\phi(x)=Ax+b,A\in\mathbb{R}^{q\times p},b\in\mathbb{R}^{q}\}\) be the linear class of feature mappings from \(\mathbb{R}^{p}\) to \(\mathbb{R}^{q}\)\((q\leq p)\), and let the squared mean distance be the distributional distance in IW-CIP penalty Eq. (19). Suppose \(\phi\in\Phi\) is a conditionally invariant feature mapping such that \(A\Sigma A^{\top}\) is non-singular. For any \(\delta>0\) satisfying \(\min_{m,y}n^{(m)}p^{(m)}_{y}\geq 2\log(2ML/\delta)\), we have_
\[\widehat{\Lambda}_{\phi}\leq\frac{12\lambda_{\text{IW-CIP}}\cdot\lambda_{\max }(A\Sigma A^{\top})\cdot(p+8\log(2ML/\delta))}{\min_{\begin{subarray}{c}m=1,2, \ldots,M\\ y=1,2,\ldots,L\end{subarray}}n^{(m)}p^{(m)}_{y}-2\log(2ML/\delta)}, \tag{46}\]
_with probability at least \(1-\delta\).12 Specifically, the statement holds for the optimal conditionally invariant mapping \(\phi^{\star}\) (see Eq. (5))._
Footnote 12: The probability is with respect to the randomness of \((X_{k}^{(m)},Y_{k}^{(m)})\stackrel{{\text{i.i.d.}}}{{\sim}} \mathcal{P}^{(m)}\), \(m=1,\ldots,M,k=1,\ldots,n^{(m)}\); see Eq. (20).
**Proof** The exact form of the IW-CIP penalty under the squared mean distance is
\[\widehat{\Lambda}_{\phi}\coloneqq\frac{\lambda_{\text{IW-CIP}}}{LM^{2}}\sum_{y=1 }^{L}\sum_{m\neq m^{\prime}}\left\|\widehat{\mu_{\phi}^{(m)}}(y)-\widehat{\mu_ {\phi}^{(m^{\prime})}}(y)\right\|_{2}^{2},\]
where
\[\widehat{\mu_{\phi}^{(m)}}(y)=\frac{1}{n_{y}^{(m)}}\sum_{\begin{subarray}{c}k=1\\ Y_{k}^{(m)}=y\end{subarray}}^{n^{(m)}}\phi(X_{k}^{(m)}),\]
is the empirical mean of \(\phi(X)\) on the \(m\)-th source domain. Since \(\phi\) is conditionally invariant, from Eq. (52) we know that for any \(y\in\{1,2,\ldots,L\}\) and for any \(m\neq m^{\prime}\),
\[AP_{y}v_{y}^{(m)}=AP_{y}v_{y}^{(m^{\prime})}.\]
In the finite-sample case, when \(Y_{k}^{(m)}=Y_{k}^{(\mathfrak{T})}=y\), we can write
\[\phi(X_{k}^{(m)})=AX_{k}^{(m)}+b=A\left(f^{(1)}(y)+P_{y}v_{y}^{(m) }+\epsilon_{k}^{(m)}\right)+b,\] \[\phi(X_{k}^{(\mathfrak{T})})=AX_{k}^{(\mathfrak{T})}+b=A\left(f^ {(1)}(y)+P_{y}v_{y}^{(\mathfrak{T})}+\epsilon_{k}^{(\mathfrak{T})}\right)+b,\]
and
\[\widehat{\mu_{\phi}^{(m)}}(y)-\widehat{\mu_{\phi}^{(m^{\prime})}}(y)=\frac{1}{n_{y}^{(m)}}\sum_{\begin{subarray}{c}k=1\\ Y_{k}^{(m)}=y\end{subarray}}^{n^{(m)}}\phi(X_{k}^{(m)})-\frac{1}{n_{y}^{(m^{\prime})}}\sum_{\begin{subarray}{c}k=1\\ Y_{k}^{(m^{\prime})}=y\end{subarray}}^{n^{(m^{\prime})}}\phi(X_{k}^{(m^{\prime})})=A\left(\underbrace{\frac{1}{n_{y}^{(m)}}\sum_{\begin{subarray}{c}k=1\\ Y_{k}^{(m)}=y\end{subarray}}^{n^{(m)}}\epsilon_{k}^{(m)}}_{e_{y}^{(m)}}-\underbrace{\frac{1}{n_{y}^{(m^{\prime})}}\sum_{\begin{subarray}{c}k=1\\ Y_{k}^{(m^{\prime})}=y\end{subarray}}^{n^{(m^{\prime})}}\epsilon_{k}^{(m^{\prime})}}_{e_{y}^{(m^{\prime})}}\right), \tag{47}\]
where \(n_{y}^{(m)}\) is the number of samples with label \(y\) in the \(m\)-th domain. Since \(n_{y}^{(m)}\) follows a multinomial distribution, we conclude from binomial (multinomial) tail probability bound (e.g. (Chung and Lu, 2006, Theorem 3.2)) that for all \(t_{y}^{(m)}>0\),
\[\mathbb{P}\left\{n_{y}^{(m)}\geq n^{(m)}p_{y}^{(m)}(1-t_{y}^{(m)})\right\}\geq 1-\exp\left(-\frac{n^{(m)}p_{y}^{(m)}{t_{y}^{(m)}}^{2}}{2}\right).\]
When \(n_{y}^{(m)}=k>0\) is fixed, we have \(Ae_{y}^{(m)}\sim\mathcal{N}(0,\frac{1}{k}A\Sigma A^{\top})\), and therefore the standard concentration bound for Gaussian variables (e.g. (Wainwright, 2019, Example 2.11)) implies that for all \(t>0\),
\[\mathbb{P}\left\{\left\|Ae_{y}^{(m)}\right\|_{2}^{2}\leq\frac{1}{k}\lambda_{ \max}\left(A\Sigma A^{\top}\right)\cdot p\left(1+t\right)\ \middle|\ n_{y}^{(m)}=k\right\}\geq 1-e^{-p \min\{t,t^{2}\}/8}.\]
Next we combine two bounds above into a single bound for \(\left\|Ae_{y}^{(m)}\right\|_{2}^{2}\). Given any \(t_{y}^{(m)}>0\) and \(t>0\), we have
\[\mathbb{P}\left\{\left\|Ae_{y}^{(m)}\right\|_{2}^{2}\leq\frac{ \lambda_{\max}\left(A\Sigma A^{\top}\right)\cdot p\left(1+t\right)}{n^{(m)}p_ {y}^{(m)}(1-t_{y}^{(m)})}\right\}\] \[=\sum_{k=0}^{n^{(m)}}\mathbb{P}\left\{\left\|Ae_{y}^{(m)}\right\| _{2}^{2}\leq\frac{\lambda_{\max}\left(A\Sigma A^{\top}\right)\cdot p\left(1+t \right)}{n^{(m)}p_{y}^{(m)}(1-t_{y}^{(m)})}\right|\ n_{y}^{(m)}=k\right\} \mathbb{P}\left\{n_{y}^{(m)}=k\right\}\] \[\geq\sum_{k=n^{(m)}p_{y}^{(m)}(1-t_{y}^{(m)})}^{n^{(m)}}\mathbb{P }\left\{\left\|Ae_{y}^{(m)}\right\|_{2}^{2}\leq\frac{1}{k}\lambda_{\max}\left( A\Sigma A^{\top}\right)\cdot p\left(1+t\right)\ \middle|\ n_{y}^{(m)}=k\right\}\mathbb{P}\left\{n_{y}^{(m)}=k\right\}\] \[\geq\sum_{k=n^{(m)}p_{y}^{(m)}(1-t_{y}^{(m)})}^{n^{(m)}}\left(1-e ^{-p\min\{t,t^{2}\}/8}\right)\mathbb{P}\left\{n_{y}^{(m)}=k\right\}\] \[\geq\left(1-e^{-p\min\{t,t^{2}\}/8}\right)\left(1-e^{-n^{(m)}p_{ y}^{(m)}t_{y}^{(m)^{2}}/2}\right)\] \[\geq 1-e^{-p\min\{t,t^{2}\}/8}-e^{-n^{(m)}p_{y}^{(m)}t_{y}^{(m)^{2} }/2}.\]
Take \(t_{y}^{(m)}=\sqrt{2\log(2ML/\delta)/(n^{(m)}p_{y}^{(m)})}\) and \(t=8\log(2ML/\delta)/p+\sqrt{8\log(2ML/\delta)/p}\). Then with probability at least \(1-\delta/ML\), we have
\[\left\|Ae_{y}^{(m)}\right\|_{2}^{2} \leq\frac{\lambda_{\max}\left(A\Sigma A^{\top}\right)\left(p+8 \log(2ML/\delta)+\sqrt{8p\log(2ML/\delta)}\right)}{n^{(m)}p_{y}^{(m)}-\sqrt{n^ {(m)}p_{y}^{(m)}\cdot 2\log\left(2ML/\delta\right)}}\] \[\leq\frac{3\lambda_{\max}\left(A\Sigma A^{\top}\right)\left(p+8 \log(2ML/\delta)\right)}{n^{(m)}p_{y}^{(m)}-2\log(2ML/\delta)},\]
where we apply the inequality \(\sqrt{ab}\leq(a+b)/2\) for \(a,b>0\) in the last step. Applying union bound, we get
\[\widehat{\Lambda}_{\phi}=\frac{\lambda_{\text{IW-CIP}}}{LM^{2}}\sum_{y=1}^{L }\sum_{m\neq m^{\prime}}\left\|\widehat{\mu_{\phi}^{(m)}(y)}-\widehat{\mu_{ \phi}^{(m^{\prime})}(y)}\right\|_{2}^{2}\]
\[=\frac{\lambda_{\text{IW-CIP}}}{LM^{2}}\sum_{y=1}^{L}\sum_{m\neq m^{ \prime}}\left\|Ae_{y}^{(m)}-Ae_{y}^{(m^{\prime})}\right\|_{2}^{2}\] \[\leq\frac{2\lambda_{\text{IW-CIP}}}{LM^{2}}\sum_{y=1}^{L}\sum_{m \neq m^{\prime}}\left(\left\|Ae_{y}^{(m)}\right\|_{2}^{2}+\left\|Ae_{y}^{(m^{ \prime})}\right\|_{2}^{2}\right)\] \[\leq 4\lambda_{\text{IW-CIP}}\cdot\max_{\begin{subarray}{c}m=1,2, \ldots,M\\ y=1,2,\ldots,L\end{subarray}}\left\|Ae_{y}^{(m)}\right\|_{2}^{2}\] \[\leq\frac{12\lambda_{\text{IW-CIP}}\cdot\lambda_{\max}(A\Sigma A ^{\top})\cdot(p+8\log(2ML/\delta))}{\min_{\begin{subarray}{c}m=1,2,\ldots,M\\ y=1,2,\ldots,L\end{subarray}}n^{(m)}p_{y}^{(m)}-2\log(2ML/\delta)},\]
with probability at least \(1-\delta\), proving the result.
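For intuition, \(\widehat{\Lambda}_{\phi}\) under the squared mean distance is a simple function of per-domain, per-label feature means. Below is a minimal Python sketch (an illustration on assumed synthetic data with an assumed value of \(\lambda_{\text{IW-CIP}}\), not code from the paper) that evaluates the penalty exactly as displayed at the start of this proof.

```python
import numpy as np

def iw_cip_penalty(features, labels, num_labels, lam):
    """lam / (L * M^2) * sum_y sum_{m != m'} ||mu_hat^(m)(y) - mu_hat^(m')(y)||_2^2,
    where features[m], labels[m] hold the data of the m-th source domain."""
    M, L = len(features), num_labels
    dim = features[0].shape[1]
    means = np.zeros((M, L, dim))
    for m in range(M):
        for y in range(L):
            means[m, y] = features[m][labels[m] == y].mean(axis=0)
    total = 0.0
    for y in range(L):
        for m in range(M):
            for m2 in range(M):
                if m != m2:
                    total += np.sum((means[m, y] - means[m2, y]) ** 2)
    return lam * total / (L * M ** 2)

# Tiny synthetic example: M = 3 domains, L = 2 labels, 5-dimensional features.
rng = np.random.default_rng(1)
feats = [rng.normal(size=(200, 5)) for _ in range(3)]
labs = [rng.integers(0, 2, size=200) for _ in range(3)]
print(iw_cip_penalty(feats, labs, num_labels=2, lam=1.0))
```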
### Proof of Proposition 2
We have that for any \(i\in\{1,2,...,L\}\),
\[\mathbb{P}\left\{\widehat{h}_{\text{CIP}}(X^{(\mathfrak{T})})=i\right\} =\sum_{j=1}^{L}\mathbb{P}\left\{\widehat{h}_{\text{CIP}}(X^{( \mathfrak{T})})=i\ \Big{|}\ Y^{(\mathfrak{T})}=j\right\}\cdot\mathbb{P}\left\{Y^{( \mathfrak{T})}=j\right\}\] \[=\sum_{j=1}^{L}\left(\mathbb{P}\left\{\widehat{h}_{\text{CIP}}(X ^{(m)})=i\ \Big{|}\ Y^{(m)}=j\right\}+\delta_{i,j}\right)\cdot\mathbb{P}\left\{Y^{( \mathfrak{T})}=j\right\},\]
where \(\delta_{i,j}=\mathbb{P}\left\{\widehat{h}_{\text{CIP}}(X^{(\mathfrak{T})})= i\ \Big{|}\ Y^{(\mathfrak{T})}=j\right\}-\mathbb{P}\left\{\widehat{h}_{\text{CIP}}(X ^{(1)})=i\ \Big{|}\ Y^{(1)}=j\right\}\). Writing the above equation in matrix form, we have
\[\mu_{\widehat{h}_{\text{CIP}}}=C_{\widehat{h}_{\text{CIP}}}^{(m)}\,w^{(m)}+ \Delta\mu^{(\mathfrak{T})}, \tag{48}\]
where \(\Delta\) is a matrix with \(\delta_{i,j}\) being its \((i,j)\)-th element, and \(\mu^{(\mathfrak{T})}\) denotes the target label distribution. Comparing with the linear system in Eq. (15), Eq. (48) has an additional term on the right-hand side because the finite-sample CIP \(\widehat{h}_{\text{CIP}}\) does not guarantee exact conditional invariance. Since the importance weight \(\widehat{w}^{(m)}\) is obtained by solving the finite-sample version
\[\widehat{\mu}_{\widehat{h}_{\text{CIP}}}=\widehat{C}_{\widehat{h}_{\text{CIP }}}^{(m)}\,\widehat{w}^{(m)}, \tag{49}\]
combining Eq. (48) and Eq. (49), we have
\[\underbrace{\mu_{\widehat{h}_{\text{CIP}}}-\widehat{\mu}_{\widehat{h}_{\text{CIP}}}}_{(A_{1})}=C_{\widehat{h}_{\text{CIP}}}^{(m)}\,\left(w^{(m)}-\widehat{w}^{(m)}\right)+\underbrace{\left(C_{\widehat{h}_{\text{CIP}}}^{(m)}-\widehat{C}_{\widehat{h}_{\text{CIP}}}^{(m)}\right)}_{(A_{2})}\,\widehat{w}^{(m)}+\underbrace{\Delta\mu^{(\mathfrak{T})}}_{(A_{3})}. \tag{50}\]
Now we bound \((A_{1}),(A_{2})\), and \((A_{3})\) separately. For bounding \((A_{1})\) and \((A_{2})\), we closely follow the proof of Theorem 3 in Lipton et al. (2018).
**Term \((A_{1})\).** By Hoeffding's inequality, we have
\[\mathbb{P}\left\{\left|A_{1,i}\right|>t\right\}=\mathbb{P}\left\{\left|\frac{1 }{n^{(\mathbb{T})}}\sum_{k=1}^{n^{(\mathbb{T})}}\mathbf{1}_{\widehat{h}_{ \text{CIP}}(\widetilde{X}_{k}^{(\mathbb{T})})=i}-\mathbb{E}\left[\mathbf{1} _{\widehat{h}_{\text{CIP}}(X^{(\mathbb{T})})=i}\right]\right|>t\right\}\leq 2 \exp\left\{-2n^{(\mathbb{T})}t^{2}\right\},\]
for all \(t>0\). Therefore, for any \(t_{1}>0\), we have
\[\mathbb{P}\left\{\left\|A_{1}\right\|_{2}>t_{1}\right\} \leq\mathbb{P}\left\{\exists i\in\left\{1,...,L\right\},\left|A_{ 1,i}\right|>\frac{t_{1}}{\sqrt{L}}\right\}\] \[\leq\sum_{i=1}^{L}\mathbb{P}\left\{\left|A_{1,i}\right|>\frac{t_ {1}}{\sqrt{L}}\right\}\] \[\leq 2L\cdot\exp\left\{-\frac{2n^{(\mathbb{T})}t_{1}^{2}}{L} \right\}.\]
**Term \((A_{2})\).** Note that
\[C_{\widehat{h}_{\text{CIP}}}^{(m)}-\widehat{C}_{\widehat{h}_{\text{CIP}}}^{ (m)}=\frac{1}{n^{(m)}}\sum_{k=1}^{n^{(m)}}e_{\widehat{h}_{\text{CIP}}( \widetilde{X}_{k}^{(m)})}e_{\widehat{Y}_{k}^{(m)}}^{\top}-\mathbb{E}\left[e_{ \widehat{h}_{\text{CIP}}(X^{(m)})}e_{Y^{(m)}}^{\top}\right],\]
where \(e_{i}\) denotes the standard basis vector whose \(i\)-th element is \(1\) and whose remaining elements are \(0\). Let
\[Z_{k}=e_{\widehat{h}_{\text{CIP}}(\widetilde{X}_{k}^{(m)})}e_{\widehat{Y}_{k }^{(m)}}^{\top}-\mathbb{E}\left[e_{\widehat{h}_{\text{CIP}}(X^{(m)})}e_{Y^{( m)}}^{\top}\right].\]
Then it is straightforward to see that
\[\begin{cases}\mathbb{E}\left[Z_{k}\right]=0;\\ \|Z_{k}\|_{2}\leq 2;\\ \max\left\{\|\mathbb{E}\left[Z_{k}Z_{k}^{\top}\right]\|_{2},\|\mathbb{E} \left[Z_{k}^{\top}Z_{k}\right]\|_{2}\right\}\leq 1.\end{cases}\]
By the matrix Bernstein inequality (see e.g. (Tropp et al., 2015, Theorem 6.1.1)), we have
\[\mathbb{P}\left\{\|A_{2}\|_{2}\geq t_{2}\right\}\leq 2L\exp\left\{-\frac{n^{( m)}t_{2}^{2}/2}{1+2t_{2}/3}\right\},\]
for all \(t_{2}>0\).
**Term \((A_{3})\).** By definition of \(D_{\mathcal{G}}\left(\cdot,\cdot\right)\) and \(\Psi_{\mathcal{G},\widehat{\phi}_{\text{CIP}}}\), each element of \(A_{3}\) satisfies
\[\left|A_{3,i}\right|\leq\sum_{j=1}^{L}\left|\delta_{i,j}\right|\cdot\mathbb{P} \left\{Y^{(\mathbb{T})}=j\right\}\]
\[\leq\sum_{j=1}^{L}D_{\mathcal{G}}\left(\mathcal{P}_{\widehat{\phi}_{ \text{CIP}}(X)|Y=j}^{(\mathfrak{T})},\mathcal{P}_{\widehat{\phi}_{\text{CIP}} (X)|Y=j}^{(m)}\right)\cdot\mathbb{P}\left\{Y^{(\mathfrak{T})}=j\right\}\leq \Psi_{\mathcal{G},\widehat{\phi}_{\text{CIP}}}.\]
Hence we have
\[\|A_{3}\|_{2}=\sqrt{\sum_{i=1}^{L}|A_{3,i}|^{2}}\leq\sqrt{L}\Psi_{ \mathcal{G},\widehat{\phi}_{\text{CIP}}}.\]
Applying these bounds to Eq. (50) and using the assumption that \(C_{\widehat{h}_{\text{CIP}}}^{(m)}\) has condition number \(\kappa^{(m)}\) (i.e., the smallest eigenvalue \(\geq 1/\kappa^{(m)}\) because \(1\) is an eigenvalue of the matrix), we get
\[\left\|w^{(m)}-\widehat{w}^{(m)}\right\|_{2} \leq\kappa_{m}\left(t_{1}+t_{2}\cdot\left\|\widehat{w}^{(m)} \right\|_{2}+\sqrt{L}\Psi_{\mathcal{G},\widehat{\phi}_{\text{CIP}}}\right)\] \[\leq\kappa_{m}\left(t_{1}+t_{2}\left(\left\|w^{(m)}-\widehat{w}^ {(m)}\right\|_{2}+\left\|w^{(m)}\right\|_{2}\right)+\sqrt{L}\Psi_{\mathcal{G}, \widehat{\phi}_{\text{CIP}}}\right),\]
with probability at least \(1-2L\exp\left(-2n^{(\mathfrak{T})}t_{1}^{2}/L\right)-2L\exp\left(-3n^{(m)}t_{ 2}^{2}/(6+4t_{2})\right)\). Finally, let \(t_{1}=\sqrt{L\log(4L/\delta)/2n^{(\mathfrak{T})}}\) and \(t_{2}=\sqrt{3\log(4L/\delta)/n^{(m)}}\). Using the assumption \(n^{(m)}\geq 12\kappa_{m}^{2}\log(4L/\delta)\), we get
\[\left\|w^{(m)}-\widehat{w}^{(m)}\right\|_{2}\leq 2\kappa_{m}\left(\sqrt{L} \Psi_{\mathcal{G},\widehat{\phi}_{\text{CIP}}}+\sqrt{\frac{L\log(4L/\delta) }{2n^{(\mathfrak{T})}}}+\sqrt{\frac{3\log(4L/\delta)}{n^{(m)}}}\left\|w^{(m)} \right\|_{2}\right)\]
with probability at least \(1-\delta\). Combining the results then proves the proposition.
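In practice, the estimated weights \(\widehat{w}^{(m)}\) in Eq. (49) come from a small linear solve: the empirical confusion matrix of \(\widehat{h}_{\text{CIP}}\) on labeled source data and the empirical distribution of its predictions on unlabeled target data. The Python sketch below (illustrative only; the plug-in classifier and the simulated label shift are assumptions) shows this computation.

```python
import numpy as np

def estimate_weights(preds_src, labels_src, preds_tgt, num_labels):
    """Solve mu_hat = C_hat @ w_hat (Eq. (49)), where
    C_hat[i, j] = P_hat{h(X) = i, Y = j} on the source domain and
    mu_hat[i]   = P_hat{h(X) = i} on the target domain."""
    L = num_labels
    C_hat = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            C_hat[i, j] = np.mean((preds_src == i) & (labels_src == j))
    mu_hat = np.array([np.mean(preds_tgt == i) for i in range(L)])
    return np.linalg.solve(C_hat, mu_hat)

# Toy example: a 90%-accurate classifier under label shift.
rng = np.random.default_rng(2)
labels_src = rng.integers(0, 2, size=5000)
preds_src = np.where(rng.random(5000) < 0.9, labels_src, 1 - labels_src)
labels_tgt = (rng.random(5000) < 0.8).astype(int)   # shifted label marginal
preds_tgt = np.where(rng.random(5000) < 0.9, labels_tgt, 1 - labels_tgt)
print(estimate_weights(preds_src, labels_src, preds_tgt, 2))  # approx [0.4, 1.6]
```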
### Proof of Proposition 3
Using assumption (25), for any positive semidefinite matrix \(S\in\mathbb{R}^{d_{y}\times d_{y}}\), we have
\[\left(v_{y}^{(\mathfrak{T})}-v_{y}^{(m)}\right)^{\top}S\left(v_{ y}^{(\mathfrak{T})}-v_{y}^{(m)}\right) =\text{trace}\left(\left(v_{y}^{(\mathfrak{T})}-v_{y}^{(m)}\right) ^{\top}S^{\frac{1}{2}}S^{\frac{1}{2}}\left(v_{y}^{(\mathfrak{T})}-v_{y}^{(m)} \right)\right)\] \[=\text{trace}\left(S^{\frac{1}{2}}\left(v_{y}^{(\mathfrak{T})}-v_ {y}^{(m)}\right)\left(v_{y}^{(\mathfrak{T})}-v_{y}^{(m)}\right)^{\top}S^{ \frac{1}{2}}\right)\] \[\leq\text{trace}\left(S^{\frac{1}{2}}\left(\frac{\zeta_{y}}{M-1} \sum_{m=2}^{M}v_{y}^{(m)}(v_{y}^{(m)})^{\top}\right)S^{\frac{1}{2}}\right)\] \[=\frac{\zeta_{y}}{M-1}\sum_{m=2}^{M}\text{trace}\left(S^{\frac{1} {2}}v_{y}^{(m)}(v_{y}^{(m)})^{\top}S^{\frac{1}{2}}\right)\] \[=\frac{\zeta_{y}}{M-1}\sum_{m=2}^{M}{v_{y}^{(m)}}^{\top}Sv_{y}^{( m)}. \tag{51}\]
The inequality above uses the fact that \(\text{trace}(A)\leq\text{trace}(B)\) for positive semi-definite matrices \(A,B\) satisfying \(A\preceq B\). Given that \(\epsilon^{(m)},\epsilon^{(\mathfrak{T})}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,\Sigma)\), for any \(y\in\{1,2,\dots,L\}\) and
\(m\in\{1,2,\ldots,M\}\) we have
\[\phi(X^{(m)})\mid Y^{(m)} =y\sim\mathcal{N}\left(A\left(f^{(1)}(y)+P_{y}v_{y}^{(m)}\right)+b, A\Sigma A^{\top}\right), \tag{52}\] \[\phi(X^{(\mathfrak{T})})\mid Y^{(\mathfrak{T})} =y\sim\mathcal{N}\left(A\left(f^{(1)}(y)+P_{y}v_{y}^{(\mathfrak{T })}\right)+b,A\Sigma A^{\top}\right).\]
Note that \(v_{y}^{(1)}=0\) by Assumption 1, so we can further simplify terms in Eq. (23) as
\[\Delta_{\phi}^{(m)}(y)=AP_{y}\left(v_{y}^{(m)}-v_{y}^{(1)}\right)=AP_{y}v_{y}^{ (m)}\ \ \text{and}\ \ \Sigma_{\phi}^{(m)}(y)=A\Sigma A^{\top}.\]
With results above in hand, we can derive
\[D_{\mathcal{G}}^{2}\left(\mathcal{P}_{\phi(X)|Y=y}^{(\mathfrak{T})},\mathcal{P}_{\phi(X)|Y=y}^{(m)}\right) \tag{53}\] \[\leq\left(\sup_{g\in\mathcal{G}}\max_{y^{\prime}=1,2,\ldots,L}\left|\mathbb{E}_{Z\sim\mathcal{P}_{\phi(X)|Y=y}^{(\mathfrak{T})}}\left[\mathbf{1}_{g(Z)=y^{\prime}}\right]-\mathbb{E}_{Z\sim\mathcal{P}_{\phi(X)|Y=y}^{(m)}}\left[\mathbf{1}_{g(Z)=y^{\prime}}\right]\right|\right)^{2}\] \[\leq\left(\sup_{g\in\mathcal{G}}\sum_{y^{\prime}=1}^{L}\left|\mathbb{E}_{Z\sim\mathcal{P}_{\phi(X)|Y=y}^{(\mathfrak{T})}}\left[\mathbf{1}_{g(Z)=y^{\prime}}\right]-\mathbb{E}_{Z\sim\mathcal{P}_{\phi(X)|Y=y}^{(m)}}\left[\mathbf{1}_{g(Z)=y^{\prime}}\right]\right|\right)^{2}\] \[=4\sup_{g\in\mathcal{G}}\mathrm{TV}^{2}\left(\mathcal{P}_{g(\phi(X))|Y=y}^{(\mathfrak{T})},\mathcal{P}_{g(\phi(X))|Y=y}^{(m)}\right)\] \[\overset{(i)}{\leq}2\sup_{g\in\mathcal{G}}\mathrm{KL}\left(\mathcal{P}_{g(\phi(X))|Y=y}^{(\mathfrak{T})},\mathcal{P}_{g(\phi(X))|Y=y}^{(m)}\right)\] \[\overset{(ii)}{\leq}2\mathrm{KL}\left(\mathcal{P}_{\phi(X)|Y=y}^{(\mathfrak{T})},\mathcal{P}_{\phi(X)|Y=y}^{(m)}\right)\] \[\overset{(iii)}{=}\left(AP_{y}\left(v_{y}^{(\mathfrak{T})}-v_{y}^{(m)}\right)\right)^{\top}\left(A\Sigma A^{\top}\right)^{-1}\left(AP_{y}\left(v_{y}^{(\mathfrak{T})}-v_{y}^{(m)}\right)\right)\] \[\overset{(iv)}{\leq}\frac{\zeta_{y}}{M-1}\sum_{m=2}^{M}\left(AP_{y}v_{y}^{(m)}\right)^{\top}\left(A\Sigma A^{\top}\right)^{-1}\left(AP_{y}v_{y}^{(m)}\right)\] \[\overset{(v)}{=}2\zeta_{y}\Pi_{\phi}(y).\]
Here step \((i)\) applies Pinsker's inequality and step \((ii)\) applies the data processing inequality. Step \((iii)\) applies the KL divergence formula between two Gaussians, i.e.,
\[\mathrm{KL}\left(\mathcal{N}(\mu_{1},\Sigma_{1}),\mathcal{N}(\mu_{2},\Sigma_{2})\right)=\frac{1}{2}\left(\log\frac{|\Sigma_{2}|}{|\Sigma_{1}|}-p+\mathrm{Tr}\left(\Sigma_{2}^{-1}\Sigma_{1}\right)+(\mu_{1}-\mu_{2})^{\top}\Sigma_{2}^{-1}(\mu_{1}-\mu_{2})\right).\]
Step \((iv)\) uses Eq. (51) with \(S=P_{y}^{\top}A^{\top}(A\Sigma A^{\top})^{-1}AP_{y}\), and the last step \((v)\) follows from definition (24). Now the proposition follows by applying Eq. (53) to the definition of \(\Psi_{\mathcal{G},\phi}\) in Eq. (21).
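As a quick numerical sanity check on the Gaussian KL expression used in step \((iii)\), the closed form can be compared against a Monte Carlo estimate; the Python sketch below (an illustration with arbitrary parameter choices, not part of the proof) does exactly that in two dimensions.

```python
import numpy as np

def gaussian_kl(mu1, S1, mu2, S2):
    """Closed-form KL(N(mu1, S1) || N(mu2, S2))."""
    p = len(mu1)
    S2_inv = np.linalg.inv(S2)
    diff = mu1 - mu2
    return 0.5 * (np.log(np.linalg.det(S2) / np.linalg.det(S1)) - p
                  + np.trace(S2_inv @ S1) + diff @ S2_inv @ diff)

def log_pdf(z, mu, S):
    d = z - mu
    S_inv = np.linalg.inv(S)
    quad = np.einsum("ij,jk,ik->i", d, S_inv, d)
    return -0.5 * (quad + np.log(np.linalg.det(S)) + len(mu) * np.log(2 * np.pi))

rng = np.random.default_rng(3)
mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, -0.5])
S1, S2 = np.eye(2), np.array([[2.0, 0.3], [0.3, 1.0]])
Z = rng.multivariate_normal(mu1, S1, size=100_000)
print(gaussian_kl(mu1, S1, mu2, S2),
      np.mean(log_pdf(Z, mu1, S1) - log_pdf(Z, mu2, S2)))  # the two values should agree
```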
### Number of source domains needed for controlling the deviation from conditional invariance
To validate condition (25) in Proposition 3, we prove the following lemma which states that when the perturbations are generated randomly, the required number of source domains is approximately linear in the dimension of perturbations.
**Lemma 1**: _Under Assumption 1, further assume that perturbations are generated according to \(v_{y}^{(m)},v_{y}^{(\mathfrak{T})}\stackrel{{\rm i.i.d.}}{{\sim}} \mathcal{N}(0,\tau_{y}^{2}\mathbb{I}_{d_{y}})\). For any \(\delta\in(0,1)\), if the number of source domains \(M\) satisfies_
\[M\geq 1+r\left(\max_{y}\sqrt{d_{y}}+\sqrt{2\log(2L/\delta)}\right)^{2},\]
_for some constant \(r>1\), then with probability at least \(1-\delta\),13 condition (25) holds with \(\zeta_{y}=c_{r}(M+\log M)\) for some constant \(c_{r}>0\)._
Footnote 13: The probability is with respect to the randomness of perturbations \(v_{y}^{(m)},v_{y}^{(\mathfrak{T})}\stackrel{{\rm i.i.d.}}{{\sim}} \mathcal{N}(0,\tau_{y}^{2}\mathbb{I}_{d_{y}})\).
**Proof** We verify that when \(v_{y}^{(m)},v_{y}^{(\mathfrak{T})}\stackrel{{\rm i.i.d.}}{{\sim}}\mathcal{N}(0,\tau_{y}^{2}\mathbb{I}_{d_{y}})\), condition (25) holds with high probability. Since \(M\geq\max_{y=1,2,\ldots,L}\{d_{y}\}+1\), Theorem 6.1 and Example 2.11 in Wainwright (2019) show that given \(y\in\{1,2,\ldots,L\}\), for any \(t_{0}\in(0,1),t_{1},\ldots,t_{M}>0\) we have
\[\mathbb{P}\left\{\lambda_{\min}\left(\frac{1}{M-1}\sum_{m=2}^{M}v_{y}^{(m)}v_ {y}^{(m)}{}^{\top}\right)\geq\tau_{y}^{2}\left(1-t_{0}-\sqrt{\frac{d_{y}}{M-1} }\right)\right\}\geq 1-e^{-(M-1)t_{0}^{2}/2},\]
and
\[\mathbb{P}\left\{\left\|v_{y}^{(\mathfrak{T})}-v_{y}^{(1)}\right\| _{2}^{2}\leq\tau_{y}^{2}d_{y}\left(1+t_{1}\right)\right\}\geq 1-e^{-d_{y}\min\{t_{1}, t_{1}^{2}\}/8},\] \[\mathbb{P}\left\{\left\|v_{y}^{(\mathfrak{T})}-v_{y}^{(m)}\right\| _{2}^{2}\leq 2\tau_{y}^{2}d_{y}\left(1+t_{m}\right)\right\}\geq 1-e^{-d_{y}\min\{t_{ m},t_{m}^{2}\}/8},\quad\forall m\in\left\{2,\cdots,M\right\}.\]
Take \(t_{0}=\sqrt{\frac{2}{M-1}\log\frac{2L}{\delta}}\) and \(t_{m}=\frac{8}{d_{y}}\log(\frac{2ML}{\delta})+\sqrt{\frac{8}{d_{y}}\log(\frac {2ML}{\delta})}\) for all \(y\) such that \(d_{y}>0\). Applying union bound, the following bound holds for all \(y\in\{1,2,\ldots,L\}\) and \(m\in\{1,2,\ldots,M\}\) with probability greater than \(1-\delta\):
\[\lambda_{\max}\left(\left(v_{y}^{(\mathfrak{T})}-v_{y}^{(m)} \right)\left(v_{y}^{(\mathfrak{T})}-v_{y}^{(m)}\right)^{\top}\right)=\left\| \left(v_{y}^{(\mathfrak{T})}-v_{y}^{(m)}\right)\right\|_{2}^{2}\] \[\leq 2\tau_{y}^{2}\left(d_{y}+8\log(2ML/\delta)+\sqrt{8d_{y}\log( 2ML/\delta)}\right)\] \[\leq 3\tau_{y}^{2}\left(d_{y}+8\log(2ML/\delta)\right)\] \[\leq\frac{3\left(d_{y}+8\log(2ML/\delta)\right)}{1-\sqrt{2\left( \log(2L/\delta)\right)/(M-1)}-\sqrt{d_{y}/(M-1)}}\lambda_{\min}\left(\frac{1}{M -1}\sum_{m=2}^{M}v_{y}^{(m)}v_{y}^{(m)}{}^{\top}\right)\] \[\leq\frac{3r}{r-1}\left(d_{y}+8\log\left(\frac{2ML}{\delta} \right)\right)\lambda_{\min}\left(\frac{1}{M-1}\sum_{m=2}^{M}v_{y}^{(m)}{v_{ y}^{(m)}}^{\top}\right)\] \[\leq c_{r}(M+\log M)\cdot\lambda_{\min}\left(\frac{1}{M-1}\sum_{m =2}^{M}v_{y}^{(m)}{v_{y}^{(m)}}^{\top}\right),\]
where we apply the lower bound on \(M\) in the last two steps.
### Proof of Proposition 4
Let \(f_{\text{inv}}(1),f_{\text{inv}}(2)\in\mathbb{R}^{p-d}\) denote the vector of the last \(p-d\) coordinates of \(f^{(1)}(1),f^{(1)}(2)\in\mathbb{R}^{p}\) respectively, which are conditionally invariant across source and target. We first prove that the target risks of \(h_{\text{oracle}}\) and \(h^{\star}\) are given by
\[\begin{split}\mathcal{R}^{(\mathfrak{T})}(h_{\text{oracle}})& =r_{p_{1}^{(\mathfrak{T})},\sigma}\left(\left\|f^{(\mathfrak{T}) }(1)-f^{(\mathfrak{T})}(2)\right\|_{2}\right),\\ \mathcal{R}^{(\mathfrak{T})}(h^{\star})&=r_{p_{1}^ {(\mathfrak{T})},\sigma}\Big{(}\left\|f_{\text{inv}}(1)-f_{\text{inv}}(2) \right\|_{2}\Big{)},\end{split} \tag{54}\]
where \(r_{p_{1}^{(\mathfrak{T})},\sigma}(x)\) is a continuous decreasing function in \(x\). The risk of the oracle classifier can be derived as follows. For simplicity we write \(\mu_{1}^{(\mathfrak{T})}=f^{(\mathfrak{T})}(1)\) and \(\mu_{2}^{(\mathfrak{T})}=f^{(\mathfrak{T})}(2)\). For any \(\phi(x)=\beta^{\top}x+\beta_{0}\) with \(\left\|\beta\right\|_{2}=1\), we know that
\[\phi(X^{(\mathfrak{T})})\mid Y^{(\mathfrak{T})} =1\sim\mathcal{N}\left(\beta^{\top}\mu_{1}^{(\mathfrak{T})}+ \beta_{0},\sigma^{2}\right),\] \[\phi(X^{(\mathfrak{T})})\mid Y^{(\mathfrak{T})} =2\sim\mathcal{N}\left(\beta^{\top}\mu_{2}^{(\mathfrak{T})}+ \beta_{0},\sigma^{2}\right).\]
Then the target risk of \(h=g\circ\phi\) can be computed by
\[\mathcal{R}^{(\mathfrak{T})}(h) =\mathbb{E}\left[\mathbf{1}_{h(X^{(\mathfrak{T})})\neq Y^{( \mathfrak{T})}}\right]\] \[=\mathbb{P}\left\{h(X^{(\mathfrak{T})})=2\ \Big{|}\ Y^{(\mathfrak{T})}=1\right\}\mathbb{P}\left\{Y^{( \mathfrak{T})}=1\right\}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\mathbb{P} \left\{h(X^{(\mathfrak{T})})=1\ \Big{|}\ Y^{(\mathfrak{T})}=2\right\}\mathbb{P}\left\{Y^{( \mathfrak{T})}=2\right\}\] \[=\mathbb{P}\left\{\phi(X^{(\mathfrak{T})})>0\ \Big{|}\ Y^{(\mathfrak{T})}=1\right\}\mathbb{P}\left\{Y^{( \mathfrak{T})}=1\right\}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\mathbb{P }\left\{\phi(X^{(\mathfrak{T})})\leq 0\ \Big{|}\ Y^{(\mathfrak{T})}=2\right\}\mathbb{P}\left\{Y^{( \mathfrak{T})}=2\right\}\] \[=p_{1}^{(\mathfrak{T})}\cdot\frac{1}{2}\left(1-\operatorname{ erf}\left(\frac{\beta^{\top}\mu_{1}^{(\mathfrak{T})}+\beta_{0}}{\sqrt{2\sigma^{2}}} \right)\right)+(1-p_{1}^{(\mathfrak{T})})\cdot\frac{1}{2}\left(1+\operatorname {erf}\left(\frac{\beta^{\top}\mu_{2}^{(\mathfrak{T})}+\beta_{0}}{\sqrt{2 \sigma^{2}}}\right)\right)\] \[=\frac{1}{2}\left(1-p_{1}^{(\mathfrak{T})}\cdot\operatorname{ erf}\left(\frac{\beta^{\top}\mu_{1}^{(\mathfrak{T})}+\beta_{0}}{\sqrt{2\sigma^{2}}} \right)+(1-p_{1}^{(\mathfrak{T})})\cdot\operatorname{erf}\left(\frac{\beta^{ \top}\mu_{2}^{(\mathfrak{T})}+\beta_{0}}{\sqrt{2\sigma^{2}}}\right)\right), \tag{55}\]
where \(\operatorname{erf}\left(x\right)=2\int_{0}^{x}e^{-t^{2}}dt/\sqrt{\pi}\) is the Gauss error function. The optimization problem of \(h_{\text{oracle}}\) in Eq. (3) becomes
\[\begin{split}\beta_{\text{oracle}},\beta_{0,\text{oracle}}=\operatorname {arg\,min}_{\beta,\beta_{0}}&-p_{1}^{(\mathfrak{T})}\cdot \operatorname{erf}\left(\frac{\beta^{\top}\mu_{1}^{(\mathfrak{T})}+\beta_{0}} {\sqrt{2\sigma^{2}}}\right)+(1-p_{1}^{(\mathfrak{T})})\cdot\operatorname{erf} \left(\frac{\beta^{\top}\mu_{2}^{(\mathfrak{T})}+\beta_{0}}{\sqrt{2\sigma^{2}} }\right),\\ \text{s.t.}&\left\|\beta\right\|_{2}=1.\end{split} \tag{56}\]
Setting the gradient of the objective function with respect to \(\beta_{0}\) to \(0\), we get
\[\beta_{0}=\frac{\sigma^{2}}{\beta^{\top}(\mu_{1}^{(\mathfrak{T})}-\mu_{2}^{( \mathfrak{T})})}\log\frac{p_{1}^{(\mathfrak{T})}}{1-p_{1}^{(\mathfrak{T})}}- \frac{1}{2}\beta^{\top}(\mu_{1}^{(\mathfrak{T})}+\mu_{2}^{(\mathfrak{T})}).\]
Plugging the expression of \(\beta_{0}\) into Eq. (56) and using the method of Lagrange multipliers, we arrive at
\[\frac{\partial}{\partial\beta}\left(-p_{1}^{(\mathfrak{T})}\cdot \mathrm{erf}\left(\frac{\beta^{\top}\mu_{1}^{(\mathfrak{T})}+\beta_{0}}{\sqrt{2 \sigma^{2}}}\right)+(1-p_{1}^{(\mathfrak{T})})\cdot\mathrm{erf}\left(\frac{ \beta^{\top}\mu_{2}^{(\mathfrak{T})}+\beta_{0}}{\sqrt{2\sigma^{2}}}\right)+ \lambda(\left\|\beta\right\|_{2}^{2}-1)\right)=0\] \[\implies 2\lambda\beta=p_{1}^{(\mathfrak{T})}\frac{2}{\sqrt{\pi}}\exp \left(-\frac{(\beta^{\top}\mu_{1}^{(\mathfrak{T})}+\beta_{0})^{2}}{2\sigma^ {2}}\right)\] \[\times\left(\frac{1}{2\sqrt{2\sigma^{2}}}(\mu_{1}^{(\mathfrak{T} )}-\mu_{2}^{(\mathfrak{T})})-\frac{\sqrt{\sigma^{2}/2}\log\frac{p_{1}^{( \mathfrak{T})}}{1-p_{1}^{(\mathfrak{T})}}}{(\beta^{\top}(\mu_{1}^{(\mathfrak{ T})}-\mu_{2}^{(\mathfrak{T})}))^{2}}(\mu_{1}^{(\mathfrak{T})}-\mu_{2}^{( \mathfrak{T})})\right)\] \[-(1-p_{1}^{(\mathfrak{T})})p_{1}^{(\mathfrak{T})}\frac{2}{\sqrt {\pi}}\exp\left(-\frac{(\beta^{\top}\mu_{2}^{(\mathfrak{T})}+\beta_{0})^{2}}{ 2\sigma^{2}}\right)\] \[\times\left(-\frac{1}{2\sqrt{2\sigma^{2}}}(\mu_{1}^{(\mathfrak{T} )}-\mu_{2}^{(\mathfrak{T})})-\frac{\sqrt{\sigma^{2}/2}\log\frac{p_{1}^{( \mathfrak{T})}}{1-p_{1}^{(\mathfrak{T})}}}{(\beta^{\top}(\mu_{1}^{(\mathfrak{ T})}-\mu_{2}^{(\mathfrak{T})}))^{2}}(\mu_{1}^{(\mathfrak{T})}-\mu_{2}^{( \mathfrak{T})})\right).\]
That is, \(\beta\) and \(\mu_{1}^{(\mathfrak{T})}-\mu_{2}^{(\mathfrak{T})}\) are in the same direction. Using \(\left\|\beta\right\|_{2}=1\), we get
\[\beta_{\mathrm{oracle}} =\frac{\mu_{1}^{(\mathfrak{T})}-\mu_{2}^{(\mathfrak{T})}}{ \left\|\mu_{1}^{(\mathfrak{T})}-\mu_{2}^{(\mathfrak{T})}\right\|_{2}},\] \[\beta_{0,\mathrm{oracle}} =\frac{\sigma^{2}}{\left\|\mu_{1}^{(\mathfrak{T})}-\mu_{2}^{( \mathfrak{T})}\right\|_{2}}\log\frac{p_{1}^{(\mathfrak{T})}}{1-p_{1}^{( \mathfrak{T})}}-\frac{\left\|\mu_{1}^{(\mathfrak{T})}\right\|_{2}^{2}-\left\| \mu_{2}^{(\mathfrak{T})}\right\|_{2}^{2}}{2\left\|\mu_{1}^{(\mathfrak{T})}-\mu _{2}^{(\mathfrak{T})}\right\|_{2}}.\]
Plugging the above equations into Eq. (55), after some algebra we get the oracle target risk as
\[\mathcal{R}^{(\mathfrak{T})}(h_{\mathrm{oracle}})=r_{p_{1}^{(\mathfrak{T})}, \sigma}\left(\left\|\mu_{1}^{(\mathfrak{T})}-\mu_{2}^{(\mathfrak{T})}\right\| _{2}\right)=r_{p_{1}^{(\mathfrak{T})},\sigma}\left(\left\|f^{(\mathfrak{T})}( 1)-f^{(\mathfrak{T})}(2)\right\|_{2}\right),\]
where
\[r_{p_{1}^{(\mathfrak{T})},\sigma}(x) =\frac{1}{2}\Bigg{(}1-p_{1}^{(\mathfrak{T})}\mathrm{erf}\left( \frac{x}{2\sqrt{2}\sigma}+\frac{\sigma}{\sqrt{2}x}\log\frac{p_{1}^{(\mathfrak{ T})}}{1-p_{1}^{(\mathfrak{T})}}\right) \tag{57}\] \[\qquad\qquad-(1-p_{1}^{(\mathfrak{T})})\mathrm{erf}\left(\frac{x}{ 2\sqrt{2}\sigma}-\frac{\sigma}{\sqrt{2}x}\log\frac{p_{1}^{(\mathfrak{T})}}{1-p_ {1}^{(\mathfrak{T})}}\right)\Bigg{)}.\]
Next, turning to \(h^{\star}\), note that \(span\{v_{1}^{(2)},v_{1}^{(3)},\ldots,v_{1}^{(M)},v_{2}^{(2)},v_{2}^{(3)},\ldots, v_{2}^{(M)}\}=\mathbb{R}^{d}\), and so the constraint of conditional invariance in Eq. (5) requires that the first \(d\) coordinates of \(\beta^{\star}\) are zero, i.e., \(\phi^{\star}\) only uses the last \(p-d\) invariant coordinates. Following exactly the same proof as above, we have
\[\mathcal{R}^{(\mathfrak{T})}(h^{\star})=r_{p_{1}^{(\mathfrak{T})},\sigma}\Big{(} \left\|f_{\mathrm{inv}}(1)-f_{\mathrm{inv}}(2)\right\|_{2}\Big{)}.\]
Now we verify that \(r_{p_{1}^{(\mathfrak{T})},\sigma}(x)\) is a decreasing function in \(x\) by computing its derivative.
\[\frac{dr_{p_{1}^{(\mathfrak{T})},\sigma}(x)}{dx}\] \[=-\frac{1}{\sqrt{\pi}}p_{1}^{(\mathfrak{T})}\exp\left(-\left( \frac{x}{2\sqrt{2}\sigma}+\frac{\sigma}{\sqrt{2}x}\log\frac{p_{1}^{(\mathfrak{ T})}}{1-p_{1}^{(\mathfrak{T})}}\right)^{2}\right)\] \[\times\left(\frac{1}{2\sqrt{2}\sigma}-\frac{\sigma}{\sqrt{2}x^{2 }}\log\frac{p_{1}^{(\mathfrak{T})}}{1-p_{1}^{(\mathfrak{T})}}\right)\] \[-\frac{1}{\sqrt{\pi}}(1-p_{1}^{(\mathfrak{T})})\exp\left(-\left( \frac{x}{2\sqrt{2}\sigma}-\frac{\sigma}{\sqrt{2}x}\log\frac{p_{1}^{(\mathfrak{ T})}}{1-p_{1}^{(\mathfrak{T})}}\right)^{2}\right)\] \[\times\left(\frac{1}{2\sqrt{2}\sigma}+\frac{\sigma}{\sqrt{2}x^{2 }}\log\frac{p_{1}^{(\mathfrak{T})}}{1-p_{1}^{(\mathfrak{T})}}\right)\] \[=-\sqrt{\frac{p_{1}^{(\mathfrak{T})}(1-p_{1}^{(\mathfrak{T})})}{ 2\pi\sigma^{2}}}\exp\left(-\left(\frac{x}{2\sqrt{2}\sigma}\right)^{2}-\left( \frac{\sigma}{\sqrt{2}x}\log\frac{p_{1}^{(\mathfrak{T})}}{1-p_{1}^{(\mathfrak{ T})}}\right)^{2}\right)<0.\]
This intermediate result implies that the risk difference between \(h^{\star}\) and \(h_{\text{oracle}}\) originates from the norm differences between \(f^{(\mathfrak{T})}(1)-f^{(\mathfrak{T})}(2)\) and \(f_{\text{inv}}(1)-f_{\text{inv}}(2)\) only, where the second term is equivalent to the last \(p-d\) dimensions of the first term which corresponds to CICs in the general anticausal model. Intuitively, when the dimension of CICs is close to \(p\), the discrepancy between the norms of \(f^{(\mathfrak{T})}(1)-f^{(\mathfrak{T})}(2)\) and \(f_{\text{inv}}(1)-f_{\text{inv}}(2)\) becomes insignificant, and the disparity in the target risks of \(h^{\star}\) and \(h_{\text{oracle}}\) tends to be negligible. In particular, we establish the explicit bound for this disparity as follows:
When \(p_{1}^{(\mathfrak{T})}=1/2\), the explicit formula for \(r_{p_{1}^{(\mathfrak{T})},\sigma}(x)\) given in Eq.(57) shows that \(r_{p_{1}^{(\mathfrak{T})},\sigma}(x)=\left(1-\operatorname{erf}\left(x/(2 \sqrt{2}\sigma)\right)\right)/2\). Since \(v_{1}^{(\mathfrak{T})}-v_{2}^{(\mathfrak{T})}\sim\mathcal{N}(0,2\tau^{2} \sigma^{2}\mathbb{I}_{d})\), the standard Gaussian concentration bound shows that
\[\mathbb{P}\left\{\left\|v_{1}^{(\mathfrak{T})}-v_{2}^{(\mathfrak{T})}\right\| _{2}\leq\sqrt{2\tau^{2}\sigma^{2}d\left(1+\sqrt{\frac{8}{d}\log\frac{1}{ \delta}}+\frac{8}{d}\log\frac{1}{\delta}\right)}\right\}\geq 1-\delta.\]
By using the target risks given in Eq. (54), we can calculate
\[\mathcal{R}^{(\mathfrak{T})}(h^{\star})-\mathcal{R}^{(\mathfrak{ T})}(h_{\text{oracle}})\] \[=\frac{1}{2}\left(\operatorname{erf}\left(\frac{\left\|f^{( \mathfrak{T})}(1)-f^{(\mathfrak{T})}(2)\right\|_{2}}{2\sqrt{2}\sigma}\right)- \operatorname{erf}\left(\frac{\left\|f_{\text{inv}}(1)-f_{\text{inv}}(2) \right\|_{2}}{2\sqrt{2}\sigma}\right)\right)\] \[\leq\frac{1}{2}\left(\operatorname{erf}\left(\frac{\left\|f^{(1 )}(1)-f^{(1)}(2)\right\|_{2}+\left\|v_{1}^{(\mathfrak{T})}-v_{2}^{(\mathfrak{ T})}\right\|_{2}}{2\sqrt{2}\sigma}\right)-\operatorname{erf}\left(\frac{\left\|f_{ \text{inv}}(1)-f_{\text{inv}}(2)\right\|_{2}}{2\sqrt{2}\sigma}\right)\right)\] \[\overset{(i)}{=}\frac{1}{2}\left(\operatorname{erf}\left(\frac{ \xi\sigma\sqrt{p}+\left\|v_{1}^{(\mathfrak{T})}-v_{2}^{(\mathfrak{T})}\right\|_ {2}}{2\sqrt{2}\sigma}\right)-\operatorname{erf}\left(\frac{\xi\sqrt{p-d}}{2 \sqrt{2}}\right)\right)\]
\[\leq\frac{1}{2}\left(\operatorname{erf}\left(\frac{\xi\sqrt{p}+\tau\sqrt{3d+24\log(1/\delta)}}{2\sqrt{2}}\right)-\operatorname{erf}\left(\frac{\xi\sqrt{p-d}}{2\sqrt{2}}\right)\right)\] \[\stackrel{{(ii)}}{{\leq}}\frac{\xi(\sqrt{p}-\sqrt{p-d})+\tau\sqrt{3d+24\log(1/\delta)}}{\sqrt{8\pi}}\exp\left(-\frac{\xi^{2}(p-d)}{8}\right)\] \[=\frac{1}{\sqrt{8\pi}}\left(\frac{\xi d}{\sqrt{p}+\sqrt{p-d}}+\tau\sqrt{3d+24\log\left(\frac{1}{\delta}\right)}\right)\exp\left(-\frac{\xi^{2}(p-d)}{8}\right)\] \[\leq\frac{1}{\sqrt{8\pi}}\left(\xi\sqrt{d}+\tau\sqrt{3d+24\log\left(\frac{1}{\delta}\right)}\right)\exp\left(-\frac{\xi^{2}(p-d)}{8}\right)\] \[\leq c_{\xi,\tau}\left(\sqrt{d}+\sqrt{\log(1/\delta)}\right)\exp\left(-\frac{\xi^{2}(p-d)}{8}\right),\]
with probability at least \(1-\delta\). Here, in step \((i)\), we use the fact that \(f_{\text{inv}}(1)\) and \(f_{\text{inv}}(2)\) are the last \(p-d\) invariant coordinates of \(f^{(1)}(1)\) and \(f^{(1)}(2)\) respectively; and in step \((ii)\) we use the inequality \(\int_{a}^{b}e^{-t^{2}}dt\leq e^{-a^{2}}(b-a)\) for \(0<a<b\). This completes the proof of the proposition.
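The function \(r_{p_{1}^{(\mathfrak{T})},\sigma}\) in Eq. (57) is also straightforward to evaluate numerically; the short Python sketch below (with illustrative parameter values) implements it and checks that it is decreasing in \(x\), in line with the derivative computation in the proof above.

```python
import numpy as np
from math import erf, log, sqrt

def target_risk(x, p1, sigma):
    """r_{p1, sigma}(x) from Eq. (57): the target risk of the optimal linear
    classifier when the two class means are a distance x apart."""
    base = x / (2 * sqrt(2) * sigma)
    shift = (sigma / (sqrt(2) * x)) * log(p1 / (1 - p1))
    return 0.5 * (1 - p1 * erf(base + shift) - (1 - p1) * erf(base - shift))

xs = np.linspace(0.5, 5.0, 10)
risks = [target_risk(x, p1=0.3, sigma=1.0) for x in xs]
print(np.round(risks, 4))
assert all(a >= b for a, b in zip(risks, risks[1:]))  # decreasing in x
```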
## Appendix C Proof of Section 4
In this section we prove our theorems given in Section 4.
### Proof of Theorem 2
We prove the following generalized version of the theorem, which does not require that \(\phi_{\text{inv}}\) is exactly conditionally invariant. That is, for any \(h_{\text{inv}}=g_{\text{inv}}\circ\phi_{\text{inv}}\) where \(g_{\text{inv}}\in\mathcal{G}\) and \(\phi_{\text{inv}}\in\Phi\), for any other classifier \(h\) we have
\[\left|\mathcal{R}^{(\mathfrak{T})}(h)-\mathcal{R}^{(1)}(h)\right| \leq 2\mathcal{R}^{(1)}(h_{\text{inv}})+\left|\mathbb{P}\left\{h(X^{(1)} )\neq h_{\text{inv}}(X^{(1)})\right\}-\mathbb{P}\left\{h(X^{(\mathfrak{T})}) \neq h_{\text{inv}}(X^{(\mathfrak{T})})\right\}\right|\\ +L\Psi_{\mathcal{G},\phi_{\text{inv}}}.\]
If this inequality holds, when \(\phi_{\text{inv}}\) is a conditionally invariant feature mapping across source and target domains, we get \(\Psi_{\mathcal{G},\phi_{\text{inv}}}=0\) and arrive at the original Theorem 2.
**Proof** Define \(\delta_{ij}=\mathbb{P}\left\{h_{\text{inv}}(X^{(\mathfrak{T})})=j\ \big{|}\ Y^{(\mathfrak{T})}=i\right\}-\mathbb{P}\left\{h_{\text{inv}}(X^{(1)} )=j\ \big{|}\ Y^{(1)}=i\right\}\), then \(|\delta_{ij}|\leq\Psi_{\mathcal{G},h_{\text{inv}}}\). Because there is no label shift, for any \(i,j\in\{1,2,\dots,L\}\) we have
\[\mathbb{P}\left\{Y^{(\mathfrak{T})}=i,h_{\text{inv}}(X^{(\mathfrak{T})})=j \right\}-\mathbb{P}\left\{Y^{(1)}=i,h_{\text{inv}}(X^{(1)})=j\right\}=\delta_ {ij}\cdot\mathbb{P}\left\{Y^{(\mathfrak{T})}=i\right\}.\]
As a result, we get
\[\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j,h_{\text{inv}}(X^{( \mathfrak{T})})=j\right\}-\mathbb{P}\left\{h(X^{(1)})=j,h_{\text{inv}}(X^{(1)} )=j\right\}\] \[=\sum_{i=1}^{L}\left[\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j\ \big{|}\ Y^{( \mathfrak{T})}=i,h_{\text{inv}}(X^{(\mathfrak{T})})=j\right\}\cdot\mathbb{P} \left\{Y^{(\mathfrak{T})}=i,h_{\text{inv}}(X^{(\mathfrak{T})})=j\right\}\right.\]
\[-\mathbb{P}\left\{h(X^{(1)})=j\ \Big{|}\ Y^{(1)}=i,h_{\text{inv}}(X^{(1)})= j\right\}\cdot\mathbb{P}\left\{Y^{(1)}=i,h_{\text{inv}}(X^{(1)})=j\right\}\bigg{]}\] \[=\sum_{i=1}^{L}\Bigg{(}\bigg{[}\mathbb{P}\left\{h(X^{(\mathfrak{ T})})=j\ \Big{|}\ Y^{(\mathfrak{T})}=i,h_{\text{inv}}(X^{(\mathfrak{T})})=j\right\}\] \[\qquad\qquad-\mathbb{P}\left\{h(X^{(1)})=j\ \Big{|}\ Y^{(1)}=i,h_{\text{inv}}(X^{(1)})=j \right\}\bigg{]}\cdot\mathbb{P}\left\{Y^{(1)}=i,h_{\text{inv}}(X^{(1)})=j\right\}\] \[\qquad\qquad+\delta_{ij}\cdot\mathbb{P}\left\{Y^{(\mathfrak{T})}= i\right\}\cdot\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j\ \Big{|}\ Y^{(\mathfrak{T})}=i,h_{\text{inv}}(X^{(\mathfrak{T})})=j\right\} \Bigg{)}. \tag{58}\]
Similarly we can calculate
\[\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j,Y^{(\mathfrak{T})}=j \right\}-\mathbb{P}\left\{h(X^{(1)})=j,Y^{(1)})=j\right\}\] \[=\sum_{i=1}^{L}\bigg{[}\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j \ \Big{|}\ Y^{(\mathfrak{T})}=j,h_{\text{inv}}(X^{(\mathfrak{T})})=i \right\}\cdot\mathbb{P}\left\{Y^{(\mathfrak{T})}=j,h_{\text{inv}}(X^{( \mathfrak{T})})=i\right\}\] \[\qquad\qquad-\mathbb{P}\left\{h(X^{(1)})=j\ \Big{|}\ Y^{(1)}=j,h_{\text{inv}}(X^{(1)})=i \right\}\cdot\mathbb{P}\left\{Y^{(1)}=j,h_{\text{inv}}(X^{(1)})=i\right\} \bigg{]}\] \[=\sum_{i=1}^{L}\Bigg{(}\bigg{[}\mathbb{P}\left\{h(X^{(\mathfrak{ T})})=j\ \Big{|}\ Y^{(\mathfrak{T})}=j,h_{\text{inv}}(X^{(\mathfrak{T})})=i\right\}\] \[\qquad\qquad-\mathbb{P}\left\{h(X^{(1)})=j\ \Big{|}\ Y^{(1)}=j,h_{\text{inv}}(X^{(1)})=i\right\}\bigg{]}\cdot \mathbb{P}\left\{Y^{(1)}=j,h_{\text{inv}}(X^{(1)})=i\right\}\] \[\qquad\qquad+\delta_{ji}\cdot\mathbb{P}\left\{Y^{(\mathfrak{T})}= j\right\}\cdot\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j\ \Big{|}\ Y^{(\mathfrak{T})}=j,h_{\text{inv}}(X^{(\mathfrak{T})})=i\right\} \Bigg{)}. \tag{59}\]
Subtracting Eq. (59) from Eq. (58), we obtain
\[\Big{[}\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j,h_{\text{inv}}(X^{ (\mathfrak{T})})=j\right\}-\mathbb{P}\left\{h(X^{(1)})=j,h_{\text{inv}}(X^{(1) })=j\right\}\Big{]}\] \[= \bigg{[}\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j\ \Big{|}\ Y^{( \mathfrak{T})}\neq j,h_{\text{inv}}(X^{(\mathfrak{T})})=j\right\}\] \[\qquad\qquad-\mathbb{P}\left\{h(X^{(1)})=j\ \Big{|}\ Y^{(1)}\neq j,h_{\text{inv}}(X^{( 1)})=j\right\}\bigg{]}\cdot B_{j}\] \[-\left[\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j\ \Big{|}\ Y^{( \mathfrak{T})}=j,h_{\text{inv}}(X^{(\mathfrak{T})})\neq j\right\}\right.\] \[\qquad\qquad\qquad-\mathbb{P}\left\{h(X^{(1)})=j\ \Big{|}\ Y^{(1)}=j,h_{\text{inv}}(X^{(1)})\neq j\right\}\bigg{]}\cdot B_{j}^{\prime}\] \[+\sum_{i=1}^{L}\bigg{[}\delta_{ij}\cdot\mathbb{P}\left\{Y^{( \mathfrak{T})}=i\right\}\cdot\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j\ \Big{|}\ Y^{(\mathfrak{T})}=i,h_{\text{inv}}(X^{(\mathfrak{T})})=j\right\}\] \[\qquad\qquad\qquad-\delta_{ji}\cdot\mathbb{P}\left\{Y^{( \mathfrak{T})}=j\right\}\cdot\mathbb{P}\left\{h(X^{(\mathfrak{T})})=j\ \Big{|}\ Y^{(\mathfrak{T})}=j,h_{\text{inv}}(X^{(\mathfrak{T})})=i\right\} \bigg{]} \tag{60}\]
where we write
\[B_{j}=\mathbb{P}\left\{Y^{(1)}\neq j,h_{\text{inv}}(X^{(1)})=j\right\},\ \ B_{j}^{\prime}=\mathbb{P}\left\{Y^{(1)}=j,h_{\text{inv}}(X^{(1)})\neq j \right\}.\]
Now it follows that
\[\left|\left[\mathbb{P}\left\{h(X^{(1)})\neq h_{\text{inv}}(X^{(1) })\right\}-\mathbb{P}\left\{h(X^{(\mathfrak{T})})\neq h_{\text{inv}}(X^{( \mathfrak{T})})\right\}\right]-\left(\mathcal{R}^{(1)}(h)-\mathcal{R}^{( \mathfrak{T})}(h)\right)\right|\] \[= \Bigg{|}\left[\mathbb{P}\left\{h(X^{(\mathfrak{T})})=h_{\text{inv }}(X^{(\mathfrak{T})})\right\}-\mathbb{P}\left\{h(X^{(1)})=h_{\text{inv}}(X^{( 1)})\right\}\right]\] \[\qquad\qquad\qquad\qquad-\left[\mathbb{P}\left\{h(X^{(\mathfrak{ T})})=Y^{(\mathfrak{T})}\right\}-\mathbb{P}\left\{h(X^{(1)})=Y^{(1)} \right\}\right]\Bigg{|}\] \[\stackrel{{(i)}}{{\leq}}\sum_{j=1}^{L}B_{j}+\sum_{j =1}^{L}B_{j}^{\prime}+\Bigg{|}\sum_{i=1}^{L}\sum_{j=1}^{L}\delta_{ij}\cdot \mathbb{P}\left\{Y^{(\mathfrak{T})}=i\right\}\] \[\qquad\qquad\qquad\qquad\qquad\cdot\left(\mathbb{P}\left\{h(X^{ (\mathfrak{T})})=j\ \Big{|}\ Y^{(\mathfrak{T})}=i,h_{\text{inv}}(X^{(\mathfrak{T})})=j\right\}\right.\] \[\qquad\qquad\qquad\qquad\qquad\left.-\mathbb{P}\left\{h(X^{( \mathfrak{T})})=i\ \Big{|}\ Y^{(\mathfrak{T})}=j,h_{\text{inv}}(X^{(\mathfrak{T})})=i\right\} \right)\Bigg{|}\] \[\stackrel{{(ii)}}{{\leq}}2\mathcal{R}^{(1)}(h_{\text{ inv}})+\sum_{i=1}^{L}\sum_{j=1}^{L}|\delta_{ij}|\cdot\mathbb{P}\left\{Y^{( \mathfrak{T})}=i\right\}\] \[\stackrel{{(iii)}}{{\leq}}2\mathcal{R}^{(1)}(h_{ \text{inv}})+\sum_{i=1}^{L}\sum_{j=1}^{L}\Psi_{\mathcal{G},\phi_{\text{inv}}} \cdot\mathbb{P}\left\{Y^{(\mathfrak{T})}=i\right\}\] \[=2\mathcal{R}^{(1)}(h_{\text{inv}})+L\Psi_{\mathcal{G},\phi_{ \text{inv}}},\]
where in step (i) we sum over \(j\in\mathcal{Y}=\{1,2,\ldots,L\}\) using Eq. (60), and in step (ii) we use \(\sum_{j=1}^{L}B_{j}=\sum_{j=1}^{L}B_{j}^{\prime}=\mathcal{R}^{(1)}(h_{\text{ inv}})\). Step (iii) applies \(|\delta_{ij}|\leq\Psi_{\mathcal{G},\phi_{\text{inv}}}\). Rearranging terms, we arrive at
\[\left|\mathcal{R}^{(\mathfrak{T})}(h)-\mathcal{R}^{(1)}(h)\right| \leq 2\mathcal{R}^{(1)}(h_{\text{inv}})+\Big{|}\mathbb{P}\left\{h(X^{(1)}) \neq h_{\text{inv}}(X^{(1)})\right\}-\mathbb{P}\left\{h(X^{(\mathfrak{T})}) \neq h_{\text{inv}}(X^{(\mathfrak{T})})\right\}\Big{|}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+L\Psi_{\mathcal{G}, \phi_{\text{inv}}}.\]
\(\blacksquare\)
### Proof of Theorem 3
For notational simplicity, for any \(\phi\in\Phi\) we omit the bias term and write \(\phi(x)=Ax\); all proofs below remain valid after adding the bias term back. We use \(u_{1},\ldots,u_{L}\) and \(\widetilde{u}_{1},\ldots,\widetilde{u}_{L}\) to denote \(f^{(1)}(1),\ldots,f^{(1)}(L)\) and \(f^{(\mathfrak{T})}(1),\ldots,f^{(\mathfrak{T})}(L)\), respectively.
**Part (a).** By Assumption 3, for any \(A\in\Phi\), the distributions of \(AX^{(1)}\) and \(AX^{(\mathfrak{T})}\) are given by mixture distributions of the form
\[\mathcal{P}^{(1)}_{AX}=\sum_{j=1}^{L}\frac{1}{L}P_{A\epsilon}(Au_{j}),\ \ \mathcal{P}^{( \mathfrak{T})}_{AX}=\sum_{j=1}^{L}\frac{1}{L}P_{A\epsilon}(A\widetilde{u}_{j}),\]
where \(P_{A\epsilon}(u)\) denotes the distribution of \(A\epsilon\) with mean shifted by a fixed vector \(u\). Applying Lemma 2, the constraint in DIP, \(\mathcal{P}^{(1)}_{AX}=\mathcal{P}^{(\mathfrak{T})}_{AX}\), implies that we can find a permutation \(\pi\) on \(\{1,2,\ldots,L\}\) such that
\[Au_{j}=A\widetilde{u}_{\pi(j)}\text{ for all }j\in\mathcal{Y}. \tag{61}\]
Hence any representations learned by DIP should meet the constraint given in Eq. (61) for some permutation \(\pi\). Moreover, letting \(\dim\left(\operatorname{span}\{u_{1}-\widetilde{u}_{\pi(1)},\ldots,u_{L}- \widetilde{u}_{\pi(L)}\}\right)=k\), Lemma 3 shows that any matrix \(A\) satisfying Eq. (61) has the row space orthogonal to \(\operatorname{span}\{u_{1}-\widetilde{u}_{\pi(1)},\ldots,u_{L}-\widetilde{u}_ {\pi(L)}\}\) and \(\operatorname{rank}(A)\leq p-k\).
Now consider all pairs \((A_{\pi},\pi)\) that meet Eq. (61) and for which the row space of \(A_{\pi}\) is the orthogonal complement of \(\operatorname{span}\{u_{1}-\widetilde{u}_{\pi(1)},\ldots,u_{L}-\widetilde{u}_{\pi(L)}\}\). By definition of DIP in Eq. (11), we obtain \(\phi_{\text{DIP}}=A_{\pi_{\star}}\), where
\[\pi_{\star}=\operatorname*{arg\,min}_{\{\pi\text{ is permutation}\}}\min_{g\in \mathcal{G},A_{\pi}}\mathcal{R}^{(1)}(g\circ A_{\pi}).\]
We can clearly find examples where \(\pi_{\star}\neq\mathbf{I}\), for instance, as shown in Figure 1. This occurs as long as the label-flipping features offer lower source risk than the conditionally invariant features while failing to generalize to the target domain. In this case, the conditional distributions of \(\phi_{\text{DIP}}(X^{(1)})\) and \(\phi_{\text{DIP}}(X^{(\mathfrak{T})})\), given the labels, are aligned after the target labels are permuted by \(\pi_{\star}\).
**Part (b).** Next, we consider JointDIP defined in Eq. (32). If \(\phi_{\text{inv}}\) is a linear mapping, say \(\phi_{\text{inv}}(x)=Bx\) for some matrix \(B\in\mathbb{R}^{r\times p}\), then writing \(C=(A^{\top},B^{\top})^{\top}\) for the concatenation of \(A\) and \(B\), we have
\[\mathcal{P}^{(1)}_{(\phi(X),\,\phi_{\text{inv}}(X))}=\mathcal{P}^{(1)}_{CX}\quad\text{and}\quad\mathcal{P}^{(\mathfrak{T})}_{(\phi(X),\,\phi_{\text{inv}}(X))}=\mathcal{P}^{(\mathfrak{T})}_{CX}, \tag{62}\]
so the JointDIP matching constraint on \((\phi(X),\phi_{\text{inv}}(X))\) is equivalent to matching the distributions of \(CX\) across the source and target domains.
Moreover, given that there is no label shift and \(\phi_{\text{j-DIP}}\) is conditionally invariant across \(\mathcal{P}^{(1)}\) and \(\mathcal{P}^{(\mathfrak{T})}\), we have \(\mathcal{R}^{(\mathfrak{T})}(g\circ\phi_{\text{j-DIP}})=\mathcal{R}^{(1)}(g \circ\phi_{\text{j-DIP}})\) for all \(g\in\mathcal{G}\). Then the optimization objective of population JointDIP becomes equivalent to minimizing \(\mathcal{R}^{(\mathfrak{T})}(g\circ\phi_{\text{j-DIP}})\). Since \(\phi_{\text{inv}}^{0}=(\phi_{\text{inv}},\mathbf{0}_{q-r})\) is a feasible solution to the JointDIP constraint, we conclude that JointDIP is at least as good as the optimal classifier built upon \(\phi_{\text{inv}}^{0}\), i.e.,
\[\mathcal{R}^{(\mathfrak{T})}(h_{\text{j-DIP}})\leq\min_{g\in\mathcal{G}} \mathcal{R}^{(\mathfrak{T})}(g\circ\phi_{\text{inv}}^{0}).\]
**Part (c).** In JointDIP, the joint distributions of \(AX\in\mathbb{R}^{q}\) and \(\phi_{\text{inv}}(X)\in\mathbb{R}^{r}\) are matched across the source and target domains. Let
\[T(X)=\left(\begin{array}{c}AX\\ \phi_{\text{inv}}(X)\end{array}\right)\in\mathbb{R}^{q+r}.\]
The characteristic function of \(T(X^{(1)})\) under the source distribution is then:
\[c^{(1)}(t)=c^{(1)}(t_{1},t_{2}) =\mathbb{E}\left[e^{it^{\top}T(X^{(1)})}\right]\] \[=\frac{1}{L}\sum_{j=1}^{L}\mathbb{E}\left[e^{it^{\top}T(X^{(1)})} \ \Big{|}\ Y^{(1)}=j\right]\] \[=\frac{1}{L}\sum_{j=1}^{L}\mathbb{E}\left[e^{it^{\top}_{1}AX^{(1) }+it^{\top}_{2}\phi_{\text{inv}}(X^{(1)})}\ \Big{|}\ Y^{(1)}=j\right]\] \[=\frac{1}{L}\sum_{j=1}^{L}e^{it^{\top}_{1}Au_{j}}\cdot\underbrace {\mathbb{E}\left[e^{it^{\top}_{1}A\epsilon^{(1)}+it^{\top}_{2}\phi_{\text{inv}} (X^{(1)})}\ \Big{|}\ Y^{(1)}=j\right]}_{:=b_{j}(t_{1},t_{2})},\]
where the second step uses the tower property, and the last step follows from the general anticausal model, in which, conditional on \(Y^{(1)}=j\), \(X^{(1)}=u_{j}+\epsilon^{(1)}\) under the source distribution. Similarly, we can compute the characteristic function of \(T\) under the target distribution:
\[c^{(\mathfrak{T})}(t) =c^{(\mathfrak{T})}(t_{1},t_{2})\] \[=\frac{1}{L}\sum_{j=1}^{L}e^{it^{\top}_{1}A\widetilde{u}_{j}} \cdot\mathbb{E}\left[e^{it^{\top}_{1}A\epsilon^{(\mathfrak{T})}+it^{\top}_{2} \phi_{\text{inv}}(X^{(\mathfrak{T})})}\ \Big{|}\ Y^{(\mathfrak{T})}=j\right]\] \[=\frac{1}{L}\sum_{j=1}^{L}e^{it^{\top}_{1}A\widetilde{u}_{j}} \cdot b_{j}(t_{1},t_{2}).\]
Here the last equality holds because \(\epsilon^{(\mathfrak{T})}\) and \(\phi_{\text{inv}}(X^{(\mathfrak{T})})\) share the same conditional distributions under the target distribution as \(\epsilon^{(1)}\) and \(\phi_{\text{inv}}(X^{(1)})\) under the source distribution. The JointDIP matching penalty enforces \(c^{(1)}(t)=c^{(\mathfrak{T})}(t)\) for all \(t\), or
\[\sum_{j=1}^{L}e^{it^{\top}_{1}Au_{j}}b_{j}(t_{1},t_{2})=\sum_{j=1}^{L}e^{it^{ \top}_{1}A\widetilde{u}_{j}}b_{j}(t_{1},t_{2})\text{ for all }t_{1}\in\mathbb{R}^{q},t_{2}\in\mathbb{R}^{r}.\]
Taking partial derivatives of both sides with respect to \(t_{1}\),
\[\left(\sum_{j=1}^{L}Au_{j}e^{it_{1}^{\top}Au_{j}}b_{j}(t_{1},t_{2})\right)i+\sum_{j=1}^{L}e^{it_{1}^{\top}Au_{j}}\frac{\partial b_{j}(t_{1},t_{2})}{\partial t_{1}}\] \[=\left(\sum_{j=1}^{L}A\widetilde{u}_{j}e^{it_{1}^{\top}A\widetilde{u}_{j}}b_{j}(t_{1},t_{2})\right)i+\sum_{j=1}^{L}e^{it_{1}^{\top}A\widetilde{u}_{j}}\frac{\partial b_{j}(t_{1},t_{2})}{\partial t_{1}}.\]
Setting \(t_{1}=0\), it follows that
\[\sum_{j=1}^{L}Au_{j}\cdot b_{j}(0,t_{2})=\sum_{j=1}^{L}A\widetilde{u}_{j}\cdot b _{j}(0,t_{2}),\]
or equivalently,
\[\sum_{j=1}^{L}Au_{j}\cdot\mathbb{E}\left[e^{it_{2}^{\top}\phi_{\rm inv}(X^{(1) })}\ \Big{|}\ Y^{(1)}=j\right]=\sum_{j=1}^{L}A\widetilde{u}_{j}\cdot\mathbb{E} \left[e^{it_{2}^{\top}\phi_{\rm inv}(X^{(1)})}\ \Big{|}\ Y^{(1)}=j\right]. \tag{63}\]
For any vector \(a\in\mathbb{R}^{r}\), let \(t_{2}=ka\) for \(k\in\mathbb{R}\). In Eq. (63), take derivative with respect to \(k\) up to \(L-1\) times and set \(k=0\), then we get
\[\begin{cases}\sum_{j=1}^{L}Au_{j}=\sum_{j=1}^{L}A\widetilde{u}_{j},\\ \sum_{j=1}^{L}Au_{j}\cdot\mathbb{E}\left[a^{\top}\phi_{\rm inv}(X^{(1)})\ \big{|}\ Y^{(1)}=j\right]=\sum_{j=1}^{L}A\widetilde{u}_{j}\cdot\mathbb{E} \left[a^{\top}\phi_{\rm inv}(X^{(1)})\ \big{|}\ Y^{(1)}=j\right],\\ \sum_{j=1}^{L}Au_{j}\cdot\mathbb{E}\left[\left(a^{\top}\phi_{\rm inv}(X^{(1)}) \right)^{2}\ \Big{|}\ Y^{(1)}=j\right]=\sum_{j=1}^{L}A\widetilde{u}_{j}\cdot \mathbb{E}\left[\left(a^{\top}\phi_{\rm inv}(X^{(1)})\right)^{2}\ \Big{|}\ Y^{(1)}=j\right],\\ \ldots\\ \sum_{j=1}^{L}Au_{j}\cdot\mathbb{E}\left[\left(a^{\top}\phi_{\rm inv}(X^{(1)}) \right)^{L-1}\ \Big{|}\ Y^{(1)}=j\right]=\sum_{j=1}^{L}A\widetilde{u}_{j}\cdot \mathbb{E}\left[\left(a^{\top}\phi_{\rm inv}(X^{(1)})\right)^{L-1}\ \Big{|}\ Y^{(1)}=j\right],\end{cases}\]
where we use the fact that \(\phi_{\rm inv}\) is a conditionally invariant feature mapping. Writing the above linear system in a matrix form, we have
\[\underbrace{\left(\begin{array}{cccc}1&1&\cdots&1\\ m_{1}^{1}(a)&m_{2}^{1}(a)&\cdots&m_{L}^{1}(a)\\ \vdots&\vdots&\ddots&\vdots\\ m_{1}^{L-1}(a)&m_{2}^{L-1}(a)&\cdots&m_{L}^{L-1}(a)\end{array}\right)}_{=C_{ \phi_{\rm inv}}(a)}\begin{pmatrix}(Au_{1}-A\widetilde{u}_{1})^{\top}\\ (Au_{2}-A\widetilde{u}_{2})^{\top}\\ \vdots\\ (Au_{L}-A\widetilde{u}_{L})^{\top}\end{pmatrix}=0,\ \ \forall\ell\in\{1,2,\ldots,L\},\]
where \(m_{j}^{l}(a)=\mathbb{E}\left[\left(a^{\top}\phi_{\rm inv}(X^{(1)})\right)^{l} \Big{|}\ Y^{(1)}=j\right]\). By assumption that \(C_{\phi_{\rm inv}}(a)\) is full rank, we get \(Au_{j}=A\widetilde{u}_{j}\) for all \(j\in\{1,\ldots,L\}\), i.e.,
\[\mathcal{P}^{(1)}_{\phi_{\rm\mbox{-}DIP}(X)|Y=y}=\mathcal{P}^{(\Xi)}_{\phi_{ \rm\mbox{-}DIP}(X)|Y=y},\ \ \forall y\in\{1,2,\ldots,L\},\]
proving the result.
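The full-rank condition on \(C_{\phi_{\text{inv}}}(a)\) used above is easy to probe numerically for a given conditionally invariant feature distribution. The Python sketch below (illustrative only; the Gaussian conditional distributions and the choice of \(a\) are assumptions) builds the empirical moment matrix from samples and reports its rank and condition number.

```python
import numpy as np

def moment_matrix(features_by_label, a):
    """Build C_{phi_inv}(a): entry (l, j) is the l-th moment of a^T phi_inv(X)
    conditional on Y = j, with a row of ones for l = 0."""
    L = len(features_by_label)
    C = np.ones((L, L))
    for j, feats in enumerate(features_by_label):
        proj = feats @ a
        for l in range(1, L):
            C[l, j] = np.mean(proj ** l)
    return C

# Three labels, 2-d conditionally invariant features with distinct means.
rng = np.random.default_rng(4)
samples = [rng.normal(mu, 0.5, size=(5000, 2)) for mu in ([0, 0], [1, 0], [0, 2])]
C = moment_matrix(samples, a=np.array([1.0, 1.0]))
print(np.linalg.matrix_rank(C), np.linalg.cond(C))
```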
**Lemma 2**: _Consider the mixture distributions \(P_{1},P_{2}\) given by_
\[P_{1}=\sum_{j=1}^{L}w_{j}P(u_{j})\mbox{ and }P_{2}=\sum_{j=1}^{L}\widetilde{w}_{j}P (\widetilde{u}_{j}),\]
_where \(w_{j},\widetilde{w}_{j}\) are the mixing weights such that \(\sum_{j}w_{j}=\sum_{j}\widetilde{w}_{j}=1\) and \(w_{j},\widetilde{w}_{j}\geq 0\), and \(u_{j},\widetilde{u}_{j}\in\mathbb{R}^{p}\) are the fixed vectors for the \(j\)-th components, and \(P(u_{j}),P(\widetilde{u}_{j})\) denote the distribution \(P\) location shifted by \(u_{j},\widetilde{u}_{j}\) respectively. If \(P_{1}=P_{2}\), then there exists a permutation \(\pi\) such that \(u_{j}=\widetilde{u}_{\pi j}\) for all \(j\)._
**Proof** Let \(\nu=\sum_{j=1}^{L}w_{j}\delta_{u_{j}}\) and \(\widetilde{\nu}=\sum_{j=1}^{L}\widetilde{w}_{j}\delta_{\widetilde{u}_{j}}\) be the mixing distributions of \(P_{1}\) and \(P_{2}\) respectively. Let \(X\sim P_{1}\) and \(\widetilde{X}\sim P_{2}\). Then we can write \(X=U+Z\) and \(\widetilde{X}=\widetilde{U}+\widetilde{Z}\) where
\[\begin{cases}U\sim\nu,Z\sim P\mbox{ independent of }U;\\ \widetilde{U}\sim\widetilde{\nu},\widetilde{Z}\sim P\mbox{ independent of }\widetilde{U}.\end{cases}\]
The characteristic functions of \(X\) and \(\widetilde{X}\) can be expressed as
\[\varphi_{X}(t)=\varphi_{U}(t)\varphi_{Z}(t),\ \ \varphi_{\widetilde{X}}(t)= \varphi_{\widetilde{U}}(t)\varphi_{\widetilde{Z}}(t).\]
Since \(\varphi_{X}(t)=\varphi_{\widetilde{X}}(t)\) and \(\varphi_{Z}(t)=\varphi_{\widetilde{Z}}(t)\), it follows that \(\varphi_{U}(t)=\varphi_{\widetilde{U}}(t)\), or equivalently \(\nu=\widetilde{\nu}\).
Next, let \(S=\{u_{1},\ldots,u_{L}\}\) and \(\widetilde{S}=\{\widetilde{u}_{1},\ldots,\widetilde{u}_{L}\}\) denote the support sets of \(\nu\) and \(\widetilde{\nu}\). We can easily extend Lemma 2 in Wu and Yang (2020) to the high-dimensional setting to show that
\[d_{H}(S,\widetilde{S})\leq L\cdot W_{1}(\nu,\widetilde{\nu})=0,\]
where \(d_{H}(S,\widetilde{S})\) is the Hausdorff distance between \(S\) and \(\widetilde{S}\), defined as
\[d_{H}(S,\widetilde{S})=\max\left\{\sup_{x\in S}\inf_{\widetilde{x}\in \widetilde{S}}\|x-\widetilde{x}\|_{1},\sup_{\widetilde{x}\in\widetilde{S}}\inf _{x\in S}\|x-\widetilde{x}\|_{1}\right\},\]
and where \(W_{1}\) denotes the \(1\)-Wasserstein distance. By definition of \(d_{H}(S,\widetilde{S})\), this proves that we can find a permutation \(\pi\) such that \(u_{j}=\widetilde{u}_{\pi j}\) for all \(j\), thus proving the lemma. \(\blacksquare\)
**Lemma 3**: _Suppose that \(\{u_{1},\ldots,u_{L}\}\) and \(\{\widetilde{u}_{1},\ldots,\widetilde{u}_{L}\}\subseteq\mathbb{R}^{p}\) are collections of vectors such that_
\[\dim\left(\operatorname{span}\{u_{1}-\widetilde{u}_{1},\ldots,u_{L}- \widetilde{u}_{L}\}\right)=k<p.\]
_Then for any \(A\in\mathbb{R}^{q\times p}\) satisfying \(Au_{j}=A\widetilde{u}_{j}\) for all \(j=1,\ldots,L\), we have \(\operatorname{rank}(A)\leq p-k\). The equality is achieved if and only if the rows of \(A\) span \(\operatorname{span}\{u_{1}-\widetilde{u}_{1},\ldots,u_{L}-\widetilde{u}_{L}\}^ {\perp}\)._
**Proof** This lemma is a simple consequence of linear algebra. By the condition of the lemma, \(A(u_{j}-\widetilde{u}_{j})=0\) for all \(j=1,\ldots,L\). This implies that the row space of \(A\) is orthogonal to the subspace spanned by \(\{u_{1}-\widetilde{u}_{1},\ldots,u_{L}-\widetilde{u}_{L}\}\), i.e.,
\[\operatorname{row}(A)\perp\operatorname{span}\{u_{1}-\widetilde{u}_{1}, \ldots,u_{L}-\widetilde{u}_{L}\}.\]
In particular, \(\dim(\operatorname{row}(A))\leq p-k\), with equality achieved if and only if \(\operatorname{row}(A)\) is the orthogonal complement of \(\operatorname{span}\{u_{1}-\widetilde{u}_{1},\dots,u_{L}-\widetilde{u}_{L}\}\). By the rank theorem, \(\operatorname{rank}(A)=\dim(\operatorname{row}(A))\leq p-k\), proving the lemma.
## Appendix D Experiment details
This section provides details on various DA algorithms, training procedures, network architectures, and hyperparameter selection strategies that we employed in our numerical experiments (Section 5).
### DA algorithms in numerical experiments
Table 8 provides a summary of the algorithms that we used in our numerical experiments. Both mean distance and maximum mean discrepancy (MMD) were used in DIP and CIP, but for JointDIP, we only considered MMD distance, because joint matching with mean distance is essentially the same as the original DIP. In addition to the DA algorithms introduced in this work, we also include IRM (Arjovsky et al., 2019), V-REx (Krueger et al., 2021), and groupDRO (Sagawa et al., 2019) in our experiments.
\begin{table}
\begin{tabular}{l|c|c|l|c} \hline
DA Algorithm & \(\widehat{u}^{(m)}\) Estimation & \(\phi_{\text{inv}}\) & Minimization Objective & \(\mathfrak{D}\left(\cdot,\cdot\right)\) \\ \hline
Tar & - & - & \(\widehat{\mathcal{R}}^{(\mathcal{T})}(g\circ\phi)\) & - \\ \hline
ERM & - & - & \(\widehat{\mathcal{R}}^{(1)}(g\circ\phi)\) & - \\ \hline
ERM-Pool & - & - & \(\frac{1}{M}\sum_{m=1}^{M}\widehat{\mathcal{R}}^{(m)}(g\circ\phi)\) & - \\ \hline
DIP & - & - & \(\widehat{\mathcal{R}}^{(1)}(g\circ\phi)+\lambda_{\text{DIP}}\cdot\mathfrak{D}\left(\widehat{\mathcal{P}}^{(1)}_{\phi(X)},\widehat{\mathcal{P}}^{(\mathcal{T})}_{\phi(X)}\right)\) & mean, MMD \\ \hline
DIP-Pool & - & - & \(\frac{1}{M}\sum_{m=1}^{M}\left\{\widehat{\mathcal{R}}^{(m)}(g\circ\phi)+\lambda_{\text{DIP}}\cdot\mathfrak{D}\left(\widehat{\mathcal{P}}^{(m)}_{\phi(X)},\widehat{\mathcal{P}}^{(\mathcal{T})}_{\phi(X)}\right)\right\}\) & mean, MMD \\ \hline
CIP & - & - & \(\frac{1}{M}\sum_{m=1}^{M}\widehat{\mathcal{R}}^{(m)}(g\circ\phi)+\frac{\lambda_{\text{CIP}}}{2M}\cdot\sum_{m,m^{\prime}=1}^{M}\sum_{y=1}^{L}\mathfrak{D}\left(\widehat{\mathcal{P}}^{(m)}_{\phi(X)|Y=y},\widehat{\mathcal{P}}^{(m^{\prime})}_{\phi(X)|Y=y}\right)\) & mean, MMD \\ \hline
IW-ERM & ERM-Pool & - & \(\widehat{\mathcal{R}}^{(1)}(g\circ\phi;\widehat{u}^{(1)})\) & - \\ \hline
IW-CIP & CIP & - & \(\frac{1}{M}\sum_{m=1}^{M}\widehat{\mathcal{R}}^{(m)}(g\circ\phi;\widehat{u}^{(m)})+\frac{\lambda_{\text{CIP}}}{2M}\cdot\sum_{m,m^{\prime}=1}^{M}\sum_{y=1}^{L}\mathfrak{D}\left(\widehat{\mathcal{P}}^{(m)}_{\phi(X)|Y=y},\widehat{\mathcal{P}}^{(m^{\prime})}_{\phi(X)|Y=y}\right)\) & mean, MMD \\ \hline
JointDIP & - & \(\widehat{\phi}_{\text{CIP}}\) & \(\widehat{\mathcal{R}}^{(1)}(g\circ\phi)+\lambda_{\text{DIP}}\cdot\mathfrak{D}\left(\widehat{\mathcal{P}}^{(1)}_{(\phi,\widehat{\phi}_{\text{CIP}})(X)},\widehat{\mathcal{P}}^{(\mathcal{T})}_{(\phi,\widehat{\phi}_{\text{CIP}})(X)}\right)\) & MMD \\ \hline
JointDIP-Pool & - & \(\widehat{\phi}_{\text{CIP}}\) & \(\frac{1}{M}\sum_{m=1}^{M}\left\{\widehat{\mathcal{R}}^{(m)}(g\circ\phi)+\lambda_{\text{DIP}}\cdot\mathfrak{D}\left(\widehat{\mathcal{P}}^{(m)}_{(\phi,\widehat{\phi}_{\text{CIP}})(X)},\widehat{\mathcal{P}}^{(\mathcal{T})}_{(\phi,\widehat{\phi}_{\text{CIP}})(X)}\right)\right\}\) & MMD \\ \hline
IW-JointDIP & CIP & \(\widehat{\phi}_{\text{CIP}}\) & \(\widehat{\mathcal{R}}^{(1)}(g\circ\phi;\widehat{u}^{(1)})+\lambda_{\text{DIP}}\cdot\mathfrak{D}\left(\widehat{\mathcal{P}}^{(1)}_{(\phi,\widehat{\phi}_{\text{CIP}})(X)},\widehat{\mathcal{P}}^{(\mathcal{T})}_{(\phi,\widehat{\phi}_{\text{CIP}})(X)}\right)\) & MMD \\ \hline
IRM & - & - & \(\frac{1}{M}\sum_{m=1}^{M}\left\{\widehat{\mathcal{R}}^{(m)}(g\circ\phi)+\lambda_{\text{IRM}}\cdot\left\|\nabla_{w\,|\,w=1.0}\widehat{\mathcal{R}}^{(m)}(w\cdot g\circ\phi)\right\|_{2}^{2}\right\}\) & - \\ \hline
V-REx & - & - & \(\frac{1}{M}\sum_{m=1}^{M}\widehat{\mathcal{R}}^{(m)}(g\circ\phi)+\lambda_{\text{VREx}}\cdot\operatorname{Var}\left(\widehat{\mathcal{R}}^{(m)}(g\circ\phi)\right)\) & - \\ \hline
groupDRO & - & - & \(\sup_{\sum_{m}q_{m}=1,\;q_{m}\geq 0}\;\sum_{m=1}^{M}q_{m}\,\widehat{\mathcal{R}}^{(m)}(g\circ\phi)\) & - \\ \hline
\end{tabular}
\end{table}
Table 8: DA algorithms used in numerical experiments.
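When \(\mathfrak{D}(\cdot,\cdot)\) is the MMD, the matching penalties in the DIP, CIP, and JointDIP objectives above are estimated from finite feature samples with a kernel two-sample statistic. The snippet below is a minimal NumPy sketch of a biased Gaussian-kernel estimate of the squared MMD between two feature batches; the bandwidth value, function names, and the random example data are illustrative assumptions rather than the exact implementation used in our experiments.

```python
import numpy as np

def gaussian_gram(x, y, bandwidth=1.0):
    # RBF kernel Gram matrix between rows of x (n, d) and y (m, d)
    sq_dists = (np.sum(x**2, axis=1)[:, None]
                + np.sum(y**2, axis=1)[None, :]
                - 2.0 * x @ y.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(x, y, bandwidth=1.0):
    # Biased estimate of MMD^2 between the empirical distributions of x and y
    return (gaussian_gram(x, x, bandwidth).mean()
            + gaussian_gram(y, y, bandwidth).mean()
            - 2.0 * gaussian_gram(x, y, bandwidth).mean())

# Example: a DIP-style penalty between source and target features phi(X),
# which would enter the objective as lambda_DIP * mmd_squared(...)
rng = np.random.default_rng(0)
phi_src = rng.normal(size=(128, 10))          # illustrative source features
phi_tgt = rng.normal(loc=0.3, size=(128, 10)) # illustrative target features
print(mmd_squared(phi_src, phi_tgt))
```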
### Training details and network architectures
Except for Camelyon17, the datasets we used in Section 5 are either generated or perturbed synthetically. To avoid any potential bias that can result from repeatedly running DA algorithms on a single, fixed synthetic dataset, for each classification task, we use 10 different seeds to construct 10 different datasets, and each DA algorithm is run once per seed.
Table 9 shows details of the training parameters and model architectures used in all the experiments in Section 5. For multi-stage DA algorithms such as JointDIP (where CIP is run before solving the JointDIP optimization problem), the number of epochs is the same in each training stage. As for the feature layer used for the CIP and DIP matching penalties, we use the last layer of the neural network in the experiments on SCM, MNIST, and CelebA; for the experiments on Camelyon17, we use the flattened CNN output of DenseNet-121, following the practice in Koh et al. (2021).
### Hyperparameter selection
In our experiments on SCM, MNIST, and CelebA, instead of searching for different hyperparameters for each seed, which would be time-consuming, we choose the same hyperparameters across all seeds. In Appendix D.4, we compare two hyperparameter selection strategies on SCMs (using different hyperparameters for each seed versus using the same hyperparameters across all seeds) and find that the two strategies yield similar results.
In order to select the hyperparameters that give the highest average accuracy, we use a simple grid search. Three different ways of selecting hyperparameters are presented in Gulrajani and Lopez-Paz (2020): training-domain validation set, leave-one-out cross-validation, and test-domain validation set (oracle).
\begin{table}
\begin{tabular}{l|c|c|c|l|c|c} \hline
Classification Task & Epochs & Optimizer & Batch Size & NN Architecture & Feature Layer & \(\mathfrak{D}\left(\cdot,\cdot\right)\) \\ \hline
SCM & 50 & Adam (lr=1e-2) & 100 & Linear model & Last layer & mean \\ \hline
SCM binary & 50 & Adam (lr=1e-2) & 100 & Linear(\(10,10\)) or Linear(\(10,10\))-ReLU-Linear(\(10,2\)) & Last layer & mean \\ \hline
MNIST & 20 & Adam (lr=1e-3) & 256 & Conv2d\((1,20,5,1)\)-ReLU-MaxPool2d\((2)\)-Conv2d\((20,50,5,1)\)-ReLU-MaxPool2d\((2)\)-Linear\((800,500)\)-ReLU-Linear\((500,2)\) & Last layer & \\ \hline
CelebA & 10 & Adam (lr=1e-3) & 64 & Conv2d\((3,16,5,1)\)-ReLU-MaxPool2d\((2)\)-Conv2d\((16,32,5,1,1)\)-ReLU-MaxPool2d\((2)\)-Conv2d\((32,64,5,1)\)-ReLU-MaxPool2d\((2)\)-Linear\((960,256)\)-ReLU-Linear\((256,2)\) & Last layer & \\ \hline
Camelyon17 & 5 & SGD (lr=1e-3, weight\_decay=0.01, momentum=0.9) & 32 & DenseNet-121 (pretrained=False) & Flattened CNN output & \\ \hline
\end{tabular}
\end{table}
Table 9: Training details of each classification task.
The training-domain validation set approach is not suitable for us, as it assumes the target domain is very similar to the training domains, which does not apply to our case. The leave-one-out cross-validation approach is cumbersome and would also suffer if the validation domain is significantly different from the target domain. The test-domain validation set approach has often been adopted in domain adaptation papers (Ganin et al., 2016; Arjovsky et al., 2019; Krueger et al., 2021) when an appropriate validation set is unavailable. However, this approach contradicts the domain adaptation principle that most target labels should be unavailable. Therefore, we restrict access to just 10% of the labeled samples from the target domain in each seed, and check the accuracy on these labeled samples only at the final checkpoint during hyperparameter selection.
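As a concrete illustration of this protocol, the sketch below enumerates a hyperparameter grid, trains one model per setting, and scores each setting on a fixed 10% labeled subset of the target domain at the final checkpoint only. The `train_fn` callable and the grid values are hypothetical placeholders for the actual training pipeline.

```python
import itertools
import numpy as np

def select_hyperparameters(train_fn, grid, target_x, target_y, frac=0.1, seed=0):
    """Pick the grid point with the best accuracy on a small labeled target subset.

    train_fn(**params) is assumed (hypothetically) to return a fitted model with
    .predict(); only `frac` of the target labels are revealed, and only at the
    final checkpoint of each run.
    """
    rng = np.random.default_rng(seed)
    n = len(target_y)
    idx = rng.choice(n, size=max(1, int(frac * n)), replace=False)
    val_x, val_y = target_x[idx], target_y[idx]

    best_params, best_acc = None, -np.inf
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        model = train_fn(**params)                     # hypothetical training call
        acc = np.mean(model.predict(val_x) == val_y)   # final-checkpoint accuracy
        if acc > best_acc:
            best_params, best_acc = params, acc
    return best_params, best_acc

# Example grid, mirroring Table 10 (values are powers of ten)
grid = {"lambda_dip": [10.0**p for p in range(-2, 3)]}
```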
Note that Camelyon17 provides a validation set. Hence, we adhere to the standard submission guidelines from WILDS (Koh et al., 2021; Sagawa et al., 2021) for this dataset, selecting hyperparameters based on accuracy on the validation set.
### Different hyperparameter vs. same hyperparameter across all seeds
We carry out a comparison of two strategies for selecting hyperparameters on SCMs: (1) using the same hyperparameters across all seeds, and (2) using different hyperparameters for different seeds. We observe that in most cases the second approach tends to give better performance. However, this difference typically does not affect the ordering of the top-performing algorithms; the best performing algorithm tends to remain the same regardless of the hyperparameter selection strategy.
\begin{table}
\begin{tabular}{l|l|l} \hline \hline
Algorithm & Hyperparameter & Grid Search Space \\ \hline
DIP (mean / MMD, Pool) & \(\lambda^{*}_{\text{DIP}}\) & \(10^{\{-2,-1,0,1,2\}}\) \\ \hline
CIP (mean / MMD) & \(\lambda^{*}_{\text{CIP}}\) & \(10^{\{-2,-1,0,1,2\}}\) \\ \hline
IW-DIP (mean / MMD, Pool) & \(\lambda_{\text{IW-DIP}}\) & \(10^{\{-2,-1,0,1,2\}}\) \\ \hline
IW-CIP (mean / MMD) & \(\lambda_{\text{CIP}}\) & \(10^{\{-2,-1,0,1,2\}}\) \\
 & \(\lambda_{\text{IW-CIP}}\) & \(10^{\{-2,-1,0,1,2\}}\) \\ \hline
JointDIP (Pool) & \(\lambda^{*}_{\text{CIP}}\) & \(10^{\{-2,-1,0,1,2\}}\) \\
 & \(\lambda^{*}_{\text{JointDIP}}\) & \(10^{\{-2,-1,0,1,2\}}\) \\ \hline
IW-JointDIP & \(\lambda_{\text{CIP}}\) & \(10^{\{-2,-1,0,1,2\}}\) \\
 & \(\lambda_{\text{JointDIP}}\) & \(10^{\{-2,-1,0,1,2\}}\) \\ \hline
IRM & \(\lambda_{\text{IRM}}\) & \(10^{\{-1,0,1,2,3,4\}}\) \\
 & iterations annealing & \(0,10,100,1000,3000\) \\ \hline
V-REx & \(\lambda_{\text{VREx}}\) & \(10^{\{-1,0,1,2,3,4\}}\) \\
 & iterations annealing & \(0,10,100,1000,3000\) \\ \hline
groupDRO & eta & \(10^{\{-2,-1,0,1\}}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 10: Hyperparameter search space. In the experiments on Camelyon17, the search space of the hyperparameters marked with \({}^{*}\) is \(10^{\{-3,-2,-1,0,1,2\}}\).
For the sake of simplicity and time efficiency, we therefore adopt the first approach in our final results.
## Appendix E Additional Experiments
In this section, we provide additional experiments on SCMs and MNIST to investigate the effect of the number of source domains on the generalization performance of CIP and its variants.
### Effect of number of source domains in linear SCMs
To show that a sufficient number of source domains is necessary for finding CICs, we conduct experiments on linear SCMs to compare the results of using fewer versus more source domains. Table 12 reports the risk difference \(\widehat{\mathcal{R}}^{(\mathfrak{T})}(h)-\widehat{\mathcal{R}}(h;\widehat{w})\) between the source and target domains for four different CIP-based methods. We observe that the risk difference tends to be closer to 0 for larger values of \(M\) in SCM II, III, and IV, whereas smaller values of \(M\) lead to a larger risk difference. This finding suggests that CIP and IW-CIP may underperform when the number of source domains \(M\) is small: they can only find feature representations that are conditionally invariant across the source domains, and with few domains it becomes challenging to obtain representations that are conditionally invariant across both source and target domains. Note that in SCM I the number of domains does not make a significant difference, because CICs do not exist in SCM I.
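For reference, the quantity reported in Table 12 can be computed as follows; the sketch assumes a 0-1 loss, a single pooled source sample, and a per-class weight vector estimated by CIP, with function and variable names chosen for illustration only.

```python
import numpy as np

def empirical_risk(model, x, y):
    # 0-1 loss on a labeled sample
    return np.mean(model.predict(x) != y)

def weighted_source_risk(model, x, y, w_hat):
    # Importance-weighted 0-1 loss: each sample is reweighted by w_hat[class]
    losses = (model.predict(x) != y).astype(float)
    return np.mean(w_hat[y] * losses)

def risk_difference(model, src_x, src_y, tgt_x, tgt_y, w_hat):
    # R_hat^(T)(h) - R_hat(h; w_hat); values near zero indicate that the
    # weighted source risk is a faithful proxy for the target risk
    return (empirical_risk(model, tgt_x, tgt_y)
            - weighted_source_risk(model, src_x, src_y, w_hat))
```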
The minimum number of source domains necessary for finding CICs depends on the types of interventions and the hypothesis class in use. Proposition 3 and Lemma 1 show that the number of source domains needs to be greater than the dimension of the perturbations, which is the case for the linear SCMs above.
\begin{table}
\begin{tabular}{l|cc|cc|cc|cc} \hline \hline
 & \multicolumn{2}{c|}{SCM I} & \multicolumn{2}{c|}{SCM II} & \multicolumn{2}{c|}{SCM III} & \multicolumn{2}{c}{SCM IV} \\ \hline
Mean shift & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c}{Y} \\
CICs & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c}{Y} \\
Label shift & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c|}{N} & \multicolumn{2}{c}{Y} \\
Label-flipping features & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{N} & \multicolumn{2}{c|}{Y} & \multicolumn{2}{c}{Y} \\ \hline
Hyperparameter & Different & Same & Different & Same & Different & Same & Different & Same \\ \hline
DIP-mean & **88.2\(\pm\)1.9** & **87.6\(\pm\)1.5** & 73.3\(\pm\)10.7 & 62.0\(\pm\)2.9 & 36.3\(\pm\)11.8 & 34.5\(\pm\)14.9 & 35.6\(\pm\)14.0 & 35.3\(\pm\)14.6 \\
DIP-MMD & 86.2\(\pm\)2.2 & 86.6\(\pm\)2.2 & 72.0\(\pm\)16.2 & 59.9\(\pm\)3.3 & 77.5\(\pm\)7.9 & 63.3\(\pm\)31.7 & 74.0\(\pm\)4.2 & 60.2\(\pm\)30.1 \\
DIP-Pool-mean & 86.2\(\pm\)2.8 & 86.4\(\pm\)2.2 & 70.8\(\pm\)10.1 & 60.1\(\pm\)3.1 & 81.9\(\pm\)1.8 & 82.0\(\pm\)1.1 & 81.9\(\pm\)3.8 & 82.3\(\pm\)3.8 \\
DIP-Pool-MMD & 86.2\(\pm\)1.9 & 85.5\(\pm\)3.1 & 74.2\(\pm\)13.5 & 61.3\(\pm\)9.3 & 82.5\(\pm\)1.4 & 83.3\(\pm\)0.8 & 82.2\(\pm\)4.1 & 82.5\(\pm\)3.8 \\
CIP-mean & 56.6\(\pm\)11.8 & 55.9\(\pm\)12.0 & 79.3\(\pm\)6.4 & 75.7\(\pm\)6.5 & 82.2\(\pm\)1.5 & 81.8\(\pm\)1.3 & 83.2\(\pm\)1.8 & 82.1\(\pm\)1.2 \\
CIP-MMD & 63.2\(\pm\)14.1 & 60.4\(\pm\)14.8 & 84.1\(\pm\)9.6 & 73.5\(\pm\)7.0 & 81.7\(\pm\)2.7 & 81.7\(\pm\)2.6 & 83.6\(\pm\)2.8 & 82.7\(\pm\)1.6 \\
IW-CIP-mean & 54.1\(\pm\)11.8 & 54.0\(\pm\)11.8 & 90.8\(\pm\)1.4 & 90.4\(\pm\)0.8 & 80.3\(\pm\)4.2 & 81.2\(\pm\)4.1 & 83.1\(\pm\)2.3 & 83.8\(\pm\)2.2 \\
IW-CIP-MMD & 64.4\(\pm\)12.7 & 56.8\(\pm\)13.6 & 89.5\(\pm\)4.3 & 90.4\(\pm\)0.8 & 80.9\(\pm\)3.5 & 80.5\(\pm\)6.3 & 84.1\(\pm\)1.9 & 83.5\(\pm\)2.8 \\
IW-DIP-mean & 54.3\(\pm\)11.7 & 54.3\(\pm\)11.7 & **92.7\(\pm\)2.1** & **92.1\(\pm\)2.7** & 37.6\(\pm\)12.9 & 37.2\(\pm\)14.9 & 66.6\(\pm\)5.2 & 64.2\(\pm\)7.3 \\
IW-DIP-MMD & 75.2\(\pm\)12.6 & 68.0\(\pm\)19.4 & 91.5\(\pm\)2.9 & 90.1\(\pm\)1.1 & 70.9\(\pm\)13.8 & 65.1\(\pm\)23.6 & 81.5\(\pm\)21.1 & 80.1\(\pm\)2.6 \\
JointDIP & 85.8\(\pm\)2.2 & 86.8\(\pm\)1.9 & 77.7\(\pm\)12.5 & 70.6\(\pm\)6.2 & **85.3\(\pm\)2.8** & **85.4\(\pm\)2.1** & 81.6\(\pm\)3.9 & 82.8\(\pm\)1.9 \\
IW-JointDIP & 72.7\(\pm\)17.3 & 68.0\(\pm\)19.4 & 91.3\(\pm\)2.8 & 90.0\(\pm\)1.3 & 82.9\(\pm\)3.3 & 82.9\(\pm\)8.1 & **85.4\(\pm\)1.9** & **85.1\(\pm\)3.7** \\
IRM & 66.1\(\pm\)12.1 & 56.7\(\pm\)10.4 & 86.7\(\pm\)5.4 & 71.9\(\pm\)17.9 & 82.0\(\pm\)1.9 & 80.2\(\pm\)1.8 & 80.2\(\pm\)2.2 & 83.7\(\pm\)3.3 \\
V-REx & 63.0\(\pm\)13.1 & 55.6\(\pm\)11.5 & 72.7\(\pm\)26.3 & 62.9\(\pm\)25.7 & 80.8\(\pm\)7.6 & 80.4\(\pm\)7.6 & 84.0\(\pm\)4.0 & 83.8\(\pm\)3.7 \\
groupDRO & 54.4\(\pm\)10.0 & 54.4\(\pm\)10.0 & 65.2\(\pm\)26.6 & 64.2\(\pm\)25.7 & 82.5\(\pm\)6.7 & 81.3\(\pm\)7.4 & 84.7\(\pm\)3.8 & 84.3\(\pm\)3.2 \\ \hline \hline
\end{tabular}
\end{table}
Table 11: Target accuracy in linear SCMs under different hyperparameter tuning strategies.
In this context, we also consider another SCM under binary interventions, as described below. The purpose is to demonstrate that the number of source domains required for CIP to successfully find CICs can vary when the hypothesis class changes.
* SCM binary: The data generation model is \[Y^{(m)}=\text{Bernoulli}(0.5),\] \[X^{(m)}_{[1:5]}=0.2(Y^{(m)}-0.5)\cdot\mathds{1}_{5}+0.4\cdot\mathcal{N}(0,\mathbb{I}_{5}),\] \[X^{(m)}_{[6:10]}=0.2(Y^{(m)}-0.5)\cdot\mathds{1}_{5}+0.4\cdot\mathcal{N}(0,\mathbb{I}_{5})+A^{(m)},\] where \(A^{(m)}\) is randomly sampled from \(\{a\in\mathbb{R}^{5}\mid a_{i}\in\{-1,1\}\}\) (\(1\leq m\leq M\)) without replacement in the source domains, while the target domain undergoes a larger intervention with \(A^{(\mathbb{T})}=(2,2,2,2,2)^{\top}\). A minimal data-generation sketch following these equations is given after this item.
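The following sketch draws samples from the SCM-binary model above; the sample sizes and number of source domains are illustrative, and, for brevity, the sketch does not enforce that the source sign patterns are drawn without replacement.

```python
import numpy as np

def sample_scm_binary(n, a_m, rng):
    """Draw n samples from the SCM-binary model for a domain with intervention a_m (shape (5,))."""
    y = rng.integers(0, 2, size=n)                             # Y ~ Bernoulli(0.5)
    shift = 0.2 * (y - 0.5)[:, None]
    x_first = shift + 0.4 * rng.standard_normal((n, 5))        # X_[1:5]
    x_last = shift + 0.4 * rng.standard_normal((n, 5)) + a_m   # X_[6:10] with intervention
    return np.concatenate([x_first, x_last], axis=1), y

rng = np.random.default_rng(0)
# Source interventions: sign patterns in {-1, 1}^5 (distinctness not enforced here);
# the target domain uses the larger intervention (2, 2, 2, 2, 2)
a_source = rng.choice([-1.0, 1.0], size=(6, 5))                # e.g. M = 6 source domains
a_target = np.full(5, 2.0)
x_src, y_src = sample_scm_binary(1000, a_source[0], rng)
x_tgt, y_tgt = sample_scm_binary(1000, a_target, rng)
```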
We use two different models in this experiment: a linear model and a two-layer neural network. For the linear model, as shown in Proposition 3 and Lemma 1, the number of source domains should be at least on the order of the dimension of intervention plus one, that is, \(M\geq 6\). We vary the number of source domains \(M\) from \(2\) to \(7\) and evaluate the performance of CIP-mean and CIP-MMD in Table 13 (there is no label shift between source and target domains). We find that when \(M\geq 6\) the target accuracy and risk are close to those in the source domains. We also observe that using more source domains than the theoretically required minimum number of source domains can further improve performance in practice.
For the two-layer neural network, we vary \(M\) from \(2^{1}\) to \(2^{5}\). In Table 14, we observe that when \(M=32\), the target accuracy and target risk are close to the source accuracy and source risk. This suggests that if the hypothesis class is nonlinear and complex, CIP may need more domains to accurately identify CICs. In this case, \(M=2^{5}=32\) source domains covering all possible binary interventions are required.
### Effect of number of source domains in MNIST
Next we compare the performance of CIP-based methods on MNIST with a varying number of source domains (\(M=3\) versus \(M=5\)). For \(M=3\), we use the middle three domains with rotation angles of \(-15^{\circ},0^{\circ}\), and \(15^{\circ}\) from Section 5.2; for \(M=5\), we use all of the source domains from our experiments in Section 5.2.
\begin{table}
\begin{tabular}{l|cc|cc|cc|cc} \hline
 & \multicolumn{2}{c|}{SCM I} & \multicolumn{2}{c|}{SCM II (\(\times 10^{-2}\))} & \multicolumn{2}{c|}{SCM III (\(\times 10^{-2}\))} & \multicolumn{2}{c}{SCM IV (\(\times 10^{-2}\))} \\ \hline
DA Algorithm & \(M=2\) & \(M=3\) & \(M=2\) & \(M=11\) & \(M=2\) & \(M=11\) & \(M=2\) & \(M=11\) \\ \hline
CIP-mean & \(1.7\pm 2.0\) & \(2.1\pm 1.4\) & \(11.3\pm 57.9\) & \(-0.1\pm 0.9\) & \(22.2\pm 26.3\) & \(1.3\pm 3.6\) & \(14.2\pm 25.3\) & \(0.1\pm 0.9\) \\
CIP-MMD & \(1.7\pm 1.9\) & \(1.3\pm 1.1\) & \(-5.9\pm 7.6\) & \(3.6\pm 9.6\) & \(6.2\pm 7.1\) & \(2.1\pm 5.2\) & \(2.0\pm 4.7\) & \(0.3\pm 2.1\) \\
IW-CIP-mean & \(1.9\pm 2.2\) & \(2.0\pm 0.8\) & \(89.0\pm 120.0\) & \(-0.6\pm 10.9\) & \(41.1\pm 50.1\) & \(2.6\pm 6.9\) & \(25.6\pm 29.9\) & \(5.1\pm 6.7\) \\
IW-CIP-MMD & \(2.0\pm 2.3\) & \(1.7\pm 1.0\) & \(41.7\pm 60.0\) & \(-4.7\pm 9.0\) & \(30.2\pm 31.6\) & \(6.6\pm 16.8\) & \(11.4\pm 20.8\) & \(5.3\pm 9.6\) \\ \hline
\end{tabular}
\end{table}
Table 12: Risk difference \(\widehat{\mathcal{R}}^{(\mathbb{T})}(h)-\widehat{\mathcal{R}}(h;\widehat{w})\) between source and target in linear SCMs. These differences are evaluated under a varying number of source domains \(M\).
Table 15 shows the risk difference between the source and target domains for four different CIP-based methods. Similar to our findings in the SCM experiments, the results demonstrate that using a larger number of source domains helps in identifying CICs and reduces the risk difference between source and target.
|
2305.19796 | Effect of flow shear on the onset of dynamos | Understanding the origin and structure of mean magnetic fields in
astrophysical conditions is a major challenge. Shear flows often coexist in
such astrophysical conditions and the role of flow shear on dynamo mechanism is
only beginning to be investigated. Here, we present a direct numerical
simulation (DNS) study of the effect of flow shear on dynamo instability for a
variety of base flows with controllable mirror symmetry (i.e, fluid helicity).
Our observations suggest that for helical base flow, the effect of shear is to
suppress the small scale dynamo (SSD) action, i.e, shear helps the large scale
magnetic field to manifest itself by suppressing SSD action. For non-helical
base flows, flow shear has the opposite effect of amplifying the small-scale
dynamo action. The magnetic energy growth rate ($\gamma$) for non-helical base
flows is found to follow an algebraic form, $\gamma = - aS +
bS^\frac{2}{3}$, where a, b > 0 are real constants and S is the shear flow
strength; $\gamma$ is found to be independent of the scale of flow shear.
Studies with different shear profiles and shear scale lengths for non-helical
base flows have been performed to test the universality of our finding. | Shishir Biswas, Rajaraman Ganesh | 2023-05-31T12:37:37Z | http://arxiv.org/abs/2305.19796v1 | # Effect of flow shear on the onset of dynamos
###### Abstract
Understanding the origin and structure of mean magnetic fields in astrophysical conditions is a major challenge. Shear flows often coexist in such astrophysical conditions, and the role of flow shear in the dynamo mechanism is only beginning to be investigated. Here, we present a direct numerical simulation (DNS) study of the effect of flow shear on dynamo instability for a variety of base flows with controllable mirror symmetry (i.e., fluid helicity). Our observations suggest that for helical base flows, the effect of shear is to suppress the small-scale dynamo (SSD) action, i.e., shear helps the large-scale magnetic field to manifest itself by suppressing SSD action. For non-helical base flows, flow shear has the opposite effect of amplifying the small-scale dynamo action. The magnetic energy growth rate (\(\gamma\)) for non-helical base flows is found to follow an algebraic form, \(\gamma=-aS+bS^{\frac{2}{3}}\), where \(a,b>0\) are real constants and \(S\) is the shear flow strength; \(\gamma\) is found to be independent of the scale of the flow shear. Studies with different shear profiles and shear scale lengths for non-helical base flows have been performed to test the universality of our finding.
## I Introduction
Predicting the generation of multi-scale magnetic fields, in many astrophysical bodies, has been a long-standing theoretical question in astrophysical plasmas. Different theories have been proposed to account for the origin of these multi-scale magnetic fields. For example, invoking magnetic induction due to the motion of conducting fluids, [1; 2] suggested these multi-scale magnetic fields are generated via a hydromagnetic dynamo process and maintained against resistive losses.
Depending on the length scales involved, dynamos may be classified into two broad categories: small-scale or fluctuation dynamos (SSD) and large-scale or mean-field dynamos (LSD). Unlike for SSDs, a lack of reflectional symmetry is widely believed to be a necessary condition for LSDs [3]. Depending on the time scales, dynamos may also be categorized as fast dynamos (the growth rate remains finite in the limit \(R_{m}\rightarrow\infty\)) and slow dynamos (magnetic diffusion plays a significant role) [3; 4]. Fast dynamos are further classified into two sub-categories, 'quick' dynamos and 'pedestrian' dynamos [5]. For a 'quick' dynamo the magnetic energy growth rate reaches its maximum value quickly as a function of the magnetic Reynolds number \(R_{m}\), whereas for a 'pedestrian' dynamo the growth rate depends only weakly on \(R_{m}\) [5]. Depending on the feedback strength of the magnetic field on the flow field, dynamos are regarded as linear or non-linear. A linear dynamo is one in which the magnetic field does not "back react" on the velocity field, and the velocity field is either prescribed or obeys the Navier-Stokes equation [3]. A nonlinear, or self-consistent, dynamo is one in which nonlinear effects start to change the flow (once the magnetic field is large enough) so as to stop further magnetic field growth; that is, the flow and the B-field "back react" on each other, typically leading to nonlinear saturation [3].
SSDs may also be defined as systems which sustain B-field fluctuations at scales smaller than the forcing scale [6; 7; 8; 9; 10; 11]. The fluctuating magnetic fields found in galaxies and clusters, as well as in the solar photosphere may be regarded as due to SSDs. Oftentimes, the generated magnetic fields are also observed to be correlated on scales larger than the driving scale, resulting in LSD action [3; 12]. For instance, the solar magnetic field possesses a large-scale dipole component which is mostly aligned with the Sun's rotation axis and a wave of magnetic activity that traverses from mid-latitudes to the equator on an 11-year time scale is clearly visible in the solar butterfly diagram [13]. Large-scale dynamo activity can also be explained by the well-known \(\alpha\) effect [2; 3; 14], provided the system has some mirror-symmetry breaking (i.e, when kinetic or fluid helicity is non-zero).
Dynamos are affected not only by the nature of the turbulence but also by factors such as density lamination (for example, density variation along the direction of gravity), rotation, kinetic helicity (mirror-symmetry breaking), and flow shear. Among these factors, flow shear is ubiquitous in astrophysical systems, appearing in the interstellar medium, galaxies, accretion disks, and in liquid-metal laboratory dynamo experiments [15]. The study of the exponential growth of magnetic fields caused by the interaction of small-scale velocity fluctuations with a large-scale velocity shear is commonly referred to as the "shear dynamo problem". For example, the presence of a large-scale velocity shear, in association with turbulent rotating convection (turbulent convective motion under the influence of rotation), is seen to increase the dynamo growth rate at larger scales [16; 17; 18; 19]. Furthermore, it is also found that a highly helical flow pattern may result in SSD action only when the rotating convection is sufficiently strong [20].
For conditions where rotational effects are negligible,
an integro-differential equation based on a quasi-linear model has been proposed to address the limit of weak convective flow [21; 22]. To further investigate the shear dynamo problem, several other analytical frameworks have been reported [23; 24; 25]. Along with these analytical attempts, it has been reported, based on direct numerical simulations, that driven small-scale, purely non-helical turbulence enhances the exponential growth of large-scale magnetic energy in the presence of non-rotating linear shear flows [26; 27; 28]. For example, it is found [27; 28] that the LSD growth rate scales linearly with \(S\) (where \(S\) is the shear flow strength). On the other hand, using a kinematic dynamo model, it has been shown that, rather than a linear relationship, the dynamo growth rate scales as \(S^{\frac{3}{2}}\) [29].
It is clear from the preceding discussion that considerable theoretical and computational effort has gone into understanding the origin of large-scale dynamo action. Numerical studies of shear dynamo action with large-scale velocity shear and helically forced turbulence [30] provide an effective explanation for large-scale dynamo action in terms of a propagating wave-like dynamo solution [30]. The primary difficulty lies in controlling the fluctuations at small scales, as small-scale magnetic fields are regarded as harmful to dynamo action at larger scales [31]. In the presence of large-scale velocity shear and non-helical flows, recent work provides evidence of large-scale magnetic field generation from small-scale dynamos [31; 32]. This intriguing numerical observation has been explained using the concept of the "magnetic shear current effect" [31; 32; 33; 34]. The generation of large-scale magnetic fields is the primary focus of all of these studies.
An alternative school of thought for LSD is to decrease the efficiency of small scales rather than trying to increase the activity of large-scale dynamos [35; 36; 37]. A kinematic dynamo model has been used to examine shear dynamo activity with a superimposed large-scale shear flow and a small-scale helical base flow [35; 36]. For the numerical experiments, the well-known time-dependent 2.5-dimensional GP flow [38] has been considered. The presence of symmetry along one spatial dimension in the GP flow [38] allows one to effectively transform the three-dimensional (3D) kinematic dynamo problem into a two-dimensional (2D) one. The numerical simulations led to the conclusion that the interaction between a large-scale shear flow and a small-scale helical flow does not boost the induction process. Instead, it slows down the small-scale dynamo growth rate, which in turn makes it possible for the large-scale dynamo action to become apparent [35; 36; 37]. This idea is sometimes referred to as the "suppression principle". In addition, propagating dynamo waves [1] have been observed, which is a hallmark of large-scale dynamo activity [35; 36; 37]. This issue has been revisited by taking magnetic feedback into consideration (non-linear dynamo action) [39; 40; 41].
Recently, shear-dynamo activity in the non-helical limit has been investigated numerically using both kinematic and self-consistent (with magnetic feedback) dynamo models [42]. In addition to a linear shear, the model also incorporates a random non-helical white noise as a body force. This model has also been used to explain why the existence of a large-scale velocity shear is a favourable condition for small-scale dynamos: the turbulence caused by the flow shear accounts for the enhancement of the small-scale dynamo [42].
In the context of the shear dynamo problem, the effect of flow shear on non-helical base flows has been studied only sparsely. In the present work, we investigate shear dynamo action using a kinematic dynamo model. In our model, the velocity field is not evolved using the Navier-Stokes equation; instead, it is prescribed and remains unchanged throughout the simulation. As the flow drive for our simulations, we consider the recently reported three-dimensional Yoshida-Morrison flow (YM flow [43], in short). One of the interesting features of the YM flow is that its mirror symmetry (kinetic helicity) can be controlled by varying the magnitude of a certain flow parameter [44]. In the maximal-helicity limit, the YM flow resembles the well-known Arnold-Beltrami-Childress (ABC) flow, while in the non-helical limit it is known as the EPI2D flow [43]. The ABC flow, but not the EPI2D flow, is a well-known candidate for fast dynamo action. Here, we investigate the effect of flow shear on the dynamo instability. For a helical base flow, such as the ABC flow, the inductive process is known to result in an exponential increase of magnetic energy in the absence of shear flows. Our spectral analysis supports the notion that this dynamo operates at relatively small scales. We also find that, for this helical base flow (ABC flow), the presence of flow shear effectively suppresses small-scale dynamo activity over a broad range of the magnetic Reynolds number \(R_{m}\). Several authors [35; 36; 37] have reported this suppression mechanism using a quasi-2D helical base flow (GP flow) [38] with a large-scale shear. Here, we observe similar suppression using a fully 3D helical ABC flow.
The above picture changes dramatically when the EPI2D flow is considered as the base flow. We find that the EPI2D flow is unable to induce exponential amplification of magnetic energy in the absence of shear flows. Interestingly, when a shear flow is included, the small-scale EPI2D flow is found to generate exponentially growing magnetic energy with time. In other words, our numerical analysis suggests that, in the presence of shear flow, an otherwise non-dynamo-producing non-helical base flow (EPI2D flow) can effectively generate fast dynamo activity. We also observe, through numerical simulation, that the strength of the shear flow has a significant impact on the amount of small-scale dynamo activity. We obtain a generalized algebraic (combination of linear and non-linear) scaling for the growth rate of magnetic energy with shear flow strength \(S\) of the form \(\gamma=-aS+bS^{\frac{2}{3}}\), where \(a\) and \(b\) are positive real constants. Our numerical finding of the dependence of \(\gamma\) on \(S\) is in agreement with several earlier analytical works [29; 45], while generalizing the same. The robustness
of our numerical finding is tested using shear flows with varying shear length scales and, in addition, for a number of different small-scale base flows. Accretion discs, galaxies, jets, stellar convective zones, and so on all host hydrodynamic flows characterized by significant flow shear, suggesting that the dynamo mechanisms considered here may play a key role in the generation of magnetic fields in these astrophysical scenarios.
The organization of the paper is as follows. In Sec. II we present the governing equations. Our numerical solver and simulation details are described in Sec. III. The initial conditions and parameter details are given in Sec. IV. Section V is dedicated to the simulation results on kinematic dynamo action obtained from our code, and finally the summary and conclusions are presented in Sec. VI.
## II Governing equations
The governing equations to study kinematic fast dynamo action for the single fluid MHD plasma are as follows,
\[\frac{\partial\vec{B}}{\partial t}+\vec{\nabla}\cdot\left(\vec{u }\otimes\vec{B}-\vec{B}\otimes\vec{u}\right)=\frac{1}{R_{m}}\nabla^{2}\vec{B} \tag{1}\] \[\vec{\nabla}\cdot\vec{B}=0 \tag{2}\]
where \(\vec{u}\), \(\vec{B}\), and \(R_{m}\) represent the velocity field, the magnetic field, and the magnetic Reynolds number, respectively. The magnetic Reynolds number (\(R_{m}\)) is defined as \(R_{m}=\frac{u_{0}L}{\eta}\), where \(\eta\) is the magnetic diffusivity and \(u_{0}\) is a typical velocity scale. Time is normalized to the Alfvén time (i.e., the time taken for an Alfvén wave to traverse the simulation domain) and length to a typical characteristic length scale \(L\) (here the length of the simulation domain). The symbol "\(\otimes\)" represents the dyadic product between two vector quantities.
For solving the above set of equations at high grid resolution, we have developed a suite of GPU codes namely GMHD3D, which is briefly described in the following Section.
## III Simulation details: _Gmhd3d_ solver
In this Section, we discuss the details of the numerical solver along with the benchmarking we carried out. To study the plasma dynamics governed by the MHD equations described above, we have recently upgraded an existing, well-benchmarked single-GPU MHD solver [46], developed in house at the Institute for Plasma Research, to a multi-node, multi-card (multi-GPU) architecture for better performance [47]. This newly upgraded GPU-based magnetohydrodynamic solver (_GMHD3D_) is now capable of handling very large grid sizes. _GMHD3D_ is a multi-node, multi-card, three-dimensional (3D), weakly compressible, pseudo-spectral, visco-resistive solver [47]. The suite includes both 2-dimensional and 3-dimensional HydroDynamic (HD) and MagnetoHydrodynamic (MHD) solvers. It uses a pseudo-spectral technique to simulate the dynamics of a 3D magnetohydrodynamic plasma in a Cartesian box with periodic boundary conditions. In this technique, spatial derivatives are computed in Fourier space and the non-linear terms in the governing equations are evaluated with the standard \(\frac{2}{3}\) de-aliasing rule [48]. An OpenACC FFT library (the AccFFT library [49]) is used to perform the Fourier transforms, and an Adams-Bashforth time solver is used for time integration. For 3D iso-surface visualization, an open-source Python-based data converter to VTK (Visualization Toolkit) using "PyEVTK" [50] is developed, which converts ASCII data to the VTK binary format. After dumping the state data files to VTK, the open-source visualization software packages VisIt 3.1.2 [51] and ParaView [52] are used to visualize the data. For the present work, the new solver's accuracy has been cross-checked against the single-GPU solver, and it is verified that the results match up to machine precision. Further, several other benchmarking studies have been performed; for example, the 3D kinematic dynamo effect [53; 54; 55; 56] has been reproduced with ABC flow at a grid resolution of \(64^{3}\). Details are presented in Appendix A. As will be discussed in the coming Section, the numerical simulations reported here are performed on a \(256^{3}\) grid.
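To make the two numerical ingredients mentioned above concrete, the following simplified NumPy sketch builds a 3D 2/3-rule de-aliasing mask and applies a second-order Adams-Bashforth update to a Fourier-space field. It is a single-field illustration with a placeholder right-hand side, not an excerpt from the GMHD3D solver.

```python
import numpy as np

def dealias_mask(n):
    # Standard 2/3-rule mask in 3D: zero out modes with wavenumber index above n // 3
    k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
    keep = (np.abs(k) <= n // 3).astype(float)
    return np.einsum('i,j,k->ijk', keep, keep, keep)

def ab2_step(b_hat, rhs_now, rhs_prev, dt):
    # Second-order Adams-Bashforth update in Fourier space
    return b_hat + dt * (1.5 * rhs_now - 0.5 * rhs_prev)

# Illustrative use on a single scalar field of size n^3
# (the solver evolves all three components of B in the same way)
n, dt = 64, 1e-4
mask = dealias_mask(n)
b_hat = np.fft.fftn(np.random.default_rng(0).standard_normal((n, n, n)))
rhs_prev = np.zeros_like(b_hat)
rhs_now = -0.01 * b_hat                            # placeholder for curl(u x B) + diffusion
b_hat = mask * ab2_step(b_hat, rhs_now, rhs_prev, dt)
```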
As discussed in the Introduction, to study the kinematic fast dynamo action, an accurate selection of "drive" velocity field is crucial, which we discuss in the Section to follow.
## IV Initial condition
Recently Yoshida and Morrison [43] (YM) proposed a new intermediate class of flow, which may be regarded as a topological bridge between quasi-2D and 3D flow classes. The flow is formulated as follows:
\[\vec{u}_{b}=u_{0}\alpha\vec{u}_{+}+u_{0}\beta\vec{u}_{-} \tag{3}\]
with
\[\vec{u}_{+}=\begin{bmatrix}Bsin(k_{0}y)-Ccos(k_{0}z)\\ 0\\ Asin(k_{0}x)\end{bmatrix} \tag{4}\]
and
\[\vec{u}_{-}=\begin{bmatrix}0\\ Csin(k_{0}z)-Acos(k_{0}x)\\ -Bcos(k_{0}y)\end{bmatrix} \tag{5}\]
so that,
\[u_{x} =\alpha u_{0}[B\sin(k_{0}y)-C\cos(k_{0}z)]\] \[u_{y} =\beta u_{0}[C\sin(k_{0}z)-A\cos(k_{0}x)] \tag{6}\] \[u_{z} =u_{0}[\alpha A\sin(k_{0}x)-\beta B\cos(k_{0}y)]\]
where \(k_{0}\), \(\alpha,\beta\), A, B and C are arbitrary real constants. For the present study, we consider the value of \(u_{0},\alpha\), A,
B and C to be unity. In the present work, we consider Eq. 3 as our base flow \(\vec{u}_{b}\). The variation of \(\beta\) value in YM flow leads to new classes of base flows.
For example, for \(\beta=0\), Yoshida et al. (2016) classify this flow as the EPI2D flow, which is given by:
\[u_{x} =[\sin(k_{0}y)-\cos(k_{0}z)] \tag{7}\] \[u_{y} =0\] \[u_{z} =[\sin(k_{0}x)]\]
This flow (i.e., Eq. 7) depends on all three spatial coordinates (\(x,y,z\)), whereas only two of its components are nonzero. Thus the EPI2D flow is quasi-2D in nature.
As can be expected, for \(\beta=1\) Eq. 6 becomes the well-known Arnold-Beltrami-Childress (ABC)-like flow,
\[u_{x} =[\sin(k_{0}y)-\cos(k_{0}z)] \tag{8}\] \[u_{y} =[\sin(k_{0}z)-\cos(k_{0}x)]\] \[u_{z} =[\sin(k_{0}x)-\cos(k_{0}y)]\]
As \(\beta\) is varied from 0 to 1.0, a whole set of intermediate classes of flows emerges, such that the normalized fluid helicity is exactly 0.0 for \(\beta=0\) and is maximum for \(\beta=1.0\) (i.e., ABC-like flows). The variation of the \(\beta\) value thus leads to two distinguishable classes of base flows, viz. helical (\(\beta>0\)) and non-helical (\(\beta=0\)) (Yoshida et al., 2016).
In the following, we begin our investigation by focusing on the most well-known type of helical flow such as ABC-like base flow with \(k_{0}=8\) and \(\beta=1\) (See Fig. 1a). In order to investigate the role of shear flows in the context of dynamo action, we introduce a periodic, large-scale shear flow (Eq. 9) (See Fig. 1b) of the form
\[\vec{u}_{s}=(0,S\cos(k_{s}x),0) \tag{9}\]
where \(S\) is the shear flow strength and \(k_{s}\) is the wavenumber of the shear flow, with \(k_{s}<k_{0}\). Hence we refer to the base flow, i.e., the \(\beta=1\) ABC flow with mode number \(k_{0}\), as the small-scale flow, and to the flow with \(k_{s}<k_{0}\) as the large-scale shear flow.
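For reference, the short sketch below evaluates the YM base flow of Eq. 6 (ABC-like for \(\beta=1\), EPI2D for \(\beta=0\)) together with the periodic shear of Eq. 9 on a uniform \(2\pi\)-periodic grid; the grid size and parameter defaults are illustrative choices.

```python
import numpy as np

def ym_plus_shear(n=64, k0=8, beta=1.0, S=5.0, ks=1, A=1.0, B=1.0, C=1.0, u0=1.0, alpha=1.0):
    ax = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y, z = np.meshgrid(ax, ax, ax, indexing='ij')
    # YM base flow, Eq. 6: ABC-like when beta = 1, EPI2D when beta = 0
    ux = alpha * u0 * (B * np.sin(k0 * y) - C * np.cos(k0 * z))
    uy = beta * u0 * (C * np.sin(k0 * z) - A * np.cos(k0 * x))
    uz = u0 * (alpha * A * np.sin(k0 * x) - beta * B * np.cos(k0 * y))
    # Large-scale periodic shear, Eq. 9, added to the y-component only
    uy = uy + S * np.cos(ks * x)
    return ux, uy, uz

ux, uy, uz = ym_plus_shear(beta=0.0)   # non-helical EPI2D base flow with shear
```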
A recent work (Yoshida et al., 2016) has shown that a purely non-helical, quasi-2-dimensional EPI2D flow alone is incapable of producing fast dynamo action due to insufficient stretching capability. In the context of dynamo activity, it is therefore interesting to examine the effect of large-scale shear flows on non-helical base flows, i.e., the EPI2D flow. Hence, we consider a small-scale (\(k_{0}=8\)) EPI2D flow along with a periodic large-scale (\(k_{s}=1\)) shear flow (Eq. 9) as our starting point (See Fig. 1c) to explore shear dynamo action. To examine the robustness of our numerical findings, we have also considered a broken jet flow shear profile [57; 58; 59] (two jets with opposing directions, i.e., broken jets of width \(\frac{\pi}{16}\) in a system of length \(2\pi\), placed alternately one after the other; effectively \(k_{s}\rightarrow\infty\)) instead of a periodic shear profile (See Fig. 1d). A few additional studies with a smaller-scale (\(k_{0}=16\)) base flow have been performed (details are provided in the Supplementary Information).
An initial value problem involving the induction equation for \(\vec{B}\) is solved for the prescribed flows \(\vec{u}=\vec{u}_{b}+\vec{u}_{s}\) (See Eq. 3 & 9). We have considered a random perturbation as a seed initial magnetic field for our numerical experiments. We have also performed numerical experiments with a periodic initial magnetic field Yoshida et al. (2016) and a uniform magnetic field, and we find that the characteristics of the dynamo are largely insensitive to the initial conditions in both cases. For the rest of the discussion, we present results from random perturbations as initial magnetic field.
### Parameter Details
We evolve the set of equations discussed in Section II, for the class of YM flow profiles, in a triply periodic box of length \(L_{x}=L_{y}=L_{z}=2\pi\) with time stepping (\(dt\)) \(=10^{-4}\) and grid resolution \(256^{3}\). We have also conducted grid size and time step size scaling studies (not shown) and find that the values indicated above are adequate. With these initial conditions and parameters we present our numerical simulation results.
## V Simulation Results
The helical nature, chaotic property, and stretching ability of the ABC-like flow are known to be the primary causes for the generation of dynamo action Yoshida et al. (2016); Yoshida et al. (2016); Yoshida et al. (2016); Yoshida et al. (2016).
In this study, we use a kinematic dynamo model and begin with a small-scale (\(k_{0}=8.0\)) ABC-like flow (or YM flow with \(\beta=1.0\)) to initiate our numerical experiment (See Fig. 1a). We perform our numerical runs for a wide range of magnetic Reynolds number \(R_{m}\) and compute the growth rate (\(\gamma=\frac{d}{dt}(\ln E_{B}(t))\)) of the magnetic energy (\(E_{B}=\frac{1}{2}\int_{V}(B_{x}^{2}+B_{y}^{2}+B_{z}^{2})dxdydz\)) at late times (e.g., \(t\sim 80\) to 90). For sufficiently large values of the magnetic Reynolds number \(R_{m}\), it is clear from Fig. 2a that the growth rate of magnetic energy does not saturate with \(R_{m}\), a hallmark of fast dynamo action. It is well-known that magnetic field lines can stretch, twist, and fold (abbreviated as STF) Yoshida et al. (2016) in a kinematic dynamo model with an ABC-like base flow. It can be seen that the magnetic energy is concentrated at smaller scales when compared to the length scale of the flow that is driving it (See Fig. 2b). We have computed the magnetic energy spectral density \(|\hat{B}(k)|\) (such that \(\int|\hat{B}(k,t)|^{2}dk\) is the total energy at time t and \(k=\sqrt{k_{x}^{2}+k_{y}^{2}+k_{z}^{2}}\)). From our spectral analysis, we observe that the majority of the power is concentrated in higher modes (i.e., at smaller length scales) (See Fig. 2c). To put it another way, the dynamo is essentially a small-scale or fluctuation dynamo (SSD).
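The diagnostics quoted above can be obtained directly from the stored field snapshots. The following sketch (assuming the magnetic field components are held on a uniform periodic grid and that the late-time snapshots are equispaced in time) computes the magnetic energy, the growth rate from a linear fit to \(\ln E_{B}(t)\), and a shell-averaged spectrum; the function names are illustrative.

```python
import numpy as np

def magnetic_energy(bx, by, bz, box=2.0 * np.pi):
    # E_B = 1/2 * integral of |B|^2 dV on a uniform periodic grid
    dv = (box / bx.shape[0])**3
    return 0.5 * np.sum(bx**2 + by**2 + bz**2) * dv

def growth_rate(times, energies):
    # gamma = d/dt ln E_B(t): slope of a linear fit to ln E_B at late times
    return np.polyfit(times, np.log(energies), 1)[0]

def shell_spectrum(bx, by, bz):
    # |B(k)|^2 summed over spherical shells k = sqrt(kx^2 + ky^2 + kz^2)
    n = bx.shape[0]
    fx, fy, fz = (np.fft.fftn(f) / n**3 for f in (bx, by, bz))
    k1d = np.fft.fftfreq(n, d=1.0 / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    kmag = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    power = np.abs(fx)**2 + np.abs(fy)**2 + np.abs(fz)**2
    return np.bincount(kmag.ravel(), weights=power.ravel())
```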
Investigating the effect of flow shear on this highly helical and chaotic 3D ABC-like flow is an interesting line of inquiry.
Several authors [30; 35; 36; 60; 61] have addressed the impact of large-scale shear on helical base flows in the context of dynamo action. Some of the earlier works have considered a circularly polarized, time-dependent 2.5-dimensional base flow, namely the GP flow [38], as a driver for dynamo simulations. In essence, the time dependence of the GP flow introduces chaoticity and stretching into the system, both of which are necessary for dynamo activity. Another important property of the GP flow is that one can control its reflectional symmetry (helicity distribution) by varying a certain physical parameter [35], similar to our \(\beta\) parameter in the YM flow [43; 44].
Here, using a kinematic dynamo model, we examine the impact of large-scale (\(k_{s}=1\)) flow shear (Eq. 9) on a small-scale (\(k_{0}=8\)) 3-dimensional YM flow with \(\beta=1\) (i.e., ABC-like flow) (Eq. 8) (See Fig. 3). As part of our study, we have conducted numerical simulations with varying shear flow strengths \(S\). As can be seen in Fig. 3, the small-scale dynamo action is suppressed by the large-scale (\(k_{s}=1\)) shear flow. The magnetic energy growth rate is found to decrease over a broad range of magnetic Reynolds numbers \(R_{m}\).
Figure 1: Initial velocity (\(u=\sqrt{u_{x}^{2}+u_{y}^{2}+u_{z}^{2}}\)) profile in \(X-Y\) plane for (a) small scale (\(k_{0}=8\)) helical ABC flow (b) superposition of small scale (\(k_{0}=8\)) helical ABC flow and large scale (\(k_{s}=1.0\)) periodic shear (c) superposition of small scale (\(k_{0}=8\)) non-helical EPI2D flow and large scale (\(k_{s}=1\)) periodic shear (d) superposition of small scale (\(k_{0}=8\)) non-helical EPI2D flow and broken jet (i.e \(k_{s}\rightarrow\infty\)) flow shear [57; 58; 59] profile.
The interaction between helical base flows and large-scale shear effectively limits the growth of small scales (i.e., fluctuations). A possible reason could be that in a fully chaotic system, two neighboring fluid elements would diverge exponentially in time; if one includes a regulating flow (shear flow), the two neighboring fluid elements still diverge, but only algebraically, which implies a less chaotic flow and hence reduced dynamo growth [36]. The primary function of the flow shear is to diminish the efficacy of fast dynamo action at small scales, which in turn may be interpreted as the flow shear effectively helping to boost the activity of dynamos at larger scales, or the mean field. This possibility has also been reported by a number of authors [35; 36; 37; 62] in the past for the 2.5-dimensional helical GP flow. Hence our findings in the \(\beta=1\) limit of the YM flow, which corroborate earlier work on helical flows, may be regarded as a benchmark for the GMHD3D solver. In light of this background, a reasonable question to ask is the following: what kind of effect does flow shear have on a small-scale base flow that is not helical, and does the scale of the flow shear matter at all?
Figure 2: (a) Magnetic energy (\(E_{B}=\frac{1}{2}\int_{V}(B_{x}^{2}+B_{y}^{2}+B_{z}^{2})dxdydz\)) growth rate (\(\gamma=\frac{d}{dt}(\ln E_{B}(t))\)) at late times (eg. \(t\sim 80\) to \(90\)) for small scale helical ABC-like flow in the absence of shear flow (\(S=0\)). (b) Magnetic energy iso-surface in the absence of shear flow, \(S=0\) (magnetic energy is effectively dominated by small scales, hence a small scale dynamo (SSD)) [See **Movie1.mp4**]. (c) Calculation of magnetic energy spectral density (for \(S=0\) & 5) \(|\hat{B}(k)|\) (such that \(\int|\hat{B}(k,t)|^{2}dk\) is the total energy at time t (inset view: in linear scale). Simulation details: grid resolution \(256^{3}\).
To address this, we employ direct numerical simulation (DNS) to examine the effect of a superposition of large-scale shear and a purely non-helical small-scale EPI2D flow (See Fig. 1c). When there is no flow shear present, the dynamo effect is absent for an EPI2D flow. The magnetic energy growth rate (\(\gamma=\frac{d}{dt}(\ln E_{B}(t))\)) for the small-scale EPI2D flow in the absence of flow shear is negative over a wide range of magnetic Reynolds numbers \(R_{m}\) (See Fig. 4a for \(S=0\)). This is an obvious indication of non-dynamo activity. However, Zeldovich's classic anti-dynamo theorem provides an alternative explanation for this [63; 3]. When this small-scale, non-dynamo-producing EPI2D flow is superposed with a large-scale flow shear, the dynamics is found to undergo an interesting transformation. We have carried out our numerical experiments across a broad range of shear flow strengths (\(S\)) and magnetic Reynolds numbers (\(R_{m}\)) (See Fig. 4a for \(S=1\) to 20). In the presence of a non-zero shear flow strength, as shown in Fig. 4a, magnetic energy growth is clearly visible. At sufficiently high magnetic Reynolds numbers \(R_{m}\), the growth rate of magnetic energy (\(\gamma\)) continues to vary significantly and is found not to saturate with \(R_{m}\), verifying one of the defining features of fast dynamo action [56; 4]. We have calculated \(\frac{d\gamma}{dR_{m}}\) as a function of \(R_{m}\) and find the value of \(\frac{d\gamma}{dR_{m}}\) to be slowly varying, even at the maximum magnetic Reynolds number [See Fig. 4b].
We have visualized the iso-surfaces of the magnetic fields (Iso-B surfaces) in three dimensions and find that the magnetic energy is concentrated in two bands near the segments with strong velocity gradients (See Fig. 5). Magnetic energy iso-surfaces are dominated by small-scale structures (compared to the length scale of the flow), as is evident from Fig. 5. We have computed the magnetic energy spectral density \(|\hat{B}(k)|\) (such that \(\int|\hat{B}(k,t)|^{2}dk\) is the total energy at time t and \(k=\sqrt{k_{x}^{2}+k_{y}^{2}+k_{z}^{2}}\)) to verify our findings. It is seen from Fig. 4c that most of the magnetic energy is at higher mode numbers (shorter length scales). To further illustrate this point, we have plotted the magnetic energy spectral density over time for individual modes (e.g., \(|\hat{B}(k=1,t)|,|\hat{B}(k=20,t)|,|\hat{B}(k=30,t)|,|\hat{B}(k=50,t)|,|\hat{B}(k=70,t)|\), etc.). It is easy to see from Fig. 4d that the higher modes (shorter length scales) contain higher energies; this is a primary characteristic of SSD.
In addition, we plot the rate of increase of magnetic energy (\(\gamma\)) as a function of the shear flow strength (\(S\)) for the non-helical base flow. Figure 6 demonstrates unambiguously that as the amplitude of the shear flow increases, the rate of growth of magnetic energy also increases. Spectral analysis confirms the SSD-like structure for a given value of the shear flow strength (\(S\)) (See Fig. 4c). Based on the evidence presented in Fig. 6, we conclude that the effect of large-scale shear flows in this instance is not to suppress the SSD activity, but rather to amplify it. Additionally, we obtain a generalized algebraic (combination of linear and non-linear) scaling (based on \(\chi^{2}\) minimization) for the rate of increase of magnetic energy (\(\gamma\)) in the form \(\gamma(S)=-aS+bS^{\frac{2}{3}}\) (See Fig. 6), where \(a\) and \(b\) are real fit coefficients. The dependence of \(\gamma\) on \(S\) found here is in close agreement with analytic predictions [29; 45]. It is observed that, for a given random smooth velocity field, large-scale shear can support a small-scale dynamo (SSD) with a scaling of \(S^{\frac{2}{3}}\) [45], which is consistent with an upper bound for growth rates anticipated afterward [29].
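For completeness, the \(\chi^{2}\) fit referred to above can be carried out with a standard non-linear least-squares routine; the sketch below uses scipy.optimize.curve_fit on illustrative (not measured) growth-rate data, and the resulting coefficients a and b are of course dataset-dependent.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_model(s, a, b):
    # gamma(S) = -a*S + b*S^(2/3), with a, b > 0
    return -a * s + b * s**(2.0 / 3.0)

# Illustrative (not measured) growth rates at several shear strengths
s_vals = np.array([1.0, 2.0, 5.0, 10.0, 15.0, 20.0])
rng = np.random.default_rng(1)
gamma_vals = gamma_model(s_vals, 0.02, 0.15) + 0.005 * rng.standard_normal(s_vals.size)

(a_fit, b_fit), _ = curve_fit(gamma_model, s_vals, gamma_vals, p0=(0.01, 0.1))
print(a_fit, b_fit)
```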
It has recently been proposed [42] that a random, non-helically driven, dissipative model can enhance SSD action. This work reports amplification of the SSD using a kinematic dynamo model which solves a Navier-Stokes equation for the fluid flow, along with a linear background shear and a random non-helical white-noise drive [42]. In yet another model, the shear is self-consistently driven by the presence of an in-plane temperature gradient, resulting in SSD [64].
In contrast to these earlier studies, in our model we have imposed a driven shear velocity field with a three-dimensional flow field (the helicity of which can be controlled) and studied the growth of magnetic energy by only evolving the induction equation for magnetic fields in a triply periodic three-dimensional box (i.e, the velocity field is not evolved using the Navier-Stokes equation; rather, it is given and remains static throughout the simulation.). Using this configuration, we have demonstrated unambiguously how flow shears enhance small scale dynamo (SSD) activity for non-helical base flows.
A natural question is whether the dynamo property found here is associated with the scale of the shear flow profile. To answer this question, we have conducted numerical experiments using broken jet flow shear as an extreme case (i.e., \(k_{s}\rightarrow\infty\), or \(k_{s}\sim k_{max}\) numerically speaking) (See Fig. 1d). This shear profile appears frequently in hydrodynamics studies, especially those used to investigate Navier-Stokes turbulence [57; 58; 59]. The EPI2D flow is shown to generate magnetic energy (\(E_{B}=\frac{1}{2}\int_{V}(B_{x}^{2}+B_{y}^{2}+B_{z}^{2})dxdydz\)) in an exponential fashion in the presence of broken jet flow (with \(k_{s}\sim k_{max}\)) shear. Furthermore, from Fig. 7a, it is clear that with increasing magnetic Reynolds number \(R_{m}\), the growth rate of magnetic energy (\(\gamma=\frac{d}{dt}(\ln E_{B}(t))\)) is found not to reach a saturation point, in line with what is expected for fast dynamo action. With the help of an iso-surface representation of the magnetic energy (See Fig. 8), we are able to determine that the generated magnetic energy is mostly contained at smaller scales (compared to the length scale of the flow that is driving it), making the dynamo a small-scale dynamo (SSD). In addition, we have estimated the magnetic energy spectral density associated with each mode, such as \(|\hat{B}(k=1,t)|,|\hat{B}(k=20,t)|,|\hat{B}(k=30,t)|\), etc., and monitor its time-dependent evolution. As was the case in the previous example, it is found that the energy is concentrated in higher modes, or shorter length scales (See Fig. 7b); consequently, the dynamo can be thought
of as a small-scale dynamo (SSD). We also plot the magnetic energy growth rate (\(\gamma=\frac{d}{dt}(\ln E_{B}(t))\)) as a function of shear flow strength (\(S\)), and we obtain the same generalized algebraic scaling \(\gamma(S)=-aS+bS^{\frac{2}{3}}\) (based on \(\chi^{2}\) minimization), with real fit coefficients \(a\) and \(b\), as obtained earlier for the periodic shear flow (See Fig. 7c). Hence, in the presence of broken jet flow shear [57; 58; 59] as well, our numerical observation unambiguously demonstrates that there is an onset and increase in the activity of the small-scale dynamo (SSD). As the effect is found to be robust at the largest and smallest shear scale lengths, we conclude that the scaling of \(\gamma(S)\) vs \(S\) appears to be independent of the shear flow scale (except that the real coefficients a and b are to be determined accordingly) and robust.
To further convince ourselves of the generality of our finding, we have considered yet another non-helical flow, namely the Taylor-Green (TG) flow, and investigated the effect of flow shear on the dynamo activity with the TG flow as the base flow. It is found that the fundamental observations remain unaltered (see the Supplementary Information for details).
## VI Summary and conclusion
In this work, we have performed direct numerical simulations (DNS) of kinematic dynamos using a 3-dimensional magnetohydrodynamic model at modest grid resolutions. By considering a simple kinematic dynamo model, we are able to demonstrate that the small-scale helical YM flow with \(\beta=1.0\) (ABC-like flow) generates rapid dynamo action over a broad range of magnetic Reynolds numbers. The results of our spectral calculation demonstrate that the fully developed dynamo is, in effect, a small-scale dynamo (SSD). In addition, it has been shown that the presence of a flow shear reduces the efficiency of the small-scale dynamo action. This interesting finding appears to be consistent with several earlier works reported for a 2.5-dimensional GP flow, and may also be regarded as a good benchmark for the GMHD3D solver.
Our major findings are:
\(\bullet\) A non-dynamo producing, small scale non-helical EPI2D flow shows fast SSD activity when flow shear is introduced. More importantly, unlike fully helical flows, it has been observed that, for EPI2D non-helical flows, the small-scale dynamo action (SSD) increases as the shear flow strength (\(S\)) increases. The spectral diagnostics also are found to be in agreement with the observation.
\(\bullet\) A generalized algebraic scaling for the magnetic energy growth rate (\(\gamma\)) as a function of the shear flow strength (\(S\)) has been obtained. Our numerical observation is supported by a number of recent analytical works [29; 45].
\(\bullet\) We have performed our numerical experiments using a broken jet flow (\(k_{s}\rightarrow\infty\), i.e., \(k_{s}\sim k_{max}\) numerically speaking) shear profile, and we have found that the primary findings are unaffected. The mechanism of onset of dynamo from non-helical base flows in the presence of shear flows is found to be independent of the scale of the shear flows.
\(\bullet\) The scaling of \(\gamma(S)=-aS+bS^{\frac{2}{3}}\), where \(a\) & \(b\) are real fit coefficients is found be to robust and not dependent on shear flow length scale \(k_{s}\).
Figure 3: Magnetic energy (\(E_{B}=\frac{1}{2}\int_{Y}(B_{x}^{2}+B_{y}^{2}+B_{z}^{2})dxdydz\)) growth rate (\(\gamma=\frac{d}{dt}(\ln E_{B}(t))\)) as function of shear flow strength (\(S\)) for a helical base flow (ABC-like (i.e, \(\beta=1.0\)) flow). As the shear strength (\(S\)) increases, small scale magnetic energy growth rate decreases. The profile of magnetic energy iso-surface (for \(S=0\)) and magnetic energy spectral density (for \(S=0\) & \(S=5\)) are shown in Fig. 2
\(\bullet\) We have also carried out the same analysis for a different well-known non-helical base flow, known as the Taylor-Green flow, and found that our results remain valid (See Supplementary information for details). Our numerical findings are basically found to be robust.
To conclude, we have investigated the effect of shear flows on the onset of dynamo instability for non-helical base flows, using modest resolutions, a wide range of magnetic Reynolds numbers, and different flow shear scale lengths. Our numerical analysis reveals that the small-scale dynamo action is suppressed by flow shear for helical base flows, but is amplified for non-helical base flows. We also believe these dynamos should play an important role in a wide variety of astrophysical objects, especially in highly symmetric regions like the mid-plane of accretion disks where flow shear is known to be substantial [65; 66].
Figure 4: (a) Magnetic energy (\(E_{B}=\frac{1}{2}\int_{V}(B_{x}^{2}+B_{y}^{2}+B_{z}^{2})dxdydz\)) growth rate (\(\gamma=\frac{d}{dt}(\ln E_{B}(t))\)) at late times (eg. \(t\sim 80\) to \(90\)) as a function of magnetic Reynolds number \(R_{m}\) for small scale (\(k_{0}=8\)) non-helical EPI2D flow and for various values of \(S\), the large scale (\(k_{s}=1\)) periodic shear flow strength. (b) Calculation of \(\frac{d\gamma}{dR_{m}}\) as a function of magnetic Reynolds number (\(R_{m}\)) for different values of shear flow strength \(S\). (c) Calculation of magnetic energy spectral density \(|\hat{B}(k)|\) (such that \(\int|\hat{B}(k,t)|^{2}dk\) is the total energy at time t for two different values of shear flow strength \(S\), namely \(S=5\) and \(S=10\) (inset view: in linear scale). (d) Time evolution of magnetic energy spectral density contained in each mode. The higher mode numbers (shorter length scales) contain higher energies at all later times shown; which is a primary characteristic of small scale dynamo (SSD).
objects, especially in highly symmetric regions like the mid-plane of accretion disks where flow shear is known to be substantial [65; 66]. However, it may be argued that in actual astrophysical conditions, the magnetic back-reaction on the velocity field cannot be disregarded. Unlike the scenario presented above, velocity fields in such situations are not predetermined. We hope to address several of these issues in a future communication.
## VII Acknowledgments
The simulations and visualizations presented here were performed on the GPU nodes and visualization nodes of the Antya cluster at the Institute for Plasma Research (IPR), India. One of the authors, S.B., is thankful to Dr. Rupak Mukherjee at Central University of Sikkim (CUS), Gangtok, Sikkim, India, for providing an initial version
Figure 5: Time evolution of 3-dimensional magnetic field iso-surfaces (Iso-B surfaces) for a given small scale (\(k_{0}=8\)) EPI2D flow and large scale (\(k_{s}=1\)) periodic shear. Dominant magnetic energies are mostly confined to two regions near the planes \(z=\frac{\pi}{2}\) and \(z=\frac{3\pi}{2}\), where the velocity gradients are strongest. The structures are dominated by small-scale structures (SSD) [**Movie2.mp4**]. Simulation details: grid resolution \(256^{3}\), shear flow strength \(S=5.0\), magnetic Reynolds number \(R_{m}=200.0\). Visualization in log scale.
of the _GMHD3D_ code. S.B. thanks N. Vydyanathan, Bengaluru, and B. K. Sharma at NVIDIA, Bengaluru, India, for extending their help with basic GPU methods. S.B. is grateful to Mr. Soumen De Karmakar at IPR, India, for many helpful discussions regarding GPUs, and to the HPC support team of IPR for their help with the ANTYA cluster.
## VIII Data availability
The data underlying this article will be shared on reasonable request to the corresponding author.
## IX Conflict of interest
The authors have no conflicts to disclose.
|
2309.13608 | Determining cosmological-model-independent $H_0$ and post-Newtonian
parameter with time-delay lenses and supernovae | Strong gravitational lensing provides a natural opportunity to test General
Relativity (GR). We propose a model-independent method for simultaneous
constraining on Hubble constant ($H_0$) and post-Newtonian parameter
(${\gamma_{\rm{PPN}}}$) using strong lensing systems and observational SNe Ia.
The time-delay measurements from strong lensing can directly determine the
Hubble constant, and the lens distance inferred from the spectroscopic
measurement of the stellar kinematics of the deflector galaxy can help us to
constrain the post-Newtonian parameter. We seek the Pantheon dataset and
reconstruct unanchored distances using Gaussian process regression to achieve
the cosmological model-independent GR testing instead of assuming a specific
model, which can reduce possible bias on GR testing and measurement of Hubble
constant. Combining the reconstructed unanchored distances and the four H0LiCOW
lenses datasets, our results are $H_0=72.9^{+2.0}_{-2.3}
{\mathrm{~km~s^{-1}~Mpc^{-1}}}$ and ${\gamma_{\rm{PPN}}}=0.89^{+0.17}_{-0.15}$.
All the lenses show that there is no obvious evidence to support GR deviation
within observational uncertainties. In the subsequent analysis, we consider a
ratio of distance ${D_{\Delta t}}/{D^{'}_{d}}$ method to further avoid the
influence of $H_0$ on GR testing. The results show that, except J1206 within
the $\sim1.2\sigma$ observational uncertainty, the remaining 3 lenses support
GR holds within the $1\sigma$ observational uncertainties. | Tonghua Liu, Kai Liao | 2023-09-24T11:10:06Z | http://arxiv.org/abs/2309.13608v1 | Determining cosmological-model-independent \(H_{0}\) and post-Newtonian parameter with time-delay lenses and supernovae
###### Abstract
Strong gravitational lensing provides a natural opportunity to test General Relativity (GR). We propose a model-independent method for simultaneously constraining the Hubble constant (\(H_{0}\)) and the post-Newtonian parameter (\(\gamma_{\rm PPN}\)) using strong lensing systems and observational SNe Ia. The time-delay measurements from strong lensing can directly determine the Hubble constant, and the lens distance inferred from the spectroscopic measurement of the stellar kinematics of the deflector galaxy can help us to constrain the post-Newtonian parameter. We use the Pantheon dataset and reconstruct unanchored distances using Gaussian process regression to achieve cosmological-model-independent GR testing instead of assuming a specific model, which reduces possible bias on the GR test and the measurement of the Hubble constant. Combining the reconstructed unanchored distances and the four H0LiCOW lens datasets, our results are \(H_{0}=72.9^{+2.0}_{-2.3}\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\gamma_{\rm PPN}=0.89^{+0.17}_{-0.15}\). All the lenses show no obvious evidence for a deviation from GR within the observational uncertainties. In a subsequent analysis, we consider a distance-ratio \(D_{\Delta t}/D_{d}^{{}^{\prime}}\) method to further avoid the influence of \(H_{0}\) on the GR test. The results show that, except for J1206, which is consistent within the \(\sim 1.2\sigma\) observational uncertainty, the remaining 3 lenses support GR within the \(1\sigma\) observational uncertainties.
keywords: cosmology: cosmological parameters - distance scale - gravitational lensing: strong
## 1 Introduction
The modern theory of cosmology is based on two pillars, Einstein's theory of General Relativity (GR) and the cosmological principle. The former was the first to equate the gravitational field with the curvature of space-time and has been extremely successful in describing the gravitational interaction between matter. The latter states that our Universe is homogeneous and isotropic on large scales. The Lambda cold dark matter (\(\Lambda\)CDM) model is consistent with the most popular observational evidence, such as the observations of type Ia supernovae (SNe Ia) (Riess et al., 2007) and the Cosmic Microwave Background Radiation (CMB) (Planck Collaboration et al., 2016), and is regarded as the standard cosmological model. However, there is a recent tension of \(4.4\sigma\) or more between the Hubble constant inferred from CMB observations under the assumption of a flat \(\Lambda\)CDM model (Planck Collaboration et al., 2020) and its value measured through the Cepheid-calibrated distance ladder by the _Supernova \(H_{0}\) for the Equation of State_ collaboration (SH0ES) (Riess et al., 2019). Such an inconsistency could be caused by unknown systematic errors in astrophysical observations or could reveal new physics significantly different from the \(\Lambda\)CDM model. In a recent work, Brout et al. (2022) reanalyzed and re-calibrated the photometric systems in the Pantheon+ sample of SNe Ia, including those in the SH0ES distance-ladder measurement of \(H_{0}\), and suggested that supernova calibration is not currently capable of resolving the Hubble tension.
As an important prediction of GR, gravitational lensing is a powerful tool to study the velocity dispersion function of early-type galaxies (Matsumoto and Futamase, 2008; Geng et al., 2021; Chae, 2007), the distribution of dark matter (Cao et al., 2022; Ruff et al., 2011; Mellier et al., 1993; Newman et al., 2009), and cosmological parameters (Suyu et al., 2014; Bonvin et al., 2017; Liu et al., 2019; Chen et al., 2019). More importantly, the time-delay measurements between multiple images from strong gravitational lensing provide a valuable opportunity for determination of \(H_{0}\)(Refsdal, 1964). In the milestone work of Wong et al. (2020), the six gravitationally lensed quasars
with well-measured time delays were jointly used to constrain \(H_{0}\) by the \(H_{0}\) Lenses in COSMOGRAIL's Wellspring (H0LiCOW) collaboration. Assuming a flat \(\Lambda\)CDM model, the H0LiCOW collaboration reported \(H_{0}=73.3^{+1.7}_{-1.8}\)\(\rm{km\ s^{-1}\ Mpc^{-1}}\) from these six lensed quasars, which is consistent with local measurements from SNe Ia, but in \(3.1\sigma\) tension with CMB observations. However, it needs to be stressed that both the Planck and the H0LiCOW inferred \(H_{0}\) values are based on GR plus the \(\Lambda\)CDM model, which inspires us to investigate the validity of GR in a cosmological-model-independent way. The validity of GR can be verified by constraining the parametrized post-Newtonian (PPN) parameter \(\gamma_{\rm PPN}\) (since GR predicts exactly \(\gamma_{\rm PPN}\equiv 1\)), which describes the spatial curvature generated by a unit rest mass. In recent years, especially on solar-system scales, GR has been tested to extremely high precision (see the review (Will, 2014) for more works on testing GR). However, tests of GR on extragalactic scales are still much less precise. For instance, there is only \(\sim 20\%\) precision on \(\gamma_{\rm PPN}\) at \(10-100\) Mpc scales from the joint measurements of weak gravitational lensing and redshift-space distortions (Simpson et al., 2013; Blake et al., 2016). At cosmological scales, strong gravitational lensing systems provide an effective way to probe deviations from GR (Bolton et al., 2006; Cao et al., 2017; Liu et al., 2022; Wei et al., 2022). Recently, Collett et al. (2018) used a nearby strong gravitational lens, ESO 325-G004, to test GR and reported the constraint \(\gamma_{\rm PPN}=0.97\pm 0.09\) at the \(1\sigma\) confidence level (C.L.). In further research, Yang et al. (2020) proposed a new methodology using time-delay measurements combined with the stellar kinematics of the deflector galaxy from strong lensing to simultaneously constrain \(H_{0}\) and \(\gamma_{\rm PPN}\), and showed that there is no obvious deviation from GR, with the result \(\gamma_{\rm PPN}=0.87^{+0.19}_{-0.17}\). However, it should be emphasized that these works are cosmological-model-dependent (assuming the \(\Lambda\)CDM model). Tests of GR should be performed without invoking any particular background cosmological model, in order to reduce potential bias from the parametric forms or model assumptions.
Inspired by the above, we propose a cosmological-model-independent method to constrain the Hubble constant and the PPN parameter simultaneously at cosmological scales, using the four strongly lensed quasars published by H0LiCOW with both time-delay distances and lens distances. For the distance information required to constrain the PPN parameter, we obtain unanchored (relative) distances from SN Ia observations using Gaussian process (GP) regression. Intuitively, the Time-Delay Strong Lensing (TDSL) measurements can directly determine the Hubble constant, and the lens distance inferred from the spectroscopic measurement of the stellar kinematics of the deflector galaxy can help us to constrain the post-Newtonian parameter. This paper is organized as follows: In Section 2 we introduce the methodology and the H0LiCOW lensing data, including the reconstruction of \(H_{0}D^{L}\) using GP regression. The constraints on \(H_{0}\) and \(\gamma_{\rm PPN}\) and the corresponding discussion are given in Section 3. We conclude in Section 4. Natural units with \(c=G=1\) are adopted throughout this work.
## 2 Methodology and Holicow lensing data
### Distances inferred from H0LiCOW program
In the limit of a weak gravitational field, the Schwarzschild line element of space-time can be expressed as
\[ds^{2}=-\big{(}1+2\Psi\big{)}dt^{2}+\big{(}1-2\Phi\big{)}dr^{2}+r^{2}d\Omega ^{2}, \tag{1}\]
where \(\Psi\) is the Newtonian potential, \(\Phi\) represents the spatial curvature potential, and \(\Omega\) is the angle in the invariant orbital plane. The ratio \(\gamma_{\rm PPN}=\Phi/\Psi\) is the PPN parameter, which describes the spatial curvature generated per unit rest mass. It should be emphasized that GR predicts the PPN parameter to be exactly one, i.e., \(\Psi=\Phi\). In this work, we assume that \(\gamma_{\rm PPN}\) is a constant on lens galaxy scales.
The crucial idea of using strong lensing systems to test gravity is to combine two different mass measurements, i.e., the gravitational mass inferred from the lensing image and the dynamical mass obtained from the spectroscopic measurement of the stellar kinematics of the deflector galaxy. The motion of non-relativistic matter (usually made up of baryonic matter and dark matter) is governed by the Newtonian gravitational potential \(\Psi\), which obeys the Poisson equation. However, the motion of relativistic particles is sensitive to both potentials. Testing \(\gamma_{\rm PPN}\) requires observing the motion of relativistic and non-relativistic particles around the same massive object. Thus, strong lensing systems provide a natural laboratory to test gravity and measure the PPN parameter \(\gamma_{\rm PPN}\). The difference between the dynamical mass and the lensing mass directly probes the difference between \(\Psi\) and the Weyl potential \(\Psi_{+}=\frac{\Psi+\Phi}{2}=(\frac{1+\gamma_{\rm PPN}}{2})\Psi\). In the PPN framework, the deflection angle contains the lensing mass information, \(\alpha_{\rm PPN}(\theta)=(\frac{1+\gamma_{\rm PPN}}{2})\alpha_{\rm GR}(\theta)\), the effective lensing potential (the integral of the Weyl potential along the line of sight) is rescaled as \(\psi_{+}=(\frac{1+\gamma_{\rm PPN}}{2})\psi\), and likewise the convergence field \(\kappa^{\prime}=(\frac{1+\gamma_{\rm PPN}}{2})\kappa\). It is worth noting that the deflection angle is directly related to the cosmological distance, so the PPN parameter and the cosmological distance are highly degenerate. This is one of the main limitations of using strong lensing systems to test gravity. To break this degeneracy, additional data need to be taken into account, either cosmological or gravitational. The time-delay measurements alone are able to break this degeneracy.
For a given strong lensing system with a quasar acting as the background source, time delays between multiple images can be measured from the variable AGN light curves, and are determined by both the geometry of the Universe and the gravitational potential of the lensing galaxy through (Shapiro, 1964)
\[\Delta t=D_{\Delta t}\left[\phi(\theta_{\rm A},\beta)-\phi(\theta_{\rm B},\beta) \right]=D_{\Delta t}\Delta\phi_{\rm AB}(\xi_{\rm lens}), \tag{2}\]
where \(\phi(\theta,\beta)=\left[(\theta-\beta)^{2}/2-\psi(\theta)\right]\) is the Fermat potential at the image positions, \(\beta\) is the source position, and \(\xi_{\rm lens}\) denotes the lens model parameters. The cosmological background is encoded in the so-called "time-delay distance" \(D_{\Delta t}=(1+z_{\rm d})D_{\rm d}D_{\rm s}/D_{\rm ds}\), which is inversely proportional to \(H_{0}\). The key quantity here is the Fermat potential difference \(\Delta\phi_{\rm AB}(\xi_{\rm lens})\), which can be reconstructed from high-resolution lensing imaging with space telescopes. As mentioned above, in the PPN framework the inferred mass parameters are rescaled by a factor of \((1+\gamma_{\rm PPN})/2\). Therefore, we denote the actually inferred lens model parameters in the Fermat potential as \(\xi^{\prime}_{\rm lens}\). We rewrite the
time-delay distance as
\[D_{\Delta t}=(1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm ds}}=\frac{\Delta t_{ \rm AB}}{\Delta\phi_{\rm AB}(\xi^{\prime}_{\rm lens})}\,. \tag{3}\]
This is the first distance we need. It can be obtained from both the measurements of time delay and the Fermat potential reconstructed with parameter \(\xi^{\prime}_{\rm lens}\).
On the other hand, the stellar kinematics of lensing galaxies are only sensitive to the Newtonian potential \(\Psi\) and are therefore independent of the PPN parameter. The corresponding information can be obtained from the modeling of the kinematic observables in lensing galaxies. The modeling of the stellar kinematics in lensing galaxies by the H0LiCOW collaboration is quite mature; here we give a brief introduction to provide the reader with the necessary background. The dynamics of stars with the luminosity density distribution \(\rho_{*}(r)\) of the lens in the gravitational potential \(\Psi\) follows the Jeans equation. In the limit of a relaxed (vanishing time derivatives) and spherically symmetric system, with the only distinction being between the radial (\(\sigma_{r}\)) and tangential (\(\sigma_{t}\)) dispersions, the anisotropic Jeans equation is
\[\frac{\partial(\rho_{*}\sigma_{r}^{2})}{\partial r}+\frac{2\beta_{\rm ani}(r) \rho_{*}\sigma_{r}^{2}}{r}=-\rho_{*}\frac{\partial\Psi}{\partial r}\,, \tag{4}\]
where \(\beta_{\rm ani}(r)\equiv 1-\frac{\sigma_{t}^{2}}{\sigma_{r}^{2}}\) is the stellar anisotropy. The luminosity-weighted projected velocity dispersion \(\sigma_{s}\) is given by \(I(R)\sigma_{s}^{2}=2\int_{R}^{\infty}\left(1-\beta_{\rm ani}(r)\frac{R^{2}}{r^{2}}\right)\frac{\rho_{*}\sigma_{r}^{2}\,r\,dr}{\sqrt{r^{2}-R^{2}}}\) (Suyu et al., 2010), where \(I(R)\) is the projected light distribution and \(R\) is the projected radius. Taking the observational conditions into account, the actual observable is the luminosity-weighted line-of-sight velocity dispersion \(\sigma_{v}\) within an aperture \(\mathcal{A}\), given by \(\sigma_{v}^{2}=\frac{\int_{\mathcal{A}}\left[I(R)\sigma_{s}^{2}\ast\mathcal{P}\right]dA}{\int_{\mathcal{A}}\left[I(R)\ast\mathcal{P}\right]dA}\), where \(\mathcal{P}\) is the point spread function (PSF) of the seeing and \(\ast\) denotes convolution. The prediction of the stellar kinematics requires the three-dimensional stellar density \(\rho_{*}\) and mass profile \(M(r)\). From the imaging data, the parameters of the lens mass surface density, \(\xi_{\rm lens}\), and of the surface brightness of the lens, \(\xi_{\rm light}\), can be extracted. Finally, the prediction of \(\sigma_{v}\) from any model can be decomposed into a cosmological part \(D_{s}/D_{ds}\) and a stellar kinematics part \(J(\xi_{\rm lens},\xi_{\rm light},\beta_{\rm ani})\) (Birrer et al., 2019). The function \(J\) captures all of the model components calculated from angles on the sky (from the imaging data) and the anisotropy distribution of the stellar orbits (from the spectroscopy). This allows us to obtain the distance ratio \(D_{s}/D_{ds}\) from the well-measured velocity dispersion, independently of the cosmological model and time delays, although it still relies on the lens model \(\xi_{\rm lens}\) (not the \(\xi^{\prime}_{\rm lens}\) under PPN) (Birrer et al., 2016, 2019)
\[\frac{D_{\rm s}}{D_{\rm ds}}=\frac{\sigma_{v}^{2}}{c^{2}J(\xi_{\rm lens}, \xi_{\rm light},\beta_{\rm ani})}\;. \tag{5}\]
The lens model parameter entering \(J\) is the "unrescaled" \(\xi_{\rm lens}\). When we use \(\xi^{\prime}_{\rm lens}\) in place of \(\xi_{\rm lens}\), the corresponding distance ratio must also be rescaled, as
\[\frac{2}{1+\gamma_{\rm PPN}}\frac{D_{\rm s}}{D_{\rm ds}}=\frac{\sigma_{v}^{2} }{c^{2}J(\xi^{\prime}_{\rm lens},\xi_{\rm light},\beta_{\rm ani})}\,. \tag{6}\]
Furthermore, we can define the rescaled deflector galaxy distance \(D^{\prime}_{\rm d}=\frac{1+\gamma_{\rm PPN}}{2}D_{\rm d}\). By combining Eqs. (3) and (6), we obtain (Birrer et al., 2016, 2019)
\[D^{\prime}_{\rm d}=\frac{1}{1+z_{\rm d}}\frac{c\Delta t_{\rm AB}}{\Delta\phi_{\rm AB}(\xi^{\prime}_{\rm lens})}\frac{c^{2}J(\xi^{\prime}_{\rm lens},\xi_{\rm light},\beta_{\rm ani})}{\sigma_{v}^{2}}\,. \tag{7}\]
This is the second distance we need. For more details on how these two distances are obtained from strong lensing systems, we refer to Wong et al. (2020) and references therein.
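As an illustration of how the two distances follow from Eqs. (3) and (7) once the time delay, the Fermat potential difference, the velocity dispersion and the kinematics term \(J\) are in hand, a minimal Python sketch is given below; all numerical inputs are placeholders chosen only for plausibility, not measurements of any real lens.

```python
import numpy as np

C = 299792458.0                  # speed of light in m/s
MPC = 3.0856775814913673e22      # metres per megaparsec

def time_delay_distance(dt_sec, dphi_rad2):
    """Eq. (3): D_dt = c * Delta t_AB / Delta phi_AB (Fermat potential in rad^2), in Mpc."""
    return C * dt_sec / dphi_rad2 / MPC

def deflector_distance(dt_sec, dphi_rad2, sigma_v_ms, J, z_d):
    """Eq. (7): D'_d = [c*dt / ((1+z_d)*dphi)] * c^2 * J / sigma_v^2, in Mpc."""
    return (C * dt_sec / ((1.0 + z_d) * dphi_rad2)
            * C**2 * J / sigma_v_ms**2) / MPC

# Purely illustrative placeholder inputs (not taken from any published lens):
dt = 90.0 * 86400.0      # 90-day time delay in seconds
dphi = 2.0e-11           # Fermat potential difference in rad^2
sigma_v = 250.0e3        # stellar velocity dispersion in m/s
J = 3.5e-7               # dimensionless kinematics term
z_d = 0.5

print(f"D_dt  ~ {time_delay_distance(dt, dphi):.0f} Mpc")
print(f"D'_d  ~ {deflector_distance(dt, dphi, sigma_v, J, z_d):.0f} Mpc")
```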
Thanks to the H0LiCOW collaboration, the posterior distributions of four lenses (namely RXJ1131-1231 (Suyu et al., 2013, 2014), PG1115+080 (Chen et al., 2019), B1608+656 1 (Suyu et al., 2010; Jee et al., 2019), and J1206+4332 (Birrer et al., 2019)), including both the angular diameter distances to the lenses \(D_{d}\) and the time-delay distances \(D_{\Delta t}\), have been published, making them easy to use. They are available on the H0LiCOW website 2. The redshifts of both lens and source, the time-delay distances, and the angular diameter distances to the lenses for these lensed quasar systems are summarized in Table 2 of Wong et al. (2020). For more work using these lensing systems, we refer the reader to the literature (Ding et al., 2021; Liu et al., 2022; Bag et al., 2022; Sonnenfeld, 2021; Liao et al., 2015; Rathna Kumar et al., 2015; Liao et al., 2020; Liu et al., 2022).
Footnote 1: This lens was given in the form of a skewed log-normal distribution, due to the absence of a blind analysis with respect to the cosmological parameters
Footnote 2: [http://www.holicow.org](http://www.holicow.org)
The next issue is how to obtain these two distances. Both \(D_{d}\) and \(D_{\Delta t}\) depend on the cosmological model and therefore change for different cosmological models. The traditional approach is to assume a particular cosmological model, such as flat \(\Lambda\)CDM, but this makes the analysis cosmological-model-dependent. In this work, we follow the reconstruction method used in Liao et al. (2019) and use current SN Ia observations to determine distances, even though SN Ia observations only provide unanchored, relative distances.
### Unanchored distance from observations of SNe Ia using GP regression
Among the most energetic variable sources in the Universe, SNe Ia serve as standard candles and are regarded as powerful cosmological probes. It was the observations of SNe Ia that led to the discovery of the accelerating expansion of the Universe. We use the recent Pantheon dataset, consisting of 1048 SNe Ia spanning the redshift range \(0.01<z<2.3\) (Scolnic et al., 2018). To combine the Pantheon SNe and the H0LiCOW strong-lens datasets, we generate samples of the unanchored luminosity distance \(H_{0}D^{L}\) from the posterior of the Pantheon dataset. In order to perform the posterior sampling in a cosmological-model-independent way, GP regression (Holsclaw et al., 2010, 2010; Joudaki et al., 2018; Shafieloo et al., 2012, 2013; Keeley et al., 2019) is used here through the code GPHist3 (Kirkby and Keeley, 2017). GP regression is a powerful tool for the reconstruction of a function, since the regression occurs in an infinite-dimensional function space without an overfitting problem (Joudaki et al., 2018). GP regression works by generating a large sample of functions \(\gamma(z)\) determined by the covariance function. The covariance between these functions can be described by a kernel function. We adopt a squared-exponential kernel to
parameterize the covariance
\[\langle\gamma(z_{1})\gamma(z_{2})\rangle=\sigma_{f}^{2}\,\exp\{-[s(z_{1})-s(z_{2})]^ {2}/(2\ell^{2})\}, \tag{8}\]
with hyperparameters \(\sigma_{f}\) and \(\ell\) that are marginalized over. Here \(\gamma(z)\) is a random function drawn from the distribution defined by the covariance, and we adopt \(\gamma(z)=\ln\{[H^{\rm fid}(z)/H_{0}]/[H(z)/H_{0}]\}\) to generate expansion histories \(H(z)/H_{0}\) from the Pantheon dataset. Here \(H^{\rm fid}(z)/H_{0}\) is chosen to be the best-fit \(\Lambda\)CDM model for the Pantheon data and serves as the mean function for the GP regression. The final reconstruction is not completely independent of the mean function, which therefore has some influence on the result, although the hyperparameters help to track deviations from the mean function. Since the true model should be very close to flat \(\Lambda\)CDM, our choice of mean function is reasonable (Shafieloo et al., 2012, 2013; Aghamousa et al., 2017).
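A minimal sketch of drawing zero-mean realizations of \(\gamma(z)\) from the squared-exponential covariance of Eq. (8) is given below; for simplicity the kernel is evaluated directly in redshift rather than in the transformed variable \(s(z)\), and the hyperparameter values are illustrative rather than marginalized over as in GPHist.

```python
import numpy as np

def se_kernel(s1, s2, sigma_f, ell):
    """Squared-exponential covariance of Eq. (8): sigma_f^2 * exp(-(s1-s2)^2 / (2*ell^2))."""
    return sigma_f**2 * np.exp(-0.5 * ((s1[:, None] - s2[None, :]) / ell) ** 2)

# Draw a few zero-mean realizations of gamma(z) on a redshift grid
# (illustrative hyperparameters; a small jitter keeps the Cholesky factorization stable).
z = np.linspace(0.0, 2.3, 200)
K = se_kernel(z, z, sigma_f=0.05, ell=0.5) + 1e-10 * np.eye(z.size)
L = np.linalg.cholesky(K)
rng = np.random.default_rng(42)
gamma_draws = L @ rng.standard_normal((z.size, 5))   # five draws of gamma(z)
print(gamma_draws.shape)
```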
To keep the focus on the main goal of our work, i.e., constraining the Hubble constant and \(\gamma_{\rm PPN}\), we assume a flat universe here. With the reconstructed expansion history \(H(z)/H_{0}\), the unanchored SN luminosity distances can be calculated as
\[H_{0}D^{L}(z)=(1+z)\int_{0}^{z}dz^{\prime}/[H(z^{\prime})/H_{0}]~{}. \tag{9}\]
The 1000 unanchored luminosity distance curves \(H_{0}D^{L}(z)\) reconstructed from the SN data are shown in Fig. 1. They trace the shape of the cosmic distance-redshift relation of the Pantheon data very well. It should be noted that the redshift range of the SN dataset fully covers the redshifts of the four strong lensing systems, so no extrapolation in redshift is needed.
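For concreteness, the numerical evaluation of Eq. (9) for a single reconstructed expansion history can be sketched as follows; the fiducial flat \(\Lambda\)CDM \(H(z)/H_{0}\) below merely stands in for one GP realization, and the same integration is repeated for every draw in the actual analysis.

```python
import numpy as np

# Fiducial dimensionless expansion history E(z) = H(z)/H0 (flat LCDM, Omega_m = 0.3),
# used here as a stand-in for a single GP realization.
z = np.linspace(0.0, 2.3, 1000)
E = np.sqrt(0.3 * (1.0 + z) ** 3 + 0.7)

# Eq. (9): H0*D^L(z) = (1+z) * int_0^z dz'/E(z')  (c = 1 units),
# evaluated with a cumulative trapezoidal integral.
chi = np.concatenate(([0.0], np.cumsum(0.5 * (1.0 / E[1:] + 1.0 / E[:-1]) * np.diff(z))))
H0DL = (1.0 + z) * chi
H0DA = H0DL / (1.0 + z) ** 2    # unanchored angular diameter distance, used in Sec. 2.3

print(f"H0*D^L at z=2.3 : {H0DL[-1]:.3f} (dimensionless, c=1)")
```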
### Simultaneous measurements on Hubble constant and PPN parameters
In summary, by combining the three measurements, namely the optical lensing image, the deflector spectroscopy, and the time delays, two distances \((D_{\Delta t},D_{\rm d}^{\prime})\) can be inferred simultaneously, based on Eqs. (3) and (7). It should be stressed that only the deflector galaxy distance \(D_{\rm d}^{\prime}\) carries the information on the PPN parameter, while the time-delay distance \(D_{\Delta t}\) is sensitive to \(H_{0}\). Thus, the combination of \(D_{\Delta t}\) and \(D_{\rm d}^{\prime}\) directly provides a new way to simultaneously measure \(H_{0}\) and \(\gamma_{\rm PPN}\).
The steps for simultaneously constraining \(H_{0}\) and \(\gamma_{\rm PPN}\) are summarized as follows (a minimal numerical sketch is given after the list):
1. Draw 1000 unanchored luminosity distance curves \(H_{0}D^{L}\) from the SN data, and convert them to unanchored angular diameter distances \(H_{0}D^{A}\) for use with the strong lensing systems.
2. Calculate 1000 values of \(H_{0}D_{d}\) at each lens redshift from the 1000 \(H_{0}D^{A}\) curves, to be used with the four lens posterior distributions from the H0LiCOW program; adopt the same procedure at the source redshifts of the four strong lens systems to calculate 1000 values of \(H_{0}D_{s}\); combine these \(H_{0}D_{d}\) and \(H_{0}D_{s}\) to calculate \(H_{0}D_{\Delta t}\) for each system using \(H_{0}D_{\Delta t}=(1+z_{d})(H_{0}D_{d})(H_{0}D_{s})/(H_{0}D_{ds})\)4; Footnote 4: We use the standard distance relation to obtain the angular diameter distance between the lens and the source (Weinberg, 1972) for a spatially flat universe, \(D_{ds}=D_{s}-[(1+z_{d})/(1+z_{s})]D_{d}\).
3. Compute the likelihood, for each of the 1000 realizations, from the H0LiCOW \(D_{\Delta t}\) and \(D_{d}\) posteriors for each lens system, over many values of \(H_{0}\) and \(\gamma_{\rm PPN}\);
4. Multiply the four likelihoods to form the full likelihood for each realization, for each value of \(H_{0}\) and \(\gamma_{\rm PPN}\);
5. Marginalize over the realizations to form the posterior distributions of \(H_{0}\) and \(\gamma_{\rm PPN}\).
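A minimal sketch of steps 1-5 for a single lens is given below; the "observed" distances, their uncertainties, and the scatter used to mimic the GP draws are placeholders, whereas the real analysis evaluates the published H0LiCOW posteriors for all four lenses.

```python
import numpy as np

C_KMS = 299792.458                       # speed of light in km/s
rng = np.random.default_rng(1)

H0_grid = np.linspace(60.0, 85.0, 101)   # km/s/Mpc
g_grid = np.linspace(0.3, 1.7, 71)       # gamma_PPN
H0, G = np.meshgrid(H0_grid, g_grid, indexing="ij")

# Placeholder "observed" values for a single lens (illustrative only, not H0LiCOW numbers)
z_d, z_s = 0.5, 1.5
Ddt_obs, sig_Ddt = 4000.0, 150.0         # Mpc
Dd_obs, sig_Dd = 1200.0, 100.0           # Mpc (published, PPN-rescaled D'_d)

post = np.zeros_like(H0)
for _ in range(1000):
    # Steps 1-2: one GP realization of the dimensionless distances H0*D/c at z_d and z_s
    # (mimicked here by Gaussian scatter around fiducial values)
    h0Dd = 0.30 * (1.0 + 0.01 * rng.standard_normal())
    h0Ds = 0.42 * (1.0 + 0.01 * rng.standard_normal())
    h0Dds = h0Ds - (1.0 + z_d) / (1.0 + z_s) * h0Dd      # flat-universe relation (footnote 4)

    # Step 3: likelihood on the (H0, gamma_PPN) grid for this realization
    Dd = h0Dd * C_KMS / H0
    Ddt_model = (1.0 + z_d) * (h0Dd * h0Ds / h0Dds) * C_KMS / H0
    Ddp_model = 0.5 * (1.0 + G) * Dd                     # D'_d = (1+gamma_PPN)/2 * D_d
    loglike = (-0.5 * ((Ddt_model - Ddt_obs) / sig_Ddt) ** 2
               - 0.5 * ((Ddp_model - Dd_obs) / sig_Dd) ** 2)

    # Steps 4-5: (single lens here) accumulate, i.e. marginalize over realizations
    post += np.exp(loglike)

post /= post.sum()
print("posterior grid shape:", post.shape)
```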
## 3 Results and discussions
Working with the four posteriors of \(D_{\Delta t}\) and \(D_{d}\) published by the H0LiCOW program and combining them with the reconstructed unanchored SN distances, we obtain the final distributions for the Hubble constant \(H_{0}\) and the PPN parameter \(\gamma_{\rm PPN}\). Our model-independent constraints are \(H_{0}=72.9^{+2.0}_{-2.3}\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\gamma_{\rm PPN}=0.89^{+0.17}_{-0.15}\) (median value with the \(16^{th}\) and \(84^{th}\) percentiles around it) for the combination of all lenses. The one-dimensional posterior distributions are shown in Fig. 2. The numerical constraints for the four individual lenses can be found in Table 1. One can see that the constraints from all the lenses show that GR is supported within the observational uncertainties, with no obvious evidence of a deviation from GR. This can be compared to the results of Yang et al. (2020), \(H_{0}=73.65^{+1.65}_{-2.26}\) km s\({}^{-1}\) Mpc\({}^{-1}\) and \(\gamma_{\rm PPN}=0.87^{+0.19}_{-0.17}\), obtained within a flat \(\Lambda\)CDM model for the combination of all lenses; our results agree with theirs, which supports the robustness of our method. Our results on \(H_{0}\) are consistent with the time-delay and time-delay plus SN results (Collett et al., 2019; Wong et al., 2020; Taubenberger et al., 2019) obtained within specific cosmological models or with polynomial fitting of the distance relation. However, it should be stressed that, compared with assuming a specific model, the combination of strong lensing with current astronomical probes such as SNe Ia reduces possible bias in our work, and, importantly, without a significant increase in uncertainty.
On the other hand, the Hubble constant and the PPN parameter are in fact completely degenerate, as can also be seen in the work of Yang et al. (2020). From a theoretical point of view, the PPN parameter is encoded in the inferred angular diameter distance to the lens, \(D_{\rm d}^{\prime}=D_{\rm d}(1+\gamma_{\rm PPN})/2\),
Figure 1: The unanchored luminosity distance curves \(H_{0}D^{L}(z)\) reconstructed from the Pantheon SN Ia data for a representative sample of the 1000 GP realizations.
which means that the PPN parameter and the cosmological distance (related to the Hubble constant) are degenerate. For instance, an increase in \(\gamma_{\rm PPN}\) implies an enhancement of the gravitational force; alternatively, one can keep gravity unmodified but change the corresponding distances. To break this degeneracy without assuming any value for \(H_{0}\), we can also consider the ratio of distances \(D_{\Delta t}/D_{d}^{\prime}=\frac{2}{1+\gamma_{\rm PPN}}D_{s}/D_{ds}\), which is independent of \(H_{0}\). The final results are displayed in Fig. 3. We see that the mean values of \(\gamma_{\rm PPN}\) for the four individual lenses change slightly (though not significantly); the numerical results are shown in Table 2. For the combination of all lenses, the \(\gamma_{\rm PPN}\) parameter is even closer to one. However, for the lens J1206, the corresponding constraint is \(\gamma_{\rm PPN}=1.78^{+0.87}_{-0.67}\). Although this deviates from GR at the \(1\sigma\) confidence level, GR is still supported within the \(\sim 1.2\sigma\) confidence level. As pointed out in the work of Millon et al. (2020), the dispersion measurements do not play a significant role in the \(H_{0}\) estimation of the H0LiCOW analysis, except for the lens J1206. In other words, in the analysis of the distance ratios, since we have not assumed any value for \(H_{0}\), the degeneracy between \(H_{0}\) and \(\gamma_{\rm PPN}\) causes a shift in \(\gamma_{\rm PPN}\). In addition, for the lens B1608, the constraint on \(\gamma_{\rm PPN}\) using the distance ratio does not change at all, since the \(D_{\Delta t}\) and \(D_{d}\) of this lens are completely independent, owing to the lack of a blind analysis for this lens. Since no new data have been added, we do not expect any improvement in precision, and our results confirm this.
## 4 Conclusion
In this work, we propose a model-independent method for simultaneously constraining the Hubble constant and the post-Newtonian parameter using time-delay strong lensing
Table 1: Summary of the constraints on the Hubble constant \(H_{0}\) and \(\gamma_{\rm PPN}\) from four of the H0LiCOW lenses.
Table 2: Summary of the constraints on \(\gamma_{\rm PPN}\) using the distance ratio method from four H0LiCOW lenses.
Figure 3: The one-dimensional posterior distributions of \(\gamma_{\rm PPN}\) using distance ratio method with four H0LiCOW lenses. The dashed line is \(\gamma_{\rm PPN}=1\) predicted by GR.
Figure 2: The simultaneous constraints of \(H_{0}\) (left panel) and \(\gamma_{\rm PPN}\) (right panel) from four of the H0LiCOW lenses. The dashed line is \(\gamma_{\rm PPN}=1\) predicted by GR.
systems and observational SNe Ia. To match the lens and source redshifts of the four strong lensing systems analyzed by the H0LiCOW program, we use GP regression to reconstruct distances from SNe Ia observations instead of assuming a specific model. Although SNe Ia observations provide only unanchored (relative) distances, strong lensing systems encode absolute distances; thus, such dataset combinations can anchor cosmological distances.
Firstly, we directly use the four posteriors of \(D_{\Delta t}\) (inferred from the lensing mass) and \(D_{d}\) (inferred from the dynamical mass) published by the H0LiCOW program, combined with the reconstructed unanchored SN distances. For the combination of all lenses, we find that the constraint on the PPN parameter is \(\gamma_{\rm PPN}=0.89^{+0.17}_{-0.15}\), which demonstrates that GR is supported within the observational uncertainties. The result on the Hubble constant is \(H_{0}=72.9^{+2.0}_{-2.3}\) km s\({}^{-1}\) Mpc\({}^{-1}\), which is consistent with the time-delay and time-delay plus SN results (Collett et al., 2019; Wong et al., 2020; Taubenberger et al., 2019) obtained within specific cosmological models or with polynomial fitting of the distance relation.
Secondly, in order to avoid the influence of \(H_{0}\) on the GR test, we do not assume any value for \(H_{0}\) and consider the distance-ratio method \(D_{\Delta t}/D_{d}^{{}^{\prime}}\) to test the PPN parameter. This method can independently constrain \(\gamma_{\rm PPN}\), because the distance ratio involves only the velocity dispersion measurements, and the dispersion measurements do not play a significant role in determining \(H_{0}\). The mean values of \(\gamma_{\rm PPN}\) for the four individual lenses change slightly (though not significantly) with the distance-ratio method. However, the constraint from the lens J1206 is \(\gamma_{\rm PPN}=1.78^{+0.87}_{-0.67}\), which no longer supports GR within the \(1\sigma\) observational uncertainty.
As a final remark, we point out that time-delay lenses combined with SN observations provide a promising and model-independent method to test General Relativity. The uncertainty of model-independent analyses with such dataset combinations can be comparable to the uncertainty obtained when assuming specific models, while reducing possible biases. We also look forward to a large amount of future data, not only from strong lensing systems but also from SNe Ia, allowing us to further improve the measurements of \(H_{0}\) and \(\gamma_{\rm PPN}\). Current surveys such as the Dark Energy Survey (DES) (Treu et al., 2018) and the Hyper Suprime-Cam Survey (HSC) (More et al., 2017), and upcoming wide-area, deep surveys like the Large Synoptic Survey Telescope (LSST) (Oguri & Marshall, 2010) and the Euclid and WFIRST satellites (Barnacka, 2018; Petrushevska et al., 2018), with much wider fields of view and higher sensitivity, will be able to discover and precisely localize a large number of lensed quasars and even other lensed sources with well-measured time delays. High-resolution imaging from space telescopes such as HST or from ground-based adaptive optics will help to better model the stellar kinematics in lensing galaxies. In addition, SN Ia data will continue to improve as well, playing an important role as a dense sampler of the cosmic expansion history over a wide range of redshifts. All of these will help us to carry out more accurate analyses of General Relativity, the Hubble constant, and cosmology in subsequent work.
## 5 Acknowledgments
This work was supported by National Natural Science Foundation of China under Grant Nos. 12203009, 12222302, 11973034. Liu. T.-H was supported by Chutian Scholars Program in Hubei Province (X2023007). Liao. K was supported by Funds for the Central Universities (Wuhan University 1302/600460081).
## 6 Data availability statements
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2309.14167 | Ultrafast Demagnetization through Femtosecond Generation of Non-thermal
Magnons | Ultrafast laser excitation of ferromagnetic metals gives rise to correlated,
highly non-equilibrium dynamics of electrons, spins and lattice, which are,
however, poorly described by the widely-used three-temperature model (3TM).
Here, we develop a fully ab-initio parameterized out-of-equilibrium theory
based on a quantum kinetic approach--termed (N+2) temperature model--that
describes magnon occupation dynamics due to electron-magnon scattering. We
apply this model to perform quantitative simulations on the ultrafast,
laser-induced generation of magnons in iron and demonstrate that on these
timescales the magnon distribution is non-thermal: predominantly high-energy
magnons are created, while the magnon occupation close to the center of the
Brillouin zone even decreases, due to a repopulation towards higher energy
states via a so-far-overlooked scattering term. We demonstrate that the simple
relation between magnetization and temperature computed at equilibrium does not
hold in the ultrafast regime and that the 3TM greatly overestimates the
demagnetization. The ensuing Gilbert damping becomes strongly magnon wavevector
dependent and requires a description beyond the conventional
Landau-Lifshitz-Gilbert spin dynamics. Our ab-initio-parameterized calculations
show that ultrafast generation of non-thermal magnons provides a sizable
demagnetization within 200fs in excellent comparison with experimentally
observed laser-induced demagnetizations. Our investigation emphasizes the
importance of non-thermal magnon excitations for the ultrafast demagnetization
process. | Markus Weißenhofer, Peter M. Oppeneer | 2023-09-25T14:22:04Z | http://arxiv.org/abs/2309.14167v3 | # Ultrafast generation of nonthermal magnons in iron: _Ab initio_ parameterized calculations
###### Abstract
Ultrafast laser excitation of ferromagnetic metals gives rise to correlated, highly non-equilibrium dynamics of electrons, spins and lattice, which are, however, poorly described by the widely used three-temperature model (3TM). Here, we develop a fully _ab initio_ parameterized out-of-equilibrium theory based on a quantum kinetic approach - termed _(N+2) temperature model_ - that describes magnon occupation dynamics due to electron-magnon scattering. We apply this model to perform quantitative simulations on the ultrafast, laser-induced generation of magnons in iron and demonstrate that on these timescales the magnon distribution is non-thermal: predominantly high-energy magnons are created, while the magnon occupation close to the center of the Brillouin zone even decreases, due to a repopulation towards higher energy states via a so-far-overlooked scattering term. Moreover, we show that the 3TM can be derived from our model and compare it with our microscopic calculations. In doing so, we demonstrate that the simple relation between magnetization and temperature computed at equilibrium does not hold in the ultrafast regime and that the 3TM greatly overestimates the demagnetization. Our _ab initio_-parametrized calculations show that ultrafast generation of non-thermal magnons provides a sizable demagnetization within 200 fs and, thus, emphasize the importance of magnon excitations for the ultrafast demagnetization process.
## I Introduction
The discovery that magnetic order can be manipulated on sub-picosecond timescales by femtosecond laser pulses [1; 2; 3] has fueled the emergence of intensive experimental and theoretical research efforts in the field of ultrafast magnetization dynamics. What makes this field particularly interesting, apart from its technological potential in future memory and spintronic devices [4; 5], is that many well-established physical paradigms cannot be simply transferred from the equilibrium to the ultrafast regime, due to its highly non-equilibrium nature. Relatedly, despite more than 25 years of intense research, the underlying mechanisms of ultrafast demagnetization are still heavily debated [6; 7]: while some works [8; 9; 10; 11] lean towards longitudinal excitations - i.e., the reduction of the magnetic moment carried by each atom due to the decrease of exchange splitting - others [12; 13; 14; 15] hint at transverse spin excitations - a reduction of the average magnetization due to the mutual tilting of the moments carried by different atoms - as the main contribution. Non-local contributions due to superdiffusive spin currents [16; 17] are relevant in certain situations [18; 19; 20; 21]. However, it has become evident that they are most likely not the only mechanism of ultrafast demagnetization [22; 23].
Theoretical models describing ultrafast magnetization dynamics typically rely on a separation of electronic, phononic and - if magnetization dynamics are to be considered separately - spin degrees of freedom. Beaurepaire _et al_. [1] introduced the three-temperature model (3TM) to explain the flow of the energy transferred by the laser by assuming that each subsystem is internally in thermal equilibrium and the system can hence be described by three temperatures (for electrons, phonons and spins), together with the respective distributions (Fermi-Dirac and Bose-Einstein). However, it was pointed out in numerous investigations that the distributions are non-thermal on ultrafast timescales [24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. Also, the 3TM discards completely the transfer of angular momentum due to demagnetization, which, according to recent experiments [34; 35], appears to be primarily to the lattice.
Transverse demagnetization is often studied using atomistic spin dynamics simulations based on the stochastic Landau-Lifshitz-Gilbert (LLG) equation together with an extended Heisenberg model [36; 37; 38], which can successfully reproduce experimentally measured demagnetization curves [39; 40]. The stochastic LLG is a Langevin-type equation with a coupling to a heat bath at a given temperature via a single parameter, the Gilbert damping parameter. This parameter includes all possible contributions - Fermi surface breathing, crystal defects, coupling to phonons, \(s-d\) coupling, etc. [41; 42; 43; 44; 45; 46; 47; 48] - to damping, and while it can in principle be obtained from _ab initio_ calculations, in practice it is typically taken from experimental measurements of ferromagnetic resonance (FMR) [49]. On the one hand, this ensures the versatility of atomistic spin dynamics simulations, but on the other hand, it obscures the details of the underlying microscopic energy and angular momentum transfer processes - which are crucial for understanding the fundamentals of ultrafast demagnetization. For this reason, steps have been taken in recent years to explicitly consider the coupling of spins to phonons [50; 51; 52; 53; 54; 55; 56; 57; 58] and electrons [59; 60; 61]. Also, due to the classical nature of the commonly used stochastic LLG, the equilibrium magnon occupations calculated by it follow Rayleigh-Jeans rather than Bose-Einstein statistics, thereby leading to the wrong temperature scaling of the magnetization [62; 63].
Implementation of quantum statistics in the spin-dynamics simulations can, however, provide the correct low-temperature scaling of the magnetization [64; 65].
In this work, we investigate the laser-induced generation of magnons, the low energy transverse excitations of the spin system, due to electron-magnon scattering. We develop a quantum kinetic approach, which will be termed _(N+2)-temperature model_ [(N+2)TM], to perform quantitative simulations of the time evolution of the non-thermal magnon dynamics in bcc iron. Being based on _ab initio_ parameters and considering also non-thermal magnon distributions, our work goes beyond what has been done in Refs. [59; 60; 66] and the conventional 3TM. In addition, we show that the 3TM and its relevant parameters can be obtained from our (N+2)TM and, with that, from _ab initio_ calculations.
## II Out-of-equilibrium magnon dynamics model
To describe the time evolution of the ultrafast non-thermal magnon occupation dynamics, we assume that magnon creation and annihilation are dominated by electron-magnon scattering processes. In this work, we use the \(sp-d\) model [67; 68] to describe such processes. The basic idea of both the \(s-d\) model and the \(sp-d\) model is the separation of the electrons into localized (\(d\)-band) electrons and itinerant (\(s\)-band, or \(s\)- and \(p\)-band) electrons. The magnetic moments of the \(d\) electrons make up the Heisenberg-type [69] magnetic moments of constant length, the small-energy excitations of which are the magnons. The itinerant electrons are described within a Stoner-type model [70]. While an unambiguous identification of the \(sp\) and \(d\) electrons as itinerant and localized, respectively, is strictly speaking not possible, it has nonetheless been established in the literature that these models provide a suitable framework for the description of the electron-spin interaction in many phenomena relevant for spintronics, e.g. magnetic relaxation [71; 72; 73], ultrafast demagnetization [59; 60; 61; 66; 74; 75; 76] and spin torques [77].
We assume local exchange between the itinerant and localized spins, as given by the Hamiltonian \(\hat{\mathcal{H}}_{\mathrm{em}}\sim\sum_{i=1}^{N}\hat{\mathbf{s}}^{\mathrm{itin}} \cdot\hat{\mathbf{S}}_{i}^{\mathrm{loc}}\), with \(N\) being the number of atoms, and \(\hat{\mathbf{s}}^{\mathrm{itin}}\) and \(\hat{\mathbf{S}}_{i}^{\mathrm{loc}}\) the spin operators for itinerant (\(sp\)) electrons and localized (\(d\)) electrons at atom \(i\). In second quantization and second order in magnon variables (details in Appendix A), the Hamiltonian reads
\[\hat{\mathcal{H}}_{\mathrm{em}}\approx-\Delta\sum_{\mathbf{k}\nu} \left(\hat{c}_{\mathbf{k}\nu\uparrow}^{\dagger}\hat{c}_{\mathbf{k}\nu\uparrow}-\hat{ c}_{\mathbf{k}\nu\downarrow}^{\dagger}\hat{c}_{\mathbf{k}\nu\downarrow}\right)\] \[-\Delta\sqrt{\frac{2}{SN}}\sum_{\mathbf{k}\nu\nu^{\prime},\mathbf{q}}\left( \hat{c}_{\mathbf{k}+\mathbf{q}\nu\uparrow}^{\dagger}\hat{c}_{\mathbf{k}\nu^{\prime} \downarrow}\hat{b}_{-\mathbf{q}}^{\dagger}+\hat{c}_{\mathbf{k}+\mathbf{q}\nu\downarrow}^ {\dagger}\hat{c}_{\mathbf{k}\nu^{\prime}\uparrow}\hat{b}_{\mathbf{q}}\right)\] \[+\frac{\Delta}{SN}\sum_{\mathbf{k}\nu\nu^{\prime},\mathbf{q}\mathbf{q}^{ \prime}}\left(\hat{c}_{\mathbf{k}-\mathbf{q}+\mathbf{q}^{\prime}\nu\uparrow}^{\dagger} \hat{c}_{\mathbf{k}\nu^{\prime}\uparrow}-\hat{c}_{\mathbf{k}-\mathbf{q}+\mathbf{q}^{\prime} \nu\downarrow}^{\dagger}\hat{c}_{\mathbf{k}\nu^{\prime}\downarrow}\right)\hat{b}_{ \mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}^{\prime}}. \tag{1}\]
Here, \(\Delta\) is the \(sp-d\) exchange parameter, \(S\) is the absolute value of the localized spins, \(\mathbf{k}\) and \(\mathbf{q}\) are vectors in reciprocal space, \(\hat{c}_{\mathbf{k}\nu\sigma}^{(\dagger)}\) is the fermionic electron annihilation (creation) operator for the itinerant electrons - with \(\nu\) being the band index and \(\sigma\in\{\uparrow,\downarrow\}\) - and \(\hat{b}_{\mathbf{q}}^{(\dagger)}\) is the bosonic magnon annihilation (creation) operator. The first term in Eq. (1) describes the spin-splitting of the itinerant electrons due to the exchange with the localized magnetic moments, the second one the excitation (annihilation) of a magnon due to a spin-flip process and the third one the spin-conserving scattering of a magnon and an electron from one state to another. It is worth noting that the second term leads to a transfer of both energy and angular momentum (i.e., spin) - since it can change the total number of magnons - while the third term can only transfer energy. For this reason, this term was discarded in earlier works [59; 60; 61]; however, our quantitative analysis reveals that the energy transferred by this term can exceed the energy transferred by the term first order in magnon operators.
We complete our Hamiltonian \(\mathcal{H}=\hat{\mathcal{H}}_{\mathrm{e}}+\hat{\mathcal{H}}_{\mathrm{m}}+\hat{\mathcal{H}}_{\mathrm{em}}\) by considering \(\hat{\mathcal{H}}_{\mathrm{e}}=\sum_{\mathbf{k}\nu\sigma}\varepsilon_{\mathbf{k}\nu\sigma}\hat{c}_{\mathbf{k}\nu\sigma}^{\dagger}\hat{c}_{\mathbf{k}\nu\sigma}\) and \(\hat{\mathcal{H}}_{\mathrm{m}}=\sum_{\mathbf{q}}\hbar\omega_{\mathbf{q}}\hat{b}_{\mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}}\), with \(\varepsilon_{\mathbf{k}\nu\sigma}=\varepsilon_{\mathbf{k}\nu}-2\Delta\delta_{\sigma\uparrow}\) being the mode- and spin-dependent electron energies, which are obtained from first-principles calculations, and \(\hbar\omega_{\mathbf{q}}\) being the magnon energies. Note that we have absorbed the term zeroth order in magnon variables in Eq. (1) into the otherwise spin-independent \(\hat{\mathcal{H}}_{\mathrm{e}}\).
Next, we use the Hamiltonian introduced above to construct a quantum kinetic approach for the description of the out-of-equilibrium dynamics of electrons and magnons. We define the rates of energy exchange between both subsystems as
\[\dot{E}_{\mathrm{m}} =\sum_{\mathbf{q}}\hbar\omega_{\mathbf{q}}\dot{n}_{\mathbf{q}} \tag{2}\] \[\dot{E}_{\mathrm{e}} =\sum_{\mathbf{k}\nu\sigma}\varepsilon_{\mathbf{k}\nu\sigma}\dot{f}_{\mathbf{k}\nu\sigma}=-\sum_{\mathbf{q}}\hbar\omega_{\mathbf{q}}\dot{n}_{\mathbf{q}}. \tag{3}\]
where the dot denotes the temporal derivative, and \(f_{\mathbf{k}\nu\sigma}\) and \(n_{\mathbf{q}}\) are the electron and magnon occupation numbers, respectively. The time derivatives of the occupation numbers can be calculated by applying Fermi's golden rule to the scattering Hamiltonian (1). To simplify the calculations, we further assume a thermal
electron distribution and can hence introduce a single electronic temperature \(T_{\rm e}\) that relates to the occupation of electronic states via the Fermi-Dirac distribution. This allows us to apply and also extend (by including terms second order in the bosonic operators) the ideas laid out in Allen's seminal work on electron-phonon interaction [78] to electron-magnon scattering, yielding \(\dot{n}_{\mathbf{q}}=\big[n^{\rm BE}(\omega_{\mathbf{q}},T_{\rm e})-n_{\mathbf{q}}\big]\gamma_{\mathbf{q}}+\sum_{\mathbf{q}^{\prime}}\big[(n_{\mathbf{q}}+1)n_{\mathbf{q}^{\prime}}n^{\rm BE}(\omega_{\mathbf{q}}-\omega_{\mathbf{q}^{\prime}},T_{\rm e})+(\mathbf{q}\leftrightarrow\mathbf{q}^{\prime})\big]\Gamma_{\mathbf{qq}^{\prime}}\), with \(n^{\rm BE}(\omega_{\mathbf{q}},T_{\rm e})=[e^{\frac{\hbar\omega_{\mathbf{q}}}{k_{\rm B}T_{\rm e}}}-1]^{-1}\) being the Bose-Einstein distribution evaluated at the electron temperature. The scattering rates are given by
\[\gamma_{\mathbf{q}} =\frac{4\pi\Delta^{2}}{SN}\omega_{\mathbf{q}}I_{\uparrow\downarrow}( T_{\rm e})\sum_{\mathbf{k}\nu\nu^{\prime}}\delta(\varepsilon_{\rm F}- \varepsilon_{\mathbf{k}-\mathbf{q}\nu\uparrow})\delta(\varepsilon_{\rm F}-\varepsilon _{\mathbf{k}\nu^{\prime}\downarrow}), \tag{4}\]
\[\Gamma_{\mathbf{qq}^{\prime}} =\frac{2\pi\Delta^{2}}{S^{2}N^{2}}(\omega_{\mathbf{q}}-\omega_{\mathbf{q} ^{\prime}})\sum_{\sigma}I_{\sigma\sigma}(T_{\rm e}) \tag{5}\] \[\quad\times\sum_{\mathbf{k}\nu\nu^{\prime}}\delta(\varepsilon_{\rm F} -\varepsilon_{\mathbf{k}-\mathbf{q}+\mathbf{q}^{\prime}\nu\sigma})\delta(\varepsilon_{\rm F }-\varepsilon_{\mathbf{k}\nu^{\prime}\sigma}),\]
with \(\varepsilon_{\rm F}\) being the Fermi energy. The functions \(I_{\sigma\sigma^{\prime}}(T_{\rm e})\) have the property \(\lim_{T_{\rm e}\to 0}I_{\sigma\sigma^{\prime}}(T_{\rm e})=1\) and account for the smearing of the Fermi-Dirac distribution at high electron temperatures, similar to what has been derived for electron-phonon scattering [31]. The expression for \(I_{\sigma\sigma^{\prime}}(T_{\rm e})\) and details of the derivation of Eqs. (4)-(5) are in the Appendix A. Note that a comparison with linear spin-wave theory in the framework of the Landau-Lifshitz-Gilbert equation [79] reveals that \(\gamma_{\mathbf{q}}/\omega_{\mathbf{q}}=\alpha_{\mathbf{q}}\) can be viewed as a mode-dependent Gilbert damping parameter.
Due to the assumption that the electron occupation numbers follow the Fermi-Dirac distribution at all times, the change in electron energy is determined by the change in \(T_{\rm e}\), i.e., \(\dot{E}_{\rm e}=\sum_{\mathbf{k}\nu\sigma}\varepsilon_{\mathbf{k}\nu\sigma}(\partial f_{\mathbf{k}\nu\sigma}/\partial T_{\rm e})\dot{T}_{\rm e}=C_{\rm e}\dot{T}_{\rm e}\), with the electronic heat capacity \(C_{\rm e}=\sum_{\mathbf{k}\nu\sigma}\varepsilon_{\mathbf{k}\nu\sigma}(\partial f_{\mathbf{k}\nu\sigma}/\partial T_{\rm e})\). By additionally considering the absorption of a laser pulse with power \(P(t)\) by the electrons and a coupling of the electrons to a phonon heat bath as in the 2TM, we finally obtain our out-of-equilibrium magnon dynamics model:
\[\dot{n}_{\mathbf{q}} =\Big[n^{\rm BE}(\omega_{\mathbf{q}},T_{\rm e})-n_{\mathbf{q}}\Big]\gamma_{\mathbf{q}} \tag{6}\] \[+\sum_{\mathbf{q}^{\prime}}\Big[(n_{\mathbf{q}}+1)n_{\mathbf{q}^{\prime}}n^{\rm BE}(\omega_{\mathbf{q}}-\omega_{\mathbf{q}^{\prime}},T_{\rm e})+(\mathbf{q}\leftrightarrow\mathbf{q}^{\prime})\Big]\Gamma_{\mathbf{qq}^{\prime}},\]
\[\dot{T}_{\rm e} =\frac{1}{C_{\rm e}}\Big[-\sum_{\mathbf{q}}\hbar\omega_{\mathbf{q}}\dot{n}_{\mathbf{q}}+G_{\rm ep}(T_{\rm p}-T_{\rm e})+P(t)\Big], \tag{7}\] \[\dot{T}_{\rm p} =-\frac{G_{\rm ep}}{C_{\rm p}}(T_{\rm p}-T_{\rm e}). \tag{8}\]
Here, \(T_{\rm p}\), \(C_{\rm p}\) and \(G_{\rm ep}\) are the phonon temperature, the phonon heat capacity and the electron-phonon coupling constant, respectively. Note that we do not consider direct magnon-phonon coupling, which has been shown to be a reasonable approximation for \(3d\) ferromagnets [39; 40]. We would like to point out that the non-thermal magnon occupations \(n_{\mathbf{q}}\) can be translated to mode-specific temperatures via the Bose-Einstein distribution, \(T_{\mathbf{q}}:=\hbar\omega_{\mathbf{q}}/(k_{\rm B}\ln(n_{\mathbf{q}}^{-1}+1))\). Based on this - and in distinction from the 3TM - we term the framework provided by Eqs. (6)-(8) the _(N+2)-temperature model_ ((N+2)TM). Below, we reveal by solving these coupled equations numerically that they provide a viable framework to describe laser-induced ultrafast magnetization dynamics and the generation of _non-thermal_ magnons, going beyond the well-established 3TM.
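To make the structure of Eqs. (6)-(8) concrete, a minimal numerical sketch of their integration is given below; the magnon spectrum, scattering rates, heat capacities and pulse amplitude are crude toy values, not the _ab initio_ parameters used in Sec. III.

```python
import numpy as np
from scipy.integrate import solve_ivp

KB = 8.617333262e-5                          # Boltzmann constant in eV/K

# --- Crude toy inputs standing in for the ab initio tables --------------------
N = 40                                       # number of magnon modes kept here
eps = np.linspace(0.005, 0.35, N)            # magnon energies hbar*omega_q in eV
gam = 8e-3 * eps / eps.max()                 # first-order rates gamma_q in 1/fs
Gam = 2e-6 * (eps[:, None] - eps[None, :])   # second-order rates Gamma_qq' in 1/fs
Ce, Cp, Gep = 1.0e-6, 3.0e-5, 1.0e-8         # heat capacities (eV/K) and e-ph coupling (eV/(K fs))

def pulse(t, A=3e-4, zeta=60.0):             # Gaussian laser source P(t), toy amplitude, t in fs
    return A / np.sqrt(2.0 * np.pi * zeta**2) * np.exp(-0.5 * (t / zeta) ** 2)

def nbe(x):
    """Bose-Einstein factor 1/(exp(x)-1) for a dimensionless argument x != 0."""
    return 1.0 / np.expm1(x)

def rhs(t, y):
    n, Te, Tp = y[:N], y[N], y[N + 1]
    # Eq. (6): spin-flip (first-order) term ...
    dn = (nbe(eps / (KB * Te)) - n) * gam
    # ... plus the magnon-redistribution (second-order) term
    de = eps[:, None] - eps[None, :]
    occ = np.zeros_like(de)
    mask = np.abs(de) > 1e-9
    occ[mask] = nbe(de[mask] / (KB * Te))
    dn += np.sum(((n[:, None] + 1.0) * n[None, :] * occ
                  + (n[None, :] + 1.0) * n[:, None] * occ.T) * Gam, axis=1)
    # Eqs. (7) and (8)
    dTe = (-np.sum(eps * dn) + Gep * (Tp - Te) + pulse(t)) / Ce
    dTp = -Gep / Cp * (Tp - Te)
    return np.concatenate([dn, [dTe, dTp]])

T0 = 300.0
y0 = np.concatenate([nbe(eps / (KB * T0)), [T0, T0]])
sol = solve_ivp(rhs, (-300.0, 3000.0), y0, max_step=2.0, rtol=1e-7)

n_fin, Te_fin, Tp_fin = sol.y[:N, -1], sol.y[N, -1], sol.y[N + 1, -1]
Tq_fin = eps / (KB * np.log(1.0 / n_fin + 1.0))   # mode-specific magnon temperatures
print(f"Te = {Te_fin:.1f} K, Tp = {Tp_fin:.1f} K, <Tq> = {Tq_fin.mean():.1f} K")
```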
Before doing so, we want to briefly discuss the relation between the (N+2)TM introduced here and the 3TM. Despite their phenomenological nature, the 2TM (\(T_{\rm e}\) and \(T_{\rm p}\)) and the 3TM (\(T_{\rm e}\), \(T_{\rm p}\) and \(T_{\rm m}\)) have been successfully applied to explain a plethora of phenomena [80], perhaps most prominently by Beaurepaire _et al_. to describe the ultrafast demagnetization of Ni [1]. Allen [78] and Manchon _et al_. [74] demonstrated that the 2TM and the 3TM can be derived from a microscopic out-of-equilibrium approach similar to the one used here. By assuming instantaneous relaxation of the magnon occupation numbers to the Bose-Einstein distribution with a single magnon temperature \(T_{\rm m}\), our (N+2)TM reduces to the 3TM (in the absence of magnon-phonon coupling),
\[C_{\rm m}\dot{T}_{\rm m} =G_{\rm em}(T_{\rm e}-T_{\rm m}), \tag{9}\] \[C_{\rm e}\dot{T}_{\rm e} =G_{\rm em}(T_{\rm m}-T_{\rm e})+G_{\rm ep}(T_{\rm p}-T_{\rm e})+P(t),\] \[C_{\rm p}\dot{T}_{\rm p} =G_{\rm ep}(T_{\rm e}-T_{\rm p}),\]
with the magnon heat capacity \(C_{\rm m}=\sum_{\mathbf{q}}C_{\mathbf{q}}=\sum_{\mathbf{q}}\hbar\omega_{\mathbf{q}}(\partial n _{\mathbf{q}}/\partial T_{\rm m})\) and the electron-magnon coupling constant
\[G_{\rm em}=\sum_{\mathbf{q}}C_{\mathbf{q}}\Big{[}\gamma_{\mathbf{q}}+\sum_{\mathbf{q}^{\prime}} \frac{k_{\rm B}T_{\rm m}}{\hbar\omega_{\mathbf{q}}}\Gamma_{\mathbf{qq}^{\prime}}\Big{]}. \tag{10}\]
Details of the derivation are found in the Appendix B. The above expression goes beyond what was derived in Ref. [74] by including terms second order in magnon variables and allows us to determine the electron-magnon coupling fully based on _ab initio_ parameters. We would like to point out that it can be extended further by going to higher order in the magnon variables.
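A minimal sketch of evaluating \(C_{\rm m}\) and the electron-magnon coupling of Eq. (10) from tabulated mode energies and scattering rates is given below; the spectrum and rates are positive placeholder values (the _ab initio_ \(\Gamma_{\mathbf{qq^{\prime}}}\) of Eq. (5) is proportional to \(\omega_{\mathbf{q}}-\omega_{\mathbf{q^{\prime}}}\) and hence antisymmetric, which is not reproduced by these toy numbers).

```python
import numpy as np

KB = 8.617333262e-5                          # Boltzmann constant in eV/K

def mode_heat_capacity(eps_q, T):
    """C_q = hbar*omega_q * d n^BE/dT for each mode, in eV/K per mode."""
    x = eps_q / (KB * T)
    return eps_q * x / T * np.exp(x) / np.expm1(x) ** 2

def electron_magnon_coupling(eps_q, gamma_q, Gamma, T):
    """Eq. (10): G_em = sum_q C_q [ gamma_q + sum_q' (kB*T/(hbar*omega_q)) * Gamma_qq' ]."""
    Cq = mode_heat_capacity(eps_q, T)
    second = (KB * T / eps_q) * Gamma.sum(axis=1)
    return np.sum(Cq * (gamma_q + second))

# Toy spectrum and rates (placeholders for the ab initio tables)
N = 200
eps_q = np.linspace(1e-3, 0.4, N)                        # eV
gamma_q = 5e-3 * eps_q / eps_q.max()                     # 1/fs
Gamma = 1e-6 * np.abs(eps_q[:, None] - eps_q[None, :])   # 1/fs, placeholder magnitudes

T = 300.0
print("C_m  =", mode_heat_capacity(eps_q, T).sum(), "eV/K (toy units)")
print("G_em =", electron_magnon_coupling(eps_q, gamma_q, Gamma, T), "eV/(K fs) (toy units)")
```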
## III Results
We apply the (N+2)TM defined by Eqs. (6)-(8) to bcc iron. To obtain a full solution of the out-of-equilibrium dynamics, it is necessary to calculate material-specific quantities. First, we estimate \(\Delta\approx 0.75\,\)eV from the band structure, and with that we compute the quantities \(\gamma_{\mathbf{q}}\), \(\Gamma_{\mathbf{qq}^{\prime}}\) and \(I_{\sigma\sigma^{\prime}}(T_{\rm e})\) using the full-potential linear augmented plane wave code ELK [81] (details can be found in the Appendix C). For bcc iron
it turns out that \(I_{\sigma\sigma^{\prime}}(T_{\rm e})\) scales only weakly with temperature and hence we use the low temperature limit \(I_{\sigma\sigma^{\prime}}(T_{\rm e})=1\) hereinafter. The parameters governing the magnon energies \(\hbar\omega_{\mathbf{q}}=S(2d+\sum_{j}J_{ij}[1-\exp(-i\mathbf{q}\cdot(\mathbf{r}_{j}-\mathbf{r}_{i}))])\) were taken from earlier works: the exchange constants \(J_{ij}\) are from first-principles calculations [82] and the magneto-crystalline anisotropy energy \(d=6.97\,\mu\mathrm{eV}\) per atom is from experiments [83]. Based on these parameters and the formulas derived above, we get \(C_{\rm m}=5.720\times 10^{4}\,\mathrm{Jm^{-3}K^{-1}}\) and \(G_{\rm em}=6.796\times 10^{17}\,\mathrm{Wm^{-3}K^{-1}}\). Notably, the term first order in magnon variables leads to a contribution to \(G_{\rm em}\) that is one order of magnitude smaller than the second-order term. We further use the room-temperature values \(C_{\rm e}=1.013\times 10^{5}\,\mathrm{Jm^{-3}K^{-1}}\), \(C_{\rm p}=3.177\times 10^{6}\,\mathrm{Jm^{-3}K^{-1}}\) and \(G_{\rm ep}=1.051\times 10^{18}\,\mathrm{Wm^{-3}K^{-1}}\) that were obtained in Refs. [31; 33] from first-principles calculations.
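For illustration, the magnon dispersion formula above can be evaluated on a bcc lattice as in the following sketch; the exchange constants and spin value used here are rough placeholder numbers rather than the first-principles values of Ref. [82].

```python
import numpy as np

a = 2.866          # bcc Fe lattice constant in Angstrom
S = 1.1            # spin value (illustrative)
d_aniso = 6.97e-6  # anisotropy energy per atom in eV (value quoted in the text)

# Placeholder exchange constants (eV) for the first two neighbour shells;
# the actual J_ij are taken from first-principles calculations [82].
shells = [
    (1.4e-2, 0.5 * a * np.array([[sx, sy, sz] for sx in (-1, 1)
                                 for sy in (-1, 1) for sz in (-1, 1)], float)),
    (0.8e-2, a * np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                           [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)),
]

def magnon_energy(q):
    """hbar*omega_q = S * (2*d + sum_j J_ij * [1 - exp(-i q.r_ij)]), in eV."""
    e = 2.0 * d_aniso
    for J, r in shells:
        e += J * np.sum(1.0 - np.cos(r @ q))   # inversion symmetry -> real part only
    return S * e

# Evaluate along Gamma -> H, i.e. q from (0,0,0) to (2*pi/a)*(0,0,1)
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    q = np.array([0.0, 0.0, frac * 2.0 * np.pi / a])
    print(f"q = {frac:.2f} * H : hbar*omega = {1e3 * magnon_energy(q):.1f} meV")
```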
Both \(\omega_{\mathbf{q}}\) and the inverse of \(\gamma_{\mathbf{q}}\), i.e., the lifetime of magnons due to the contribution to electron-magnon scattering linear in the magnon variables, are shown in Fig. 1 along high-symmetry lines of the Brillouin zone (BZ). It can readily be observed that the lifetimes of high-frequency magnons are drastically reduced as compared to the low-energy ones. The lifetimes relate to the mode-specific Gilbert damping \(\alpha_{\mathbf{q}}\) (shown in Appendix C), which ranges between \(1.5\times 10^{-3}\) and \(1.08\times 10^{-2}\). These values are close to the experimentally obtained ones (via FMR measurements) for Fe, which range from \(1.9\times 10^{-3}\) to \(7.2\times 10^{-3}\)[84; 85; 86; 87; 88; 89], however with a somewhat larger variation with \(\mathbf{q}\) than what was reported in Ref. [79].
Based on these parameters, we calculate the coupled out-of-equilibrium magnon, electron, and phonon dynamics induced by a Gaussian laser pulse \(P(t)=A/\sqrt{2\pi\zeta^{2}}\exp[-(t/\zeta)^{2}/2]\) with \(A=9.619\times 10^{7}\,\mathrm{Jm^{-3}}\) and \(\zeta=60\,\mathrm{fs}\) for \(N=20^{3}\) magnon modes. Note that this value of \(A\) translates to an absorbed fluence of \(0.19\,\mathrm{mJ/cm^{2}}\) for a ferromagnetic layer with thickness of \(20\,\mathrm{nm}\), which is a typical thickness in ultrafast demagnetization experiments [1].
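For orientation, the reduced 3TM of Eq. (9) can be integrated directly with the _ab initio_ parameters quoted in this work; a minimal sketch (using an off-the-shelf ODE solver with arbitrary solver settings) is given below. It reproduces the qualitative behaviour of the 3TM curves in Fig. 2(a); the full (N+2)TM additionally evolves every individual magnon occupation \(n_{\mathbf{q}}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

C_m, C_e, C_p = 5.720e4, 1.013e5, 3.177e6      # heat capacities (J m^-3 K^-1)
G_em, G_ep = 6.796e17, 1.051e18                # coupling constants (W m^-3 K^-1)
A, zeta = 9.619e7, 60e-15                      # pulse energy density (J m^-3) and width (s)

def P(t):                                      # Gaussian laser pulse P(t)
    return A / np.sqrt(2 * np.pi * zeta**2) * np.exp(-(t / zeta)**2 / 2)

def rhs(t, T):                                 # reduced 3TM, Eq. (9)
    Tm, Te, Tp = T
    return [G_em * (Te - Tm) / C_m,
            (G_em * (Tm - Te) + G_ep * (Tp - Te) + P(t)) / C_e,
            G_ep * (Te - Tp) / C_p]

sol = solve_ivp(rhs, (-0.3e-12, 3e-12), [300.0, 300.0, 300.0], max_step=2e-15)
print("T_m, T_e, T_p at 3 ps:", sol.y[:, -1])
print("expected final T:", 300.0 + A / (C_m + C_e + C_p), "K")   # ~328.8 K
```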
Figure 2(a) depicts the time evolution of electron, phonon and average magnon temperature - together with the temperature range of all magnon temperatures - calculated using the (N+2)TM. The electron temperature reaches a maximum of \(685\,\mathrm{K}\) at around \(52\,\mathrm{fs}\) after the maximum of the laser pulse (located at \(t=0\)) and converges to the phonon temperature in less than \(1.5\,\mathrm{ps}\). The maximum of the average magnon temperature of \(520\,\mathrm{K}\) is reached only slightly after the electronic one at around \(136\,\mathrm{fs}\), followed by a convergence to the electronic and phononic temperature to a final temperature of around \(329\,\mathrm{K}\) at \(3\,\mathrm{ps}\), in agreement with what can be estimated from the energy supplied by the laser pulse and the individual heat capacities via \(\Delta T=A/(C_{\rm m}+C_{\rm e}+C_{\rm p})=28.8\,\)K. Notably, the magnon temperatures still cover a range of around 50 K at this point in time. Our results clearly demonstrate the shortcomings of the conventional 3TM (shown as dotted lines): while the initial increase of temperatures is comparable to the (N+2)TM, magnon thermalization happens much faster in the 3TM.

Figure 1: Magnon dispersion of bcc iron with lifetimes \(\gamma_{\mathbf{q}}^{-1}\) given as color code, shown along high-symmetry lines of the BZ. The lifetimes are due to the first-order contribution to the electron-magnon scattering.

Figure 2: Laser-induced ultrafast non-equilibrium dynamics of iron calculated from an _ab initio_ parameterized model. (a) Temporal evolution of electron temperature \(T_{\rm e}\), phonon temperature \(T_{\rm p}\) and average magnon temperature \(\langle T_{\mathbf{q}}\rangle=1/N\sum_{\mathbf{q}}T_{\mathbf{q}}\) obtained by the (N+2)TM (solid lines). The blue shaded region indicates the temperature range within which all magnon temperatures are contained. Dashed lines show the results of the 3TM solved with _ab initio_ calculated input parameters. (b) Relative change of total magnetization of the localized magnetic moments \(\Delta M/M_{0}=\sum_{\mathbf{q}}(n_{\mathbf{q}}^{\rm init}-n_{\mathbf{q}})/(NS-\sum_{\mathbf{q}}n_{\mathbf{q}}^{\rm init})\), with \(n_{\mathbf{q}}^{\rm init}=n^{\rm BE}(\omega_{\mathbf{q}},300\,\mathrm{K})\) being the occupation number before the laser pulse. (c) Demagnetization \(\max(|\Delta M/M_{0}|)\) versus laser fluence computed for a ferromagnetic layer with a thickness of \(20\,\mathrm{nm}\). The dotted line serves as a guide to the eye.
In Fig. 2(b), we show the laser-induced change in magnetization (associated with the localized magnetic moments) due to the creation of additional magnons. We observe ultrafast transversal demagnetization of around one percent in less than 300 fs, demonstrating that the timescales obtained by our _ab initio_ based calculations are in reasonable agreement with experimental measurements (see, e.g., [90]). Notably, the minimum of the magnetization and the maximum in the average magnon temperature computed by the (N+2)TM are at different points in time. Also, the drop in the (localized) magnetization is much less pronounced than expected from the increase in average temperature: in thermal equilibrium, a temperature increase from 300 K to above 500 K approximately leads to a demagnetization of 20% for iron [91]. These observations clearly demonstrate the shortcomings of the 3TM - where a _thermal_ magnon distribution at all times is assumed - and underline the importance of treating the full, non-thermal magnon distribution in the ultrafast regime.
Figure 2(c) depicts the maximum of the demagnetization versus laser fluence for an iron layer of 20 nm. We find a nonlinear dependence, which is a result of the non-linearity of our (N+2)TM, and a substantial demagnetization of around ten percent at 0.95 mJ/cm\({}^{2}\). While one could in principle go to higher fluences, we refrain from doing so, because at the current stage higher order magnon terms (i.e., magnon-magnon scattering terms) are not included in our model but could play a role for higher magnon excitation densities. The obtained amount of demagnetization and the magnetization decay time (below 200 fs) for this fluence are comparable with experiments, which supports that ultrafast magnon excitation [12; 13; 14] provides a viable mechanism for ultrafast laser-induced demagnetization. It is also consistent with time-resolved extreme ultraviolet magneto-optical and photoemission investigations that detected magnon excitations during ultrafast demagnetization of elemental ferromagnetic samples [92; 15].
The non-thermal magnon dynamics are analyzed in more detail in Fig. 3. There, we show the magnon temperatures versus frequency (a) and along high-symmetry lines of the BZ (b) at different points in time. The laser pulse primarily heats up high-energy magnons, while the temperature of low-energy magnons barely changes and even decreases slightly in the vicinity of the \(\Gamma\) point (the temperatures drop by up to around 2.5 K). This surprising observation is caused by a redistribution of magnons from this region to other parts of the BZ due to the term second order in the magnon operators in Eq. (1); the effective second-order scattering rate \(\gamma_{\mathbf{q}}^{(2)}:=\sum_{\mathbf{q}^{\prime}}\Gamma_{\mathbf{qq}^{\prime}}\) is negative for low magnon frequencies (more details can be found in Appendix C). It is also observed that although the magnon temperatures reached after the laser pulse are generally higher at higher frequencies, there is not necessarily a monotonic increase of temperature with frequency at all times: e.g., at 100 fs after the laser pulse [Fig. 3(b)], the temperatures at the points H, N, and P are higher than in between these points. Notably, the position of the maximum magnon temperature in the BZ also varies with time.
## IV Conclusions
We have developed an _ab initio_ parameterized quantum kinetic approach to study the laser-induced generation of magnons due to electron-magnon scattering, which we applied to iron. Our results clearly demonstrate that on ultrafast timescales the magnon distribution is non-thermal and that hence the simple relation between magnetization and temperature via the \(M(T)\) curves computed at equilibrium does not hold: since predominantly high-energy magnons are excited, the energy transferred from the laser-excited electrons creates relatively few magnons and hence the demagnetization (proportional to the total number of magnons) is much less pronounced than expected from the increase of the average magnon temperature. Notably, the number of magnons actually decreases near the center of the Brillouin zone, which is due to the scattering from low- to high-energy magnons by a previously neglected scattering term that can transfer energy but not angular momentum - a crucial quantity in ultrafast demagnetization.
Our _ab initio_-based calculations of the induced demagnetization in iron furthermore provide strong evidence that non-thermal magnons are excited fast and lead to a sizable demagnetization within 200 fs, which in turn establishes the relevance of magnon excitations for the process of ultrafast optically induced demagnetization.
###### Acknowledgements.
The authors thank K. Carva for valuable discussions. This work has been supported by the Swedish Research Council (VR), the German Research Foundation (Deutsche Forschungsgemeinschaft) through CRC/TRR 227 "Ultrafast Spin Dynamics" (project MF), and the K. and A. Wallenberg Foundation (Grant No. 2022.0079). Part of the calculations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at NSC Linkoping partially funded by the Swedish Research Council through grant agreement No. 2022-06725.
## Conflict of interest
The authors declare no conflict of interest.
## Data availability statement
Data available on request from the authors.
## Keywords
Ultrafast magnetism, electron-magnon coupling, non-thermal magnons
Figure 3: Magnon temperatures of iron during ultrafast laser excitation at different points in time (w.r.t. the maximum of the laser pulse) calculated from the _ab initio_ parameterized (N+2)TM. (a) Magnon temperatures (dots) versus frequency. The solid line indicates the electron temperature. (b) Magnon dispersion and their temperatures, depicted by the color code, shown along high-symmetry lines of the BZ.
## Appendix A Derivation of electron-magnon scattering rates
In this Appendix we derive the (N+2)TM for the description of non-thermal magnons from a microscopic Hamiltonian for electron-magnon scattering. We start with a local \(sp-d\) model Hamiltonian,
\[\hat{\mathcal{H}}_{\text{em}}=-J^{\text{sp-d}}\sum_{i}\delta(\mathbf{r}-\mathbf{r}_{i}) \hat{\mathbf{s}}^{\text{itin}}\cdot\mathbf{S}_{i}^{\text{loc}}, \tag{10}\]
with \(J^{\text{sp-d}}\) being the \(sp-d\) volume interaction energy, \(\hat{\mathbf{s}}^{\text{itin}}=\hat{\mathbf{\sigma}}\) being the spin operators of itinerant (\(s\) and \(p\)) electrons and \(\mathbf{S}_{i}^{\text{loc}}\) being the localized (\(d\)) spins located at \(\mathbf{r}_{i}\). For now, we treat the latter as classical vectors. The expectation value for a given spin wave function \(\mathbf{\Psi}(\mathbf{r})\) is given by
\[\langle\hat{\mathcal{H}}_{\text{em}}\rangle =-J^{\text{sp-d}}\sum_{i}\int\mathbf{\Psi}^{\dagger}(\mathbf{r})\delta( \mathbf{r}-\mathbf{r}_{i})\hat{\mathbf{s}}^{\text{itin}}\cdot\mathbf{S}_{i}^{\text{loc}}\mathbf{ \Psi}(\mathbf{r})d\mathbf{r} \tag{11}\] \[=-J^{\text{sp-d}}\sum_{i}\int\delta(\mathbf{r}-\mathbf{r}_{i})\left(\Psi _{\uparrow}^{*}(\mathbf{r}),\ \Psi_{\downarrow}^{*}(\mathbf{r})\right)\left\{\hat{\sigma}_{x}S_{i}^{x}+\hat{ \sigma}_{y}S_{i}^{y}+\hat{\sigma}_{z}S_{i}^{z}\right\}\begin{pmatrix}\Psi_{ \uparrow}(\mathbf{r})\\ \Psi_{\downarrow}(\mathbf{r})\end{pmatrix}d\mathbf{r}\] (12) \[=-J^{\text{sp-d}}\sum_{i}\int\delta(\mathbf{r}-\mathbf{r}_{i})\Bigg{\{} \Psi_{\uparrow}^{*}(\mathbf{r})\Psi_{\downarrow}(\mathbf{r})S_{i}^{-}+\Psi_{\downarrow }^{*}(\mathbf{r})\Psi_{\uparrow}(\mathbf{r})S_{i}^{+}+(\Psi_{\uparrow}^{*}(\mathbf{r}) \Psi_{\uparrow}(\mathbf{r})-\Psi_{\downarrow}^{*}(\mathbf{r})\Psi_{\downarrow}(\mathbf{r} ))S_{i}^{z}\Bigg{\}}d\mathbf{r}. \tag{13}\]
Here, we have introduced \(S_{i}^{\pm}=S_{i}^{x}\pm iS_{i}^{y}\). Next, we perform a plane wave expansion of the wave functions (for a single band of itinerant electrons),
\[\Psi_{\sigma}(\mathbf{r})=\frac{1}{\sqrt{V}}\sum_{\mathbf{k}}e^{i\mathbf{k}\cdot\mathbf{r}}c_ {\mathbf{k}\sigma}, \tag{14}\]
and a Holstein-Primakoff transformation of the localized spins,
\[S_{i}^{+}=\sqrt{2S-b_{i}^{*}b_{i}}b_{i},\qquad S_{i}^{-}=b_{i}^{*}\sqrt{2S-b_ {i}^{*}b_{i}},\qquad S_{i}^{z}=S-b_{i}^{*}b_{i}, \tag{15}\]
together with introducing the Fourier transform of the magnon amplitudes
\[b_{i}^{*}=\frac{1}{\sqrt{N}}\sum_{\mathbf{q}}e^{-i\mathbf{q}\cdot\mathbf{r}_{i}}b_{\mathbf{q} }^{*},\qquad b_{i}=\frac{1}{\sqrt{N}}\sum_{\mathbf{q}}e^{i\mathbf{q}\cdot\mathbf{r}_{i}}b _{\mathbf{q}}. \tag{16}\]
Insertion of (14)-(16) into (13) and keeping terms up to second order in magnon variables, we get
\[\begin{split}\langle\hat{\mathcal{H}}_{\text{em}}\rangle& =-\frac{J^{\text{sp-d}}}{V}\sum_{i}\sum_{\mathbf{k}\mathbf{k}^{\prime}} \Bigg{\{}\sqrt{\frac{2S}{N}}\sum_{\mathbf{q}}e^{-i(\mathbf{k}-\mathbf{k}^{\prime}+\mathbf{q}) \cdot\mathbf{r}_{i}}c_{\mathbf{k}\uparrow}^{*}c_{\mathbf{k}^{\prime}\downarrow}b_{\mathbf{q}} ^{*}+\sqrt{\frac{2S}{N}}\sum_{\mathbf{q}}e^{-i(\mathbf{k}-\mathbf{k}^{\prime}-\mathbf{q})\cdot \mathbf{r}_{i}}c_{\mathbf{k}\downarrow}^{*}c_{\mathbf{k}^{\prime}\uparrow}b_{\mathbf{q}}\\ &\qquad\qquad\qquad+Se^{-i(\mathbf{k}-\mathbf{k}^{\prime})\cdot\mathbf{r}_{i} }(c_{\mathbf{k}\uparrow}^{*}c_{\mathbf{k}^{\prime}\uparrow}-c_{\mathbf{k}\downarrow}^{*}c_ {\mathbf{k}^{\prime}\downarrow})-\frac{1}{N}\sum_{\mathbf{qq}^{\prime}}e^{-i(\mathbf{k}- \mathbf{k}^{\prime}+\mathbf{q}-\mathbf{q}^{\prime})\cdot\mathbf{r}_{i}}(c_{\mathbf{k}\uparrow}^{ *}c_{\mathbf{k}^{\prime}\uparrow}-c_{\mathbf{k}\downarrow}^{*}c_{\mathbf{k}^{\prime} \downarrow})b_{\mathbf{q}}^{*}b_{\mathbf{q}^{\prime}}\Bigg{\}}\\ &=-\frac{J^{\text{sp-d}}SN}{V}\sum_{\mathbf{k}}(c_{\mathbf{k}\uparrow}^{*} c_{\mathbf{k}\uparrow}-c_{\mathbf{k}\downarrow}^{*}c_{\mathbf{k}\downarrow})-\frac{J^{ \text{sp-d}}SN}{V}\sum_{\mathbf{k}\mathbf{q}}\sqrt{\frac{2}{SN}}\Big{(}c_{\mathbf{k}+\mathbf{q} \uparrow}^{*}c_{\mathbf{k}\downarrow}b_{-\mathbf{q}}^{*}+c_{\mathbf{k}+\mathbf{q}\downarrow}^{*} c_{\mathbf{k}\uparrow}b_{\mathbf{q}}\Big{)}\\ &\quad+\frac{J^{\text{sp-d}}}{V}\sum_{\mathbf{k}\mathbf{qq}^{\prime}} \Big{(}c_{\mathbf{k}-\mathbf{q}+\mathbf{q}^{\prime}\uparrow}^{*}c_{\mathbf{k}\uparrow}-c_{\mathbf{k}- \mathbf{q}+\mathbf{q}^{\prime}\downarrow}^{*}c_{\mathbf{k}\downarrow}\Big{)}b_{\mathbf{q}}^{* }b_{\mathbf{q}^{\prime}}.\end{split} \tag{17}\]
For multiple itinerant bands and in second quantization we obtain
\[\begin{split}\hat{\mathcal{H}}_{\text{em}}&=-\Delta\sum_{\mathbf{k}\nu}(\hat{c}_{\mathbf{k}\nu\uparrow}^{\dagger}\hat{c}_{\mathbf{k}\nu\uparrow}-\hat{c}_{\mathbf{k}\nu\downarrow}^{\dagger}\hat{c}_{\mathbf{k}\nu\downarrow})-\Delta\sqrt{\frac{2}{SN}}\sum_{\mathbf{k}\nu\nu^{\prime},\mathbf{q}}\Big{(}\hat{c}_{\mathbf{k}+\mathbf{q}\nu\uparrow}^{\dagger}\hat{c}_{\mathbf{k}\nu^{\prime}\downarrow}\hat{b}_{-\mathbf{q}}^{\dagger}+\hat{c}_{\mathbf{k}+\mathbf{q}\nu\downarrow}^{\dagger}\hat{c}_{\mathbf{k}\nu^{\prime}\uparrow}\hat{b}_{\mathbf{q}}\Big{)}\\ &\quad+\frac{\Delta}{SN}\sum_{\mathbf{k}\nu\nu^{\prime},\mathbf{qq}^{\prime}}\Big{(}\hat{c}_{\mathbf{k}-\mathbf{q}+\mathbf{q}^{\prime}\nu\uparrow}^{\dagger}\hat{c}_{\mathbf{k}\nu^{\prime}\uparrow}-\hat{c}_{\mathbf{k}-\mathbf{q}+\mathbf{q}^{\prime}\nu\downarrow}^{\dagger}\hat{c}_{\mathbf{k}\nu^{\prime}\downarrow}\Big{)}\hat{b}_{\mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}^{\prime}}.\end{split} \tag{18}\]
where we have introduced \(\Delta=\frac{J^{\text{sp-d}}SN}{V}\). Note that due to the plane wave ansatz we have implicitly assumed that the itinerant electrons are completely delocalized and that interband scattering (from \(\nu\) to \(\nu^{\prime}\neq\nu\)) fully contributes to the electron-magnon scattering.
Next, we use Fermi's golden rule to get the change of the magnon occupation number \(n_{\mathbf{q}}=\langle\hat{b}_{\mathbf{q}}^{\dagger}\hat{b}_{\mathbf{q}}\rangle\). Fermi's golden rule gives the transition rate \(W(i\to f)\) for a small perturbation term in the Hamiltonian, \(\hat{H}^{\prime}\) (in our specific case, \(\hat{\mathcal{H}}_{\rm em}\)), via
\[W(i\to f)=\frac{2\pi}{\hbar}|\langle f|\hat{H}^{\prime}|i\rangle|^{2}\delta(E_ {f}-E_{i}), \tag{10}\]
where \(|i\rangle\) and \(|f\rangle\) denote the initial and final state, respectively.
We start with the term first order in the magnon variables,
\[\begin{split}\dot{n}_{\mathbf{q}}^{(1)}&=W(n_{\mathbf{q}} \to n_{\mathbf{q}}+1)-W(n_{\mathbf{q}}\to n_{\mathbf{q}}-1)\\ &=\frac{2\pi}{\hbar}\frac{2\Delta^{2}}{SN}\sum_{{\mathbf{k}}\nu\nu^{ \prime}}\big{\{}(1-f_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow})f_{{\mathbf{k}}\nu^{\prime} \downarrow}-(f_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow}-f_{{\mathbf{k}}\nu^{\prime} \downarrow})n_{\mathbf{q}}\big{\}}\delta(\varepsilon_{{\mathbf{k}}\nu^{\prime} \downarrow}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow}-\hbar\omega_{\mathbf{q}}), \end{split} \tag{11}\]
with \(f_{{\mathbf{k}}\nu\sigma}=\langle\hat{c}_{{\mathbf{k}}\nu\sigma}^{\dagger}\hat{c}_{{ \mathbf{k}}\nu\sigma}\rangle\) and \(\varepsilon_{{\mathbf{k}}\nu\sigma}\) and \(\hbar\omega_{\mathbf{q}}\) being the eigenenergies of electrons and magnons, respectively.
Hereinafter, we make the assumption that due to the fast equilibration processes for electrons, they always follow the Fermi-Dirac distribution, \(f^{\rm FD}(\varepsilon_{{\mathbf{k}}\nu\sigma},T_{\rm e})=[e^{(\varepsilon_{{\mathbf{ k}}\nu\sigma}-\varepsilon_{\rm F})/k_{\rm B}T_{\rm e}}+1]^{-1}\), with a single electron temperature \(T_{\rm e}\). Before we continue we need the following relation,
\[\begin{split} f^{\rm FD}(\varepsilon_{{\mathbf{k}}\nu^{\prime} \downarrow},T_{\rm e})(1-f^{\rm FD}(\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow },T_{\rm e}))\delta(\varepsilon_{{\mathbf{k}}\nu^{\prime}\downarrow}-\varepsilon_ {{\mathbf{k}}-{\mathbf{q}}\nu\uparrow}-\hbar\omega_{\mathbf{q}})=\\ (f^{\rm FD}(\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow},T_{\rm e} )-f^{\rm FD}(\varepsilon_{{\mathbf{k}}\nu^{\prime}\downarrow},T_{\rm e}))n^{\rm BE }(\omega_{\mathbf{q}},T_{\rm e})\delta(\varepsilon_{{\mathbf{k}}\nu^{\prime} \downarrow}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow}-\hbar\omega_{\mathbf{q}}) \end{split} \tag{12}\]
with \(n^{\rm BE}(\omega_{\mathbf{q}},T_{\rm e})=[e^{\frac{\hbar\omega_{\mathbf{q}}}{k_{\rm B }T_{\rm e}}}-1]^{-1}\) being the Bose-Einstein distribution evaluated at the electron temperature. Now we can simplify Eq. (11), yielding
\[\dot{n}_{\mathbf{q}}^{(1)} \approx \frac{2\pi}{\hbar}\frac{2\Delta^{2}}{SN}\sum_{{\mathbf{k}}\nu\nu^{ \prime}}\big{[}n^{\rm BE}(\omega_{\mathbf{q}},T_{\rm e})-n_{\mathbf{q}}\big{]}(f^{\rm FD }(\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow},T_{\rm e})-f^{\rm FD}(\varepsilon _{{\mathbf{k}}\nu^{\prime}\downarrow},T_{\rm e}))\delta(\varepsilon_{{\mathbf{k}}\nu^ {\prime}\downarrow}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow}-\hbar\omega_{ \mathbf{q}}) \tag{13}\] \[= \big{[}n^{\rm BE}(\omega_{\mathbf{q}},T_{\rm e})-n_{\mathbf{q}}\big{]} \gamma_{\mathbf{q}}.\]
Here, \(\gamma_{\mathbf{q}}\) is the linewidth - i.e., the inverse lifetime - of the magnon due to the first-order contribution to electron-magnon scattering. Following the ideas laid out by Allen [78] and Maldonado _et al._[31], it can be computed as
\[\gamma_{\mathbf{q}} = \frac{2\pi}{\hbar}\frac{2\Delta^{2}}{SN}\sum_{{\mathbf{k}}\nu\nu^{ \prime}}[f^{\rm FD}(\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow},T_{\rm e})-f^ {\rm FD}(\varepsilon_{{\mathbf{k}}\nu^{\prime}\downarrow},T_{\rm e})]\delta( \varepsilon_{{\mathbf{k}}\nu^{\prime}\downarrow}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}} \nu\uparrow}-\hbar\omega_{\mathbf{q}}) \tag{14}\] \[= \frac{2\pi}{\hbar}\frac{2\Delta^{2}}{SN}\sum_{{\mathbf{k}}\nu\nu^{ \prime}}\int d\varepsilon\ \delta(\varepsilon-\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow})\int d \varepsilon^{\prime}\ \delta(\varepsilon^{\prime}-\varepsilon_{{\mathbf{k}}\nu^{\prime} \downarrow})[f^{\rm FD}(\varepsilon,T_{\rm e})-f^{\rm FD}(\varepsilon^{ \prime},T_{\rm e})]\delta(\varepsilon^{\prime}-\varepsilon-\hbar\omega_{\mathbf{q}})\] (15) \[\approx \frac{2\pi}{\hbar}\frac{2\Delta^{2}}{SN}\sum_{{\mathbf{k}}\nu\nu^{ \prime}}\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow}) \delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}\nu^{\prime}\downarrow})\int d \varepsilon\ \int d\varepsilon^{\prime}\ [f^{\rm FD}(\varepsilon,T_{\rm e})-f^{\rm FD}(\varepsilon^{ \prime},T_{\rm e})]\delta(\varepsilon^{\prime}-\varepsilon-\hbar\omega_{\mathbf{q}}) \frac{g_{\uparrow}(\varepsilon)g_{\downarrow}(\varepsilon^{\prime})}{g_{\uparrow}( \varepsilon_{\rm F})g_{\downarrow}(\varepsilon_{\rm F})}\] (16) \[\approx \frac{2\pi}{\hbar}\frac{2\Delta^{2}}{SN}\hbar\omega_{\mathbf{q}}\sum_{{ \mathbf{k}}\nu\nu^{\prime}}\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}} \nu\uparrow})\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}\nu^{\prime} \downarrow})\int d\varepsilon\ \ (-1)\frac{\partial f^{\rm FD}(\varepsilon,T_{\rm e})}{\partial \varepsilon}\frac{g_{\uparrow}(\varepsilon)g_{\downarrow}(\varepsilon+\hbar\omega_{ \mathbf{q}})}{g_{\uparrow}(\varepsilon_{\rm F})g_{\downarrow}(\varepsilon_{\rm F})}\] (17) \[\approx \frac{2\pi}{\hbar}\frac{2\Delta^{2}}{SN}\hbar\omega_{\mathbf{q}}\sum_{{ \mathbf{k}}\nu\nu^{\prime}}\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}} \nu\uparrow})\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}\nu^{\prime} \downarrow})\int d\varepsilon\ (-1)\frac{\partial f^{\rm FD}(\varepsilon,T_{\rm e})}{\partial \varepsilon}\frac{g_{\uparrow}(\varepsilon)g_{\downarrow}(\varepsilon)}{g_{\uparrow }(\varepsilon_{\rm F})g_{\downarrow}(\varepsilon_{\rm F})}\] (18) \[= \frac{4\pi\Delta^{2}}{NS}\omega_{\mathbf{q}}\sum_{{\mathbf{k}}\nu\nu^{ \prime}}\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}}\nu\uparrow}) \delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}\nu^{\prime}\downarrow})I_{ \uparrow\downarrow}(T_{\rm e}) \tag{19}\]
with \(\varepsilon_{\rm F}\) being the Fermi energy, \(g_{\sigma}(\varepsilon)=\sum_{{\mathbf{k}}\nu}\delta(\varepsilon-\varepsilon_{{\mathbf{k}}\nu\sigma})\) the spin-dependent density of states, and the thermal correction factor given by
\[I_{\sigma\sigma^{\prime}}(T_{\rm e})=\int d\varepsilon\ (-1)\frac{\partial f^{\rm FD}(\varepsilon,T_{\rm e})}{\partial\varepsilon}\frac{g_{\sigma}(\varepsilon)g_{\sigma^{\prime}}(\varepsilon)}{g_{\sigma}(\varepsilon_{\rm F})g_{\sigma^{\prime}}(\varepsilon_{\rm F})}. \tag{20}\]
It is obvious that \(\lim_{T_{\rm e}\to 0}I_{\sigma\sigma^{\prime}}(T_{\rm e})=1\). Note that we have used that the energy scale of magnons is much smaller than the one of electrons, i.e., that \(\hbar\omega_{\mathbf{q}}\ll\varepsilon,\varepsilon^{\prime}\).
The contribution of the term second order in magnon variables to the occupation number can be calculated analogously and reads
\[\begin{split}\dot{n}_{\mathbf{q}}^{(2)}&=\frac{2\pi}{\hbar}\Big{(}\frac{\Delta}{SN}\Big{)}^{2}\sum_{{\mathbf{k}}\nu\nu^{\prime}\sigma,{\mathbf{q}}^{\prime}}\Big{\{}(n_{\mathbf{q}}+1)n_{{\mathbf{q}}^{\prime}}\big{(}1-f^{\rm FD}(\varepsilon_{{\mathbf{k}}-{\mathbf{q}}+{\mathbf{q}}^{\prime}\nu\sigma},T_{\rm e})\big{)}f^{\rm FD}(\varepsilon_{{\mathbf{k}}\nu^{\prime}\sigma},T_{\rm e})\\ &\qquad\qquad\times\delta(\hbar\omega_{\mathbf{q}}-\hbar\omega_{{\mathbf{q}}^{\prime}}+\varepsilon_{{\mathbf{k}}-{\mathbf{q}}+{\mathbf{q}}^{\prime}\nu\sigma}-\varepsilon_{{\mathbf{k}}\nu^{\prime}\sigma})-\Big{(}{\mathbf{q}}\leftrightarrow{\mathbf{q}}^{\prime}\Big{)}\Big{\}}\end{split} \tag{144}\] \[\approx\frac{2\pi}{\hbar}\Big{(}\frac{\Delta}{SN}\Big{)}^{2}\sum_{{\mathbf{k}}\nu\nu^{\prime}\sigma,{\mathbf{q}}^{\prime}}\Big{\{}(n_{\mathbf{q}}+1)n_{{\mathbf{q}}^{\prime}}n^{\rm BE}(\omega_{\mathbf{q}}-\omega_{{\mathbf{q}}^{\prime}},T_{\rm e})(\hbar\omega_{\mathbf{q}}-\hbar\omega_{{\mathbf{q}}^{\prime}})\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}}+{\mathbf{q}}^{\prime}\nu\sigma})\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}\nu^{\prime}\sigma})I_{\sigma\sigma}(T_{\rm e})-\Big{(}{\mathbf{q}}\leftrightarrow{\mathbf{q}}^{\prime}\Big{)}\Big{\}} \tag{145}\] \[=\frac{2\pi}{\hbar}\Big{(}\frac{\Delta}{SN}\Big{)}^{2}\sum_{{\mathbf{q}}^{\prime}}\Big{\{}(n_{\mathbf{q}}+1)n_{{\mathbf{q}}^{\prime}}n^{\rm BE}(\omega_{\mathbf{q}}-\omega_{{\mathbf{q}}^{\prime}},T_{\rm e})+\big{(}{\mathbf{q}}\leftrightarrow{\mathbf{q}}^{\prime}\big{)}\Big{\}}\sum_{{\mathbf{k}}\nu\nu^{\prime}\sigma}(\hbar\omega_{\mathbf{q}}-\hbar\omega_{{\mathbf{q}}^{\prime}})\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}}+{\mathbf{q}}^{\prime}\nu\sigma})\delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}\nu^{\prime}\sigma})I_{\sigma\sigma}(T_{\rm e}) \tag{146}\] \[=\sum_{{\mathbf{q}}^{\prime}}\Big{\{}(n_{\mathbf{q}}+1)n_{{\mathbf{q}}^{\prime}}n^{\rm BE}(\omega_{\mathbf{q}}-\omega_{{\mathbf{q}}^{\prime}},T_{\rm e})+\big{(}{\mathbf{q}}\leftrightarrow{\mathbf{q}}^{\prime}\big{)}\Big{\}}\Gamma_{{\mathbf{q}}{\mathbf{q}}^{\prime}}(T_{\rm e}) \tag{148}\]
with
\[\Gamma_{{\mathbf{q}}{\mathbf{q}}^{\prime}}(T_{\rm e})=\frac{2\pi}{\hbar}\Big{(}\frac{ \Delta}{SN}\Big{)}^{2}(\hbar\omega_{\mathbf{q}}-\hbar\omega_{{\mathbf{q}}^{\prime}}) \sum_{\sigma}I_{\sigma\sigma}(T_{\rm e})\sum_{{\mathbf{k}}\nu\nu^{\prime}}\delta( \varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}-{\mathbf{q}}+{\mathbf{q}}^{\prime}\nu\sigma}) \delta(\varepsilon_{\rm F}-\varepsilon_{{\mathbf{k}}\nu^{\prime}\sigma}). \tag{149}\]
## Appendix B Derivation of the three temperature model
In what follows, it is demonstrated that the three temperature model (3TM) can be obtained from the (N+2)-temperature model derived in the main text,
\[\dot{n}_{\mathbf{q}} =\Big{[}n^{\rm BE}(\omega_{\mathbf{q}},T_{\rm e})-n_{\mathbf{q}}\Big{]} \gamma_{\mathbf{q}}+\sum_{{\mathbf{q}}^{\prime}}\Big{[}(n_{\mathbf{q}}+1)n_{{\mathbf{q}}^{ \prime}}n^{\rm BE}(\omega_{\mathbf{q}}-\omega_{{\mathbf{q}}^{\prime}},T_{\rm e})+({ \mathbf{q}}\leftrightarrow{\mathbf{q}}^{\prime})\Big{]}\Gamma_{{\mathbf{q}}{\mathbf{q}}^{\prime}}, \tag{150}\] \[\dot{T}_{\rm e} =\frac{1}{C_{\rm e}}\Big{[}-\sum_{\mathbf{q}}\hbar\omega_{\mathbf{q}}\dot{n }_{\mathbf{q}}+G_{\rm ep}(T_{\rm p}-T_{\rm e})+P(t)\Big{]},\] (151) \[\dot{T}_{\rm p} =-\frac{G_{\rm ep}}{C_{\rm p}}(T_{\rm p}-T_{\rm e}), \tag{152}\]
by assuming instantaneous relaxation of the magnon occupation numbers to the Bose-Einstein distribution with a single magnon temperature \(T_{\rm m}\), i.e., \(n_{\mathbf{q}}=n^{\rm BE}(\omega_{\mathbf{q}},T_{\rm m})\). For the sake of readability we rewrite \(n^{\rm BE}(\omega_{\mathbf{q}},T_{\rm m})=n_{\mathbf{q}}(T_{\rm m})\).
We start with the first order scattering term:
\[\dot{n}_{\mathbf{q}}^{(1)}=[n_{\mathbf{q}}(T_{\rm e})-n_{\mathbf{q}}(T_{\rm m})]\gamma_{\mathbf{q }}\approx(T_{\rm e}-T_{\rm m})\frac{\partial n_{\mathbf{q}}(T)}{\partial T}\bigg{|} _{T=T_{\rm m}}\gamma_{\mathbf{q}}(T_{\rm e})=(T_{\rm e}-T_{\rm m})\frac{C_{\mathbf{q}} \gamma_{\mathbf{q}}}{\hbar\omega_{\mathbf{q}}}. \tag{153}\]
Here we have introduced the mode-dependent magnon heat capacity \(C_{\mathbf{q}}=\hbar\omega_{\mathbf{q}}\frac{\partial n_{\mathbf{q}}(T_{\rm m})}{\partial T}\).
In order to calculate the scattering term second order in the magnon variables, we first introduce the following relation
\[\big{(}n_{\mathbf{q}^{\prime}}(T_{\rm m})+1\big{)}n_{\mathbf{q}}(T_{\rm m})=\big{[}n_{\bm {q}^{\prime}}(T_{\rm m})-n_{\mathbf{q}}(T_{\rm m})\big{]}n_{\mathbf{q}-\mathbf{q}^{\prime}}(T _{\rm m}). \tag{100}\]
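This relation can be verified numerically in a few lines; the frequencies and temperature below are arbitrary test values, and \(n_{\mathbf{q}-\mathbf{q}^{\prime}}\) is understood as the Bose-Einstein function evaluated at \(\omega_{\mathbf{q}}-\omega_{\mathbf{q}^{\prime}}\):

```python
import numpy as np

def n_BE(hw, kT):                     # Bose-Einstein occupation at energy hw
    return 1.0 / np.expm1(hw / kT)

kT = 0.025                            # arbitrary thermal energy (eV)
for hw_q, hw_qp in [(0.10, 0.03), (0.30, 0.25), (0.02, 0.15)]:
    lhs = (n_BE(hw_qp, kT) + 1.0) * n_BE(hw_q, kT)
    rhs = (n_BE(hw_qp, kT) - n_BE(hw_q, kT)) * n_BE(hw_q - hw_qp, kT)
    print(np.isclose(lhs, rhs))       # True for all test pairs
```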
Now we calculate
\[\hat{n}_{\mathbf{q}}^{(2)}=\sum_{\mathbf{q}^{\prime}}\Big{(}(n_{\mathbf{q}}(T _{\rm m})+1)n_{\mathbf{q}^{\prime}}(T_{\rm m})n_{\mathbf{q}-\mathbf{q}^{\prime}}(T_{\rm e})+ (\mathbf{q}\leftrightarrow\mathbf{q}^{\prime})\Big{)}\Gamma_{\mathbf{q}\mathbf{q}^{\prime}} \tag{101}\] \[=\sum_{\mathbf{q}^{\prime}}\Big{(}n_{\mathbf{q}^{\prime}-\mathbf{q}}(T_{\rm m })n_{\mathbf{q}-\mathbf{q}^{\prime}}(T_{\rm e})-(\mathbf{q}\leftrightarrow\mathbf{q}^{\prime} )\Big{)}\times\big{(}n_{\mathbf{q}}(T_{\rm m})-n_{\mathbf{q}^{\prime}}(T_{\rm m}) \big{)}\Gamma_{\mathbf{q}\mathbf{q}^{\prime}}\] (102) \[=\sum_{\mathbf{q}^{\prime}}\frac{1}{2}\bigg{(}\coth\bigg{(}\frac{ \hbar(\omega_{\mathbf{q}^{\prime}}-\omega_{\mathbf{q}})}{2k_{\rm B}T_{\rm e}}\bigg{)} -\coth\bigg{(}\frac{\hbar(\omega_{\mathbf{q}^{\prime}}-\omega_{\mathbf{q}})}{2k_{\rm B }T_{\rm m}}\bigg{)}\bigg{)}\big{(}n_{\mathbf{q}}(T_{\rm m})-n_{\mathbf{q}^{\prime}}(T _{\rm m})\big{)}\Gamma_{\mathbf{q}\mathbf{q}^{\prime}}\] (103) \[\approx\sum_{\mathbf{q}^{\prime}}\frac{n_{\mathbf{q}}(T_{\rm m})-n_{\mathbf{q }^{\prime}}(T_{\rm m})}{\hbar(\omega_{\mathbf{q}^{\prime}}-\omega_{\mathbf{q}})}k_{\rm B }(T_{\rm e}-T_{\rm m})\Gamma_{\mathbf{q}\mathbf{q}^{\prime}}\] (104) \[\approx\sum_{\mathbf{q}^{\prime}}\frac{\partial n_{\mathbf{q}}(T_{\rm m}) }{\partial(\hbar\omega_{\mathbf{q}})}k_{\rm B}(T_{\rm m}-T_{\rm e})\Gamma_{\mathbf{q} \mathbf{q}^{\prime}}\] (105) \[=\sum_{\mathbf{q}^{\prime}}\frac{\partial n_{\mathbf{q}}(T)}{\partial T} \bigg{|}_{T=T_{\rm m}}\frac{k_{\rm B}T_{\rm m}}{\hbar\omega_{\mathbf{q}}}(T_{\rm e }-T_{\rm m})\Gamma_{\mathbf{q}\mathbf{q}^{\prime}}\] (106) \[=\sum_{\mathbf{q}^{\prime}}C_{\mathbf{q}}\frac{k_{\rm B}T_{\rm m}}{(\hbar \omega_{\mathbf{q}})^{2}}(T_{\rm e}-T_{\rm m})\Gamma_{\mathbf{q}\mathbf{q}^{\prime}}. \tag{107}\]
Using the expressions for \(\hat{n}_{\mathbf{q}}^{(1)}\) and \(\hat{n}_{\mathbf{q}}^{(2)}\), the change in total energy of the magnons can then be calculated as
\[\frac{\partial E_{\rm m}}{\partial t}=\frac{\partial E_{\rm m}}{ \partial T_{\rm m}}\frac{\partial T_{\rm m}}{\partial t}=\underbrace{\sum_{ \mathbf{q}}\hbar\omega_{\mathbf{q}}\frac{\partial n_{\mathbf{q}}(T)}{\partial T}|_{T=T_{ \rm m}}}_{C_{\rm m}}\frac{\partial T_{\rm m}}{\partial t}=(T_{\rm e}-T_{\rm m} )\underbrace{\sum_{\mathbf{q}}C_{\mathbf{q}}\Big{(}\gamma_{\mathbf{q}}+\sum_{\mathbf{q}^{ \prime}}\frac{k_{\rm B}T_{\rm m}}{\hbar\omega_{\mathbf{q}}}\Gamma_{\mathbf{q}\mathbf{q}^{ \prime}}\Big{)}}_{G_{\rm em}}. \tag{108}\]
With that, the (N+2)TM transforms into the 3TM (in the absence of magnon-phonon coupling), which is given by
\[C_{\rm m}\dot{T}_{\rm m} =G_{\rm em}(T_{\rm e}-T_{\rm m}),\] \[C_{\rm e}\dot{T}_{\rm e} =G_{\rm em}(T_{\rm m}-T_{\rm e})+G_{\rm ep}(T_{\rm p}-T_{\rm e})+P (t), \tag{109}\] \[C_{\rm p}\dot{T}_{\rm p} =G_{\rm ep}(T_{\rm e}-T_{\rm p}).\]
## Appendix C _Ab initio_ calculations
To obtain a full solution of the (N+2)TM, it is necessary to compute the material specific quantities \(\Delta\), \(\gamma_{\mathbf{q}}\), \(\Gamma_{\mathbf{q}\mathbf{q}^{\prime}}\) and \(I_{\sigma\sigma}(T_{\rm e})\). For this purpose, we use the full-potential linear augmented plane wave code ELK [81].
As a first step, we determine the coupling parameter \(\Delta\) of the \(sp-d\) model, which sets the general scale of the electron-magnon scattering. As shown in the main text, the first term (zeroth order in magnon variables) in the electron-magnon scattering Hamiltonian reads \(\tilde{H}_{\rm em}^{(0)}=-\Delta\sum_{\mathbf{k}\mathbf{\nu}}(\hat{c}_{\mathbf{k}\mathbf{\nu} \uparrow}^{\dagger}\hat{c}_{\mathbf{k}\mathbf{\nu}\uparrow}-\hat{c}_{\mathbf{k}\mathbf{\nu} \downarrow}^{\dagger}\hat{c}_{\mathbf{k}\mathbf{\nu}\downarrow})\), with \(\nu\in\{s,p\}\). Based on this, \(\Delta\) can be estimated from the projected density of states (DOS), since it is one half of the spin-dependent energy splitting of the \(s\)- and \(p\)-bands. In general, this splitting may vary for different electronic states. This is not accounted for in the model used here, where instead a single parameter is used to model the spin splitting. We find, however, that for bcc iron this is justified, since the shift in both \(s\)- and \(p\)-bands around the Fermi energy - the relevant region for electron-magnon scattering - between spin up and down states is approximately constant with a value of \(\Delta\approx 0.75\,\)eV, see left panel of Fig. 4.
Now we calculate the first and second order scattering rates using the formulas derived above,
\[\gamma_{\mathbf{q}} =\frac{4\pi\Delta^{2}}{SN}\omega_{\mathbf{q}}I_{\uparrow\downarrow}(T_{ \rm e})\sum_{\mathbf{k}\mathbf{\nu}\nu\nu^{\prime}}\delta(\varepsilon_{\rm F}- \varepsilon_{\mathbf{k}\mathbf{-}\mathbf{q}\mathbf{\nu}\uparrow})\delta(\varepsilon_{\rm F}- \varepsilon_{\mathbf{k}\mathbf{\nu}^{\prime}\downarrow}), \tag{110}\] \[\Gamma_{\mathbf{q}\mathbf{q}^{\prime}} =\frac{2\pi\Delta^{2}}{S^{2}N^{2}}(\omega_{\mathbf{q}}-\omega_{\mathbf{q}^{ \prime}})\sum_{\sigma}I_{\sigma\sigma}(T_{\rm e})\sum_{\mathbf{k}\mathbf{\nu}\nu^{ \prime}}\delta(\varepsilon_{\rm F}-\varepsilon_{\mathbf{k}-\mathbf{q}+\mathbf{q}^{\prime}\nu \sigma})\delta(\varepsilon_{\rm F}-\varepsilon_{\mathbf{k}\mathbf{\nu}^{\prime}\sigma}). \tag{111}\]
The calculation of both quantities requires a spin-dependent summation over the Fermi surface, analogous to what was done in Ref. [93] for the evaluation of the spin-dependent Eliashberg function for electron-phonon scattering. As in Ref. [93] we use a Gaussian broadening of the Dirac delta distributions by \(0.03\,\mathrm{eV}\). Also, since we only include the contribution of \(s\)- and \(p\)-states (indicated by \(\nu,\nu^{\prime}\)) to the scattering, we have to project the Kohn-Sham states (indicated by \(n,n^{\prime}\)) onto the spherical harmonics \(Y_{l}^{m}\) via
\[\delta(\varepsilon_{\mathrm{F}}-\varepsilon_{\mathbf{k}\nu\sigma})\delta(\varepsilon_{\mathrm{F}}-\varepsilon_{\mathbf{k}^{\prime}\nu^{\prime}\sigma^{\prime}})=\sum_{nn^{\prime}}P^{n\nu}_{\mathbf{k}\sigma}P^{n^{\prime}\nu^{\prime}}_{\mathbf{k}^{\prime}\sigma^{\prime}}\delta(\varepsilon_{\mathrm{F}}-\varepsilon_{\mathbf{k}n\sigma})\delta(\varepsilon_{\mathrm{F}}-\varepsilon_{\mathbf{k}^{\prime}n^{\prime}\sigma^{\prime}}), \tag{10}\]
with \(P^{n\nu}_{\mathbf{k}\sigma}\) being projector functions.
The functions \(I_{\sigma\sigma^{\prime}}(T_{\mathrm{e}})\) describe corrections to the scattering rate at high electron temperatures and are given by
\[I_{\sigma\sigma^{\prime}}(T_{\mathrm{e}})=\int d\varepsilon\ (-1)\frac{\partial f^{\mathrm{FD}}(\varepsilon,T_{\mathrm{e}})}{\partial\varepsilon}\frac{g_{\sigma}(\varepsilon)g_{\sigma^{\prime}}(\varepsilon)}{g_{\sigma}(\varepsilon_{\mathrm{F}})g_{\sigma^{\prime}}(\varepsilon_{\mathrm{F}})}, \tag{11}\]
with \(g_{\sigma}(\varepsilon)=\sum_{\mathbf{k}\nu}\delta(\varepsilon-\varepsilon_{\mathbf{k}\nu\sigma})=\sum_{\mathbf{k}\nu}\sum_{n}P^{n\nu}_{\mathbf{k}\sigma}\delta(\varepsilon-\varepsilon_{\mathbf{k}n\sigma})\) being the cumulative DOS of both \(s\)- and \(p\)-states. We find that they increase monotonically with the electron temperature (see right panel of Fig. 4). However, even for temperatures up to \(2000\,\mathrm{K}\), the \(I_{\sigma\sigma^{\prime}}(T_{\mathrm{e}})\) functions remain below two. Hence, we conclude that the approximation \(I_{\sigma\sigma^{\prime}}=1\) is reasonable for the laser fluences - heating the electrons up to around \(700\,\mathrm{K}\) - considered in the main text.
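A minimal numerical sketch of this behaviour is given below; the two smooth model densities of states are placeholders (not the projected DOS of Fig. 4), chosen only to illustrate how \(I_{\sigma\sigma^{\prime}}(T_{\rm e})\) approaches one for \(T_{\rm e}\to 0\) and grows slowly with the electron temperature.

```python
import numpy as np

kB = 8.617333e-5                                   # Boltzmann constant (eV/K)
eF = 0.0                                           # energies measured from the Fermi level

def g_up(e):  return 1.0 + 0.30 * e + 0.05 * e**2  # placeholder spin-up DOS (arb. units)
def g_dn(e):  return 0.8 - 0.20 * e + 0.10 * e**2  # placeholder spin-down DOS (arb. units)

def I_factor(Te, gs, gsp):
    e = np.linspace(-2.0, 2.0, 4001)               # energy window around eF (eV)
    de = e[1] - e[0]
    mdfde = 1.0 / (4 * kB * Te * np.cosh((e - eF) / (2 * kB * Te))**2)   # -df^FD/de
    return np.sum(mdfde * gs(e) * gsp(e) / (gs(eF) * gsp(eF))) * de

for Te in (100.0, 300.0, 700.0, 2000.0):
    print(Te, round(I_factor(Te, g_up, g_dn), 3))  # -> 1 as Te -> 0, stays modest at 2000 K
```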
Figure 5 depicts the numerically calculated scattering rates using \(I_{\sigma\sigma^{\prime}}=1\) and \(\Delta=0.75\,\mathrm{eV}\) as obtained above. In the left panel, we show the scattering rate \(\gamma_{\mathbf{q}}\) that is first order in the magnon variables through color code on the magnon dispersion. It is strictly positive and tends to increase with magnon frequency. The right panel shows the effective scattering rate \(\gamma_{\mathbf{q}}^{(2)}=\sum_{\mathbf{q}^{\prime}}\Gamma_{\mathbf{q}\mathbf{q}^{\prime}}\) due to the scattering term second order in magnon variables. Notably, this quantity is negative for low frequencies and positive for high frequencies, indicating that it leads to a depopulation of magnons at low energies due to a scattering from low to high energies (the total magnon number is kept constant). In general, the values of the effective second-order scattering rate are comparable to the one first order in magnon variables. They are, however, distributed differently: e.g., for magnons close to the \(\Gamma\) point the second-order scattering rate is by far the dominating one. This is the reason why, as demonstrated in the main text, a laser pulse can in fact lead to a cooling of low-energy magnons, i.e., to a decrease of their occupation numbers.
Lastly, we show in Fig. 6 the _ab initio_ computed mode-dependent Gilbert damping, \(\alpha_{\mathbf{q}}=\omega_{\mathbf{q}}/\gamma_{\mathbf{q}}\). Interestingly, the Gilbert damping \(\alpha_{\mathbf{q}}\) is large (\(\sim 0.01\)) at the BZ center and at the high-symmetry points H, N and P at the BZ edge. There is also a noticeable directional anisotropy in the Gilbert damping for modes along \(\Gamma-\)H and \(\Gamma-\)P. We emphasize that the Gilbert damping is here due to the electron-magnon scattering term that is first order in the magnon variables. Other scattering mechanisms, such as phonon-magnon scattering, could contribute further to the mode-specific Gilbert damping.
Figure 4: _Left_: Projected spin-polarized DOS for bcc iron. Spin-minority density is shown by positive values, spin-majority density by negative values. The exchange splitting is \(2\Delta\approx 1.5\,\mathrm{eV}\) in a large interval around the Fermi energy and for both \(s\)- and \(p\)-states. _Right_: Thermal correction factors \(I_{\sigma\sigma^{\prime}}\) versus electron temperature \(T_{\mathrm{e}}\) calculated from the projected DOS. |
2309.15279 | Test of Barrow entropy using a model independent approach | Taking into consideration a fractal structure for the black hole horizon,
Barrow argued that the area law of entropy gets modified due to
quantum-gravitational effects. Accordingly, the corrected entropy takes the
form $S\sim A^{1+\frac{\Delta}{2}}$, where $0\leq\Delta\leq1,$ indicates the
amount of the quantum-gravitational deformation effects. By considering the
modified Barrow entropy associated with the apparent horizon, the Friedmann
equations get modified as well. We show that considering a universe filled with
matter and the cosmological constant $\Lambda$, it is possible to determine the
amount of deviation from standard cosmology by reconstructing the parameter
$\Delta$ in terms of curvature parameters $\{q,Q,\Omega_{k}\}$ as
$\Delta=\frac{(Q-1-\Omega_k)(1+\Omega_k)}{(1+\Omega_k+q)^{2}}$. Here, $q$ is
the deceleration parameter and $Q$ is the third derivative of the scale factor.
This relation provides some advantages. The first is that it indicates that
there is a profound connection between quantum-gravitational deformation effects
and curvature effects; for $\Omega_k\simeq0$ the pair $\{q,Q\}$ can be regarded
as deviation curvature factors which reflect the amount of deviation of the
model from the standard model. The second interesting feature is that, since
this pair are observational parameters which can be directly measured in a
model independent approach, they can be regarded as powerful tools to enable us
to put a constraint on the parameter $\Delta$ and test the Barrow entropy model. Our
analysis predicts a value of $Q_{0}$ that deviates only slightly from 1, with
$(Q_{0}-1)<0.001$. This can serve as a relatively good target and criterion for
theoretical and observational measurements of the parameter $Q_{0}$. Hence we can
hope that improved high-redshift data in the future will
support it. | Amin Salehi | 2023-09-26T21:33:07Z | http://arxiv.org/abs/2309.15279v1 | # Test of Barrow entropy using a model independent approach
###### Abstract
Taking into consideration a fractal structure for the black hole horizon, Barrow argued that the area law of entropy gets modified due to quantum-gravitational effects. Accordingly, the corrected entropy takes the form \(S\sim A^{1+\frac{\Delta}{2}}\), where \(0\leq\Delta\leq 1\) indicates the amount of the quantum-gravitational deformation effects. By considering the modified Barrow entropy associated with the apparent horizon, the Friedmann equations get modified as well. We show that, considering a universe filled with matter and a cosmological constant \(\Lambda\), it is possible to determine the amount of deviation from standard cosmology by reconstructing the parameter \(\Delta\) in terms of the curvature parameters \(\{q,Q,\Omega_{k}\}\) as \(\Delta=\frac{(Q-1-\Omega_{k})(1+\Omega_{k})}{(1+\Omega_{k}+q)^{2}}\). Here, \(q\) is the deceleration parameter and \(Q\) is the third derivative of the scale factor. This relation provides some advantages. The first is that it indicates that there is a profound connection between quantum-gravitational deformation effects and curvature effects; for \(\Omega_{k}\simeq 0\) the pair \(\{q,Q\}\) can be regarded as deviation curvature factors which reflect the amount of deviation of the model from the standard model. The second interesting feature is that, since this pair are observational parameters which can be directly measured in a model-independent approach, they can be regarded as powerful tools that enable us to put a constraint on the parameter \(\Delta\) and test the Barrow entropy model. Our analysis predicts a value of \(Q_{0}\) that deviates only slightly from \(1\), with \((Q_{0}-1)<0.001\). This can serve as a relatively good target and criterion for theoretical and observational measurements of the parameter \(Q_{0}\). Hence we can hope that improved high-redshift data in the future will support it.
pacs: 98.80.-k, 04.50.Kd, 04.25.Nx

The quantum phenomenon of Hawking radiation indicates that a black hole has a temperature proportional to its surface gravity and an entropy proportional to its horizon area [1]-[3]. This led to the discovery of a profound connection between gravity and thermodynamics, which was first addressed by Jacobson [4], who disclosed that the Einstein gravitational theory for the spacetime metric can be extracted from the horizon entropy-area relation by using the fundamental Clausius relation \(\delta Q=T\Delta S\)[5]. Investigations of the relation between the Einstein field equations and the first law of thermodynamics in the setup of black hole spacetimes have been generalized to the cosmological context to derive the Friedmann equations with any spatial curvature by applying the Clausius relation to the apparent horizon of the FRW universe [6]-[7]. See [8] for further studies of thermodynamical aspects of gravity. Recently, Barrow [9] explained that quantum gravitational effects might create complex and fractal properties on the black hole horizon. In this perspective, the entropy of a black hole no longer follows the area law, but can instead be represented by a power of the area with exponent \(1+\frac{\Delta}{2}\), as
\[S_{B}=\left(\frac{A}{A_{0}}\right)^{1+\frac{\Delta}{2}} \tag{1}\]
Here \(A\) is the usual horizon area and \(A_{0}\) is the Planck area. The quantum-gravitational deformation is quantified by the newly introduced exponent \(\Delta\). There are characteristic values for \(\Delta\): for \(\Delta=0\) the simplest, smooth horizon structure is recovered, while \(\Delta=1\) corresponds to the most intricate, maximally deformed structure. In this paper we want to show that there is a connection between the parameter \(\Delta\), which reflects the quantum-gravitational effects, and the deceleration and jerk parameters, which arise from the curvature and its changes. Several studies have been devoted to the possibility that quantum gravity might be triggered by curvature, but the relevant literature has so far focused exclusively on a subclass of scenarios in which the quantum-gravity effects are independent of (macroscopic) curvature.

### FLRW metric

In the background of the FRW universe, the line element of the metric is
\[ds^{2}=-dt^{2}+a^{2}(t)\left(\frac{dr^{2}}{1-kr^{2}}+r^{2}(d\theta^{2}+\sin^{ 2}\theta d\phi^{2})\right), \tag{2}\]
where \(a(t)\) is the scale factor of the universe, \(k=1,0,-1\) stands for closed, flat and open geometries, respectively, and \((t,r,\theta,\phi)\) are the co-moving coordinates. In this work we have adopted the convention \(a_{0}=1\), where the subscript \(0\) denotes the value at present time (zero redshift). The Ricci scalar curvature is given by
\[R=-6\Big{(}\dot{H}+2H^{2}+\frac{k}{a^{2}}\Big{)} \tag{3}\]
where \(H=\frac{\dot{a}}{a}\) is the Hubble parameter and a dot denotes the derivative with respect to the cosmic time \(t\). The deceleration parameter \(q\) and the dimensionless third derivative of the scale factor \(Q\) are defined as
\[q = -\frac{\ddot{a}}{aH^{2}}=-\Big{(}1+\frac{\dot{H}}{H^{2}}\Big{)} \tag{4}\] \[Q = \frac{\dddot{a}}{aH^{3}}=\frac{\ddot{H}}{H^{3}}-3q-2 \tag{5}\]
One can obtain \(q\) and \(Q\) in terms of \(R\) and its time derivation \(\dot{R}\) as
\[q = \frac{R}{6H^{2}}+1+\Omega_{k} \tag{6}\] \[Q = -\frac{\dot{R}}{6H^{3}}+\frac{R}{6H^{2}}+3\Omega_{k}+3 \tag{7}\]
Where, \(\Omega_{k}=\frac{k}{a^{2}H^{2}}\). Equations (6) and (7) indicate that there is a connection between the parameters \((q,Q)\) and the curvature \(R\) and its time derivative \(\dot{R}\). Hence, in the rest of the paper we may call them curvature parameters. In principle, by considering the modified Barrow entropy associated with the apparent horizon, the Friedmann equations get modified as well. In this paper we show that it is possible to determine the amount of deviation from standard cosmology by reconstructing the parameter \(\Delta\) in terms of the curvature parameters \(\{q,Q\}\).
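As a quick sanity check of the definitions (4) and (5), the deceleration and jerk parameters can be computed symbolically for simple expansion histories; for an Einstein-de Sitter universe \(a\propto t^{2/3}\) one finds \((q,Q)=(1/2,1)\), and for de Sitter expansion \((q,Q)=(-1,1)\). A minimal sketch (using sympy, with purely illustrative scale factors):

```python
import sympy as sp

t, H0 = sp.symbols('t H0', positive=True)

def q_and_Q(a):
    H = sp.diff(a, t) / a
    q = -sp.diff(a, t, 2) / (a * H**2)     # Eq. (4)
    Q = sp.diff(a, t, 3) / (a * H**3)      # Eq. (5)
    return sp.simplify(q), sp.simplify(Q)

print(q_and_Q(t**sp.Rational(2, 3)))       # Einstein-de Sitter: (1/2, 1)
print(q_and_Q(sp.exp(H0 * t)))             # de Sitter:          (-1, 1)
```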
## I Modified Friedmann equations based on Barrow entropy
In this section we present the derivation of the modified Friedmann equations based on Barrow entropy by using the gravity-thermodynamics conjecture. The modification of the Friedmann equations based on Barrow entropy was first explored in [10]; another approach was later presented in [13]. Although these two approaches seem different, we show that they are equivalent. The author of [10] deduced that in an expanding universe, during a time interval \(dt\), the heat flow through the horizon is easily found to be [12]
\[\delta Q=-dE=A(\rho_{m}+p_{m})Hr_{A}dt. \tag{8}\]
Where \(A=4\pi\tilde{r}_{A}^{2}\) and \(\tilde{r}_{A}\) is the radius of the apparent horizon. From the thermodynamical viewpoint the apparent horizon is a suitable horizon consistent with the first and second law of thermodynamics [15]-[19]. Assuming that the Universe is bounded by the apparent horizon of radius
\[\tilde{r}_{A}=1/\sqrt{H^{2}+k/a^{2}} \tag{9}\]
and considering that for the universe horizon the temperature associated with the horizon is given by [11]
\[T_{h}=\frac{1}{2\pi\tilde{r}_{A}} \tag{10}\]
We describe the content of the Universe as a perfect fluid of energy-momentum tensor \(T_{\mu\nu}=(\rho_{m}+p_{m})u_{\mu}u_{\nu}+p_{m}g_{\mu\nu}\), where \(\rho_{m}\) and \(p_{m}\) are the energy density and pressure, respectively. The energy-momentum tensor is conserved, \(\nabla_{\mu}T^{\mu\nu}=0\), which implies the continuity equation, \(\dot{\rho_{m}}+3H(\rho_{m}+p_{m})=0\). Differentiating the Barrow entropy(1), yields
\[dS_{h}=(2+\Delta)\left(\frac{4\pi}{A_{0}}\right)^{1+\Delta/2}\tilde{r}_{A}^{1+\Delta}\dot{\tilde{r}}_{A}dt. \tag{11}\]
Inserting these relations into the first law of thermodynamics and substituting \(\dot{\tilde{r}}_{A}\) using (9), we finally arrive at
\[-(4\pi)^{(1-\Delta/2)}A_{0}^{(1+\Delta/2)}(\rho_{m}+p_{m})=\] \[2(2+\Delta)\frac{\dot{H}-\frac{k}{a^{2}}}{\left(H^{2}+\frac{k}{a^ {2}}\right)^{\Delta/2}}. \tag{12}\]
Lastly, integrating, for the validity region \(0\leq\Delta\leq 1\) gives
\[\frac{(4\pi)^{(1-\Delta/2)}A_{0}^{(1+\Delta/2)}}{6}\rho_{m}= \frac{2+\Delta}{2-\Delta}\left(H^{2}+\frac{k}{a^{2}}\right)^{1- \Delta/2}\] \[-\frac{C}{3}A_{0}^{(1+\Delta/2)}, \tag{13}\]
with \(C\) being the integration constant. In the second approach, which was explored in [13], the temperature associated with the horizon is given by [14]
\[T_{h}=-\frac{1}{2\pi\tilde{r}_{A}}\left(1-\frac{\dot{\tilde{r}}_{A}}{2H\tilde{r}_{A}}\right), \tag{14}\]
Since the Universe is expanding, the work density associated with the volume change of the expanding universe, is also given by \(W=(\rho-p)/2\)[20]. Using the gravity-thermodynamics conjecture, the Friedmann equation can be obtained by considering the Universe as a thermodynamic system bounded by the apparent horizon and applying the first law of thermodynamics
\[dE=T_{h}dS_{h}+WdV, \tag{15}\]
Note that the \(dE\) defined here is different from that defined in equation (8) of the first approach. In equation (8), \(dE\) is just the energy flux crossing the apparent horizon, and the apparent horizon radius is kept fixed during an infinitesimal interval of time \(dt\). Also, since it was assumed that for an infinitesimal interval \(\dot{\tilde{r}}_{A}=0\), the definitions of temperature in the two approaches are different; in other words, in the first approach the term related to \(\dot{\tilde{r}}_{A}\) has been omitted. However, in equation (15) of the second approach, the first law of thermodynamics is applied on the apparent horizon in the form where \(dE\) is the change in the energy inside the apparent horizon due to the volume change \(dV\) of the expanding Universe,
where \(E=\rho V\) is the total energy of the Universe of 3-dimensional volume \(V=\frac{4\pi}{3}\tilde{r}_{A}^{3}\) with the area of apparent horizon \(A=4\pi\tilde{r}_{A}^{2}\) and \(T_{h}\) and \(S_{h}\) are, respectively, the temperature and entropy associated with the apparent horizon. Taking differential form of the total matter and energy, we find
\[dE=4\pi\tilde{r}_{A}^{2}\rho d\tilde{r}_{A}+\frac{4\pi}{3}\tilde{r}_{A}^{3}\dot {\rho}dt \tag{16}\]
By combining with conservation equation, we obtain
\[dE=4\pi\tilde{r}_{A}^{2}\rho d\tilde{r}_{A}-4\pi H\tilde{r}_{A}^{3}(\rho_{m}+p_ {m})dt. \tag{17}\]
Finally, combining Eqs. (14), (17) and (11) with the first law of thermodynamics (15) and using the continuity relation, after some algebraic calculations, we obtain
\[-\frac{2+\Delta}{2\pi A_{0}}\left(\frac{4\pi}{A_{0}}\right)^{\Delta/2}\frac{d \tilde{r}_{A}}{\tilde{r}_{A}^{3-\Delta}}=\frac{d\rho_{m}}{3}. \tag{18}\]
After integration, we find the first modified Friedmann equation in Barrow cosmology,
\[\left(H^{2}+\frac{k}{a^{2}}\right)^{1-\Delta/2}=\frac{8\pi G_{\rm eff}}{3} \rho_{m}+\frac{\Lambda}{3}, \tag{19}\]
where \(\Lambda\) is a constant of integration which can be interpreted as the cosmological constant, and \(G_{\rm eff}\) stands for the effective Newtonian gravitational constant,\(G_{\rm eff}\equiv\frac{A_{0}}{4}\left(\frac{2-\Delta}{2+\Delta}\right)\left( \frac{A_{0}}{4\pi}\right)^{\Delta/2}\). Eq. (19), can be rewritten as
\[\left(H^{2}+\frac{k}{a^{2}}\right)^{1-\Delta/2}=\frac{8\pi G_{\rm eff}}{3}( \rho_{m}+\rho_{\Lambda}). \tag{20}\]
Where \(\rho_{\Lambda}=\Lambda/(8\pi G_{\rm eff})\). The second Friedmann equation can be obtained by combining the continuity equation with the first Friedmann equation (19) [13]:
\[(2-\Delta)\frac{\ddot{a}}{a}\left(H^{2}+\frac{k}{a^{2}}\right)^{- \Delta/2}+(1+\Delta)\left(H^{2}+\frac{k}{a^{2}}\right)^{1-\Delta/2}\] \[=-8\pi G_{\rm eff}(p_{m}+p_{\Lambda}), \tag{21}\]
where \(p_{\Lambda}=-\Lambda/(8\pi G_{\rm eff})\). In the limiting case where \(\Delta=0\) (\(G_{\rm eff}\to G\)), Eq. (21) reduces to the second Friedmann equation in standard cosmology.
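As a minimal illustration of Eq. (19), the sketch below evaluates the expansion rate for a spatially flat universe filled with pressureless matter and \(\Lambda\), working in units where \(8\pi G_{\rm eff}/3=1\) and \(\rho_{m0}+\rho_{\Lambda}=1\) (the 0.3/0.7 split is an arbitrary illustrative choice); for \(\Delta=0\) it reduces to the standard \(\Lambda\)CDM expansion law.

```python
import numpy as np

def hubble(a, Delta, rho_m0=0.3, rho_L=0.7):
    """Eq. (19) with k=0: H^(2-Delta) = rho_m0*a^-3 + rho_Lambda (units: 8*pi*G_eff/3 = 1)."""
    return (rho_m0 * a**-3 + rho_L)**(1.0 / (2.0 - Delta))

a = np.linspace(0.3, 1.0, 8)
print(np.allclose(hubble(a, 0.0), np.sqrt(0.3 * a**-3 + 0.7)))   # True: standard limit
print(hubble(a, 0.2))                                            # Barrow-deformed H(a)
```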
It is easy to show that equations (13) and (12) of the first approach are equivalent to equations (20) and (21) of the second approach. Note that equation (12) can be simplified as
\[\left(H^{2}+\frac{k}{a^{2}}\right)^{1-\Delta/2}=(\frac{2-\Delta}{2+ \Delta})\frac{(4\pi)^{(1-\Delta/2)}A_{0}^{(1+\Delta/2)}}{6}\rho_{m} \tag{22}\] \[+\frac{C}{3}(\frac{2-\Delta}{2+\Delta})A_{0}^{(1+\Delta/2)},\]
Equation (22) can be simplified as
\[\left(H^{2}+\frac{k}{a^{2}}\right)^{1-\Delta/2}=\frac{8\pi G_{eff}}{3}\rho_{m }+\Lambda. \tag{23}\]
Where,
\[\frac{8\pi G_{\rm eff}}{3}=\Big{(}\frac{2-\Delta}{2+\Delta}\Big{)}\frac{(4\pi)^{(1-\Delta/2)}A_{0}^{(1+\Delta/2)}}{6}, \tag{24}\] \[G_{\rm eff}=\frac{A_{0}}{4}\left(\frac{2-\Delta}{2+\Delta}\right)\left(\frac{A_{0}}{4\pi}\right)^{\Delta/2}, \tag{25}\]
and
\[\Lambda=\frac{C}{3}(\frac{2-\Delta}{2+\Delta})A_{0}^{(1+\Delta/2)} \tag{26}\]
We see that equation (23), which was obtained in the first approach, is exactly equivalent to equation (20) (or (19)) of the second approach. It is also easy to show that from equation (12) we can obtain equation (21). Note that equation (12) can be rewritten as
\[(2-\Delta)\Big{(}\dot{H}-\frac{k}{a^{2}}\Big{)}\left(H^{2}+\frac{k}{a^{2}}\right)^{-\Delta/2}=-8\pi G_{\rm eff}(\rho_{m}+p_{m}) \tag{27}\]
Considering that \(\dot{H}=\frac{\ddot{a}}{a}-H^{2}\), equation (27) can be rewritten as
\[(2-\Delta)(\frac{\ddot{a}}{a})\left(H^{2}+\frac{k}{a^{2}}\right)^ {-\Delta/2}\] \[-(2-\Delta)\left(H^{2}+\frac{k}{a^{2}}\right)^{1-\Delta/2}=-8\pi G _{\rm eff}(\rho_{m}+p_{m}) \tag{28}\]
Where, using equation (20) and noting that \(p_{\Lambda}=-\rho_{\Lambda}\), equation (28) simplifies to equation (21). Hence the two approaches are equivalent. It is also interesting to note that in the approach explored by [10], by applying the first law of thermodynamics the second Friedmann equation is extracted, and then by integrating this equation the first Friedmann equation is also obtained; in the approach explored by [13], by applying the first law of thermodynamics the first Friedmann equation is extracted, and then by taking the derivative of this equation and applying the conservation equation the second Friedmann equation is also obtained.
## II Reconstructing the parameters of the model in terms of geometrical parameters
In this section we aim to reconstruct the parameters of the model in terms of geometrical parameters. For simplicity, we define the following variables
\[\Omega_{m}=\frac{8\pi G_{\rm eff}\rho_{m}}{3H^{2}},\Omega_{\Lambda}=\frac{8\pi G _{\rm eff}\rho_{\Lambda}}{3H^{2}} \tag{29}\]
Employing the above variables we can derive the following dynamical system
\[\frac{d}{dx}\left(\begin{array}{c}\Omega_{m}\\ \Omega_{k}\\ \Omega_{\Lambda}\\ H\end{array}\right)=\left(\begin{array}{cccc}-1+2q&0&0&0\\ 0&2q&0&0\\ 0&0&2+2q&0\\ 0&0&0&-1-q\end{array}\right)\left(\begin{array}{c}\Omega_{m}\\ \Omega_{k}\\ \Omega_{\Lambda}\\ H\end{array}\right), \tag{30}\]
where \(q\) is the deceleration parameter and \(x=\ln a\). Also, we can use (29) to rewrite the first Friedmann equation (20) as
\[\Omega_{m}+\Omega_{\Lambda}=H^{-\Delta}\left(1+\Omega_{k}\right)^{1-\frac{1}{ 2}\Delta} \tag{31}\]
Hence, by differentiating equation (31) with respect to \(x\) and using (30), we can derive
\[\Delta(1+q)H^{-\Delta}\left(1+\Omega_{k}\right)^{1-\frac{1}{2}\Delta}+(1-2q)H^{-\Delta}\left(1+\Omega_{k}\right)^{1-\frac{1}{2}\Delta}+q(2-\Delta)H^{-\Delta}\Omega_{k}\left(1+\Omega_{k}\right)^{-\frac{1}{2}\Delta}=3\Omega_{\Lambda} \tag{32}\]
In 1970, Alan Sandage [21] interpreted cosmology as the search for two numbers: \(H_{0}\) and \(q_{0}\). Then Weinberg [22] drew attention to the issue of extracting the value of the constant spatial curvature \(k\) and the deceleration parameter \(q\) from observations, without considering a cosmological constant and/or scalar field. In 1976, Harrison [23] challenged Sandage's remark and proved that the third derivative of the scale factor, \(Q\), is of great importance for observational cosmology in a universe with indeterminate (dust) matter density. He considered a universe containing the cosmological constant \(\Lambda\) and non-relativistic matter. In this case, the Einstein equations reduce to the following Friedmann equations
\[H^{2}+\frac{kc^{2}}{a^{2}}=\frac{8\pi G}{3}\rho+\frac{\Lambda}{3} \tag{33}\] \[\dot{H}+H^{2}=-\frac{4\pi G}{3}(\rho+3P)+\frac{\Lambda}{3} \tag{34}\]
For zero-pressure model, the equations (33) and (34) can be combined as
\[K=4\pi G\rho-H^{2}(q+1) \tag{35}\]
Where \(K=\frac{kc^{2}}{a^{2}}\) and \(\rho\) is the average mass density. The verification of the cosmological equation (35) requires the measurement of the quantities \(K,H,\rho\) and \(q\)[23]. In principle, the quantities \(K,H,q\) can be determined, although in practice their precise determination is difficult [21]-[23]. If everything could be known about the matter filling of the universe, then the cosmological constant could be derived as
\[\Lambda=4\pi G\rho-3qH^{2} \tag{36}\]
Hence, for a universe with a known amount of matter, general relativity (GR) can be tested by measuring \(H\) and \(q\). However, the average density \(\rho\) cannot be determined, even in principle [21]-[23]. Here, the third derivative of the scale factor is required to test the validity of equation (35), since the parameters \((k,\Lambda,\rho)\) can be obtained in terms of the first three derivatives of the scale factor as follows.
\[K=H^{2}(Q-1) \tag{37}\] \[\Lambda=H^{2}(Q-2q)\] (38) \[4\pi G\rho=H^{2}(Q+q) \tag{39}\]
It is easy to obtain the following relation
\[\frac{dq}{dx}=-Q+q+2q^{2} \tag{40}\]
Hence, by differentiating equation (32) with respect to \(x\) and using (30) and (40), we can derive
\[(Q-1)= \Omega_{k}-\Delta\,q^{2}\Omega_{k}\,\left(1+\Omega_{k}\right)^{-1}\] \[+\left(1+2\,q+q^{2}+\Omega_{k}\right)\Delta \tag{41}\]
Hence, the Barrow parameter \(\Delta\) can be obtained as
\[\Delta=\frac{(Q-1-\Omega_{k})(1+\Omega_{k})}{(1+\Omega_{k}+q)^{2}} \tag{42}\]
It is interesting to note that the relation (42) holds throughout the history of the Universe and is not limited to the present time, but we can use the current values of these parameters to determine the constant value of \(\Delta\). Based on the Planck 2018 results [24], where \(\Omega_{k}=0.001\pm 0.002\) and \(q_{0}=-0.527\pm 0.01\), and the results of [25], which constrain the parameter \(Q_{0}\) as \(Q_{0}=1.01^{+0.08}_{-0.021}\), the deviation relation gives \(\Delta=0.04^{+0.385}_{-0.068}\). In the limiting case where \(\Delta=0\), equation (41) reduces to
\[\Omega_{k}=(Q-1) \tag{43}\]
Consequently, using equation (43) for \(\Delta=0\), equations (31) and (32) give
\[\Omega_{\Lambda}=\frac{1}{3}(Q-2q) \tag{44}\] \[\Omega_{m}=\frac{2}{3}(Q+q) \tag{45}\]
Equations (43)-(45) are equivalent to equations (37)-(39), which were previously obtained by [23].
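As a quick numerical illustration, relation (42) can be evaluated directly from the measured values quoted above. The following minimal Python sketch (the helper name and the specific central values are ours, taken from [24] and [25]) reproduces the estimate \(\Delta\simeq 0.04\):

```python
import numpy as np

def barrow_delta(q, Q, Omega_k=0.0):
    """Barrow exponent from Eq. (42): (Q - 1 - Ok)(1 + Ok) / (1 + Ok + q)^2."""
    return (Q - 1.0 - Omega_k) * (1.0 + Omega_k) / (1.0 + Omega_k + q) ** 2

# Central values quoted above: Planck 2018 for q0 and Omega_k, Ref. [25] for Q0
q0, Q0, Ok0 = -0.527, 1.01, 0.001
print(barrow_delta(q0, Q0, Ok0))   # ~0.040, the central value Delta = 0.04 quoted above
print(barrow_delta(q0, Q0))        # ~0.045, neglecting the small curvature term
```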
### Solution for case \(\Omega_{k}=0\)
Considering \(\Omega_{k}=0\), equation (42) gives the Barrow parameter \(\Delta\) as
\[\Delta=\frac{Q-1}{(q+1)^{2}} \tag{46}\]
The first interesting result of equation (46) is that it indicates a profound connection between the quantum-gravitational deformation parameter \(\Delta\) and the geometrical cosmology parameters \((q,Q)\). Another interesting result is that the geometrical parameters \((q,Q)\) can be regarded as deviation-curvature factors which reflect the amount of deviation of the model from the standard model. Equation (46) indicates that for \(Q=1\) the standard model, \(\Delta=0\), is recovered. This is an expected result because \(Q=1\) corresponds to the \(\Lambda CDM\) model. In fact, almost all current cosmological observations can be summarized by the simple case \(Q=1\) [26],[27],[28]. Visser [29] has also investigated the condition \(Q=1\) in some detail. In principle, the case \(Q=1\) is a third-order ODE which holds only for the \(\Lambda CDM\) model (it is easy to find that for the flat \(\Lambda CDM\) model, \(Q=1+2\Omega_{r}\); hence, neglecting the radiation energy density, \(\Omega_{r}=0\), gives \(Q=1\)). This relation can also be immediately deduced from Eq. (37). In principle, the parameter \(Q\), as one of the pair of statefinder diagnostics \(\{r\equiv Q,s\}\), can distinguish \(\Lambda CDM\) from other cosmological models (for more discussion see [30] and [27]). The relation \(Q=1\) exactly gives the evolution of the universe for the \(\Lambda CDM\) model: it gives the expected thermal history of the universe from matter domination to dark-energy domination. To show this, we start from equation (40). By integration, it can be rewritten as
\[\int dx=\int\frac{dq}{-Q+q+2q^{2}} \tag{47}\]
Inserting \(Q=1\), the above integral gives \(x=\frac{1}{3}\ln(\frac{2q-1}{1+q})+\frac{1}{3}\ln(C)\), where \(C\) is a constant of integration. Since \(x=\ln a=-\ln(1+z)\), the above equation can be simplified as
\[\ln\frac{1}{(1+z)^{3}}=\ln\Big{(}C(\frac{2q-1}{1+q})\Big{)} \tag{48}\]
Hence, the deceleration parameter \(q\) will be obtained in terms of redshift \(z\) as
\[q=\frac{C(1+z)^{3}+1}{2C(1+z)^{3}-1} \tag{49}\]
By setting \(q=q_{0}\) at \(z=0\) in this equation, we obtain the constant \(C\) in terms of \(q_{0}\) as \(C=\frac{1+q_{0}}{2q_{0}-1}\). The transition redshift \(z_{t}\) is then obtained as
\[z_{t}=-1+(\frac{-1}{C})^{\frac{1}{3}}=-1+(\frac{1-2q_{0}}{1+q_{0}})^{\frac{1 }{3}} \tag{50}\]
According to the equation (49), when \(z\rightarrow\infty\), the deceleration parameter \(q\rightarrow\frac{1}{2}\) and when \(z\rightarrow-1\), the deceleration parameter \(q\rightarrow-1\). Also if we get \(q_{0}\simeq-0.53\) which is the result of Planck 2018, the equation (50) gives \(z_{t}\simeq 0.65\)
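The thermal history encoded in equations (49) and (50) can be checked numerically; the sketch below (with helper names of our own choosing) evaluates \(q(z)\) and the transition redshift for \(q_{0}\simeq-0.53\):

```python
import numpy as np

def q_of_z(z, q0=-0.53):
    """Deceleration parameter for Q = 1, Eq. (49), with C fixed by q(z=0) = q0."""
    C = (1.0 + q0) / (2.0 * q0 - 1.0)
    return (C * (1.0 + z) ** 3 + 1.0) / (2.0 * C * (1.0 + z) ** 3 - 1.0)

def z_transition(q0=-0.53):
    """Transition redshift of Eq. (50), where q changes sign."""
    return -1.0 + ((1.0 - 2.0 * q0) / (1.0 + q0)) ** (1.0 / 3.0)

print(z_transition())              # ~0.65 for q0 = -0.53, as quoted above
print(q_of_z(0.0), q_of_z(1e3))    # -0.53 today; -> 1/2 deep in matter domination
```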
In Fig. 1, we have also plotted the evolution of the deceleration parameter against redshift \(z\) according to equation (49). As can be seen, the evolution is exactly that of the deceleration parameter in the \(\Lambda CDM\) model. Hence \(Q=1\) corresponds to flat (\(k=0\)) \(\Lambda CDM\) with \(\Omega_{r}=0\).
Therefore, it is expected that models whose structure is similar to the \(\Lambda CDM\) model will have \(Q\) close to 1. This is usually the case in models such as Chevallier-Polarski-Linder (CPL), \(\omega CDM\), \(XCDM\) and Phenomenologically Emergent Dark Energy (PEDE). Several previous studies have indeed reported \(Q\simeq 1\), and some observational studies support this expectation: Refs. [26] and [27] find \(Q=1\) and \(Q=1.02\), respectively, and other studies have obtained values close to 1; see for example [31], [32], [33], [34], [36].
In order to examine the sensitivity of the deviation parameter \(\Delta\) to changes in each of these parameters, we introduce the following partial derivatives
\[d_{Q}=\frac{\partial\Delta}{\partial Q},\ \ d_{q}=\frac{\partial\Delta}{ \partial q} \tag{51}\]
Hence, using equations (46) and (51), we find
\[d_{Q}=\frac{1}{(q+1)^{2}} \tag{52}\] \[d_{q}=-2\frac{Q-1}{(q+1)^{3}}=-2\frac{\Delta}{q+1} \tag{53}\]
Hence, the absolute value of the ratio of these changes would be
\[\Big{|}\frac{d_{q}}{d_{Q}}\Big{|}=\Big{|}\frac{\Delta(q+1)}{2}\Big{|} \tag{54}\]
Since \(|q+1|<2\), one can deduce that
\[\Big{|}\frac{d_{q}}{d_{Q}}\Big{|}<\Delta \tag{55}\]
Fig. 1: The evolution of the deceleration parameter against redshift \(z\) according to equation (49).

Due to the fact that \(\Delta\ll 1\), it can be concluded that \(d_{q}\ll d_{Q}\). In other words, \(\Delta\) is more sensitive to changes of \(Q\). Hence, the parameter \(Q\) can be regarded as a deviation parameter which determines the deviation of the model from \(\Lambda CDM\): the greater the difference of the \(Q\) value from \(1\), the greater the deviation of the model from the standard model. Since \(0\leq\Delta\leq 1\), applying this condition to equation (46) yields a bound on the parameter \(Q\),
\[1\leq Q\leq 2+2q+q^{2} \tag{56}\]
Since, for an accelerating Universe, \(q_{0}<0\), the condition (56) gives an interesting result: it indicates that the upper bound for \(Q_{0}\) in Barrow cosmology is \(Q_{0}<2\). This bound is further restricted if we substitute the observational value of \(q_{0}\). For \(q_{0}\simeq-0.5\), which is, for example, the result of [24], the condition is restricted to \(1<Q_{0}<1.25\). This has an interesting consequence: observational studies of Barrow cosmology must give a value of \(Q_{0}\) close to \(1\). It also indicates that as \(q_{0}\) gets closer to \(-1\), the allowed region becomes more limited; for \(q_{0}=-1\) this region shrinks to the single point \(Q_{0}=1\). This interesting result indicates that \(\Delta=0\) corresponds to \((q_{0}=-1,Q_{0}=1)\), and these values hold for the \(\Lambda CDM\) model.
Here we aim to find the parameters of the model in terms of the geometrical parameters \((H,q,Q)\). Equation (32) for \(\Omega_{k}=0\) can be rewritten as
\[\Big{(}\Delta(1+q)+1-2q\Big{)}H^{-\Delta}=3\Omega_{\Lambda} \tag{57}\]
where, inserting \(\Delta\) from equation (46) into equation (57), the parameter \(\Omega_{\Lambda}\) is obtained in terms of \((H,q,Q)\) as
\[\Omega_{\Lambda}=\frac{1}{3}\Big{(}\frac{Q-q-2q^{2}}{q+1}\Big{)}H^{-(\frac{Q -1}{(q+1)^{2}})} \tag{58}\]
Also, inserting \(H^{-\Delta}\) from equation (31) into equation (57) gives
\[\Big{(}\Delta(1+q)+1-2q\Big{)}=\frac{3\Omega_{\Lambda}}{\Omega_{m}+\Omega_{ \Lambda}} \tag{59}\]
where, inserting \(\Delta\) from equation (46) into equation (59), we arrive at
\[\frac{\Omega_{m}}{\Omega_{\Lambda}}=-\frac{(Q-4q-2q^{2}-3)}{(Q-q-2q^{2})} \tag{60}\]
Combining equations (58) and (60), the parameter \(\Omega_{m}\) also can be obtained in terms of \((H,q,Q)\) as
\[\Omega_{m}=-\frac{1}{3}\Big{(}\frac{Q-4q-2q^{2}-3}{q+1}\Big{)}H^{-(\frac{Q-1} {(q+1)^{2}})} \tag{61}\]
Equations (46), (58) and (61) express the parameters of the model, \((\Omega_{m},\Omega_{\Lambda},\Delta)\), in terms of the geometrical parameters \((H,q,Q)\).
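A minimal numerical sketch of this flat-case reconstruction, assuming the Planck 2018 central values for \(H_{0}\) (in km/s/Mpc) and \(q_{0}\), is given below; the dependence on the units chosen for \(H\) is extremely weak because it enters only through \(H^{-\Delta}\) with \(\Delta\ll 1\):

```python
import numpy as np

def reconstruct_flat(H, q, Q):
    """Flat-case (Omega_k = 0) reconstruction from Eqs. (46), (58) and (61)."""
    Delta = (Q - 1.0) / (q + 1.0) ** 2                                    # Eq. (46)
    Olam = (Q - q - 2.0 * q**2) / (3.0 * (q + 1.0)) * H ** (-Delta)       # Eq. (58)
    Om = -(Q - 4.0 * q - 2.0 * q**2 - 3.0) / (3.0 * (q + 1.0)) * H ** (-Delta)  # Eq. (61)
    return Delta, Om, Olam

H0, q0 = 67.4, -0.527                    # Planck 2018 central values
for Q0 in (1.0, 1.00001, 1.001, 1.01):
    Delta, Om, Olam = reconstruct_flat(H0, q0, Q0)
    print(Q0, round(Delta, 5), round(Om, 4), round(Olam, 4))
# Consistency check of Eq. (31) with Omega_k = 0:  Om + Olam = H**(-Delta)
```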
According to the approach explored by [10], we can also find the parameters of interest \(\Omega_{DE}\) and \(\omega_{DE}\). For this purpose, we can re-express equations (12) and (13) as
\[H^{2}=\frac{8\pi G}{3}\left(\rho_{m}+\rho_{DE}\right) \tag{62}\] \[\dot{H}=-4\pi G\left(\rho_{m}+p_{m}+\rho_{DE}+p_{DE}\right), \tag{63}\]
where
\[\rho_{DE}=\frac{3}{8\pi G}\left\{\frac{\Lambda}{3}+H^{2}\left[1- \frac{\beta(\Delta+2)}{2-\Delta}H^{-\Delta}\right]\right\}, \tag{64}\]
\[p_{DE}=-\frac{1}{8\pi G}\left\{\Lambda+2\dot{H}\left[1-\beta\left(1+\frac{ \Delta}{2}\right)H^{-\Delta}\right]\right\}\] \[+3\left\{H^{2}\left[1-\frac{\beta(2+\Delta)}{2-\Delta}H^{-\Delta }\right]\right\} \tag{65}\]
where \(\rho_{DE}\) and \(p_{DE}\) are, respectively, the energy density and pressure of the effective dark-energy sector and \(\beta\equiv\frac{4(4\pi)^{\Delta/2}G}{A_{0}^{1+\Delta/2}}\). Hence, by defining \(\Omega_{DE}=\frac{8\pi G\rho_{DE}}{3H^{2}}\) and using equation (64), we have
\[\Omega_{DE}=\frac{8\pi G\rho_{DE}}{3H^{2}}=\Omega_{\Lambda}+\left[1-\frac{ \beta(\Delta+2)}{2-\Delta}H^{-\Delta}\right] \tag{66}\]
Also, the effective equation of state reads
\[\omega_{DE}=\frac{p_{DE}}{\rho_{DE}}=\frac{-1}{3}\frac{\left\{3\Omega_{\Lambda }-2(1+q)\left[1-\beta\left(1+\frac{\Delta}{2}\right)H^{-\Delta}\right]+\left[ 1-\frac{\beta(2+\Delta)}{2-\Delta}H^{-\Delta}\right]\right\}}{\Omega_{\Lambda}+ \left[1-\frac{\beta(\Delta+2)}{2-\Delta}H^{-\Delta}\right]} \tag{67}\]
We have obtained \(\Delta\) in terms of \(\{q,Q\}\), and \(\Omega_{\Lambda}\) and \(\Omega_{m}\) in terms of the cosmographic parameters \(\{H,q,Q\}\); according to the above equations, \(\omega_{DE}\) and \(\Omega_{DE}\) can therefore also be expressed in terms of the cosmographic parameters \(\{H,q,Q\}\).
It is also possible to obtain the current value of parameters \(\Omega_{m},\Omega_{\Lambda},\Omega_{DE},\omega_{DE}\) in terms of the current values
\((H_{0},q_{0},Q_{0})\). Here, we aim to test the model for different cosmographic sets. It is important to note that although present observational data provide strong constraints on the current values of the Hubble parameter \(H_{0}\) and the deceleration parameter \(q_{0}\), constraining \(Q_{0}\) requires high-redshift data, and since there is not yet enough high-quality data at high redshift, it is not possible to put a strong constraint on \(Q_{0}\); different studies have reported different values for this parameter. Hence, there is no very reliable set of \(\{H_{0},q_{0},Q_{0}\}\). However, it is possible to estimate this parameter theoretically in different cosmological models. We can use the results of Planck 2018 [24] for \(H_{0}\) and \(q_{0}\) as robust observational measurements. According to the Planck results, \(H_{0}=(67.4\pm 0.5)\) and \(q_{0}=-0.527\pm 0.01\). However, there are no strong measurements of \(Q_{0}\) and \(\Delta\), and wide ranges of values have been reported for these parameters. For example, the authors of [35], using the same model, reported the different values \(Q_{0}\simeq 1.268\) and \(Q_{0}\simeq-7.746\) for different data sets; in [35] they also reported \(Q_{0}\simeq-13.695\) for another category of the model. So there is no strong constraint on \(Q_{0}\). We face the same problem for the parameter \(\Delta\). Recently, using Big Bang Nucleosynthesis (BBN) data, [37] imposed a constraint on the exponent \(\Delta\) and found that the Barrow exponent should lie inside the bound \(\Delta\leq 1.4\times 10^{-4}\); the authors of [38] found the value \(\Delta=5.912\times 10^{-4}\), which is comparable with [37]. The authors of [46] found \(\Delta\simeq 10^{-4}\). In [40] the entropy-modified Friedmann equation within the gravity-thermodynamics approach is confronted with a set of cosmological probes, including Pantheon SNeIa and a BAO sample, and they obtain \(\Delta\sim 10^{-4}\). In [39] the authors do not use any early-time probe, but only SNeIa, CC and GRBs, and a different horizon from ours, and their final estimate of the Tsallis parameter is \(\delta\approx 0.16\), which corresponds to \(\Delta\approx-1.68\), clearly out of the physical boundary required by Barrow theory. In [41] the holographic principle is applied and they find the Tsallis parameter \(\delta\approx 1.07\), corresponding to \(\Delta\approx 0.14\). Similar results are obtained in [42, 43] using only late-time data, with \(\Delta\sim 0.09\). Finally, in [44], the Barrow entropy parameter was constrained as \(\Delta\sim 0.03\). The authors of [45], using the Barrow Holographic Dark Energy model with the GO cut-off, found \(\Delta=0.063\pm 0.029\).
We see that values ranging from order \(10^{-4}\) to order 1 have been reported for the parameter \(\Delta\). Hence, as with \(Q_{0}\), there is no convergence in the results.
Here, according to the analytical relations which have been obtained, we want to test the different results.
We first recall relation (46). Since, for an accelerating universe, \(q_{0}<0\), we have \((q_{0}+1)<1\) and \((q_{0}+1)^{2}<1\). Hence, according to equation (46),
\[(Q_{0}-1)\leq\Delta \tag{68}\]
This condition indicates that for those studies that reported \(\Delta\simeq 10^{-4}\), the condition \((Q_{0}-1)<10^{-4}\) should be satisfied. This means that \(Q_{0}\) should be very close to 1. It also points out an important issue: observational constraints on \(Q_{0}\) should be obtained with high accuracy and sensitivity, so that errors of order \(10^{-4}\) can be resolved. Although many studies have reported best-fit values for \(Q_{0}\) close to 1, none of them has reported a best-fit value less than 1.01. Among all the values reported for \(Q_{0}\), the closest to 1 is that of [25], \(Q_{0}=1.01^{+0.08}_{-0.021}\). Considering this value and the Planck 2018 result [24] for \(q_{0}\), \(q_{0}=-0.527\pm 0.01\), the relation gives \(\Delta=0.04^{+0.385}_{-0.068}\). Although this result is consistent with some previous studies, the other parameters should also be evaluated to test its validity. Considering the Planck 2018 result [24] \(H_{0}=(67.4\pm 0.5)\), the current values of the parameters of the model for \(\Delta=0.04^{+0.385}_{-0.068}\) are reconstructed as
\[\Omega_{m0}=.255,\ \ \Omega_{\Lambda_{0}}=0.573,\] \[\Omega_{DE_{0}}=0.07,\ \ \omega_{DE0}\simeq-8.36\] \[\beta\simeq 1.73 \tag{69}\]
As can be seen, the values obtained for \(\Omega_{DE0}\) and \(\omega_{DE0}\) are not consistent with those expected from robust observational data, indicating that the value obtained for \(\Delta\) is not the desired one. This is due to the fact that the parameter \(\Delta\) should be very close to 0, which requires the value of \(Q_{0}\) to be very close to 1. If we check the model for the upper bound of \(\Delta\) reported by [37], \(\Delta=1.4\times 10^{-4}\), and consider the Planck results \(q_{0}=-0.527\pm 0.01\) and \(H_{0}=(67.4\pm 0.5)\), the results would be
\[\Omega_{m0}\simeq 0.3151,\ \ \Omega_{\Lambda_{0}}\simeq 0.6842,\] \[\Omega_{DE_{0}}\simeq 0.6830,\ \ \omega_{DE0}\simeq-1.0014,\] \[\beta\simeq 1.00122 \tag{70}\]
The values obtained for this case are in excellent agreement with observations. In particular, the value \(\Omega_{m0}\simeq 0.3151\) is very close to that obtained by [24], \(\Omega_{m0}\simeq 0.315\pm 0.007\). According to equation (46), for \(\Delta=1.4\times 10^{-4}\) and \(q_{0}=-0.527\pm 0.01\), the current value of \(Q_{0}\) is
\[(Q_{0}\simeq 1.00002) \tag{71}\]
If we test the model for \(\Delta=5.912\times 10^{-4}\), which was obtained by [38], the following results are obtained
\[\Omega_{m0}\simeq 0.3144,\ \ \Omega_{\Lambda_{0}}\simeq 0.6830,\] \[\Omega_{DE_{0}}\simeq 0.6776,\ \ \omega_{DE0}\simeq-1.0080,\] \[\beta\simeq 1.00122 \tag{72}\]
These results are very close to the results of [38], who found
\[\Omega_{m0}=0.311^{+0.006}_{-0.005},\ \omega_{DE0}\simeq-1.0001,\ \beta=0.920^{+0.042}_{-0.042} \tag{73}\]
For this case, the current value of \(Q_{0}\) would be
\[(Q_{0}\simeq 1.00012) \tag{74}\]
However, as mentioned, the measurement of \(Q_{0}\) has not yet been performed with such high accuracy. The fact that values closer to 1 are not reported may be because, so far, little attention has been paid to the importance of very small changes, of order \(10^{-4}\), in this parameter and to how much they can affect the results, so observational measurements and numerical analyses of \(Q_{0}\) are not done with this accuracy. As mentioned, another reason could be the lack of the high-quality data required for constraining this parameter. This study therefore indicates that the parameter \(Q\) is an important parameter in Barrow cosmology, whose current value is very close to 1. In Table 1, we list the current values of the parameters of the model for different values of \(Q_{0}\), fixing the Planck 2018 results for \(q_{0}\) and \(H_{0}\). Due to the high sensitivity of the model to small changes of this parameter, even very small errors in its estimation lead to very different results. As we have seen, very different results were obtained for the two very close values \(Q_{0}\simeq 1.01\) and \(Q_{0}\simeq 1.001\).
### Solution for case \(\Omega_{k}\neq 0\)
For the general case (\(\Delta\neq 0,\Omega_{k}\neq 0\)), from equation (41), \(\Omega_{k}\) can be obtained as
\[\Omega_{k}=-1+\frac{-2\,\Delta\,q+Q\pm\Big{(}Q^{2}-4\,\Delta\,qQ-4\,\Delta\,q^ {2}\Big{)}^{\frac{1}{2}}}{2(1+\Delta)}, \tag{75}\]
By substituting \(\Omega_{k}\) from equation (75) into equation (32), \(\Omega_{\Lambda}\) is obtained in terms of \((H,q,Q,\Delta)\). In equation (42), we obtained \(\Delta\) in terms of the cosmographic parameters \((q,Q)\) and \(\Omega_{k}\); however, if we want to obtain \(\Delta\) only in terms of the cosmographic parameters and eliminate \(\Omega_{k}\), we need the next cosmographic parameter, \(X=\frac{\ddddot{a}}{aH^{4}}\). To this end, by taking the derivative of both sides of equation (41) and using equation (30), we can obtain
\[-4\Delta q^{3}+(-8\Delta\Omega_{k}-6\Delta)q^{2}+\Big{(}(-4\Delta -4)\Omega_{k}^{2}\] \[+(5Q-6\Delta-4)\Omega_{k}+(2Q-2)\Delta+3Q\Big{)}q\] \[+(1+\Omega_{k})(X+2Q+12+2\Delta Q)=0 \tag{76}\]
Where we have also used the following relation
\[\frac{dQ}{dx}=X+(2+3q)Q \tag{77}\]
From equations (76) and (75), the Barrow parameter \(\Delta\) can be obtained in terms of the cosmographic parameters \((q,Q,X)\) as
\[\Delta=-\frac{1}{4(q+Q)^{3}}\Big{(}2AQ+\frac{1}{2}AqQ+2Aq+6A+\frac {1}{2}A\] \[+(qQ^{2}+XQ+12Q)(q+Q)^{3}\Big{)} \tag{78}\]
Where
\[A= -4qQ-q^{2}Q-4q^{2}-12q-qX\] \[+\Big{(}144q^{2}+12XQq^{2}+56q^{3}Q+2q^{3}QX\] \[+q^{4}Q^{2}+8q^{4}Q+q^{2}X^{2}+144q^{2}Q+96q^{3}\] \[+20q^{2}Q^{2}+12q^{3}Q^{2}+24Xq^{2}+8Xq^{3}+16q^{4}\] \[+8qQ^{3}+4Q^{4}+4q^{2}Q^{3}+4qXQ^{2}+48qQ^{2}\Big{)}^{\frac{1}{2}} \tag{79}\]
We have used Maple software to derive the above relations. By substituting \(\Delta\) from equation (78) into equation (75), the parameter \(\Omega_{k}\) is also obtained in terms of the cosmographic parameters \((q,Q,X)\). Then, from equation (32), the parameter \(\Omega_{\Lambda}\) is obtained in terms of \((H,q,Q,X)\), and finally, from equation (31), the parameter \(\Omega_{m}\) is also obtained in terms of \((H,q,Q,X)\). This means that it is possible to reconstruct the parameters of the Barrow model, \((\Delta,\Omega_{k},\Omega_{m},\Omega_{\Lambda})\), in terms of the directly measurable parameters \((H,q,Q,X)\).
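The same elimination can be reproduced with open-source tools. The SymPy sketch below (our own construction, assuming only equations (30), (40), (41) and (77)) re-derives relation (42) symbolically and then solves the pair of constraints numerically for an illustrative set \((q,Q,X)\), rather than through the closed forms (78)-(79):

```python
import sympy as sp

q, Q, X, D, Ok = sp.symbols('q Q X Delta Omega_k', real=True)

# Eq. (41), written as F = 0
F = Ok - D * q**2 * Ok / (1 + Ok) + (1 + 2*q + q**2 + Ok) * D - (Q - 1)

# Solving F = 0 for Delta reproduces Eq. (42)
Delta_42 = sp.solve(F, D)[0]
print(sp.simplify(Delta_42 - (Q - 1 - Ok) * (1 + Ok) / (1 + Ok + q) ** 2))  # -> 0

# Flow along x = ln(a): Eqs. (30), (40) and (77)
dq, dQ, dOk = -Q + q + 2*q**2, X + (2 + 3*q)*Q, 2*q*Ok

# Second constraint dF/dx = 0 (Delta is a constant of the model)
dF = sp.diff(F, q)*dq + sp.diff(F, Q)*dQ + sp.diff(F, Ok)*dOk

# Numerical elimination of Omega_k for illustrative cosmographic values
vals = {q: -0.527, Q: 1.0002, X: -0.43}
print(sp.nsolve((F.subs(vals), dF.subs(vals)), (D, Ok), (0.01, -0.001)))  # (Delta, Omega_k)
```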
## III Summary and remarks
In this paper we reconstructed the parameters of the Friedmann equation modified by the gravity-thermodynamics approach with Barrow entropy in terms of the first four cosmographic parameters \((H,q,Q,X)\). Barrow entropy arises from the fact that the black-hole surface may be deformed due to quantum-gravitational effects, and its deviation from the Bekenstein-Hawking entropy is quantified by \(\Delta\). Due to the significant role of the parameter \(\Delta\) in describing the late-time universe and early-time behavior, most recent studies on Barrow entropy have focused on imposing constraints on the exponent \(\Delta\). We found that for a universe filled
\begin{table}
\begin{tabular}{c c c c c c c} \(Q_{0}\) & \(\Delta\) & \(\Omega_{m0}\) & \(\Omega_{\Lambda 0}\) & \(\Omega_{DE0}\) & \(\omega_{DE0}\) & \(\beta\) \\ \hline \hline
1 & 0 & 0.3155 & 0.6846 & 0.6846 & -1 & 1 \\ \hline
1.00001 & 0.00004 & 0.3152 & 0.6845 & 0.6841 & -1.0006 & 1.00054 \\ \hline
1.0001 & 0.00044 & 0.31467 & 0.6834 & 0.6793 & -1.0066 & 1.0055 \\ \hline
1.001 & 0.0044 & 0.30876 & 0.67259 & 0.63118 & -1.0725 & 1.05645 \\ \hline
1.005 & 0.0223 & 0.2838 & 0.6263 & 0.4014 & -1.6098 & 1.31601 \\ \hline
1.008 & 0.0357 & 0.2664 & 0.5938 & 0.2103 & -2.9622 & 1.5517 \\ \hline
1.01 & 0.0446 & 0.2553 & 0.5730 & 0.0726 & -8.36038 & 1.7318 \\ \hline
1.1 & 0.446 & 0.0372 & 0.11499 & -57.13 & -0.0831 & 242.77 \\ \end{tabular}
\end{table}
Table 1: Reconstructing parameters for different values of \(Q_{0}\) by fixing \(q_{0}=-0.527\)
with matter and a cosmological constant \(\Lambda\), this parameter can be obtained in terms of \(\Omega_{k}\) and the two important cosmographic parameters \((q,Q)\); for a flat universe, or neglecting \(\Omega_{k}\), it can be obtained only in terms of \((q,Q)\). The connection between the parameter \(\Delta\), which reflects the quantum-gravitational effects, and the cosmographic parameters, which reflect the curvature effects, has interesting features which can be summarized as follows.
The first interesting feature is that this relation indicates that it is possible to constrain the parameter \(\Delta\) by constraining the current values of \((q,Q)\). This is interesting because the cosmographic parameters \((q_{0},Q_{0})\) can be measured directly without the need for any background cosmological model; in addition, one of the main purposes of observational cosmology is to obtain precise measurements of the first cosmographic parameters \((H_{0},q_{0},Q_{0})\), which provide a crucial test for cosmological models [21],[23]. Hence, a vast majority of cosmological observational studies have focused on constraining these parameters. As a robust observational data set, Planck 2018 [24], providing a major source of information on many cosmological parameters, indicates that \(q_{0}=-0.527\pm 0.01\). Also, [23] proved that the third derivative of the scale factor, \(Q\), is of great importance for observational cosmology; hence some studies have focused specifically on constraining the cosmic jerk parameter \(Q\) in different cosmological models.
The authors of [25] have put a constraint on the parameter \(Q_{0}\) in \(f(R)\) gravity and find \(Q_{0}=1.01^{+0.08}_{-0.021}\). Considering this value and the value \(q_{0}=-0.527\pm 0.01\) obtained from the Planck 2018 results, the deviation parameter is \(\Delta=0.04^{+0.385}_{-0.068}\). We showed that although this result is consistent with some previous studies, the other important parameters should also be evaluated to test its validity. We reconstructed the important parameters \(\Omega_{DE_{0}}\), \(\omega_{DE0}\) and \(\beta\) according to the approach explored by [10] and tested the model with these parameters. For \(\Delta=0.04^{+0.385}_{-0.068}\), considering the Planck 2018 result [24] \(H_{0}=(67.4\pm 0.5)\), the current values of the parameters of the model are reconstructed as \(\Omega_{DE_{0}}=0.07\) and \(\omega_{DE0}\simeq-8.36\), which are not consistent with those expected from robust observational data, indicating that the value obtained for \(\Delta\) is not the desired one. This is due to the fact that the parameter \(\Delta\) should be very close to \(0\), which requires the value of \(Q_{0}\) to be very close to \(1\). We checked the model for the upper bound of \(\Delta\) reported by [37], \(\Delta=1.4\times 10^{-4}\), considering the Planck results \(q_{0}=-0.527\pm 0.01\) and \(H_{0}=(67.4\pm 0.5)\), and obtained
\[\Omega_{m0}\simeq 0.315,\ \Omega_{\Lambda_{0}}\simeq 0.684,\Omega_{DE_{0}}\simeq 0.683,\omega_{DE0}\simeq-1.0014,\]
Also, \(\beta\) was found to be \(\beta\simeq 1.00122\). The values obtained for this case are in excellent agreement with observations. In particular, the value \(\Omega_{m0}\simeq 0.3151\) is very close to that obtained by [24], \(\Omega_{m0}\simeq 0.315\pm 0.007\). According to equation (46), for \(\Delta=1.4\times 10^{-4}\) and \(q_{0}=-0.527\pm 0.01\), the current value of \(Q_{0}\) is \(Q_{0}\simeq 1.00002\). We also tested the model for \(\Delta=5.912\times 10^{-4}\), which was obtained by [38], and found
\[\Omega_{m0}\simeq 0.314,\ \ \Omega_{\Lambda_{0}}\simeq 0.683,\ \ \Omega_{DE_{0}} \simeq 0.677,\ \ \omega_{DE0}\simeq-1.008.\]
For this case, the parameter \(\beta\) was found to be \(\beta\simeq 1.00122\). These results are very close to the results of [38], who found
\[\Omega_{m0}=0.311^{+0.006}_{-0.005},\ \omega_{DE0}\simeq-1.0001,\ \beta=0.920^{+0.042}_{-0.042}\]
For this case, the current value of \(Q_{0}\) would be \(Q_{0}\simeq 1.00012\). However, so far, the measurement of \(Q_{0}\) has not been performed with such high accuracy. The fact that values closer to \(1\) are not reported may be because, so far, little attention has been paid to the importance of very small changes, of order \(10^{-4}\), in this parameter and to how much they can affect the results, so observational measurements and numerical analyses of \(Q_{0}\) are not done with this accuracy. Another reason could be the lack of high-quality data at high redshift, so that there is no strong constraint on this parameter.
We can therefore conclude that this study reveals the importance of the parameter \(Q\) in Barrow cosmology, and it indicates that the current value \(Q_{0}\) is very close to \(1\). Our theoretical approach indicates that the works done by [37] and [38], which found that the Barrow exponent should be of order \(10^{-4}\), are in good agreement with observations. In other words, for this bound, the values reconstructed for the parameters of the model are in excellent agreement with observations. This bound indicates that, considering the Planck results [24], for \(q_{0}=-0.527\pm 0.01\) and \(\Omega_{k}=0.001\pm 0.002\), the relation (42) predicts a value of \(Q_{0}\) which deviates only slightly from \(1\), as \((Q_{0}-1)<0.001\). This can be a relatively good target and criterion for theoretical and observational measurements of the parameter \(Q\), so we can hope that improved high-redshift data in the future will support it.
## IV Data availability statement
The manuscript has no associated data or the data will not be deposited
|
2309.12516 | Effective versus Floquet theory for the Kerr parametric oscillator | Parametric gates and processes engineered from the perspective of the static
effective Hamiltonian of a driven system are central to quantum technology.
However, the perturbative expansions used to derive static effective models may
not be able to efficiently capture all the relevant physics of the original
system. In this work, we investigate the conditions for the validity of the
usual low-order static effective Hamiltonian used to describe a Kerr oscillator
under a squeezing drive. This system is of fundamental and technological
interest. In particular, it has been used to stabilize Schr\"odinger cat
states, which have applications for quantum computing. We compare the states
and energies of the effective static Hamiltonian with the exact Floquet states
and quasi-energies of the driven system and determine the parameter regime
where the two descriptions agree. Our work brings to light the physics that is
left out by ordinary static effective treatments and that can be explored by
state-of-the-art experiments. | Ignacio GarcΓa-Mata, Rodrigo G. CortiΓ±as, Xu Xiao, Jorge ChΓ‘vez-Carlos, Victor S. Batista, Lea F. Santos, Diego A. Wisniacki | 2023-09-21T22:43:45Z | http://arxiv.org/abs/2309.12516v4 | # Effective versus Floquet theory for the Kerr parametric oscillator
###### Abstract
Parametric gates and processes engineered from the perspective of the static effective Hamiltonian of a driven system are central to quantum technology. However, the perturbative expansions used to derive static effective models may not be able to efficiently capture all the relevant physics of the original system. In this work, we investigate the conditions for the validity of the usual low-order static effective Hamiltonian used to describe a Kerr oscillator under a squeezing drive. In this work, we exploit the opportunity provided by this system, which is sufficiently simple to be built and operated in the lab, sufficiently complex to require modern calculation techniques for the exploration of nontrivial parameter regimes, and sufficiently rich to be of fundamental and technological interest. We compare the low-order static effective states and energies with the exact Floquet states and quasi-energies and determine the parameter regime where the descriptions agree. Our work brings to light the physics that is left out by ordinary static effective treatments and that can be explored by state-of-the-art experiments.
## 1 Introduction
Driven systems can present unexpected behaviors oftentimes without a static analog. A typical example is the Kapitza pendulum, where a rapidly driven rigid pendulum can stabilize against gravity by developing a minimum of potential energy when pointing upward [1, 2]. In the quantum regime, this static effective potential was proposed as a way to generate an error-protected qubit [3] and the electronic analog of this mechanical system was recently named "Kapitzonium" [4]. Another example is the Paul trap [5, 6], which uses time-dependent electric fields to trap charged particles and therefore bypasses Earnshaw's theorem, which states that a charge distribution cannot be stabilized in a stationary equilibrium configuration via its electrostatic interactions. By changing the electric field faster than the escape rate of the particles, an average confining force can be created. This idea is at the basis of some atomic clocks and trapped-ion quantum technologies [7, 8].
Usually the perspective of a static effective model, which provides analytical expressions and simplifications to the time-dependent problem, is used to study driven systems. Many useful methods to derive static effective Hamiltonians have been developed in the last century [9, 10, 11, 12, 13] (see [3] for a classification), but they are not without limitations. In particular, in highly nonlinear systems, micromotion caused by the kicks of the rapidly changing potential can feed back into the dynamics and produce sizable effects associated with nonlinear mixing and amplification [14]. These effects ultimately lead to regimes beyond the scope of the static effective Hamiltonian. Alternatively, when the system is periodically driven, it can be studied numerically via Floquet theory. The advantage is that this method can be carried
out with a minimal amount of approximations, yet it mostly remains a numerical treatment.
In this work, we discuss to what extent the ordinary static effective theory and the numerical Floquet treatment agree. We direct our attention to a dynamically rich system that is central to ongoing investigations: the Kerr parametric oscillator. It consists of a nonlinear oscillator subjected to a squeezing drive and boasts a storied history. It has served as an exemplary instance of a parametric oscillator [15, 16, 17], an amplifier [18, 19, 20], a tool for stabilizing quantum information [21, 22, 23, 24, 25], a framework for quantum optical tuning [26], and more recently, as a platform to study excited state phase transitions (ESQPTs) [27] and tunneling [28]. The model has also been experimentally implemented with superconducting circuits, being employed to generate Schrödinger's cat states [29, 30], detect ESQPTs (aka "spectral kissing") [31], and analyze tunneling [32, 33]. For the parameter values used, these experiments were well described by low-order static effective Hamiltonians. However, steady experimental progress and access to broader ranges of parameters prompt a deeper investigation beyond this regime. In this case, the full Floquet numerical analysis becomes useful.
Here, we quantify the proximity of the driven and effective descriptions of the Kerr parametric oscillator by comparing the Floquet quasienergies and Floquet states of the time-dependent Hamiltonian with the eigenenergies and eigenstates of the corresponding low-order effective Hamiltonian. This is done via numerical simulations covering experimentally relevant parameter regimes. We find that deviations between the two approaches become evident when the nonlinear effects are significant. Understanding the parameter region where the effective low-order Hamiltonian that has been used to describe recent experiments is valid and reliable is of paramount importance for the design of quantum technologies, including qubits, gates, and circuits. Analyzing when low-order expansions may fail can point in the direction of new physics and new possibilities for applications in quantum information processing.
The paper is structured as follows. In Sec. 2, we introduce the model system, the time-dependent Hamiltonian and its corresponding static effective approximation. In Sec. 3, we compare the eigenvalues and eigenvectors of the effective Hamiltonian with the quasienergies and Floquet states of the time-dependent system. This allows us to determine the experimentally accessible parameter regimes where the two descriptions agree and where they are expected to diverge. Conclusions are presented in Sec. 4 and additional results in the appendices.
## 2 Model system: Kerr parametric oscillator
The Hamiltonian of the driven superconducting nonlinear oscillator - the Kerr parametric oscillator - that we analyze [30, 31, 32] is analogous to a one-dimensional asymmetric driven quantum pendulum. In terms of dimensionless coordinates, the Hamiltonian is given by
\[\begin{split}\frac{\hat{H}(t)}{\hbar}=\omega_{o}\hat{a}^{\dagger} \hat{a}&+\frac{g_{3}}{3}(\hat{a}+\hat{a}^{\dagger})^{3}+\frac{g_ {4}}{4}(\hat{a}+\hat{a}^{\dagger})^{4}\\ &-i\Omega_{d}(\hat{a}-\hat{a}^{\dagger})\cos\omega_{d}t,\end{split} \tag{1}\]
where \(\omega_{o}\) is the bare frequency of the oscillator, \(\hat{a}\) (\(\hat{a}^{\dagger}\)) is the bosonic annihilation (creation) operator satisfying the commutation relation \([\hat{a},\hat{a}^{\dagger}]=1\), the third and fourth order nonlinearities of the potential energy are \(g_{3},g_{4}\ll\omega_{o}\), and the drive is characterized by its strength \(\Omega_{d}\) and its frequency \(\omega_{d}\). The dimensionless coordinate, \(\hat{a}=\frac{1}{2}(\frac{\hat{X}}{X_{\text{zps}}}+i\frac{\hat{P}}{P_{\text{zps}}})\), is written in terms of the position-like \(\hat{X}\) and the momentum-like \(\hat{P}\) coordinates, the zero-point spread in the position-like degree of freedom of the oscillator \(X_{\text{zps}}=\sqrt{\hbar/(2M\omega_{o})}\), and the zero-point spread in momentum \(P_{\text{zps}}=\hbar/(2X_{\text{zps}})\), where \(M\) is the effective mass of the oscillator (for the mapping to a superconducting quantum system see [34, 35]). Following the experiments [30, 31, 32], we consider the condition to create parametric squeezing, that is, the system is driven at twice its bare oscillation frequency, \(\omega_{d}\approx 2\omega_{o}\). Under this condition, the system undergoes a period-doubling bifurcation [26, 21, 15, 16, 36], whose static effective description corresponds to a double-well system [15, 16, 17, 27].
In Ref. [31], it was shown that the experiment carried out with the time-dependent Hamiltonian in Eq. (1) could be described by a low-order static effective Hamiltonian. To compute the effective Hamiltonian, two transformations must be ap
plied to Eq. (1). The first one is a displacement into the linear response of the oscillator, where the effective amplitude of the displacement is \(\Pi\approx\frac{2\Omega_{d}}{3\omega_{d}}\). The second corresponds to a change into a rotating frame induced by \(\frac{\omega_{d}}{2}\hat{a}^{\dagger}\hat{a}\), transforming Eq. (1) to
\[\begin{split}\frac{\hat{\mathcal{H}}(t)}{\hbar}=&- \delta\hat{a}^{\dagger}\hat{a}+\sum_{m=3}^{4}\frac{g_{m}}{m}(\hat{a}e^{-i \omega_{d}t/2}+\\ &\hat{a}^{\dagger}e^{i\omega_{d}t/2}+\Pi e^{-i\omega_{d}t}+\Pi^{ *}e^{i\omega_{d}t})^{m},\end{split} \tag{2}\]
where \(\delta=\frac{\omega_{d}}{2}-\omega_{o}\ll\omega_{d}\). This choice of frame brings the period doubling dynamics into focus (note that the periodicity of eq.(2) is two times the periodicity of the drive. See [15] section B.I for a discussion).
The propagator over a period \(T=4\pi/\omega_{d}\) induced by eq.(2) is given by
\[\hat{U}(T)=\mathcal{T}e^{\frac{-i}{\hbar}\int_{0}^{T}\hat{\mathcal{H}}(t)dt} =e^{\frac{i}{\hbar}\hat{S}(T)}e^{-\frac{i}{\hbar}\hat{H}_{\text{eff}}T}e^{- \frac{i}{\hbar}\hat{S}(0)}, \tag{3}\]
where \(\mathcal{T}\) is the time ordering operator, which does not appear on the right-hand side of the equation. This defines the operator \(\hat{S}(t)=\hat{S}(t+T_{d})=\hat{S}(t+T)\), with \(T_{d}=2\pi/\omega_{d}\) the drive period, whose purpose is to generate a canonical transformation to a frame where the evolution is ruled by a time-independent Hamiltonian \(\hat{H}_{\text{eff}}\). This provides an important simplification. As discussed in detail in the next section, to compare the eigenstates of the effective Hamiltonian with the Floquet states, we have to take into account the unitary transformation,
\[\hat{U}_{S}=e^{-\frac{i}{\hbar}\hat{S}}, \tag{4}\]
where \(\hat{S}=\hat{S}(0)=\hat{S}(T)\).
So far no approximation has been made, but one can compute \(\hat{H}_{\text{eff}}\) and \(\hat{S}\) perturbatively to arbitrary order using a mutually recursive formula developed in [3]. Following the approach in [14, 3], one uses the zero-point spread of the oscillator, \(X_{\text{zps}}\), as the perturbation parameter and reaches the second-order effective Hamiltonian [31]
\[\frac{\hat{H}_{\text{eff}}^{(2)}}{\hbar}=\epsilon_{2}(\hat{a}^{\dagger 2}+\hat{a }^{2})-K\hat{a}^{\dagger 2}\hat{a}^{2}. \tag{5}\]
This Hamiltonian conserves parity, \([\hat{H}_{\text{eff}}^{(2)},e^{i\pi\hat{a}^{\dagger}\hat{a}}]=0\). The driving condition for the period-doubling bifurcation can now be better specified as \(\omega_{d}=2\omega_{a}\), where \(\omega_{a}\approx\omega_{a}^{(2)}=\omega_{o}+3g_{4}-20g_{3}^{2}/3\omega_{o}+(6 g_{4}+9g_{3}^{2}/\omega_{o})(2\Omega_{d}/3\omega_{o})^{2}\) includes the Lamb and Stark shift to the bare frequency \(\omega_{o}\). In Eq. (5), the Kerr nonlinearity to leading order is \(K\approx K^{(2)}=-\frac{3g_{4}}{2}+\frac{10g_{3}^{2}}{3\omega_{o}}\) and \(\epsilon_{2}\approx\epsilon_{2}^{(2)}=g_{3}\frac{2\Omega_{d}}{3\omega_{o}}\). In these expressions, all nonlinear corrections are kept to order \(X_{\text{zps}}^{2}\). For the parameter ranges that have so far been considered experimentally [32, 31], the Hamiltonian \(\hat{H}_{\text{eff}}^{(2)}\) matches spectroscopically the experimental results for the driven system up to the tenth excited state.
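The spectrum of Eq. (5) is straightforward to obtain in a truncated Fock basis. The short NumPy sketch below is our own; it assumes the convention that, for \(K>0\), the in-well cat-like states correspond to the largest eigenvalues of Eq. (5), so that excitation energies are measured downward from the degenerate cat manifold. It reproduces the near-degenerate pairs below the ESQPT:

```python
import numpy as np

def h_eff2_excitations(eps2_over_K, n_fock=200):
    """Excitation energies (in units of K) of Eq. (5) for K = 1."""
    a = np.diag(np.sqrt(np.arange(1, n_fock)), k=1)   # annihilation operator, Fock basis
    ad = a.T
    H = eps2_over_K * (ad @ ad + a @ a) - ad @ ad @ a @ a
    E = np.linalg.eigvalsh(H)[::-1]                   # descending: cat doublet at the top
    return E[0] - E                                   # measured from the cat manifold

E_tilde = h_eff2_excitations(13.0)   # eps2/K = 13, as in Fig. 1
print(E_tilde[:8])                   # pairs of nearly degenerate levels ("spectral kissing")
# The ESQPT sits near (eps2/K)^2 = 169, with roughly 2*eps2/(pi*K) ~ 8 states below it
```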
In what follows, we study the limits of applicability of the low-order static effective theory for a wide range of parameters of experimental interest. Various combinations of the native parameters \(g_{3}\) and \(g_{4}\) in Eq. (1) can yield the same emergent effective Kerr nonlinearity \(K\), yet the effective Hamiltonian in Eq. (5) is not equally good for all choices. This analysis is important for applications in quantum computing, where the Kerr nonlinearity is required to be much larger than the decoherence rate of the driven system [36, 37, 38]. The ability to engineer the correct static effective spectrum for the highly excited states of the driven nonlinear oscillator is also paramount for applications in quantum computing based on Kerr-cat qubits, because the autonomous error-protecting properties of the qubit encoded in the bifurcated oscillator depend strongly on the dynamics of the excited states [32, 39, 40, 31].
Going beyond Eq. (5), we write down in appendix A the fourth-order static effective Hamiltonian. In appendix B, we extend the comparison between the time-dependent Hamiltonian in Eq. (2) and the effective model in Eq. (5), that is developed in the main text, to include the fourth- and sixth-order effective Hamiltonians. The results are qualitatively similar.
## 3 Spectra of the Floquet and the Effective Hamiltonian
We start this section by comparing the solutions of the time-dependent system described by the periodically driven Hamiltonian \(\hat{\mathcal{H}}(t)\) in Eq. (2) with those of the effective Hamiltonian \(\hat{H}_{\text{eff}}^{(2)}\) in Eq. (5). The time-dependent oscillator in Eq. (2) relies on five parameters: \(\omega_{o},\;g_{3},\;g_{4},\;\omega_{d}\) and \(\Omega_{d}\). From now on, we set \(\omega_{o}=\hbar=1\).
We denote the eigenstates and eigenvalues of \(\hat{H}_{\rm eff}^{(2)}\) as \(|\psi_{k}\rangle\) and \(E_{k}\), and its ground-state energy as \(E_{0}\). The driven system is described by the Floquet states [41],
\[|\Psi_{k}(t)\rangle=e^{-i\varepsilon_{k}t}|\phi_{k}(t)\rangle,\]
where \(|\phi_{k}(t)\rangle=|\phi_{k}(t+T)\rangle\) are the Floquet modes and \(\varepsilon_{k}\) are the Floquet quasienergies. The Floquet modes are the eigenstates of the time-evolution operator (Floquet operator) after two periods of the drive, at \(T=2T_{d}\),
\[\hat{U}(T)|\phi_{k}\rangle=e^{-i\varepsilon_{k}T}|\phi_{k}\rangle,\]
and the quasienergies are obtained by diagonalizing \(\hat{U}(T)\). Notice that at \(t=nT\), where \(n\) is a natural number, the Floquet states and the Floquet modes coincide, \(|\Psi_{k}\rangle=|\phi_{k}\rangle\). The quasienergies are uniquely defined modulo \(\hbar\omega_{d}/2=2\pi\hbar/T\), that is,
\[\varepsilon_{k}\in[0,\hbar\omega_{d}/2].\]
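In practice, the Floquet modes and quasienergies are obtained by building the one-period propagator numerically. A minimal sketch of this procedure for Eq. (2) is given below; the parameters are illustrative (close to those of Fig. 1, with the drive strength chosen so that \(\epsilon_{2}/K\approx 13\)), and a small Fock basis with a coarse time slicing is used for brevity, whereas the results in this paper use a basis size of 200:

```python
import numpy as np
from scipy.linalg import expm

# Units hbar = omega_o = 1; nonlinearities as in Fig. 1
w_o, g3, g4, Omega_d, N = 1.0, 7.5e-4, 1.27e-7, 0.044, 60
w_a = w_o + 3*g4 - 20*g3**2/(3*w_o) + (6*g4 + 9*g3**2/w_o)*(2*Omega_d/(3*w_o))**2
w_d = 2.0 * w_a                               # drive at twice the shifted frequency
delta = w_d / 2.0 - w_o
Pi = 2.0 * Omega_d / (3.0 * w_d)

a = np.diag(np.sqrt(np.arange(1, N)), k=1).astype(complex)
ad = a.conj().T

def H(t):
    """Displaced rotating-frame Hamiltonian of Eq. (2) (Pi is real here)."""
    x = a*np.exp(-1j*w_d*t/2) + ad*np.exp(1j*w_d*t/2) + 2*Pi*np.cos(w_d*t)*np.eye(N)
    return -delta*(ad @ a) + (g3/3)*np.linalg.matrix_power(x, 3) \
           + (g4/4)*np.linalg.matrix_power(x, 4)

# Propagator over T = 2 T_d = 4 pi / w_d by time slicing (midpoint rule)
T, n_steps = 4*np.pi/w_d, 2000
dt = T / n_steps
U = np.eye(N, dtype=complex)
for k in range(n_steps):
    U = expm(-1j * H((k + 0.5) * dt) * dt) @ U

lam, modes = np.linalg.eig(U)                   # Floquet modes are the columns of `modes`
quasi = np.mod(-np.angle(lam) / T, w_d / 2.0)   # quasienergies folded to [0, w_d/2)
```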
### Quasienergies vs Eigenvalues
In the top panel of Fig. 1, we show the excitation energies \(E-E_{0}\) (black lines) of \(\hat{H}_{\rm eff}^{(2)}\) as a function of the control parameter \(\epsilon_{2}/K\) and compare them with the quasienergies (orange lines) computed with respect to the ground-state quasienergy, \(\varepsilon-\varepsilon_{0}\). To be able to compare them properly, we plot \(\tilde{E}\equiv(E-E_{0})/K\) and \(\tilde{\varepsilon}=[(\varepsilon-\varepsilon_{0})\mod(\omega_{d}/2)]/K\). For this figure, we chose the value of the nonlinearities to be \(g_{3}/\omega_{o}=0.00075\), \(g_{4}/\omega_{o}=1.27\times 10^{-7}\), and therefore \(K/\omega_{o}=1.685\times 10^{-6}\).
To determine the ground-state quasienergy \(\varepsilon_{0}\), we use the method developed in [42] to follow the eigenstates with definite localization properties. We first determine the ground state \(|\Psi_{0}(\epsilon_{2}/K=0)\rangle\) of the Hamiltonian in Eq. (2) with \(\Omega_{d}=0\). We then turn on the drive with a small increment \(\delta\Omega_{d}\) that results in an increase \(\delta(\epsilon_{2}/K)\) of the control parameter. To determine the new Floquet ground state, \(|\Psi_{0}(\delta(\epsilon_{2}/K))\rangle\), we search for the Floquet state that has the largest overlap with \(|\Psi_{0}(\epsilon_{2}/K=0)\rangle\). The procedure is repeated each time the drive amplitude is increased so that the Floquet ground state is recursively updated.
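This recursive tracking can be expressed compactly. In the sketch below (our own helper; `floquet_modes(amp)` is assumed to return the columns of the Floquet modes at a given drive amplitude, e.g. from a diagonalization of \(\hat{U}(T)\) as in the previous sketch), a chosen Floquet state is followed by maximal overlap as the drive is ramped:

```python
import numpy as np

def track_state(floquet_modes, amplitudes, psi_start):
    """Adiabatically follow a Floquet state through a ramp of the drive amplitude."""
    psi, indices = psi_start, []
    for amp in amplitudes:
        modes = floquet_modes(amp)                  # (N, N) array, columns are modes
        overlaps = np.abs(modes.conj().T @ psi) ** 2
        j = int(np.argmax(overlaps))                # best continuation of the state
        psi = modes[:, j]
        indices.append(j)
    return indices, psi
```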
For the small values of \(g_{3}\) and \(g_{4}\) considered in Fig. 1, \(\hat{H}_{\rm eff}^{(2)}\) accurately describes the spectrum of \(\hat{\mathcal{H}}(t)\), both Hamiltonians exhibiting an ESQPT. Specifically, the spacing between pairs of adjacent levels \(E_{k}\) belonging to different parity sectors gets exponentially small at the energy of the ESQPT [27], a phenomenon that became dubbed as "spectral kissing" in [31]. The critical energy of the ESQPT grows as \(\epsilon_{2}/K\) increases. The coalescence of the energies is accompanied by the clustering of the energy levels and the consequent divergence of the density of states at the ESQPT critical energy [27].
In Figs. 1(i)-(xii), we show the Wigner functions for the Floquet states. They are visually indistinguishable from the corresponding eigenstates of \(H_{\rm eff}^{(2)}\) in this regime (see below).
The behavior of the states below and above
Figure 1: Top panel: (Rescaled) Excitation energies \(\tilde{E}=(E-E_{0})/K\) (black lines) of \(\hat{H}_{\rm eff}^{(2)}\) in Eq. (5) and quasienergies \(\tilde{\varepsilon}=[(\varepsilon-\varepsilon_{0})\mod(\omega_{d}/2)]/K\) (orange) of \(\hat{\mathcal{H}}(t)\) in Eq. (2) as a function of the control parameter \(\epsilon_{2}/K\). Panels (i)-(vi): Wigner functions of the Floquet states corresponding to the quasienergies at the cross symbols x in the top panel for \(\epsilon_{2}/K=13\) and \(\tilde{\varepsilon}_{k}\) equal to (i) 0, (ii) 51.25, (iii) 97.9, (iv) 170.1, (v) 251.74, and (vi) 364.76. States (i)-(iii) lie below the ESQPT and state (iv) is at the ESQPT critical energy. Panels (vii)-(xii): Wigner functions of the Floquet states corresponding to the circle symbols \(\circ\) in the top panel for \(\tilde{\varepsilon}_{6}\) and \(\epsilon_{2}/K\) equal to (vii) 2.19, (viii) 4.38, (ix) 8.76, (x) 10.96, (xi) 19.73, and (xii) 35.1. All panels: Basis size \(N=200\), \(g_{3}/\omega_{o}=0.00075\), \(g_{4}/\omega_{o}=1.27\times 10^{-7}\), and \(K/\omega_{o}=1.685\times 10^{-6}\). In the Wigner functions, red corresponds to positive values and blue corresponds to negative values.
the ESQPT is markedly different. Below the ESQPT, the system is represented by a double well and the states exhibit a cat-like structure that can be used in quantum information processing [43]. This structure is revealed in Figs. 1(i)-(iii) by the Wigner functions of three Floquet states of \(\hat{\mathcal{H}}(t)\) that show two ellipses (one on the extreme left and the other on the extreme right of each panel) located at the minima of the effective double well, and in between them, centered at \(q=0\), we see the interference fringes. The state shown in Fig. 1(iv) is at the ESQPT critical energy, being highly localized at the Fock state \(|0\rangle\), which translates into a state concentrated at the origin of the classical phase space, where there is an unstable hyperbolic point [27]. The Wigner function of this state in Fig. 1(iv) is visibly localized along the separatrix (classical stable and unstable manifolds). Above the ESQPT, as the energy increases, the Floquet states approach the eigenstates of a harmonic potential, as seen in Figs. 1(v)-(vi).
In Figs. 1(vii)-(xii), we show the Wigner functions of the Floquet state with quasienergy \(\tilde{\varepsilon}_{5}\) for different values of \(\epsilon_{2}/K\). They are marked with circles in the top panel of Fig. 1. For the value of \(\epsilon_{2}/K\) in Fig. 1(vii), this Floquet state is above the ESQPT and resembles an eigenstate of a harmonic potential, but for the values of \(\epsilon_{2}/K\) in Figs. 1(x)-(xii), the state is below the ESQPT and turns into a cat-like state.
In Fig. 2, we continue our comparison of the quasienergies (orange lines) of the driven Hamiltonian in Eq. (2) with the eigenenergies (black lines) of the effective Hamiltonian \(\hat{H}^{(2)}_{\rm eff}\) in Eq. (5). In Fig. 2(a) of the top row, the choice of \(g_{3}\) and \(g_{4}\) leads to \(K<0\), while Figs. 2(b)-(d) of the top row have \(K>0\). Notice that Fig. 2(b) of the top row is the same as Fig. 1 and exhibits almost perfect coincidence between \(\tilde{E}\) and \(\tilde{\varepsilon}\). This panel is shown here again to ease the comparison with the other cases.
Spectral kissing is still observed in Fig. 2(a) of the top row, but the agreement between \(\tilde{E}\) and \(\tilde{\varepsilon}\) deteriorates for larger values of \(\epsilon_{2}/K\). In Figs. 2(c)-(d) of the top row, in addition to the reduced agreement between the spectrum of \(\hat{H}^{(2)}_{\rm eff}\) (black lines) and part of the quasienergies of \(\hat{\mathcal{H}}(t)\) (orange points), we also show with gray points the quasienergies that have no relationship with \(\tilde{E}\). These are the quasienergies that are folded to the first Brillouin zone. Since the quasienergies are defined modulo \(\hbar\omega_{d}/2\), after folding them to the first Brillouin zone, they get clustered, as seen in the gray points. This issue becomes more evident in Figs. 2(c)-(d) of the top row, where \(g_{3}\) is larger than in Fig. 2(b). To compare \(\tilde{\varepsilon}\) and \(\tilde{E}\) visually, we distinguish the quasienergies of the states associated with a small mean number of photons \(\langle\phi|\hat{a}^{\dagger}\hat{a}|\phi\rangle\) (orange), smaller than some maximum value that is set differently depending on \(g_{3}\) and \(g_{4}\), from those with a large mean number of photons (gray).
In addition to the spectral kissing and the divergence of the density of states, the presence of an ESQPT is also characterized by a discontinuity in some observables at the ESQPT energy [27; 44]. This can be seen in the bottom row of Fig. 2, where we show the average number of photons of each excited state as a function of the rescaled excitation energy (quasienergy) \(\tilde{E}(\tilde{\varepsilon})\) for \(\epsilon_{2}/K=20\), using the same nonlinearities as in the top row of Fig. 2. The sudden drop in the values of \(\langle\phi|\hat{a}^{\dagger}\hat{a}|\phi\rangle\) at the ESQPT critical energy is visible in Figs. 2(a)-(b) of the bottom row. This becomes less evident in Fig. 2(d) of the bottom row, reflecting the weaker agreement between eigenvalues and quasienergies in Fig. 2(d) of the top row.
### Floquet States vs Eigenstates
A deeper comparison between the driven system and its corresponding effective model can be achieved through the analysis of the structure of the Floquet states of \(\hat{\mathcal{H}}(t)\) and the eigenstates of \(\hat{H}_{\rm eff}^{(2)}\). Due to the importance of the cat-like states for quantum information processing, we concentrate our analysis on the states that lie below the ESQPT.
To quantify the proximity of an eigenstate \(|E_{k}\rangle\) of the effective Hamiltonian to a Floquet state, we use the inverse participation ratio (IPR), which is a measure of the level of delocalization of quantum states (see [45] and references therein). Here, we define the IPR of state \(|E_{k}\rangle\) as \(I_{k}=\sum_{j}|a_{j}|^{4}\), where \(a_{j}\) are the coefficients given by the expansion of the eigenstate in the basis of the Floquet modes, that is, \(|E_{k}\rangle=\sum_{j}a_{j}|\phi_{j}\rangle\). The IPR ranges from \(1/N\) for a completely delocalized state in the given basis (\(N\) is the number of basis states) to \(1\) for a state that is completely localized in a single basis state.
There is, however, one further aspect that needs to be taken into account. According to Eq. (3), the Floquet modes \(|\phi_{j}\rangle\) and the effective Hamiltonian eigenstates \(|E_{k}\rangle\) live in different reference frames separated by \(\hat{S}\). To be able to compare them in the same frame, we need to compute the IPR defined as
\[\mathcal{I}_{k}=\sum_{j}|\langle\phi_{j}|\hat{U}_{S}|E_{k}\rangle|^{4}. \tag{6}\]
In addition to \(\mathcal{I}_{k}\), the other quantity that we use as a figure of merit is the average
\[\overline{\mathcal{I}}=\frac{1}{n_{b}}\sum_{k=1}^{n_{b}}\mathcal{I}_{k} \tag{7}\]
where \(n_{b}\approx 2\frac{\epsilon_{2}}{\pi K}\) is the number of states of the effective Hamiltonian below the energy of the ESQPT (\(E_{k}\leq\epsilon_{2}^{2}/K\)) obtained in [36, 27].
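Given the Floquet modes, the transformation \(\hat{U}_{S}\) and the eigenvectors of \(\hat{H}_{\rm eff}^{(2)}\), Eqs. (6) and (7) reduce to a few lines of linear algebra, as in the sketch below (our own helpers; the eigenvector columns are assumed to be ordered so that the first \(n_{b}\) correspond to the states below the ESQPT):

```python
import numpy as np

def ipr(modes, U_S, eigvecs):
    """IPR of Eq. (6); `modes` and `eigvecs` hold |phi_j> and |E_k> as columns."""
    amps = modes.conj().T @ U_S @ eigvecs        # matrix of <phi_j| U_S |E_k>
    return np.sum(np.abs(amps) ** 4, axis=0)     # one value per eigenstate k

def mean_ipr_below_esqpt(modes, U_S, eigvecs, eps2_over_K):
    """Average IPR of Eq. (7) over the n_b ~ 2 eps2/(pi K) in-well states."""
    n_b = max(1, int(round(2.0 * eps2_over_K / np.pi)))
    return float(np.mean(ipr(modes, U_S, eigvecs)[:n_b]))
```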
In the left panel of Fig. 3, we show \(\mathcal{I}_{k}\) (colored lines) as a function of \(\epsilon_{2}/K\) for each eigenstate of \(\hat{H}_{\rm eff}^{(2)}\) with energy below the ESQPT. The parameters are the same as in Fig. 2(c). One sees that as \(\tilde{E}\) (black lines) and \(\tilde{\varepsilon}\) (orange points) move apart in Fig. 2(c) of the top row, \(\mathcal{I}\) decreases significantly in the left panel of Fig. 3. This separation between the spectrum of the driven system and that of the effective model is more visible for larger values of the control parameter \(\epsilon_{2}/K\) and for higher energies. In this region, higher orders of the effective Hamiltonian are needed to get better agreement.
In the two columns of panels in the middle of Fig. 3, we have a closer look at the structure of two selected states, which are marked with a circle (at \(\epsilon_{2}/K=10\)) and a square (at \(\epsilon_{2}/K=30\)) in the left panel of Fig. 3. The left column of panels is for Wigner function of the state at \(\epsilon_{2}/K=10\) and the right column for the state at \(\epsilon_{2}/K=30\). The top panels of the columns give the Wigner function for the Floquet state, the middle panels are for the Floquet state after the \(\hat{U}_{S}^{\dagger}\) transformation, and the bottom panels for the eigenstate of \(\hat{H}_{\rm eff}^{(2)}\). After the \(\hat{U}_{S}^{\dagger}\) transformation, the Floquet states approach the eigenstates
Figure 3: Left panel: Same as Fig. 2(c), but showing also the inverse participation ratio, \(\mathcal{I}\), of each eigenstate below the ESQPT. The circle and the square symbols mark the state at \(\epsilon_{2}/K=10\) and \(\epsilon_{2}/K=30\), respectively. Middle of the figure: Left (Right) column gives the Wigner functions for the state marked with a circle (square) on the left panel; top panel in the column corresponds to the Floquet state, middle panel in the column corresponds to the Floquet state after the \(\hat{U}_{S}\) transformation, and bottom panel in the column corresponds to the eigenstate of \(\hat{H}_{\rm eff}^{(2)}\). Right panel: Average IPR, \(\overline{\mathcal{I}}\), for the eigenstates below the ESQPT. All panels: Basis size \(N=200\), \(g_{3}/\omega_{o}=0.015\) and \(g_{4}/\omega_{o}=10^{-7}\).
of \(\hat{H}_{\text{eff}}^{(2)}\).
In the right panel of Fig. 3, we show the average \(\overline{\mathcal{I}}\) for the states below the ESQPT as a function of \(\epsilon_{2}/K\). For small \(\epsilon_{2}/K\), reflecting the left panel of Fig. 3, \(\overline{\mathcal{I}}\approx 1\). As \(\epsilon_{2}/K\) increases, the Floquet states and eigenstates drift apart, more excited states fall under the well (that is, \(n_{b}\) increases), and \(\overline{\mathcal{I}}\) gradually decreases.
To gain further insight into the structures of the states and their dependence on the nonlinearities, we perform in Fig. 4 a systematic study of the mean value of the IPR, defined in Eq. (7), as a function of \(g_{3}\) and \(g_{4}\). In the top panel of Fig. 4, \(\epsilon_{2}/K=10\), in the bottom panel, \(\epsilon_{2}/K=30\), and the thick white line in both panels marks the values of \(g_{3}\) and \(g_{4}\) that lead to \(K=0\). We observe that there is a yellow region around \(K=0\), where \(\mathcal{I}\approx 1\), which indicates that the eigenstates of \(\hat{H}_{\text{eff}}^{(2)}\) describe very well the Floquet states of the driven Hamiltonian in Eq. (2).
Comparing the top and bottom panels of Fig. 4, we see that the yellow region, indicating the excellent agreement between eigenstates and Floquet states, decreases as \(\epsilon_{2}/K\) increases. This is evident for small values of \(g_{3}\), where the yellow region is bounded by a power-law curve (dashed orange line) empirically found to be approximately \(g_{4}\sim g_{3}^{3/4}/(\epsilon_{2}/K)\), and for large values of \(g_{3}\), where the region of low values of \(\mathcal{I}\) (blue) grows to the left.
The cross symbols marked with the letters a, b, c, and d in Fig. 4 correspond to the values of \(g_{3}\) and \(g_{4}\) used, respectively, in Figs. 2(a), (b), (c), and (d). One sees that even though the average IPR at point a, \(\mathcal{I}\approx 1\), indicates significant localization for \(\epsilon_{2}/K=10\) in the top panel of Fig. 4, the same point shows delocalization (\(\mathcal{I}\approx 0\)) for \(\epsilon_{2}/K=30\) in the bottom panel. This behavior matches what is observed for the spectrum in Fig. 2(a) of the top row. The
correspondence with the top row of Fig. 2 holds for the other three points b, c, and d. Points c and d, in particular, have large values of the \(g_{3}\)-nonlinearity, where the agreement between eigenstates and Floquet states are expected to break down for large \(\epsilon_{2}/K\), as indeed verified in the bottom panel of Fig. 4.
The red lines in Fig. 4 indicate constant values of \(K/\omega_{o}\). They serve as reference for experimental designs, so that one can know for which values of \(g_{3}/\omega_{o}\) and \(g_{4}/\omega_{o}\), the constant \(K/\omega_{o}\) is inside a region of agreement between Floquet states and eigenstates.
In Appendix C, we show a figure equivalent to Fig. 4, but for \(g_{4}<0\). This is done because it can be important for current and future experiments. In this case, one cannot reach \(K=0\), but we verify that the results are very similar to those found for \(g_{4}>0\) in Fig. 4.
The message conveyed by Fig. 4 is rich and subtle. One could straightforwardly infer that inside the yellow region, where \(\bar{\mathcal{I}}\approx 1\), the static effective theory describes well the Floquet system. But this is true provided the transformation \(\hat{U}_{S}\) is taken into account. To better understand this point, we show in Fig. 5 the distance between \(\hat{U}_{S}\) and the identity operator \(\hat{\mathbb{I}}\), defined as
\[d(\hat{U}_{S},\hat{\mathbb{I}})=\frac{1}{2N}\|\hat{U}_{S}-\hat{\mathbb{I}}\|, \tag{8}\]
where \(\|\cdot\|\) is the trace norm.
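For concreteness, the distance in Eq. (8) is straightforward to evaluate once \(\hat{U}_{S}\) is represented as a matrix in a truncated basis of dimension \(N\); a minimal numpy sketch, using the fact that the trace norm is the sum of the singular values:

```
import numpy as np

def frame_distance(U_S: np.ndarray) -> float:
    """d(U_S, I) = ||U_S - I||_tr / (2N), as in Eq. (8)."""
    N = U_S.shape[0]
    delta = U_S - np.eye(N)
    # trace norm = sum of the singular values of (U_S - I)
    singular_values = np.linalg.svd(delta, compute_uv=False)
    return singular_values.sum() / (2 * N)
```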
In the regime of parameters where \(d(\hat{U}_{S},\hat{\mathbb{I}})\ll 1\) (Fig. 5) we find that all the static effective theory is contained within the effective Hamiltonian: the complicated frame transformation \(\hat{U}_{S}\) away from the (trivially displaced and rotating) lab frame is negligibly small. In this regime, a large overlap between states and quasistates (\(\bar{\mathcal{I}}\approx 1\), Fig. 4) is guaranteed.
A sizable \(d(\hat{U}_{S},\hat{\mathbb{I}})\) may still allow for a large overlap, provided the frame transformation modifying the eigenstates of the effective Hamiltonian is taken into account when comparing them to the Floquet states. The relevant states in the lab frame may look very different from the eigenstates of the static effective Hamiltonian. That is, Fig. 5 measures the distance between the reference frame in which Eq. (2) describes the system and the frame generated by \(\hat{S}\), where the static effective description is valid.
We then remark that Fig. 4 measures the accuracy of the full static effective theory while Fig. 5 measures the accuracy of the usual static effective Hamiltonian treatment, which assumes that eq.(2) and the static effective Hamiltonian describe the system in the same frame [30, 31, 32, 33, 29].
Finally, we observe that in the regime of sizable \(d(\hat{U}_{S},\hat{\mathbb{I}})\), the coupling to the environment is itself modified. Considering the Hamiltonian of the system plus its environment -- whether it is a measurement device, a coherent control pulse, or a heat reservoir -- it is evident that the operator \(\hat{U}_{S}\), which is iteratively constructed to produce a static effective description of the system alone, also forcefully modifies the coupling to the environment. This could lead to measurement infidelity, control anomalies, or exotic forms of nonlinear-driven dissipation [46]. Presently, this nonlinear-driven dissipation is under investigation as a potential reason for discrepancies observed between open quantum system experiments and theoretical models [47]. Our analysis provides a different viewpoint on this open problem.
## 4 Final remarks
We analyzed the conditions under which the time-dependent Hamiltonian describing a driven superconducting circuit can be approximated by low-order effective time-independent Hamiltonians. The main takeaway is the observation that the static effective theory goes beyond the static effective Hamiltonian treatment, and we introduced a metric to map the parameter space. Our focus was on the part of the spectrum below the ESQPT, where the states exhibit cat-like features that are relevant for quantum computing and quantum information science.
We found that there exists a well-defined region of values of the nonlinearity parameters \(g_{3}\) and \(g_{4}\), where the eigenvalues and the eigenstates of the effective Hamiltonian \(\hat{H}^{(2)}_{\rm eff}\) describe correctly the quasienergies and the Floquet states of the time-dependent system. However, in the limit of large values of \(g_{3}\) or \(g_{4}\), the correspondence breaks down. The phase diagram of the nonlinearities \(g_{3}\) and \(g_{4}\) that we provided for the analysis of coincidence between the effective and driven models has practical implications for the design of Kerr parametric oscillators and other parametric processes that tend to be overlooked.
An important conclusion of our study is that the effective Hamiltonian suffices for the comparison between the quasienergies of the driven system and the eigenvalues of the static effective description, but the comparison between states requires also the analysis of the unitary transformation \(\hat{U}_{S}\). The static effective theory includes both the effective Hamiltonian and \(\hat{U}_{S}\). Without the latter, one may infer the failure of a given order of the effective description for parameter values, where it may actually still hold.
We finish with a brief discussion about the analysis presented in the appendix B, where we showed that the agreement between the driven and effective model can hold for larger values of the nonlinearities if one increases the perturbation order. This raises the question of whether one could expect exact agreement between the driven and static descriptions for an infinite order. The answer is negative, because for large nonlinearities and strong drive, the driven system can develop chaos, as shown elsewhere [48], and in this case, there is no perturbation order that can lead to agreement with the static effective Hamiltonian, which is necessarily integrable. Chaos can produce the collapse of the ESQPT [49], while the static effective description is integrable by construction.
## Acknowledgments
This work was supported by the NSF CCI grant (Award Number 2124511). D.A.W and I.G.-M. received support from CONICET (Grant No. PIP 11220200100568CO), UBACyT (Grant No. 20020170100234BA) and ANCyPT (Grants No. PICT-2020-SERIEA-00740 and PICT-2020-SERIEA-01082). I.G.-M. received support from CNRS (France) through the International Research Project (IRP) "Complex Quantum Systems" (CoQSys).
## Appendix A Fourth-order effective Hamiltonian
At fourth order, the effective Hamiltonian includes a four-photon drive and the first non-squeezing drive term resulting in
\[\frac{\hat{H}^{(4)}_{\rm eff}}{\hbar}= - \Delta^{(4)}\hat{a}^{\dagger}\hat{a}-K^{(4)}\hat{a}^{\dagger 2} \hat{a}^{2}-\lambda^{(4)}\hat{a}^{\dagger 3}\hat{a}^{3} \tag{9}\] \[+ \epsilon^{4}_{(4)}\hat{a}^{\dagger 4}+\epsilon^{*}_{(4)}\hat{a}^ {4},\]
where
\[\Delta^{(4)}=\sum_{k=0,1,2}\Delta^{(4)}_{[k]}|\Pi|^{2k}, \tag{10}\] \[K^{(4)}=\sum_{k=0,1}K^{(4)}_{[k]}|\Pi|^{2k}, \tag{11}\]
and displayed below, are the analytical expressions for all the coefficients of Eq. (9):
\[-\Delta_{[0]}^{(4)} =\frac{9g_{4}^{2}}{\omega_{a}}+47\frac{g_{3}^{2}g_{4}}{\omega_{a}^{ 2}}-\frac{6269}{324}\frac{g_{3}^{4}}{\omega_{a}^{3}}\] \[-\Delta_{[1]}^{(4)} =\frac{54}{5}\frac{g_{4}^{2}}{\omega_{a}}+\frac{671}{10}\frac{g_ {3}^{2}g_{4}}{\omega_{a}^{2}}+\frac{113}{360}\frac{g_{3}^{4}}{\omega_{a}^{3}}\] \[-\Delta_{[2]}^{(4)} =-\frac{9}{2}\frac{g_{4}^{2}}{\omega_{a}}+\frac{15113}{600}\frac {g_{3}^{2}g_{4}}{\omega_{a}^{2}}-\frac{297947}{32400}\frac{g_{3}^{4}}{\omega_ {a}^{3}}\] \[-K_{[0]}^{(4)} =\frac{153}{16}\frac{g_{4}^{2}}{\omega_{a}}+\frac{225}{4}\frac{ g_{3}^{2}g_{4}}{\omega_{a}^{2}}+\frac{805}{36}\frac{g_{3}^{4}}{\omega_{a}^{3}}\] \[-K_{[1]}^{(4)} =\frac{27}{5}\frac{g_{4}^{2}}{\omega_{a}}+\frac{671}{20}\frac{g_ {3}^{2}g_{4}}{\omega_{a}^{2}}+\frac{113}{720}\frac{g_{3}^{4}}{\omega_{a}^{3}}\] \[-\lambda^{(4)} =\frac{17}{8}\frac{g_{4}^{2}}{\omega_{a}}+\frac{25}{2}\frac{g_{3} ^{2}g_{4}}{\omega_{a}^{2}}+\frac{805}{162}\frac{g_{3}^{4}}{\omega_{a}^{4}}\] \[\epsilon_{4}^{(4)} =\left(\frac{33}{8}\frac{g_{4}^{2}}{\omega_{a}}-\frac{101}{96} \frac{g_{3}^{2}g_{4}}{\omega_{a}}-\frac{2009}{1296}\frac{g_{3}^{4}}{\omega_{a }^{3}}\right)\Pi^{2}.\]
## Appendix B Convergence of the static effective theory
In the main text, we focused on the static effective description of the Kerr parametric oscillator to the first nontrivial order, which corresponds to the second-order effective Hamiltonian \(\hat{H}_{\rm eff}^{(2)}\). In this appendix, we evaluate the convergence of the high order static effective theory when applied to the Kerr parametric oscillator. This is done by comparing the fourth-order (see appendix A) and the sixth-order static effective Hamiltonians with the driven Hamiltonian in Eq. (2).
Motivated by experimental developments in the field of quantum circuits, two methods to go beyond this first-order theory in a systematic manner were derived [3, 14]. These approaches, useful to explain experimental data and numerical simulations [3, 14], require a symbolic computer program to carry out the analytical calculation, since the number of terms is far too great to write down by hand.
In Fig. 6, we show \(\mathcal{I}\) computed for the eigenstates of the static effective Hamiltonian \(\hat{H}_{\rm eff}^{(4)}\) (see Appendix A) in comparison with the Floquet states obtained from \(\mathcal{H}(t)\). The results are qualitatively similar to those seen for \(\hat{H}_{\rm eff}^{(2)}\) in Fig. 4, but there are quantitative differences. To illustrate these differences, we mark in Fig. 6 the place where, for small values of \(g_{3}\), we get \(\mathcal{I}\approx 0.5\) for the second-order static effective Hamiltonian \(\hat{H}_{\rm eff}^{(2)}\) (gray circles), the fourth-order \(\hat{H}_{\rm eff}^{(4)}\) (red squares), and the sixth-order \(\hat{H}_{\rm eff}^{(6)}\) (white triangles). We see that the region of
disagreement between Floquet states and eigenstates (blue region) decreases as the order increases.
In Fig. 7, we show the average IPR, \(\mathcal{I}\), as a function of \(g_{3}\), for a fixed value \(g_{4}=10^{-7}\) (note that for large \(g_{3}\), \(\mathcal{I}\) does not depend on \(g_{4}\)), for \(\hat{H}_{\mathrm{eff}}^{(2)}\), \(\hat{H}_{\mathrm{eff}}^{(4)}\), and \(\hat{H}_{\mathrm{eff}}^{(6)}\). The horizontal line marks the point (circle) in the curves where \(\mathcal{I}\approx 0.5\). One sees that the value of \(g_{3}\) for \(\mathcal{I}\approx 0.5\) gets displaced to the right, effectively enlarging the area where there is good agreement between the eigenstates and Floquet states below the ESQPT. However, the rate of convergence of this expansion is slow and comes at the cost of highly complex expressions.
## Appendix C Spectrum for negative \(g_{4}\)
For completeness and because it can be important for present and future experiments, we show in Fig. 8 the average IPR, equivalently to what was done in Fig. 4, but now for \(g_{4}<0\). In this case, one cannot reach \(K=0\), but the results are very similar to those found for \(g_{4}>0\).
|
2309.17309 | Polyglot Jet Finding | The evaluation of new computing languages for a large community, like HEP,
involves comparison of many aspects of the languages' behaviour, ecosystem and
interactions with other languages. In this paper we compare a number of
languages using a common, yet non-trivial, HEP algorithm: the \akt\ clustering
algorithm used for jet finding. We compare specifically the algorithm
implemented in Python (pure Python and accelerated with numpy and numba), and
Julia, with respect to the reference implementation in C++, from Fastjet. As
well as the speed of the implementation we describe the ergonomics of the
language for the coder, as well as the efforts required to achieve the best
performance, which can directly impact on code readability and sustainability. | Graeme Andrew Stewart, Philippe Gras, Benedikt Hegner, Atell Krasnopolski | 2023-09-29T15:08:21Z | http://arxiv.org/abs/2309.17309v3 | # Polyglot Jet Finding
###### Abstract
The evaluation of new computing languages for a large community, like HEP, involves comparison of many aspects of the languages' behaviour, ecosystem and interactions with other languages. In this paper we compare a number of languages using a common, yet non-trivial, HEP algorithm: the anti-\(k_{\mathrm{T}}\) clustering algorithm used for jet finding. We compare specifically the algorithm implemented in Python (pure Python and accelerated with numpy and numba), and Julia, with respect to the reference implementation in C++, from Fastjet. As well as the speed of the implementation we describe the ergonomics of the language for the coder, as well as the efforts required to achieve the best performance, which can directly impact on code readability and sustainability.
## 1 Introduction
High energy physics (HEP), as a discipline, has undergone at least two major shifts in language after the widespread adoption of Fortran in the 1960s [1]. The first was a significant shift from Fortran to C++, starting with the BaBar experiment, then gathering pace at the end of the Large Electron-Positron Collider (LEP) era, c. 2000, when the Large Hadron Collider (LHC) experiments adopted C++ more or less wholesale. The second shift happened with the gradual incorporation of Python into the language ecosystem of HEP, from about 2010.
In the first transition, Fortran was almost completely displaced by C++ in the HEP experiments; in the theory domain the evolution was more gradual and mixed, with Fortran and C++ still both used today. In the second, a different type of transition took place, where Python became more and more popular, but co-exists with C++. The C++ is largely used in performance critical areas, with Python finding traction when flexibility and rapid turn-around is needed, e.g., in configuration and steering. Python code is typically used to interface to higher performance C and C++ libraries, both generic (e.g., numpy) and specific HEP codes.
Although the field is, for reasons of stability and legacy, slow to move to new languages, there are some significant issues with the current language choices that make an exploration of alternatives worthwhile. For example, the interfaces between Python and C++ are a source of friction, both for passing data and error messages back and forth, as well as being obliged to switch languages and reimplement code on occasion, when moving from a prototype to production (assuming the developer actually has skills in both languages, which is not a given). This _two language problem_ has potentially been addressed in the _Julia_ programming language [2; 3], with promising prospects for HEP in particular [4; 5], as well as other
STEM1 areas [6]. Julia offers just-in-time compilation, giving an ergonomic experience much like Python, but with runtime speeds comparable to C and C++. C++ is also a notoriously tricky language to use, particularly related to memory handling [7], and HEP C++ codes are frequently riddled with code defects [8].
Footnote 1: Science, technology, engineering, and mathematics
Evaluation of the prospects for a language in any particular domain area should be done with a real problem from that domain, rather than any synthetic benchmark. In this paper we look at the problem of jet finding, or clustering, which is a use case from high energy physics used in calorimeter reconstruction. This is a good example as it is not trivial, but it is also not so complex that different implementations take too long to write.
The languages we examine here, along with links to the code used, are given in Table 1.
The evaluation itself can cover many aspects of a programming language and the experience of using it. Metrics such as runtime are easy to evaluate, but the ergonomics of using particular languages and the support offered by the language ecosystem for developers are also critical and we comment on these.
## 2 Anti-\(k_{\mathrm{T}}\) Jet Clustering Algorithm
### Algorithm
The anti-\(k_{\mathrm{T}}\) clustering algorithm [10; 11] is an infrared and collinear safe jet clustering algorithm, which is robust against soft fragmentation components. We use the FastJet implementation [12], which proceeds in the following way:
1. A radius parameter, \(R\), is defined (0.4 is typical at the LHC).
2. For each active pseudojet \(A\) (that is, an initial particle or a merged cluster): 1. Considering all other PseudoJets, \(B\), which are closer in geometric distance than \(R\), measure the minimum geometric distance: \[d=\min\bigg(\sqrt{\Delta\eta_{AB}^{2}+\Delta\phi_{AB}^{2}}\bigg),\] where \(\Delta\eta_{AB}\) and \(\Delta\phi_{AB}\) are the rapidity and azimuthal angle differences between \(A\) and \(B\). If there are no other pseudojets within \(R\), then \(d=R\) for pseudojet \(A\). 2. Define the anti-\(k_{\mathrm{T}}\) distance, \(d_{ij}\), as \(d_{ij}=d\min(k_{\mathrm{T},A}^{-2},k_{\mathrm{T},B}^{-2})\) where \(k_{\mathrm{T},\{A,B\}}\) is the transverse momentum of the pseudojet \(\{A,B\}\). If there is no neighbouring pseudojet, \(d_{ij}=d\,k_{\mathrm{T},A}^{-2}\).
3. Choose the pseudojet with the lowest \(d_{ij}\): 1. If this pseudojet has an active partner, \(B\), merge these two pseudojets to a new pseudojet. 2. If not, this jet is finalised and removed from the active list.
4. Repeat until no pseudojets remain active.
\begin{table}
\begin{tabular}{l|l}
**Language** & **Repository** \\ \hline
C++ (FastJet) & FastJet Website (release 3.4.1) \\
Python (Pure) & GitHub antikt-python \\
Python (Accelerated) & GitHub antikt-python \\
Julia & GitHub JetReconstruction.jl \\
\end{tabular}
\end{table} Table 1: Code repositories used in this paper. See [9] for exact commits and instructions.
Note that the definition of \(d_{ij}\), so-called anti-\(k_{\mathrm{T}}\), with a negative power favours merging jets with a high transverse momentum first, which provides stability against soft radiation, hence its popularity. (Considering a general metric distance of \(k_{\mathrm{T}}^{2p}\), \(p=-1\) is anti-\(k_{\mathrm{T}}\) merging, \(p=0\) is Cambridge/Aachen merging and \(p=1\) is inclusive \(k_{\mathrm{T}}\) [11].)
The algorithm itself has a nice mixture of parallelisation opportunities (pairwise matching of pseudojet candidates) and serial steps (finding the minimum values of \(d\) or \(d_{ij}\)), which is a good test of a non-naive algorithm's performance.
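To make the recipe above concrete, the plain \(N^{2}\) procedure can be written very compactly. The following pure-Python sketch is illustrative only (it is not one of the benchmarked implementations of Table 1); merge is a user-supplied helper, assumed to add the four-momenta of two pseudojets and recompute \(p_{\mathrm{T}}\), \(\eta\) and \(\phi\):

```
import math

def antikt_cluster(particles, R, merge):
    """Plain N^2 anti-kT clustering, following the steps listed above.

    particles: list of dicts with keys 'pt', 'eta', 'phi' (plus whatever the
    merge helper needs); merge(A, B) returns the combined pseudojet.
    """
    active = [dict(p) for p in particles]
    jets = []
    while active:
        best = None                                # (d_ij, index_A, index_B or None)
        for ia, A in enumerate(active):
            d, partner = R, None                   # no neighbour within R => d = R
            for ib, B in enumerate(active):
                if ib == ia:
                    continue
                dphi = math.pi - abs(math.pi - abs(A['phi'] - B['phi']))
                deta = A['eta'] - B['eta']
                delta = math.sqrt(deta * deta + dphi * dphi)
                if delta < d:
                    d, partner = delta, ib
            if partner is None:
                dij = d * A['pt'] ** -2
            else:
                dij = d * min(A['pt'] ** -2, active[partner]['pt'] ** -2)
            if best is None or dij < best[0]:
                best = (dij, ia, partner)
        _, ia, ib = best
        if ib is None:
            jets.append(active.pop(ia))            # no active partner: finalise the jet
        else:
            merged = merge(active[ia], active[ib])
            for idx in sorted((ia, ib), reverse=True):
                active.pop(idx)
            active.append(merged)
    return jets
```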
### Algorithm Implementations
We consider two different implementations of the algorithm described above, taken from FastJet [12; 13].
The first is a _plain implementation_ in which, at each step, all jets are considered as possible neighbours of each other. This algorithm has scaling that runs roughly as \(N^{2}\), where \(N\) is the number of initial particle hits (this is an improvement over the most naive scaling which would be \(N^{3}\)[10]). This implementation is fastest for \(\lesssim 30\) particles.
The second is a _tiled implementation_, in which the geometric space \((\eta,\phi)\) is split into tiles of size \(R\). In this way the number of possible neighbours of any particular jet is limited to the jet's tile and to its immediate neighbours, as illustrated in Figure 1. This strategy reduces the amount of work that needs to be done, at the expense of extra bookkeeping of which jets are in each tile. This implementation is fastest for p-p collisions at the LHC.
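The extra bookkeeping is essentially the mapping from \((\eta,\phi)\) to a tile and the enumeration of the 3\(\times\)3 block of neighbouring tiles; a minimal sketch follows (illustrative only, the tile conventions differ in detail from the FastJet and Julia implementations):

```
import math

def tile_index(eta, phi, eta_min, eta_max, R):
    """Map (eta, phi) to an (i_eta, i_phi) tile with edges of roughly R."""
    n_eta = max(1, int((eta_max - eta_min) / R))
    n_phi = max(3, int(2 * math.pi / R))
    i_eta = min(n_eta - 1, max(0, int((eta - eta_min) / R)))
    i_phi = int((phi % (2 * math.pi)) / R) % n_phi
    return i_eta, i_phi

def neighbour_tiles(i_eta, i_phi, n_eta, n_phi):
    """Yield the 3x3 block of tiles to scan; phi wraps around, eta is clipped."""
    for d_eta in (-1, 0, 1):
        j_eta = i_eta + d_eta
        if 0 <= j_eta < n_eta:
            for d_phi in (-1, 0, 1):
                yield j_eta, (i_phi + d_phi) % n_phi
```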
## 3 Code Implementation Ergonomics
The code versions used are linked to in Table 1. We highlight some specific observations in this section.
Figure 1: In the tiled algorithm implementation \((\eta,\phi)\) space is split into tiles of size \(R\). When a pseudojet needs to rescan for neighbours (red dot) only pseudojets in tiles within the distance \(R\) need to be considered, here shaded in light blue.
### C++, FastJet
The FastJet package [12; 13] is a well maintained code which is widely used in the HEP community. The code is of high quality, well scrutinised, and well tested. The general style of the code is more akin to C than C++, for reasons of minimising abstraction and increasing speed, although templates are used extensively (where any errors are not usually handled nicely by the compiler).
For the tiled implementation, a linked list structure is used, which requires pointers to pointers that are challenging to reason about for the programmer, as illustrated below.
```
// set up the initial nearest neighbour information
vector<Tile>::const_iterator tile;
for (tile = _tiles.begin(); tile != _tiles.end(); tile++) {
  // first do it on this tile
  for (jetA = tile->head; jetA != NULL; jetA = jetA->next) {
    for (jetB = tile->head; jetB != jetA; jetB = jetB->next) {
      double dist = _bj_dist(jetA, jetB);
      if (dist < jetA->NN_dist) {jetA->NN_dist = dist; jetA->NN = jetB;}
      if (dist < jetB->NN_dist) {jetB->NN_dist = dist; jetB->NN = jetA;}
    }
  }
  // then do it for the RH tiles
  for (Tile ** RTile = tile->RH_tiles; RTile != tile->end_tiles; RTile++) {
    for (jetA = tile->head; jetA != NULL; jetA = jetA->next) {
      for (jetB = (*RTile)->head; jetB != NULL; jetB = jetB->next) {
        double dist = _bj_dist(jetA, jetB);
        if (dist < jetA->NN_dist) {jetA->NN_dist = dist; jetA->NN = jetB;}
        if (dist < jetB->NN_dist) {jetB->NN_dist = dist; jetB->NN = jetA;}
      }
    }
  }
}
```
### Python
#### 3.2.1 Pure Python
Python is renowned for being a high productivity language, and the implementation of the jet finding algorithms is rather straightforward, with clear logic. The mutability of classes allows code to be shared between the different implementations. For example, an update scan for the basic case looks like this:
```
def scan_for_my_nearest_neighbours(jetA: PseudoJet, jets: list[PseudoJet], R2: float):
    "Retest all other jets against the target jet"
    jetA.info.nn = None
    jetA.info.nn_dist = R2
    for ijetB, jetB in enumerate(jets):
        if not jetB.info.active:
            continue
        if ijetB == jetA.info.id:
            continue
        dist = geometric_distance(jetA, jetB)
        if dist < jetA.info.nn_dist:
            jetA.info.nn_dist = dist
            jetA.info.nn = ijetB
    jetA.info.akt_dist = antikt_distance(jetA, jets[jetA.info.nn] if jetA.info.nn else None)
```
Where specifically info is a mix-in class for bookkeeping pseudojets.
While the code for the tiled implementation involves more bookkeeping, it also remains clear.
#### 3.2.2 Accelerated Python
Accelerated Python code, where both numba and numpy are employed, brings some added difficulty. Not all operations are easily expressed as numpy array calculations, particularly for dynamic arrays holding active and inactive jets. This necessitated the use of masks, which need to be tracked. In addition, numba jitted functions are very picky about the types that can be passed (at least without being explicitly _taught_ how to deal with them), so instead of a structure, functions are called with many individual array elements, leading to complicated call signatures. For example, the same function as above becomes:
```
@njit
def scan_for_my_nearest_neighbours(ijet: int, phi: npt.ArrayLike,
                                   rap: npt.ArrayLike, inv_pt2: npt.ArrayLike,
                                   dist: npt.ArrayLike, akt_dist: npt.ArrayLike,
                                   nn: npt.ArrayLike, mask: npt.ArrayLike,
                                   R2: float):
    "Retest all other jets against the target jet"
    # inv_pt2 and R2 are assumed to be passed in as well, since they are used below
    nn[ijet] = -1
    dist[ijet] = R2
    _dphi = np.pi - np.abs(np.pi - np.abs(phi - phi[ijet]))
    _drap = rap - rap[ijet]
    _dist = _dphi * _dphi + _drap * _drap
    _dist[ijet] = R2      # Avoid measuring the distance 0 to myself!
    _dist[mask] = 1e20    # Don't consider any masked jets
    iclosejet = _dist.argmin()
    dist[ijet] = _dist[iclosejet]
    if iclosejet == ijet:
        nn[ijet] = -1
        akt_dist[ijet] = dist[ijet] * inv_pt2[ijet]
    else:
        nn[ijet] = iclosejet
        akt_dist[ijet] = dist[ijet] * (inv_pt2[ijet] if inv_pt2[ijet] < inv_pt2[iclosejet]
                                       else inv_pt2[iclosejet])
        # As this function is called on new PseudoJets it's possible
        # that we are now the NN of our NN
        if dist[iclosejet] > dist[ijet]:
            dist[iclosejet] = dist[ijet]
            nn[iclosejet] = ijet
            akt_dist[iclosejet] = dist[iclosejet] * (inv_pt2[ijet] if inv_pt2[ijet] < inv_pt2[iclosejet]
                                                     else inv_pt2[iclosejet])
```
numba also has some surprising omissions from the numpy functions which it can JIT, e.g., array index travelling, that required explicit reimplementation.
### Julia
Julia is gaining in popularity because it is a language that is easy to use. We found numerous nice features that allow code to be clear, e.g., using the broadcast syntax for calculations on arrays is very compact:
```
kt2 = (JetReconstruction.pt.(objects) .^ 2) .^ p
```
Here .^ (raise to power) operates on each member of the pt value of the objects array.
Like the FastJet code, loops can be used without sacrificing speed, so the code checking for new nearest neighbours is
```
# Finds new nearest neighbour for pseudojet i
# and crosschecks distance for other pseudojets back to i
# Note that nndist, near_neighbour, eta and phi are *Vectors*
function update_nearest_neighbour_crosscheck!(nndist, near_neighbour,
        i::Int, from::Int, to::Int, eta, phi, R2)
    new_nndist = R2
    new_nn = i
    @inbounds @simd for j in from:to
        delta2 = dist(i, j, eta, phi)
        if delta2 < new_nndist
            new_nn = j
            new_nndist = delta2
        end
        if delta2 < nndist[j]
            nndist[j] = delta2
            near_neighbour[j] = i
        end
    end
    nndist[i] = new_nndist
    near_neighbour[i] = new_nn
end
```
Note that some optimisations are applied here as Julia _macros_, e.g., @simd, which we discuss below. In particular, we present here results for the tiled algorithm in Julia that are updated from those shown at the conference, obtained by applying the LoopVectorization package in a key area through the @turbo macro:
```
find_best(dij, n) = begin
    best = 1
```
## 4 Code Performance
The different implementations of the anti-\(k_{\mathrm{T}}\) algorithm were tested on the same benchmark machine, a 64 core AMD EPYC 7302 3.00GHz with 24GB RAM, running CentOS7. The software versions used were gcc 11.3.0, Python 3.11.4 (with numba 0.57.1 and numpy 1.24.4) and Julia 1.9.2. More details on how to reproduce the measurements are given in [9].
Reconstruction of 100 LHC-like pp events2 was run multiple times and the average reconstruction time per event is given in Table 2. These numbers are normalised to the FastJet tiled algorithm performance (which is 324 \(\mu\)s per event on the benchmark machine). Multiple repeats of the benchmark were done and jitter was observed to be extremely low, \(<1\%\), so is not given. In these measurements the time to read the events (in HepMC3 format) and the JIT time for Julia and numba is excluded.
Footnote 2: Hard QCD \(2\to 2\) processes generated with Pythia8 at 13TeV, with a minimum transverse momentum of 20 GeV.
We observe that the benchmark C++ FastJet code, with the tiled algorithm, is one of fastest implementations. The increase in performance for the tiled code, over the plain one, is significant with the events we used, confirming this is both an excellent algorithm and implementation for LHC p-p data.
As expected, the pure Python codes run very slowly in comparison. More surprisingly, the accelerated Python codes have quite poor performance as well. This is due to the fact that not all parts of the algorithm can be accelerated - bookkeeping operations still run in normal Python and become dominant in the overall runtime. This is particularly true of the tiled algorithm, which deliberately reduces the work to be done (which can be parallelised and accelerated) at the cost of more bookkeeping. This significantly hurts the accelerated implementation, which ends up slower than the basic accelerated implementation; it is not even faster than the pure Python tiled implementation code.
Our Julia code exceeds the performance of FastJet code. In the case of the tiled algorithm, as noted in Section 3.3, a _loop vectorisation_ optimisation was applied to the search across all \(d_{ij}\) to find the minimum value, which results in a 15% improved runtime on x86 architectures
\begin{table}
\begin{tabular}{l|c c}
**Implementation** & **Basic Algorithm** & **Tiled Algorithm** \\ \hline
C++ (FastJet) & 16.4 & 1.00 \\
Python (Pure) & 504 & 110 \\
Python (Accelerated) & 28.5 & 113 \\
Julia & 2.83 & 0.94 \\
\end{table}
Table 2: Relative run times for the reconstruction of 100 13TeV pp events, normalised to the time for FastJetβs tiled algorithm. Results are stable and reproducible on the benchmark machine at \(<1\%\).
cf. without this macro3. In the case of the basic algorithm the Julia code uses a structure of arrays layout, which the compiler can highly optimise; additional benefit is gained from macros like @simd, which allow the compiler to apply further optimisations, gaining an additional 5%.
Footnote 3: On Appleβs M2Pro chip, the advantage for Julia is more significant, with the final Julia code running \(\times 1.45\) faster than FastJet for the tiled implementation cases
There are some comments regarding these optimisations that should be made: the Julia compiler attempts to use SIMD instructions in any case; when using the @simd macro the developer is guaranteeing iterations are safe to reorder and to overlap, and that floating point operations can be reordered; @turbo also replaces some special functions with implementations that can be vectorised better, but may be of lower accuracy. Use of these macros may lead to different numerical results so must be carefully validated (in our case we have checked that they are safe). One advantage in Julia is that these macros, as well as @fastmath, can be used and validated on a case-by-case basis (cf. the C++ compiler options such as -O3 or -ftree-vectorize, which are applied per compilation unit, but more than likely are actually used globally). In addition, the JIT strategy of Julia and Python's numba will automatically target the binary architecture of the machine being used, avoiding portability issues that can hamper C++ compiled binaries on different microarchitectures.
## 5 Conclusions
We have implemented the anti-\(k_{\mathrm{T}}\) algorithm in a number of different languages and examined code ergonomics as well as run time performance. The benchmark C++ code from FastJet is well written, but the hardest to reason about in terms of correctness, due to the nature of the language. Python is excellent for code logic and flexibility, but has a very poor run time performance; accelerating with numpy and numba unfortunately takes much of this advantage away, yet still fails to achieve a competitive run time. Julia performs extremely well, with an excellent 'out of the box' run time. The Julia compiler is able to find significant speed-ups, and features like broadcast operators help to keep code clean and quick. Further, optimisations applied in Julia through the use of macros are extremely easy for the programmer to exploit, and result in Julia having the best performance of all the codes that we tested.
It should be noted that the optimisations found by the Julia compiler could also be applied to the FastJet code to close the gap. However, the authors' experience is that doing this in C++ is considerably more difficult.
Ergonomically, C++ is also the most difficult language to use, with no package manager, no built in profiler, and where templates and memory management remain tricky. The breadth of libraries in C++ is impressive, although managing dependencies is not easy. In Python the situation is far better, albeit that the package managers are not quite standardised (pip vs. conda/mamba). Profiling and debugging when accelerated code is used in Python (which is how Python is used in data intensive science) is not easy, but package support in Python is really excellent. In Julia the ecosystem is very well integrated, with a built in package manager and excellent reproducibility. Julia libraries are not as extensive as for C++ and Python, although the speed of development of new scientific libraries (which is Julia's target community) is picking up quickly and most areas are covered (see the discussion in Eschel et al. [5]). Debugging and profiling in Julia are very well integrated.
We conclude that expanding the use of Julia in high energy physics would be very worthwhile, given its excellent performance and ergonomics. |
2309.08229 | Automated Multi-Drugs Administration During Total Intravenous Anesthesia
Using Multi-Model Predictive Control | In this paper, a multi-model predictive control approach is used to automate
the co-administration of propofol and remifentanil from bispectral index
measurement during general anesthesia. To handle the parameter uncertainties in
the non-linear output function, multiple Extended Kalman Filters are used to
estimate the state of the system in parallel. The best model is chosen using a
model-matching criterion and used in a non-linear MPC to compute the next drug
rates. The method is compared with a conventional non-linear MPC approach and a
PID from the literature. The robustness of the controller is evaluated using
Monte-Carlo simulations on a wide population introducing uncertainties in the
models. Both simulation setup and controller codes are accessible in open
source for further use. Our preliminary results show the potential interest in
using a multi-model method to handle parameter uncertainties. | Bob Aubouin-Pairault, Mirko Fiacchini, Thao Dang | 2023-09-15T07:59:45Z | http://arxiv.org/abs/2309.08229v1 | # Automated Multi-Drugs Administration During Total Intravenous
###### Abstract
In this paper, a multi-model predictive control approach is used to automate the co-administration of propofol and remifentanil from bispectral index measurement during general anesthesia. To handle the parameter uncertainties in the non-linear output function, multiple Extended Kalman Filters are used to estimate the state of the system in parallel. The best model is chosen using a model-matching criterion and used in a non-linear MPC to compute the next drug rates. The method is compared with a conventional non-linear MPC approach and a PID from the literature. The robustness of the controller is evaluated using Monte-Carlo simulations on a wide population introducing uncertainties in the models. Both simulation setup and controller codes are accessible in open source for further use. Our preliminary results show the potential interest in using a multi-model method to handle parameter uncertainties.
_Keywords:_ Closed-loop Anesthesia, Drug Control, Extended Kalman Filter, Multi-Model, Model Predictive Control, Robustness.
## I Introduction
The main task of an anesthesiologist during general anesthesia is to monitor and regulate the administration of intravenous drugs to achieve the desired level of hypnosis and analgesia while maintaining stable physiological signals. With the advent of quick-acting intravenous drugs like propofol and remifentanil, and the use of EEG-based hypnotic indicators such as the bispectral index (BIS), researchers have been exploring the possibility of automating the drug delivery process [1].
The goal of developing a closed-loop method for administering anesthesia drugs is to improve the patient's state evolution and reduce the workload for anesthesiologists. So far, studies have demonstrated the benefits of using closed-loop control for anesthesia drugs [2, 3], but research is ongoing to identify the best and most reliable control method [4]. The task of automating drug dosage during general anesthesia is a complex and ongoing area of research that has been an objective for the control community over the last two decades. The high level of reliability required, along with the uncertain nature of the system, makes it a difficult task to design a controller. Numerous closed-loop control strategies have been proposed; see surveys [5] and [6] for instance. Due to the lack of reliable measurement of the analgesia level, most of the papers focus on the propofol-BIS SISO system, which is the kind of controller most widely clinically tested. However, the dosage paradigm during a real surgery is much more complex, as the anesthesiologist needs to take into account the synergic effect between remifentanil and propofol and the side effects of those drugs on the hemodynamic system. In this paper, the problem of designing a controller for the MISO system propofol-remifentanil to BIS is addressed.
Multi-model approaches to handle large parameter uncertainties have been well detailed in [7] and [8], and used for drug control in [9] to regulate mean arterial pressure and cardiac output in critical care subjects. To the authors' knowledge, this method has not been exploited for the control of the anesthesia process. However, analogous ideas to deal with the model parametric uncertainties for reducing the inter-patient variability for the SISO system propofol-BIS have been recently considered in [10].
The problem of control design for propofol and remifentanil rates given the BIS measurements has already been studied during the last decade. In [11] and [12] an EPSAC MPC has been designed using a linearized model of drug synergies, and simulations on a small set of patients has shown the superiority of this method compared to an approach with heuristic rules for the injection of remifentanil. In [13] a dual PID along with a heuristic-based approach has been clinically tested with good performance. Work [14] proposes a positive control law allowing real-time tuning of the propofol-remifentanil balance while ensuring stability. In [15], a Reinforcement Learning method has been used to address the challenge of the MISO system control design with simulation testing. The authors of [16] put forward a mid-range controller strategy that leverages the use of remifentanil for short-term and small-scale modulation of the bispectral index (BIS), while relying on propofol for longer-term interventions. This idea has been then formalized in [17] and [18] where an \(H_{\infty}\) and an MPC controller have been respectively tested with clinical trials and simulations. More recently in [19, 20], and [21] the authors have used the idea of fixing the ratio between drug flow rates to propose a PID and an MPC controller. To assess, the robustness of those last controllers, uncertainties have been introduced in the model and Monte-Carlo simulations have been performed.
In this study, a multi-model state estimation method
together with a predictive control strategy is proposed for co-administering propofol and remifentanil using only the BIS as a measured output (MISO system). The method is based on the use of a multi-model parallel implementation of Extended Kalman Filters, followed by a model selection algorithm, and finally, a non-linear MPC to determine the optimal control input based on the selected model. The novelty of this approach resides in a method that can address the uncertainties in the non-linear functions involved in the anesthesia model. This research is intended as a preliminary investigation to demonstrate the feasibility of the control strategy before addressing more complex MIMO systems, where the mean arterial pressure could be used as output for instance. The method is tested on the induction phase, which corresponds to the beginning of the anesthesia, when the patient falls asleep. From a control point of view, this is the most challenging part of the anesthesia process since the patient's drug response is the most uncertain. Compared to the recent literature on the topic, [19, 20] and [21], the method presented in this paper does not assume a fixed ratio between drug flow rates; the balance between the drugs is managed through the optimization process. The robustness of the method is tested using Monte-Carlo simulations and results are compared to those obtained by the PID controller presented in [19].
The rest of the paper is organized as follows. In Section 2 the standard drug models for anesthesia are recalled along with the associated uncertainties used in the simulations, then in Section 3 the control method is detailed. Finally, Section 4 presents the simulation setup and the associated results. Section 5 provides some concluding remarks.
## II Standard Anesthesia Model
Drug models involved in anesthesia dynamics modelling are usually composed of two parts: the Pharmacokinetic (PK) and the Pharmacodynamic (PD). The PK models describe the dynamics of drug concentrations in the patient's body whereas the PD ones represent the link between the drug concentrations and a given physiological effect.
### _Compartment Pharmacokinetic Model_
For pharmacokinetic (PK) models of both propofol and remifentanil, a common approach is to use a four-compartment model. This model divides the body into three physical compartments: blood, muscles, and fat, and a virtual effect site, as illustrated in Fig. 1. The compartments model results in a linear system represented by the following equations:
\[\begin{pmatrix}\dot{x}_{1}\\ \dot{x}_{2}\\ \dot{x}_{3}\\ \dot{x}_{4}\end{pmatrix}= \begin{pmatrix}-(k_{10}+k_{12}+k_{13})&k_{12}&k_{13}&0\\ k_{21}&-k_{21}&0&0\\ k_{31}&0&-k_{31}&0\\ k_{e}&0&0&-k_{e}\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\end{pmatrix}\] \[+\begin{pmatrix}\frac{1}{V_{1}}\\ 0\\ 0\end{pmatrix}u(t) \tag{1}\]
where \(x_{1}(t),x_{2}(t)\), \(x_{3}(t)\), and \(x_{4}(t)\) represent respectively the drug concentration in blood, muscle, fat, and effect site. The coefficients can be determined from the equation (2), except \(k_{e}\) which is not related to physical meaning.
\[\begin{split} k_{10}=\frac{Cl_{1}}{V_{1}},k_{12}=\frac{Cl_{2}}{ V_{1}},k_{13}=\frac{Cl_{3}}{V_{1}},\\ k_{21}=\frac{Cl_{2}}{V_{2}},k_{31}=\frac{Cl_{3}}{V_{3}}\end{split} \tag{2}\]
with \(V_{i}\) and \(Cl_{i}\) (\(i=1,2,3\)) respectively the volume and the clearance rates of each compartment which can be computed from a population-based model as in [22] and [23]. The input \(u(t)\) is the drug infusion rate. Next, the notation \(x_{p}\) and \(x_{r}\) for the states of the compartment model for propofol and remifentanil is used. Also, \(A_{p}\), \(B_{p}\), \(A_{r}\), and \(B_{r}\) are the state and input matrix of both drugs. Finally, both compartment models can be described by the decoupled system:
\[\begin{pmatrix}\dot{x}_{p}\\ \dot{x}_{r}\end{pmatrix}=\begin{pmatrix}A_{p}&0^{4\times 4}\\ 0^{4\times 4}&A_{r}\end{pmatrix}\begin{pmatrix}x_{p}\\ x_{r}\end{pmatrix}+\begin{pmatrix}B_{p}&0^{4\times 1}\\ 0^{4\times 1}&B_{r}\end{pmatrix}\begin{pmatrix}u_{p}\\ u_{r}\end{pmatrix}. \tag{3}\]
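For illustration, the model of Eqs. (1)-(3) can be assembled directly from the rate constants and discretized exactly under a zero-order hold; a minimal Python sketch follows (the volumes and clearances are placeholders to be filled from the population models):

```
import numpy as np
from scipy.linalg import expm

def pk_matrices(V1, V2, V3, Cl1, Cl2, Cl3, ke):
    """4-compartment PK model (blood, muscle, fat, effect site), Eqs. (1)-(2)."""
    k10, k12, k13 = Cl1 / V1, Cl2 / V1, Cl3 / V1
    k21, k31 = Cl2 / V2, Cl3 / V3
    A = np.array([[-(k10 + k12 + k13),  k12,  k13,  0.0],
                  [ k21,               -k21,  0.0,  0.0],
                  [ k31,                0.0, -k31,  0.0],
                  [ ke,                 0.0,  0.0, -ke ]])
    B = np.array([[1.0 / V1], [0.0], [0.0], [0.0]])
    return A, B

def discretize(A, B, Ts):
    """Exact zero-order-hold discretization: Ad = exp(A*Ts), Bd = A^-1 (Ad - I) B."""
    Ad = expm(A * Ts)
    Bd = np.linalg.solve(A, (Ad - np.eye(A.shape[0])) @ B)
    return Ad, Bd
```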
### _Pharmacodynamic Model_
The impact of drug concentration on the bispectral index (BIS) is typically modeled using a Hill function. Due to the synergic effect between propofol and remifentanil, the effect can be modeled as a response surface model [24]:
\[BIS(t)=E_{0}\left(1-\frac{U(t)^{\gamma}}{1+U(t)^{\gamma}}\right) \tag{4}\]
with \(E_{0}\) the initial BIS, \(\gamma\) the slope coefficient of the surface and \(U(t)\) the interaction term defined by:
\[U(t)=\frac{x_{p4}(t)}{C_{50p}}+\frac{x_{r4}(t)}{C_{50r}}. \tag{5}\]
In these equations, \(x_{p4}\) and \(x_{r4}\) are the propofol and remifentanil concentrations of the BIS effect-site, \(C_{50p}\) and \(C_{50r}\) are the propofol and remifentanil half-effect concentration for BIS (_i.e._ the concentrations to obtain half of the effect of the drugs).
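A direct transcription of Eqs. (4)-(5) into code (a short sketch; the parameter values used in the simulations are those of Table II):

```
def bis(x_p4, x_r4, C50p, C50r, gamma, E0):
    """Response-surface BIS model of Eqs. (4)-(5) from the two effect-site concentrations."""
    U = x_p4 / C50p + x_r4 / C50r
    return E0 * (1.0 - U ** gamma / (1.0 + U ** gamma))
```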
Finally, the fully discretized model subject to noise can be summarized by the following structure:
Fig. 1: Schemes of the PK compartments model
\[x(k+1) =Ax(k)+Bu(k) \tag{6}\] \[BIS(k) =h(x(k))+w(k)\]
where \(h\) is the non-linear output function from eq. (4)-(5) and \(w\) is the measurement noise.
### _Model Parameters and Uncertainties_
Several studies have been conducted in order to link the patient characteristics (age, height, weight, sex) to the PK parameters. For control purposes, the most widely accepted are the models developed in [25] for propofol and in [23] for remifentanil. To simulate uncertainties in our testing procedure, Monte-Carlo simulations are used with a log-normal distribution for each parameter. The standard deviations used are those given in the papers cited above, nominal values are available in Table I.
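A sketch of how this uncertainty can be injected in simulation (a hypothetical helper; the nominal values and standard deviations are those reported in the cited population studies):

```
import numpy as np

rng = np.random.default_rng()

def sample_pk_parameters(nominal: dict, log_sd: dict) -> dict:
    """Draw one virtual patient: each PK parameter is log-normally distributed
    around its nominal value with the published log-scale standard deviation."""
    return {name: value * np.exp(rng.normal(0.0, log_sd[name]))
            for name, value in nominal.items()}
```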
For the response surface model, the values from [26] were used, as outlined in Table II.
## III Multi-Model Control
As previously discussed, drug models are characterized by parameters that might vary significantly from patient to patient. It is then necessary to identify such parameters to improve the control performances, mostly when using controllers strongly relying on the model, as for model predictive control employed here. To address this issue, we propose in this section a multi-model approach. Given that the impact of PD variability uncertainty is more significant than PK variability uncertainty [27], the uncertain parameters of the PD system can be considered unknown, and are represented by the vector \(\theta=\begin{pmatrix}C_{50p}&C_{50r}&\gamma\end{pmatrix}\). A method is presented to estimate these parameters using data available in the first part of the surgical operation.
The multi-model approach consists of three parts, as depicted in Fig. 2. First, the states of the PK models are estimated in parallel using a set of EKFs, one for every realization of the parameter vector selected within a grid in the parameter space. The grid is designed to reasonably represent the variability of the parameter vector. Next, a parameter vector is chosen using a model-matching criterion and, finally, a non-linear Model Predictive Controller is used to compute the control input to apply.
### _Extended Kalman Filter_
In this section, the basics of the EKF are recalled. EKF is a state estimation method that relies on the linearization of a non-linear model. If we consider the model given in (6) with the non-linear function \(h\) parametrized by \(\theta\), the estimator using the parameter vector \(\theta_{i}\) is given by:
\[H_{i}(k) =\left.\frac{\partial h(x,\theta_{i})}{\partial x}\right|_{x=\hat {x}_{i}(k_{|k-1})}\] \[K_{i}(k) =P_{i}(k_{|k-1})H_{i}^{\top}(k)(H_{i}(k)P_{i}(k_{|k-1})H_{i}^{ \top}(k)+R_{2})^{-1}\] \[\hat{x}_{i}(k_{|k}) =\hat{x}_{i}(k_{|k-1})+K_{i}(k)(y(k)-h(\hat{x}_{i}(k_{|k-1}),\theta _{i}))\] \[P_{i}(k_{|k}) =P_{i}(k_{|k-1})-K_{i}(k)H_{i}(k)P_{i}(k_{|k-1})\] \[\hat{x}_{i}(k+1_{|k}) =A\hat{x}_{i}(k_{|k})+Bu(k)\] \[P_{i}(k+1_{|k}) =AP_{i}(k_{|k})A^{\top}+R_{1}\]
Here the notation \(X(k+1|k)\), respectively \(X(k|k)\) and \(X(k|k-1)\), represents the value of variable X computed at time step \(k+1\) based on the knowledge available at \(k\). The estimated state vector is \(\hat{x}\) and \(P\) is the covariance matrix. \(R_{1}\) and \(R_{2}\) are two constant matrices used to respectively characterize the process perturbations and the measurements noise.
Since negative concentrations are not allowed, a saturation is added to the state estimation expression after the measurement update:
\[\hat{x}(k_{|k})=\max(0,\,\hat{x}(k_{|k})).\]
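A compact sketch of one iteration of the estimator above (here h and h_jac are assumed helpers returning the surface-model output and its \(1\times n\) Jacobian with respect to the state):

```
import numpy as np

def ekf_step(x_pred, P_pred, y, u, A, B, R1, R2, h, h_jac):
    """One EKF measurement update + time update, with the positivity clamp."""
    H = h_jac(x_pred)                               # 1 x n Jacobian at the prediction
    S = H @ P_pred @ H.T + R2
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_upd = x_pred + (K * (y - h(x_pred))).ravel()
    x_upd = np.maximum(0.0, x_upd)                  # concentrations cannot be negative
    P_upd = P_pred - K @ H @ P_pred
    x_next = A @ x_upd + B @ u                      # time update with the applied drug rates
    P_next = A @ P_upd @ A.T + R1
    return x_upd, x_next, P_next
```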
### _Model Selection_
To choose a model to predict the state trajectory of the system in the future, the model-matching criterion proposed in [8] is used. In order to determine the model matching criterion \(J_{i}\) for the \(i^{th}\) estimator at time \(k\), the following method is used:
* Each Extended Kalman Filter (EKF) generates a state estimate, which is stored;
* A state trajectory \(x(l)\) for \(l\in\{k-N_{c},\ldots,k\}\) is formed by using Eq. (6) with the initial point set to the estimate previously saved at time stamp \(k-N_{c}\). \(N_{c}\) is the number of samples in the observation window;
* The resulting trajectory is then compared to the BIS measurement to obtain the prediction error \(\epsilon_{i}(l)=h(x(l),\theta_{i})-y(l)\);
* Finally, the criterion is computed using the following formula: \[J_{i}(k)=\alpha\epsilon_{i}^{2}(k)+\beta\sum_{l=0}^{N_{c}}e^{-\lambda l}\epsilon_{i}^{2}(k-l)\] (7) where \(\alpha\), \(\beta\), and \(\lambda\) are three positive constants used to tune the convergence rate.
Fig. 2: Scheme of the MMPC strategy
To circumvent potential instability in the model selection process, a threshold is employed to identify the optimal model. Specifically, the estimator with the smaller criterion is selected if the difference between its criterion and the criterion of the previously chosen estimator exceeds a pre-determined threshold.
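A sketch of the criterion of Eq. (7) and of the thresholded switching rule (arrays are ordered so that index 0 is the current sample and index \(l\) is \(l\) steps in the past; bis_pred is the output replayed through Eq. (6) for one candidate model):

```
import numpy as np

def model_criterion(bis_meas, bis_pred, alpha, beta, lam):
    """J_i(k) of Eq. (7): weighted current error + exponentially discounted window."""
    eps = bis_pred - bis_meas
    weights = np.exp(-lam * np.arange(len(eps)))
    return alpha * eps[0] ** 2 + beta * np.sum(weights * eps ** 2)

def select_model(criteria, previous_best, threshold):
    """Switch models only if the best criterion beats the current one by the threshold."""
    candidate = int(np.argmin(criteria))
    if criteria[previous_best] - criteria[candidate] > threshold:
        return candidate
    return previous_best
```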
### _Model Predictive Control_
Model Predictive Control is an advanced control method that uses online optimization to obtain the optimal control input in the presence of constraints on the state and the control input [28]. In this paper, a non-linear MPC parametrized by the parameter vector \(\theta\) is used. The cost of the optimization problem is given by:
\[\begin{split} J=&\sum_{i=1}^{N}(y_{ref}(k)-h(x(k+i ),\theta))^{2}\\ &+\sum_{i=1}^{N_{u}}(u(k+i))^{T}R(u(k+i))^{2}.\end{split} \tag{8}\]
The reference signal to be followed is denoted \(y_{ref}\), while the associated control input values are subject to a cost matrix \(R\), which enables the user to modulate the balance between propofol and remifentanil. \(N\) and \(N_{u}\) are respectively the prediction and the control horizon. As specified in [19], the maximum infusion rate of propofol is \(6.67mg/s\) and that of remifentanil is \(16.67\mu g/s\). Thus, an optimization problem with constraints given by the system dynamics and the bounds on the inputs is obtained. Note that a quartic cost is utilized to achieve a better trade-off between undershooting and rapidity.
In order to ensure the convergence of the system at the desired BIS target despite the presence of uncertainties and disturbances, an integrator is added to the MPC internal reference after the induction phase (after 2 minutes in practice):
\[y_{ref}(k+1)=y_{ref}(k)+k_{i}(BIS_{target}-BIS(k))\]
where \(k_{i}\) is a constant used to tune the convergence speed.
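A condensed sketch of the resulting optimization, written with CasADi's Opti interface (the released implementation also relies on CASADI with IPOPT, as noted in Section IV). Here bis_output is an assumed CasADi-compatible version of Eqs. (4)-(5) parametrized by \(\theta\), and the prediction and control horizons are merged for brevity:

```
import casadi as ca

def solve_nmpc(x0, y_ref, Ad, Bd, theta, N, R_cost, u_max, bis_output):
    """One receding-horizon solve: quartic tracking cost + weighted input effort."""
    opti = ca.Opti()
    nx, nu = Ad.shape[0], Bd.shape[1]
    X = opti.variable(nx, N + 1)
    U = opti.variable(nu, N)
    opti.subject_to(X[:, 0] == x0)
    cost = 0
    for i in range(N):
        opti.subject_to(X[:, i + 1] == ca.mtimes(Ad, X[:, i]) + ca.mtimes(Bd, U[:, i]))
        opti.subject_to(U[:, i] >= 0)          # drug rates are non-negative
        opti.subject_to(U[:, i] <= u_max)      # propofol / remifentanil rate caps
        cost += (y_ref - bis_output(X[:, i + 1], theta)) ** 4
        cost += ca.mtimes([U[:, i].T, R_cost, U[:, i]])
    opti.minimize(cost)
    opti.solver('ipopt')
    sol = opti.solve()
    return sol.value(U[:, 0])                  # apply only the first input
```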
## IV Numerical Simulations
In this section, three controllers are tested on the same induction scenario to obtain a fair comparison. First, the PID from [19], then an EKF estimator associated with a non-linear MPC (NMPC) as described in the sections III-A and III-C with the nominal parameter vector \(\theta\) and, finally, the proposed multi-model predictive controller (MMPC) previously detailed.
### _Controller tuning_
The PID was tuned using a particle swarm optimization over a randomly sampled patient cohort drawn from the distribution detailed in Table II as in [19]. The ratio between propofol and remifentanil rates was set to \(2\) as this is a common value. Additionally, the system's sampling time was set to \(1\) second, following the protocol outlined in the same paper.
For the NMPC and the MMPC, the parameters were tuned by hand on the same patient table. The EKF was tuned first to ensure the convergence of the state estimation. Secondly, the prediction horizon of the MPC has been set to \(1\) minute with a sampling time of \(2\) seconds, then the cost matrix \(R\) was set to get the ratio between drugs and the response time analogous to those of PID. Finally, for the NMPC, the integrator constant \(k_{i}\) was selected to achieve a balance between fast convergence of the system and reduced oscillations.
For the MMPC, the distribution of the parameters \(\theta_{i}\) was tuned to obtain good performance for each real model, even for the more sensitive ones, that are those with smaller \(C_{50}\). For the simulations done in this paper, 45 models have been used in parallel with a grid distribution across the three parameters. To adjust the parameters of the model selector, the speed at which the selection converges during the induction phase was used as an indicator. The window length was chosen to be the same as the one of the MPC (\(N_{c}=30\)) with \(\alpha=0\), \(\beta=1\), \(\lambda=0.05\), and \(\delta=30\).
### _Simulation Setup_
To assess the performances of the controllers, simulations are done with 500 different patients using random uniform sampling to obtain age, sex, height, and weight. Then uncertainties are added to both the PK and PD models with log-normal sampling as described in Section II-C. This simulation is used to test the controllers on a wide range of patient profiles. The performance criteria are those proposed in [29] and also used in [19, 20], and [21]. They are listed below:
* _Time to target_ (TT): time to reach the target BIS interval [45, 55].
* _BIS NADIR_: minimum BIS value reached during the induction phase.
* _Settling time 10_ (ST10), respectively (ST20): time to reach the interval target \(\pm 10\%\), respectively \(20\%\), and stay within this range.
* _Undershoot_ (US): maximum undershoot below a BIS of 45 during the induction phase.
The involved optimization problems are solved using CASADI software [30] with IPOPT solver. The maximum computation time of the proposed solution for one step is 0.14s, which makes it a plausible solution. The whole code to perform the simulations presented in the paper is written in Python and available at [https://github.com/BobAubouin/TIVA_Drug_Control](https://github.com/BobAubouin/TIVA_Drug_Control) and uses [31] to perform all the simulations. Note that the control design method presented in this paper is the second open-source controller for anesthesia (after the one presented in [32]), and it has been shared with the hope that this will lead to more easily reproducible results in the future.
### _Results_
The simulation results are presented in Table III and Fig. 3. Moreover, the BIS trajectories of the case with the worst undershoot for each controller are shown in Fig. 4 and the associated control inputs in Fig. 5.
The results of the study indicate that the MPCs exhibit superior performance compared to the PID controllers. Although the PID controllers exhibit a faster time to target (TT), this comes at the price of a larger undershoot (US), with comparable settling times (ST10 and ST20). On average, the two MPC controllers were found to be comparable; however, the multi-model approach demonstrates its usefulness by reducing the effects of the uncertainties on the output trajectories, as depicted in Fig. 3. Furthermore, this approach shows large improvements for patients with extreme models, as shown by the maximal undershoot values and Fig. 4. These results demonstrate the effectiveness of the multi-model approach in quickly identifying the appropriate PD model for dosing propofol and remifentanil in this scenario.
Nevertheless, this conclusion should be tempered by the fact that the PID used for the comparison was optimized on a patient table, while in the more recent paper [20] the authors proposed to optimize the PID for each patient's characteristics, and thus for each PK model. Since that controller takes longer to implement and to test, though, the one from [19] has been used here. In [20] the authors explain that this individualized approach allows the controller to reduce the undershoot. However, similar conclusions should still be expected with the more recent PID version, since the limitation is inherent to this architecture, which does not handle the PD uncertainties.
Fig. 4: BIS values for the worst case of each controller (in terms of undershoot) for the three controllers.
Fig. 3: Mean BIS over the 500 patients for the three controllers. The plot is the mean value \(\pm\) standard deviation.
## V Conclusion
In this paper, a new control method for the co-administration of propofol and remifentanil driven by BIS measurement has been proposed. A multi-model predictive controller has been designed and compared to a PID controller and a non-linear MPC using the average model. The simulations done on a random database of \(500\) patients including a high level of uncertainties show the benefit of the multi-model approach for the induction phase.
In the future, the authors will continue this work to assess the controller performance during the maintenance phase and in a noisy environment. It seems that the multi-model approach, which provides the controller with the possibility of learning about the system, can also be effective in rejecting disturbances; however, a noisy environment can slow down the identification process and thus degrade the results. The final goal is to extend it to the whole anesthesia regulation problem, which includes other system outputs such as hemodynamic signals and analgesic indicators.
|
2303.18061 | Multi-User Data Detection in Massive MIMO with 1-Bit ADCs | We provide new analytical results on the uplink data detection in massive
multiple-input multiple-output systems with 1-bit analog-to-digital converters.
The statistical properties of the soft-estimated symbols (i.e., after linear
combining and prior to the data detection process) have been previously
characterized only for a single user equipment (UE) and uncorrelated Rayleigh
fading. In this paper, we consider a multi-UE setting with correlated Rayleigh
fading, where the soft-estimated symbols are obtained by means of maximum ratio
combining based on imperfectly estimated channels. We derive a closed-form
expression of the expected value of the soft-estimated symbols, which allows us to
understand the impact of the specific data symbols transmitted by the
interfering UEs. Building on this result, we design efficient data detection
strategies based on the minimum distance criterion, which are compared in terms
of symbol error rate and complexity. | Amin Radbord, Italo Atzeni, Antti TΓΆlli | 2023-03-31T13:47:55Z | http://arxiv.org/abs/2303.18061v1 | # Multi-user data detection in massive MIMO with 1-bit ADCs
###### Abstract
We provide new analytical results on the uplink data detection in massive multiple-input multiple-output systems with 1-bit analog-to-digital converters. The statistical properties of the soft-estimated symbols (i.e., after linear combining and prior to the data detection process) have been previously characterized only for a single user equipment (UE) and uncorrelated Rayleigh fading. In this paper, we consider a multi-UE setting with correlated Rayleigh fading, where the soft-estimated symbols are obtained by means of maximum ratio combining based on imperfectly estimated channels. We derive a closed-form expression of the expected value of the soft-estimated symbols, which allows us to understand the impact of the specific data symbols transmitted by the interfering UEs. Building on this result, we design efficient data detection strategies based on the minimum distance criterion, which are compared in terms of symbol error rate and complexity.
Amin Radbord, Italo Atzeni, and Antti Tolli+ Centre for Wireless Communications, University of Oulu, Finland
Emails: {amin.radbord, italo.atzeni, antti.tolli}@oulu.fi

_Index Terms_: Massive MIMO, 1-bit ADCs, multi-user data detection.
Footnote †: © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
## 1 Introduction
Increasing the capacity of beyond-5G wireless systems will require exploiting the wide bandwidths available in the THz spectrum (0.3-3 THz) [1]. This calls for massive multiple-input multiple-output (MIMO) arrays at the transmitter and/or at the receiver to compensate for the strong pathloss and penetration loss therein. In this regard, fully digital architectures provide highly flexible wideband beamforming and large-scale spatial multiplexing [2]. However, this approach requires a sacrifice in the resolution of the analog-to-digital/digital-to-analog converters (ADCs/DACs), since their power consumption scales linearly with the sampling rate and exponentially with the number of quantization bits [3, 4, 5]. Remarkably, fully digital architectures with low-resolution ADCs (even down to 1 bit) can significantly outperform their hybrid analog-digital counterparts in terms of spectral and energy efficiency [6]. In this respect, 1-bit ADCs/DACs are particularly attractive as they are the simplest and least power consuming data conversion devices [3, 7]. Such a coarse quantization is suitable with very large bandwidths, for which high-order modulations may not be needed.
There is a vast literature on 1-bit quantized massive MIMO, ranging from performance analysis (e.g., [3, 7]) to data detection (e.g., [8, 9]) and precoding (e.g., [10, 11, 12]). In this paper, we broaden prior analytical studies on the uplink data detection in massive MIMO systems with 1-bit ADCs. The statistical properties of the soft-estimated symbols (i.e., after linear combining and prior to the data detection process) have been characterized in our prior works [13, 9, 14] only under a simplified system model with a single user equipment (UE) and uncorrelated Rayleigh fading. However, uncorrelated channel models cannot account for the sparse scattering at high frequencies. In this paper, we consider a more general and realistic multi-UE setting with correlated Rayleigh fading. We assume that the base station (BS) adopts maximum ratio combining (MRC) based on imperfectly estimated channels, where the channel estimation is carried out via the Bussgang linear minimum mean squared error (BLMMSE) estimator [3]. In this context, we derive a closed-form expression of the expected value of the soft-estimated symbols, which is relevant to understand the impact of the specific data symbols transmitted by the interfering UEs. This result is exploited to design efficient data detection strategies based on the minimum distance criterion, which are compared in terms of symbol error rate (SER) and complexity.
## 2 System model
Let us consider a single-cell massive MIMO system where a BS with \(M\) antennas serves \(K\) single-antenna UEs in the uplink. We use \(\mathbf{H}\triangleq[\mathbf{h}_{1},\ldots,\mathbf{h}_{K}]\in\mathbb{C}^{M\times K}\) to denote the uplink channel matrix. Considering a general correlated Rayleigh fading channel model, we have \(\mathbf{h}_{k}\sim\mathcal{CN}(\mathbf{0},\mathbf{C}_{\mathbf{h}_{k}}),\;\forall k\), where \(\mathbf{C}_{\mathbf{h}_{k}}\in\mathbb{C}^{M\times M}\) is the channel covariance matrix of UE \(k\). Furthermore, we define \(\mathbf{h}\triangleq\mathrm{vec}(\mathbf{H})\in\mathbb{C}^{MK}\) and, accordingly, we have \(\mathbf{h}\sim\mathcal{CN}(\mathbf{0},\mathbf{C}_{\mathbf{h}})\) with \(\mathbf{C}_{\mathbf{h}}\triangleq\mathrm{blkdiag}(\mathbf{C}_{\mathbf{h}_{1}},\ldots,\mathbf{C}_{\mathbf{h}_{K}})\in\mathbb{C}^{MK\times MK}\). For simplicity, and without loss of generality, we assume that all the UEs are subject to the same SNR \(\rho\) during both the channel estimation and the uplink data transmission (see also [9]). Each BS antenna is connected to two 1-bit ADCs, one for the in-phase and one for the quadrature component of the received signal. In this context, we introduce the 1-bit quantization function \(Q(\cdot):\mathbb{C}^{A\times B}\rightarrow\mathcal{Q}\), with \(\mathcal{Q}\triangleq\sqrt{\frac{\rho K+1}{2}}\{\pm 1\pm j\}^{A\times B}\) and [4, 9]
\[Q(\mathbf{X})\triangleq\sqrt{\frac{\rho K+1}{2}}\big{(}\mathrm{sgn}( \mathrm{Re}[\mathbf{X}])+j\,\mathrm{sgn}(\mathrm{Im}[\mathbf{X}])\big{)}. \tag{1}\]
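As a concrete illustration, a minimal NumPy implementation of the element-wise quantizer in (1) could look as follows; the function name is an assumption of this sketch and is not part of the paper.

```python
import numpy as np

def one_bit_quantize(X, rho, K):
    """Element-wise 1-bit quantization as in (1): only the signs of the real and
    imaginary parts are kept, scaled so that each entry has variance rho*K + 1."""
    scale = np.sqrt((rho * K + 1) / 2)
    return scale * (np.sign(X.real) + 1j * np.sign(X.imag))
```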
### Data Detection
Let \(\mathbf{x}\triangleq[x_{1},\ldots,x_{K}]^{\mathrm{T}}\in\mathbb{C}^{K}\) denote the data symbol vector comprising the data symbols transmitted by the UEs. We assume that \(\mathbf{x}\in\mathcal{S}^{K}\), where \(\mathcal{S}\triangleq\{s_{1},\ldots,s_{L}\}\) represents the set of the \(L\) possible data symbols. For instance, \(\mathcal{S}\) may correspond to the 16-QAM constellation, as considered in Section 4. During the uplink data transmission, all the UEs simultaneously transmit their data symbols, and the signal received at the BS is given by
\[\mathbf{y}\triangleq\sqrt{\rho}\mathbf{H}\mathbf{x}+\mathbf{z}\in\mathbb{C} ^{M} \tag{2}\]
where \(\mathbf{z}\in\mathbb{C}^{M}\) is the additive white Gaussian noise (AWGN) vector with i.i.d. \(\mathcal{CN}(0,1)\) elements. The BS observes the quantized signal
\[\mathbf{r}\triangleq Q(\mathbf{y})\in\mathbb{C}^{M} \tag{3}\]
where the scaling factor in (1) is such that the variance of \(\mathbf{r}\) coincides with that of \(\mathbf{y}\). Then, the BS obtains a soft estimate of \(\mathbf{x}\) via linear combining as
\[\hat{\mathbf{x}} \triangleq[\hat{x}_{1},\dots,\hat{x}_{K}]^{\mathrm{T}} \tag{4}\] \[=\mathbf{V}^{\mathrm{H}}\mathbf{r}\in\mathbb{C}^{K} \tag{5}\]
where \(\mathbf{V}\in\mathbb{C}^{M\times K}\) is the combining matrix. Finally, the data detection process maps each soft-estimated symbol in (4) to one of the possible data symbols in \(\mathcal{S}\).
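The following toy NumPy sketch traces the chain (2)-(5) for a single channel use. It is only meant to make the notation concrete: perfect CSI is used in place of the BLMMSE estimate of Section 2.2, and QPSK symbols stand in for the 16-QAM alphabet used later; none of these choices are from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K, rho = 128, 2, 1.0
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), size=K)
z = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

y = np.sqrt(rho) * H @ x + z                                                # received signal (2)
r = np.sqrt((rho * K + 1) / 2) * (np.sign(y.real) + 1j * np.sign(y.imag))   # 1-bit observation (3)
V = H                                                                       # MRC with perfect CSI (placeholder)
x_soft = V.conj().T @ r                                                     # soft-estimated symbols (4)-(5)
print(x_soft / M)                                                           # roughly aligned with x, up to scaling and distortion
```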
### Channel Estimation
The combining matrix \(\mathbf{V}\) used in (5) is designed based on the estimated channels. In this paper, the channel estimation is carried out via the BLMMSE estimator [3], which is the state-of-the-art linear estimator with low-resolution ADCs and reduces to the well-known minimum mean squared error estimator in the absence of quantization. Let \(\mathbf{P}\triangleq[\mathbf{p}_{1},\dots,\mathbf{p}_{K}]\in\mathbb{C}^{\tau \times K}\) denote the pilot matrix, where \(P_{u,k}\) represents its (\(u,k\))th element and \(\tau\) is the pilot length. Assuming \(\tau\geq K\), orthogonal pilots among the UEs, and \(|P_{u,k}|^{2}=1,\;\forall u,k\), we have \(\mathbf{P}^{\mathrm{H}}\mathbf{P}=\tau\mathbf{I}_{K}\). During the channel estimation, all the UEs simultaneously transmit their pilots, and the signal received at the BS is given by
\[\mathbf{Y}_{\mathrm{p}}\triangleq\sqrt{\rho}\mathbf{H}\mathbf{P}^{\mathrm{H} }+\mathbf{Z}_{\mathrm{p}}\in\mathbb{C}^{M\times\tau} \tag{6}\]
where \(\mathbf{Z}_{\mathrm{p}}\in\mathbb{C}^{M\times\tau}\) is the AWGN matrix with i.i.d. \(\mathcal{CN}(0,1)\) elements. At this stage, we vectorize (6) as
\[\mathbf{y}_{\mathrm{p}} \triangleq\mathrm{vec}(\mathbf{Y}_{\mathrm{p}}) \tag{7}\] \[=\sqrt{\rho}\mathbf{\tilde{P}}^{*}\mathbf{h}+\mathbf{z}_{\mathrm{ p}}\in\mathbb{C}^{M\tau} \tag{8}\]
with \(\mathbf{\tilde{P}}\triangleq\mathbf{P}\otimes\mathbf{I}_{M}\in\mathbb{C}^{M \tau\times MK}\) and \(\mathbf{z}_{\mathrm{p}}\triangleq\mathrm{vec}(\mathbf{Z}_{\mathrm{p}})\in \mathbb{C}^{M\tau}\). Furthermore, we define
\[\mathbf{C}_{\mathbf{y}_{\mathrm{p}}} \triangleq\mathbb{E}[\mathbf{y}_{\mathrm{p}}\mathbf{y}_{\mathrm{ p}}^{\mathrm{H}}] \tag{9}\] \[=\rho\,\mathbf{\tilde{P}}^{*}\mathbf{C}_{\mathbf{h}}\mathbf{ \tilde{P}}^{\mathrm{T}}+\mathbf{I}_{M\tau}\in\mathbb{C}^{M\tau\times M\tau} \tag{10}\]
and
\[\mathbf{A}_{\mathrm{p}}\triangleq\sqrt{\frac{2}{\pi}(\rho K+1)}\mathrm{Diag}( \mathbf{C}_{\mathbf{y}_{\mathrm{p}}})^{-\frac{1}{2}}\in\mathbb{C}^{M\tau\times M\tau}. \tag{11}\]
The BS observes the quantized signal
\[\mathbf{r}_{\mathrm{p}}\triangleq Q(\mathbf{y}_{\mathrm{p}})\in\mathbb{C}^{M\tau} \tag{12}\]
and obtains the estimate of \(\mathbf{h}\) via the BLMMSE estimator as
\[\hat{\mathbf{h}}\triangleq\sqrt{\rho}\mathbf{C}_{\mathbf{h}}\mathbf{\tilde{P} }^{\mathrm{T}}\mathbf{A}_{\mathrm{p}}\mathbf{C}_{\mathbf{r}_{\mathrm{p}}}^{- 1}\mathbf{r}_{\mathrm{p}}\in\mathbb{C}^{MK} \tag{13}\]
with \(\mathbf{C}_{\mathbf{r}_{\mathrm{p}}}\triangleq\mathbb{E}[\mathbf{r}_{ \mathrm{p}}\mathbf{r}_{\mathrm{p}}^{\mathrm{H}}]\). Finally, the estimate of \(\mathbf{H}\) is given by \(\hat{\mathbf{H}}\triangleq[\hat{\mathbf{h}}_{1},\dots,\hat{\mathbf{h}}_{K}]\), with
\[\hat{\mathbf{h}}_{k}\triangleq\sqrt{\rho}\mathbf{C}_{\mathbf{h}_{k}}\mathbf{ \tilde{p}}_{k}^{\mathrm{T}}\mathbf{A}_{\mathrm{p}}\mathbf{C}_{\mathbf{r}_{ \mathrm{p}}}^{-1}\mathbf{r}_{\mathrm{p}}\in\mathbb{C}^{M} \tag{14}\]
and \(\mathbf{\tilde{p}}_{k}\triangleq\mathbf{p}_{k}\otimes\mathbf{I}_{M}\in \mathbb{C}^{M\tau\times M}\).
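To make Section 2.2 concrete, the sketch below assembles (10), (11) and (13) in NumPy. The function names are assumptions of this sketch, and the arcsine-law expression used for \(\mathbf{C}_{\mathbf{r}_{\mathrm{p}}}\) is the standard Bussgang result for 1-bit quantized proper Gaussian vectors (cf. [3]); since the corresponding equation is not reproduced here, it should be read as an assumption rather than a statement from the paper.

```python
import numpy as np

def arcsine_law(C_y, scale):
    # Assumed Bussgang/arcsine law for 1-bit quantization of a proper Gaussian
    # vector (cf. [3]): normalize, pass through arcsin, rescale the diagonal.
    d = np.sqrt(np.real(np.diag(C_y)))
    S = C_y / np.outer(d, d)
    re = np.arcsin(np.clip(S.real, -1.0, 1.0))
    im = np.arcsin(np.clip(S.imag, -1.0, 1.0))
    return scale * (2.0 / np.pi) * (re + 1j * im)

def blmmse_estimate(r_p, P, C_h_blocks, rho):
    """BLMMSE channel estimate (13). P is the (tau x K) pilot matrix and
    C_h_blocks is the list of per-UE covariance matrices C_{h_k}."""
    tau, K = P.shape
    M = C_h_blocks[0].shape[0]
    C_h = np.zeros((M * K, M * K), dtype=complex)
    for k in range(K):
        C_h[k * M:(k + 1) * M, k * M:(k + 1) * M] = C_h_blocks[k]
    P_tilde = np.kron(P, np.eye(M))                                    # (M*tau) x (M*K)
    C_yp = rho * P_tilde.conj() @ C_h @ P_tilde.T + np.eye(M * tau)    # (10)
    A_p = np.sqrt(2 / np.pi * (rho * K + 1)) * np.diag(1 / np.sqrt(np.real(np.diag(C_yp))))  # (11)
    C_rp = arcsine_law(C_yp, rho * K + 1)
    return np.sqrt(rho) * C_h @ P_tilde.T @ A_p @ np.linalg.solve(C_rp, r_p)  # (13)
```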
## 3 Data Detection Analysis
In our prior work [9], we focused on a single-UE setting with uncorrelated Rayleigh fading and characterized the statistical properties of the soft-estimated symbols. In this paper, we consider a more general and realistic multi-UE setting with correlated Rayleigh fading and provide a closed-form expression of the expected value of the soft-estimated symbols. This result, presented in Section 3.1, allows us to understand the impact of the specific data symbols transmitted by the interfering UEs. Furthermore, it can be exploited to design efficient data detection strategies based on the minimum distance criterion, as described in Section 3.2.
### Expectation of the Soft-Estimated Symbols
As in [9], we consider that the MRC receiver is adopted at the BS. Hence, the combining matrix is given by \(\mathbf{V}=\hat{\mathbf{H}}\) and the soft-estimated symbol for UE \(k\) can be expressed as \(\hat{x}_{k}=\hat{\mathbf{h}}_{k}^{\mathrm{H}}\mathbf{r}\) (cf. (5)), with \(\hat{\mathbf{h}}_{k}\) and \(\mathbf{r}\) given in (14) and (3), respectively. Let us define \(\mathbf{C}_{\mathbf{r}\mathbf{r}_{\mathrm{p}}}\triangleq\mathbb{E}[\mathbf{r}\mathbf{r}_{\mathrm{p}}^{\mathrm{H}}]\), which represents the cross-correlation matrix between the quantized signals received during the uplink data transmission and the channel estimation. Moreover, we introduce the function \(\Omega(x)\triangleq\frac{2}{\pi}\arcsin(x)\) and the following preliminary definitions:
\[\alpha_{m} \triangleq\bigg{[}\rho\sum_{k=1}^{K}\mathbf{C}_{\mathbf{h}_{k}}+ \mathbf{I}_{M}\bigg{]}_{m,m}, \tag{15}\] \[\beta_{m} \triangleq\bigg{[}\rho\sum_{k=1}^{K}\mathbf{C}_{\mathbf{h}_{k}}| x_{k}|^{2}+\mathbf{I}_{M}\bigg{]}_{m,m},\] (16) \[\zeta_{m,n,u,v} \triangleq\frac{\rho}{\sqrt{\alpha_{m}\alpha_{n}}}\bigg{[}\sum_{k= 1}^{K}\mathbf{C}_{\mathbf{h}_{k}}^{\mathrm{T}}P_{u,k}P_{v,k}^{*}\bigg{]}_{m,n},\] (17) \[\eta_{m,n,u} \triangleq\frac{\rho}{\sqrt{\alpha_{n}\beta_{m}}}\bigg{[}\sum_{k= 1}^{K}\mathbf{C}_{\mathbf{h}_{k}}x_{k}P_{u,k}\bigg{]}_{m,n}. \tag{18}\]
The following theorem provides a closed-form expression of the expected value of the soft-estimated symbol for UE \(k\) for a given data symbol vector \(\mathbf{x}\). This is denoted by \(\mathsf{E}_{k}\triangleq\mathbb{E}[\hat{x}_{k}]\), where the expectation is taken over \(\mathbf{H}\), \(\mathbf{z}\), and \(\mathbf{z}_{\mathrm{p}}\).
**Theorem 1**.: _Assuming that the MRC receiver is adopted at the BS, for a given data symbol vector \(\mathbf{x}\), the expected value of the soft-estimated symbol for UE \(k\) is given by_
\[\mathsf{E}_{k}=\sqrt{\rho}\,\mathrm{tr}(\mathbf{C}_{\mathbf{r}_{\mathrm{p}}}^{-1}\mathbf{A}_{\mathrm{p}}\mathbf{\tilde{p}}_{k}^{*}\mathbf{C}_{\mathbf{h}_{k}}\mathbf{C}_{\mathbf{r}\mathbf{r}_{\mathrm{p}}}) \tag{19}\]
_where the (\((u-1)M+m,(v-1)M+n\))th element of \(\mathbf{C}_{\mathbf{r}_{\mathrm{p}}}\) can be written as in (21) at the top of the next page and the (\(m,(u-1)M+n\))th element of \(\mathbf{C}_{\mathbf{r}\mathbf{r}_{\mathrm{p}}}\) can be written as_
\[[\mathbf{C}_{\mathbf{r}\mathbf{r}_{\mathrm{p}}}]_{m,(u-1)M+n}=(\rho K+1)\Big{(}\Omega\big{(}\mathrm{Re}[\eta_{m,n,u}]\big{)}+j\,\Omega\big{(}\mathrm{Im}[\eta_{m,n,u}]\big{)}\Big{)}. \tag{20}\]
The proof of Theorem 1 follows similar (and more involved) steps as in [9, App. D]. It is omitted due to the space limitations and will be provided in the extended version of this paper. The expression in (19) for UE \(k\) clearly depends on the specific data symbols transmitted by the interfering UEs. In the following, we build on Theorem 1 to design efficient data detection strategies, which will be compared in Section 4.
### Data Detection Strategies
In this section, we exploit Theorem 1 and the minimum distance criterion to map each soft-estimated symbol in (4) to one of the possible data symbols in \(\mathcal{S}\). In this respect, we present three data detection strategies: 1) exhaustive, 2) heuristic, and 3) genie-aided data detection. In the following, we use \(l_{k}^{*}\in\{1,\ldots,L\}\) to denote the index of the detected symbol for UE \(k\).
* _Strategy 1: exhaustive data detection._ This strategy uses the statistical information of the interfering UEs to detect the symbol for the target UE. Let \(\mathcal{E}_{k}\triangleq\{\mathsf{E}_{k},\forall\mathsf{x}\in\mathcal{S}^{K}\}\) denote the set of the expected values of the soft-estimated symbols for UE \(k\) obtained from all the possible data symbol vectors, with \(|\mathcal{E}_{k}|=L^{K}\). The soft-estimated symbol for UE \(k\) is mapped to one of the elements in \(\mathcal{E}_{k}\) as \[\mathsf{E}_{k}^{*}=\operatorname*{argmin}_{\mathsf{E}_{k}\in\mathcal{E}_{k}}| \hat{x}_{k}-\mathsf{E}_{k}|\] (22) from which \(l_{k}^{*}\) is readily obtained. This strategy amounts to performing an exhaustive search over all the \(L^{K}\) possible values of \(\mathsf{E}_{k}\) in (19). Hence, its complexity increases exponentially with \(K\).
* _Strategy 2: heuristic data detection._ This strategy considers the expected values of the soft-estimated symbols for the target UE averaged over all the possible data symbols transmitted by the interfering UEs. Let \(\mathsf{x}_{-k}\triangleq[x_{1},\ldots,x_{k-1},x_{k+1},\ldots x_{K}]^{\mathsf{ T}}\in\mathbb{C}^{K-1}\) and let \(\mathcal{E}_{k,l}\triangleq\{\mathsf{E}_{k}:x_{k}=s_{l},\forall\mathsf{x}_{- k}\in\mathcal{S}^{K-1}\}\subset\mathcal{E}_{k}\) be the set containing the elements in \(\mathcal{E}_{k}\) corresponding to \(x_{k}=s_{l}\), with \(|\mathcal{E}_{k,l}|=L^{K-1}\). Furthermore, let us define \[\bar{\mathsf{E}}_{k,l}\triangleq\frac{1}{L^{K-1}}\sum_{t\in\mathcal{E}_{k,l}}t\] (23) which represents the average of the expected values of the soft-estimated symbols for UE \(k\) when \(x_{k}=s_{l}\) (see the green markers in Fig. 3). Then, the index of the detected symbol for UE \(k\) is obtained as \[l_{k}^{*}=\operatorname*{argmin}_{l\in\{1,\ldots,L\}}|\hat{x}_{k}-\bar{\mathsf{ E}}_{k,l}|.\] (24) This strategy can be seen as a low-complexity, heuristic version of _Strategy 1_, which reduces the size of the search space from \(L^{K}\) to \(L\).
* _Strategy 3: genie-aided data detection._ This strategy is obtained from _Strategy 1_ by assuming that a genie instantaneously provides the data symbols transmitted by the interfering UEs to detect the symbol for the target UE. Hence, for UE \(k\), \(\mathbf{x}_{-k}\) is assumed to be perfectly known, which reduces the size of the search space from \(L^{K}\) to \(L\). Evidently, this strategy cannot be implemented in practice and is considered only to evaluate how the knowledge of the data symbols transmitted by the interfering UEs impacts the data detection performance for the target UE. A toy numerical sketch of _Strategy 1_ and _Strategy 2_ is given after this list.
Other practical data detection strategies (e.g., resulting from combining _Strategy 1_ and _Strategy 3_ above) will be considered in the extended version of this paper.
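As the toy illustration of _Strategy 1_ and _Strategy 2_ announced above, the sketch below builds the set \(\mathcal{E}_{k}\) and the averages \(\bar{\mathsf{E}}_{k,l}\) for a 2-UE, 16-QAM example and applies the minimum distance rule. The function `expected_soft_symbol` is a placeholder: a faithful implementation would evaluate (19), which requires the correlation matrices of Theorem 1.

```python
import numpy as np
from itertools import product

S = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)]) / np.sqrt(10)
L, K, k = len(S), 2, 0

def expected_soft_symbol(x_vec, k):
    # placeholder for E_k in (19); here the soft symbol is simply assumed to
    # concentrate around the transmitted symbol of the target UE
    return x_vec[k]

# Strategy 1: exhaustive search over all L**K data symbol vectors (set E_k).
table = {xs: expected_soft_symbol(np.array(xs), k) for xs in product(S, repeat=K)}

def detect_strategy1(x_hat_k):
    best = min(table, key=lambda xs: abs(x_hat_k - table[xs]))
    return best[k]

# Strategy 2: average E_k over the interfering symbols, then search over L values only.
E_bar = np.array([np.mean([v for xs, v in table.items() if xs[k] == s]) for s in S])

def detect_strategy2(x_hat_k):
    return S[np.argmin(np.abs(x_hat_k - E_bar))]

x_hat = 0.28 + 0.35j                      # a hypothetical soft-estimated symbol
print(detect_strategy1(x_hat), detect_strategy2(x_hat))
```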
## 4 Numerical Results
In this section, we utilize Theorem 1 and the data detection strategies described in Section 3.2 to evaluate the impact of the specific data symbols transmitted by the interfering UEs in a massive MIMO system with 1-bit ADCs. We assume that the BS is equipped with \(M=128\) antennas and adopts the MRC receiver. The set of data symbols \(\mathcal{S}\) corresponds to the 16-QAM constellation, i.e., \(\mathcal{S}=\frac{1}{\sqrt{10}}\left\{\pm 1\pm j,\pm 1\pm j\,3,\pm 3\pm j,\pm 3\pm j\,3\right\}\), which is normalized such that \(\frac{1}{L}\sum_{l=1}^{L}\left|s_{l}\right|^{2}=1\). We consider a 2-UE scenario (\(K=2\)) and a 3-UE scenario (\(K=3\)). The channel covariance matrices are generated according to the one-ring channel model [15] with angular spread of \(30^{\circ}\) for each UE and angular separation between the UEs of \(120^{\circ}\) and \(60^{\circ}\) for the 2-UE and 3-UE scenarios, respectively. All the UEs are subject to the same (normalized) pathloss, such that \(\mathsf{tr}(\mathsf{C}_{\mathsf{h}_{k}})=M,\,\forall k\); unless otherwise stated, we consider \(\rho=0\) dB. The channels are estimated as described in Section 2.2 with orthogonal pilots chosen as Zadoff-Chu sequences, which are widely adopted in the 4G LTE and 5G NR standards [16]; unless otherwise stated, we fix \(\tau=61\).
Considering the 3-UE scenario, Fig. 1 provides the scatter plot of the soft-estimated symbols for UE \(1\) when this transmits all the possible data symbols and the interfering UEs transmit fixed data symbols. Here, the soft-estimated symbols (black markers) originate from independent channel and AWGN realizations. We observe that, for each data symbol transmitted by UE \(1\), the mean value of the soft-estimated symbols is in agreement with the corresponding expected value obtained as in (19) (red markers).
Considering the 2-UE scenario, Fig. 2 depicts the expected values of the soft-estimated symbols for UE \(1\) when both UEs transmit all the possible data symbols. Note that there are \(16^{2}=256\) different pairs of data symbols transmitted by the two UEs, each corresponding to a different value of \(\mathsf{E}_{k}\) in (19). However, in Fig. 2, only \(3\times 16\) points can be clearly distinguished, which implies that there is significant overlap among many of the \(256\) values of \(\mathsf{E}_{1}\). This stems from the fact that there are three different amplitude levels in the 16-QAM constellation and, for a given data symbol transmitted by UE \(1\), the data symbols with the same amplitude transmitted by UE \(2\) produce nearly the same value of \(\mathsf{E}_{1}\) (as shown in the zoomed window).
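The amplitude-level observation above is easy to verify numerically; a couple of Python lines (not from the paper) suffice:

```python
import numpy as np

S = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)]) / np.sqrt(10)
print(np.unique(np.round(np.abs(S), 6)))     # three amplitude levels: ~0.447, 1.0, ~1.342
print(np.round(np.mean(np.abs(S) ** 2), 6))  # unit average symbol energy
```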
Fig. 3 extends the insights of Fig. 2 to the 3-UE scenario. Here, there are \(16^{3}=4096\) different triplets of data symbols transmitted by the three UEs, each corresponding to a different value of \(\mathsf{E}_{k}\) in (19). As in the 2-UE scenario, for a given data symbol transmitted by UE \(1\), the data symbols with the same amplitude transmitted by UE \(2\) and UE \(3\) produce nearly the same value of \(\mathsf{E}_{1}\). Interestingly, the dispersion of such values of \(\mathsf{E}_{k}\) reduces as the pilot length increases, since the channel estimates become more accurate. Moreover, the green markers in Fig. 3 correspond to the average of the expected values of the soft-estimated symbols for UE \(1\) when this transmits a specific data symbol (see _Strategy 2_ in Section 3.2).
Lastly, we evaluate the performance of the data detection strategies described in Section 3.2, which are based on Theorem 1 and the minimum distance criterion. Considering the 2-UE scenario, Fig. 4 plots the SER obtained with the different data detection strategies as a function of the SNR \(\rho\), with \(\tau=31\). In this context, the SER is computed by averaging over \(10^{4}\) independent channel and AWGN realizations, and considering all the possible data symbols. As in the single-UE analysis in [9], we observe that the SER curves feature an optimal SNR operating point: at low SNR, the AWGN is dominant; at high SNR, the soft-estimated symbols corresponding to data symbols with the same phase are hardly distinguishable. In between these regimes, the right amount of AWGN produces a useful scrambling of the 1-bit quantized signals at the \(M\) antennas. As expected, _Strategy 1_ outperforms _Strategy 2_, since the latter corresponds to a heuristic single-UE data detection after averaging over all the possible data symbols transmitted by the interfering UE (see the green markers in Fig. 3). Nonetheless, even _Strategy 2_ yields an acceptable performance at the optimal SNR operating point and beyond, partly due to the additional useful scrambling produced by the interfering UE. Furthermore, _Strategy 3_ outperforms all the other strategies, but _Strategy 1_ is remarkably close at the optimal SNR operating point.
## 5 Conclusions
We studied the uplink data detection in massive MIMO systems with 1-bit ADCs considering a multi-UE setting with correlated Rayleigh fading, where the soft-estimated symbols are obtained by means of MRC based on imperfectly estimated channels. We derived a closed-form expression of the expected value of the soft-estimated symbols, which is relevant for understanding the impact of the specific data symbols transmitted by the interfering UEs. Building on this result, we designed efficient data detection strategies based on the minimum distance criterion, which were compared in terms of SER and complexity. Motivated by the superior performance of the genie-aided data detection, which requires the knowledge of the data symbols
Figure 4: 2-UE scenario (\(K=2\)) with \(\tau=31\): SER versus the SNR obtained with the data detection strategies presented in Section 3.2.
Figure 3: 3-UE scenario (\(K=3\)) with \(\tau=61\): expected values of the soft-estimated symbols (red markers) and their mean values for UE \(1\) (green markers) when UE \(2\) and UE \(3\) transmit all the possible data symbols from \(\mathcal{S}\).
Figure 2: 2-UE scenario (\(K=2\)) with \(\tau=61\): expected values of the soft-estimated symbols for UE \(1\) when UE \(2\) transmits all the possible data symbols from \(\mathcal{S}\).
transmitted by the interfering UEs, future work will focus on developing practical methods for joint data detection.
|
2309.12517 | Geometric description of some Loewner chains with infinitely many slits | We study the chordal Loewner equation associated with certain driving
functions that produce infinitely many slits. Specifically, for a choice of a
sequence of positive numbers $(b_n)_{n\ge1}$ and points of the real line
$(k_n)_{n\ge1}$, we explicitly solve the Loewner PDE
$$ \dfrac{\partial f}{\partial
t}(z,t)=-f'(z,t)\sum_{n=1}^{+\infty}\dfrac{2b_n}{z-k_n\sqrt{1-t}}$$
in $\mathbb{H}\times[0,1)$. Using techniques involving the harmonic measure,
we analyze the geometric behaviour of its solutions, as $t\rightarrow1^-$. | Eleftherios Theodosiadis, Konstantinos Zarvalis | 2023-09-21T22:43:51Z | http://arxiv.org/abs/2309.12517v1 | # Geometric description of some Loewner chains with infinitely many slits
###### Abstract.
We study the chordal Loewner equation associated with certain driving functions that produce infinitely many slits. Specifically, for a choice of a sequence of positive numbers \((b_{n})_{n\geqslant 1}\) and points of the real line \((k_{n})_{n\geqslant 1}\), we explicitly solve the Loewner PDE
\[\frac{\partial f}{\partial t}(z,t)=-f^{\prime}(z,t)\sum_{n=1}^{+\infty}\frac{2 b_{n}}{z-k_{n}\sqrt{1-t}}\]
in \(\mathbb{H}\times[0,1)\). Using techniques involving the harmonic measure, we analyze the geometric behaviour of its solutions, as \(t\to 1^{-}\).
Key words and phrases:Loewner equation, spirallike functions, harmonic measure 2020 Mathematics Subject Classification: Primary 30C45, 35C05; Secondary 30C85
## 1. Introduction
Given an increasing family of slit domains \((H_{t})_{0\leqslant t<T}\) of the upper half-plane \(\mathbb{H}\), that is \(H_{t}:=\mathbb{H}\backslash\gamma([0,t])\) for a continuous curve \(\gamma:[0,T]\to\overline{\mathbb{H}}\), with \(\gamma(0)\in\mathbb{R}\) and \(\gamma((0,T))\subset\mathbb{H}\), there exists a family of conformal maps \(g_{t}=g(\cdot,t):H_{t}\stackrel{{ onto}}{{\to}}\mathbb{H}\), normalized in such a way that the _hydrodynamic condition_\(g_{t}(z)-z\to 0\), as \(z\to\infty\), is satisfied. Furthermore, an application of the Schwarz reflection principle, shows that \(g_{t}\) has a Laurent expansion at infinity
\[g_{t}(z)=z+\frac{b(t)}{z}+\cdots,\]
for all \(z\in\mathbb{C}\) that lie outside a disc containing \(\gamma([0,t])\cup\gamma([0,t])^{*}\), where \(K^{*}\) denotes the reflection of \(K\) with respect to the real axis. The coefficient \(b(t)>0\) is called the _half-plane capacity_ of \(\gamma([0,t])\), denoted by \(\operatorname{hcap}(\gamma([0,t]))\). Then, the _chordal_ Loewner differential equation reads for the initial value problem
\[\frac{\partial}{\partial t}g_{t}(z)=\frac{b^{\prime}(t)}{g_{t}(z)-\lambda(t)},\quad g_{0}(z)=z, \tag{1.1}\]
for all \(z\in H_{t}\) and \(0\leqslant t<T\), where \(\lambda(t)\) is a continuous real-valued function of \(t\), given as \(\lambda(t)=g_{t}^{-1}(\gamma(t))\), called the _driving function_. In the literature, the half-plane capacity mostly appears as \(b(t)=2t\), which is possible by means of a time reparameterization. See [10] for a detailed derivation of equation (1.1).
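For intuition, the initial value problem (1.1) with \(b(t)=2t\) can be integrated numerically for a single interior point. The rough forward-Euler sketch below (written in Python, using the square-root driving function studied later in this paper) is only illustrative: it ignores adaptive stepping and does not detect points swallowed by the hull.

```python
import numpy as np

k, dt, T = 3.0, 1e-4, 0.9
g, t = 1.0 + 1.0j, 0.0                      # track g_t(z0) for z0 = 1 + i
while t < T:
    lam = k * np.sqrt(1.0 - t)              # driving function lambda(t) = k*sqrt(1 - t)
    g = g + dt * 2.0 / (g - lam)            # dg/dt = b'(t)/(g - lambda(t)), with b(t) = 2t
    t += dt
print(g)                                    # approximate value of g_T(z0)
```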
In the opposite direction, if we consider the initial value problem (1.1), we then let \(T_{z}\) be the supremum of all \(t\), such that the solution to the equation is well defined and \(g_{t}(z)\in\mathbb{H}\) for all \(t\leqslant T_{z}\). Then, the domain \(H_{t}:=\{z\in\mathbb{H}:T_{z}>t\}\) is simply connected, \(K_{t}:=\mathbb{H}\backslash H_{t}\) is compact and \(\mathbb{H}\backslash K_{t}\) is also simply connected. Moreover, \(g_{t}\) maps \(H_{t}\) conformally onto \(\mathbb{H}\), satisfying the hydrodynamic condition. We refer to \((K_{t})_{t}\), as the _compact hulls_ generated by (1.1); see [5, Chapter 4] for details. It is not true, in general, that an arbitrary continuous function \(\lambda(t)\) produces hulls that are curves. Several authors have studied the relation between a driving function
and the corresponding hulls, showing that Lip-\(\frac{1}{2}\) driving functions, with sufficiently small Lip(\(\frac{1}{2}\))-norm, produce quasi-slit domains (see e.g. [6], [9] and [13]).
Turning back to equation (1.1), one can consider its multiple, finite or infinite, curve version. For instance, as we see in [14], given \(n\) disjoint Jordan curves that emanate from the real line towards the upper half-plane, the corresponding Loewner flow is produced by the driving functions \(\lambda_{1},\ldots,\lambda_{n}\). Furthermore, in [15], the authors show that if the hulls are made up of infinitely many slits \(\Gamma_{n}\) parameterized in \([0,1]\) such that each curve \(\Gamma_{j}(t)\) can be separated from the closure of \(\bigcup_{n\neq j}\Gamma_{n}(t)\) at each time \(t\) by open sets, then the Loewner equation is written as
\[\frac{\partial g}{\partial t}(z,t)=\sum_{n=1}^{+\infty}\frac{b_{n}(t)}{g(z,t) -\lambda_{n}(t)}, \tag{1.2}\]
for \(z\in\mathbb{H}\backslash\bigcup_{n=1}^{+\infty}\Gamma_{n}(t)\) and a.e \(0\leqslant t\leqslant 1\), where \(\sum_{n=1}^{+\infty}b_{n}(t)=\partial_{t}\mathrm{hcap}(\bigcup_{n=1}^{+ \infty}\Gamma_{n}(t))\).
To the authors' best knowledge, explicit solutions to the Loewner equation which involve infinitely many slits do not often appear in the literature. In this article, our goal is to present Loewner flows driven by infinitely many driving functions of the form \(\lambda_{n}(t)=k_{n}\sqrt{1-t}\), by solving the corresponding differential equation. Single-slit versions have been studied in [4] and [7], where the authors begin with the driving function \(\lambda(t)=k\sqrt{1-t}\), and show that the geometry of the associated slit varies according to \(k\). In [16], the first author generalizes this result to the multi-slit setting, by considering \(n\) driving functions of the form \(k_{j}\sqrt{1-t}\), to find four different geometric possibilities for the resulting slits. More specifically, they either _spiral_ about some point of the upper half-plane, or intersect the real line _non-tangentially_ (each slit by a different angle), _tangentially_ (all slits by the same angle, either \(0\) or \(\pi\)) or _orthogonally_ (all slits by angle \(\frac{\pi}{2}\)). In this article, we shall extend the preceding result to the case of infinitely many slits and shall conclude that the same geometric possibilities occur.
To start with, we set our configuration by choosing a summable sequence \((b_{n})_{n\geqslant 1}\) of positive numbers and a sequence of distinct real points \((k_{n})_{n\geqslant 1}\) ordered in such a way that \(\mathbb{R}\) can be written as a countable union of bounded intervals of the form \([k_{m},k_{m^{\prime}}]\), not containing in their interior any of the \(k_{n}\)'s, and unbounded intervals of the form \((-\infty,k_{m}]\) or \([k_{m^{\prime}},+\infty)\), if such intervals exist, again not containing any other point \(k_{n}\). We formally write the last condition for the sequence \((k_{n})_{n\geqslant 1}\) as
\[\begin{split}&\mathbb{R}=\overline{I_{-}}\cup\bigcup_{j=1}^{+ \infty}\overline{I_{j}}\cup\overline{I_{+}},\ \text{where}\ I_{j}\ \text{are defined for all}\ j\geqslant 1,\text{ such that}\\ &\text{there exists some}\ j^{\prime}\neq j:I_{j}=(k_{j},k_{j^{\prime}}), \ \text{so that}\ \ k_{n}\notin I_{j},\forall n\geqslant 1,\\ &\ I_{-}=(-\infty,\min_{n\geqslant 1}k_{n})\ \text{and}\ I_{+}=(\max_{n\geqslant 1}k_{n},+ \infty),\ \text{with}\ \ k_{n}\notin I_{\pm},\forall n\geqslant 1.\\ &\text{Furthermore, we assume that}\ d:=\inf_{n\geqslant 1}|I_{n}|>0.\end{split} \tag{1.3}\]
Note that the left endpoint of \(I_{j}\) determines its enumeration and note, also, that depending on the choice of the sequence \((k_{n})_{n\geqslant 1}\), either one of the two unbounded sets \(I_{-}\) or \(I_{+}\) might be empty. In particular, if \(I_{+}\) is not empty, then the sequence \((k_{n})_{n\geqslant 1}\) has a maximum, say \(k_{N}\). In this case, the interval \(I_{N}\) as described above ceases to exist, but this omission does not impact our study.
Then, we write equation (1.2) as
\[\frac{\partial g}{\partial t}(z,t)=\sum_{n=1}^{+\infty}\frac{2b_{n}}{g(z,t)-k_ {n}\sqrt{1-t}} \tag{1.4}\]
with \(g(z,0)=z\). The technique to solve this equation is straightforward and involves the transformation \(\hat{g}(z,t)=\frac{g(z,t)}{\sqrt{1-t}}\), which, as we shall see later on, allows us to transform (1.4) into the separable differential equation \(\frac{d\hat{g}}{P_{\mathbb{H}}(\hat{g})}=\frac{dt}{2(1-t)}\), where
\[P_{\mathbb{H}}(z):=z+\sum_{n=1}^{+\infty}\frac{4b_{n}}{z-k_{n}}.\]
It turns out that the geometric properties of \(g\) depend on the nature of the roots of the auxiliary function \(P_{\mathbb{H}}\). We can already see that for each \(m\in\mathbb{N}\) and for \(x\in\mathbb{R}\), \(\lim_{x\to k_{m}^{+}}P_{\mathbb{H}}(x)=+\infty\) and \(\lim_{x\to k_{m}^{-}}P_{\mathbb{H}}(x)=-\infty\), which implies that there exists some \(\lambda_{m}\in I_{m}\), real root of \(P_{\mathbb{H}}\), so that \(P_{\mathbb{H}}^{\prime}(\lambda_{m})<0\). For the sake of clarity, we proceed to the following definition.
**Definition 1**.: We characterize any root \(\rho\in\mathbb{R}\) of \(P_{\mathbb{H}}\), satisfying \(P_{\mathbb{H}}^{\prime}(\rho)<0\), as a _standard_ root of \(P_{\mathbb{H}}\). Via the note above, we may see that each bounded interval \(I_{m}\) contains at least one standard root.
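Numerically, the sign change described above makes the standard roots easy to locate by bisection on each bounded interval. The following short Python sketch does this for an illustrative (finite) choice of the parameters \(b_{n}\) and \(k_{n}\); the values below are not taken from the paper.

```python
import numpy as np

b = np.array([1.0, 0.5, 0.25])     # illustrative weights b_n
k = np.array([0.0, 2.0, 5.0])      # illustrative poles k_n, with d > 0

def P_H(x):
    return x + np.sum(4 * b / (x - k))

def standard_root(a, c, tol=1e-12):
    # P_H -> +infinity at a^+ and -> -infinity at c^-, so a root at which P_H
    # changes sign from + to - (a standard root) lies in (a, c)
    lo, hi = a + 1e-9, c - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if P_H(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(standard_root(k[0], k[1]))   # lambda_1 in I_1 = (0, 2)
print(standard_root(k[1], k[2]))   # lambda_2 in I_2 = (2, 5)
```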
A basic part of our analysis is to decipher what other zeroes might exist. In Section 3, we uncover the properties of \(P_{\mathbb{H}}\), which will allow us to determine exactly the possible additional roots, apart from the standard roots \(\lambda_{m}\) mentioned above. For example, if we consider the case when only finitely many \(b_{n}\)'s are non-zero, then \(P_{\mathbb{H}}\) is a rational function and thus its roots are determined by a polynomial of finite degree. Hence, with the use of the fundamental theorem of algebra, we can distinguish between four cases for the roots, namely one complex root (and its conjugate), or a multiple real root of degree 2 or 3, or distinct real roots; see [16] for details. In the case of infinitely many driving functions, the corresponding auxiliary function \(P_{\mathbb{H}}\) is no longer rational and hence the aforementioned analysis no longer applies. Therefore, we come up with different strategies and we use complex analytic methods to deduce our results. It is our intention, at first, to prove that the same four cases are the only possibilities, as they appear in the proposition below.
**Proposition 2**.: \((1)\) _If there exists a complex root \(\beta\in\mathbb{H}\) of \(P_{\mathbb{H}}\), then \(\beta,\bar{\beta}\) are the unique roots in \(\mathbb{C}\backslash\mathbb{R}\), they are simple and each bounded interval \(I_{j}\) (as described in (1.3)) has exactly one root of \(P_{\mathbb{H}}\), which is the standard root \(\lambda_{j}\). If there exist unbounded intervals, they contain no real roots of \(P_{\mathbb{H}}\)._
\((2)\) _If some interval \(I_{j}\) contains distinct simple real roots of \(P_{\mathbb{H}}\), then \(I_{j}\) has exactly three distinct roots, two of which are standard, if it is bounded, or two distinct roots, one of which is standard, if it is unbounded. Each other interval \(I_{k}\) has exactly one root, which is the standard, if \(I_{k}\) is bounded and none if \(I_{k}\) is unbounded, and \(P_{\mathbb{H}}\) has no complex roots._
\((3)\) _If there exists a multiple real root \(\rho_{0}\) of \(P_{\mathbb{H}}\), lying either in a bounded or in an unbounded interval (assuming that such an unbounded interval exists), then \(P_{\mathbb{H}}\) has only real roots and we have the following cases:_
* _either_ \(\rho_{0}\) _is a double root and each bounded interval_ \(I_{j}\) _has exactly one simple root, the standard root_ \(\lambda_{j}\)_, while if there exist unbounded intervals, they contain no roots (except possibly_ \(\rho_{0}\)_),_
* _or_ \(\rho_{0}\) _is a triple root which can only lie in some bounded interval_ \(I_{m}\) _and coincide with the standard root_ \(\lambda_{m}\)_. Each other bounded interval_ \(I_{j}\) _has exactly one simple root, the standard root_ \(\lambda_{j}\)_, while if there exist unbounded intervals, they contain no roots._
We summarize by presenting the main result of this work. But before that, let us point out that it is useful to consider the inverse functions
\(f(\cdot,t):=g^{-1}(\cdot,t):\mathbb{H}\stackrel{{ onto}}{{\longrightarrow}}H_{t}\). It is then direct to see, by equation (1.4), that \(f\) satisfies the PDE in \(\mathbb{H}\times[0,1)\),
\[\frac{\partial f}{\partial t}(z,t)=-f^{\prime}(z,t)\sum_{n=1}^{+\infty}\frac{2 b_{n}}{z-k_{n}\sqrt{1-t}}, \tag{1.5}\]
with initial value \(f(z,0)=z\). We refer to the solution of the preceding equation as the _chordal Loewner flow_. Having established all possibilities for the roots of \(P_{\mathbb{H}}\) in Section 3, we then proceed to Section 4, where we explicitly solve equation (1.5) and visualize the geometry of the flow. In addition, we are going to describe its asymptotic behavior as \(t\to 1^{-}\). This behavior has been studied extensively in the past. However, in this article, we provide a different approach to the problem. More specifically, we make use of a conformal invariant, the _harmonic measure_. In fact, harmonic measure and its conformally invariant nature prove to be a powerful tool for studying how a trajectory behaves asymptotically.
We are, now, ready to state our main result.
**Theorem 3**.: _Consider a summable sequence of positive numbers \((b_{n})_{n\geq 1}\) and a sequence of distinct real points \((k_{n})_{n\geq 1}\), satisfying condition (1.3). Then, the initial value problem (1.5) admits a unique solution in \(\mathbb{H}\times[0,1)\) and we distinguish the following cases:_
1. _If_ \(P_{\mathbb{H}}\) _has a complex root_ \(\beta\in\mathbb{H}\)_, then the Loewner flow is of the form_ \[f(z,t)=h^{-1}\left((1-t)^{\alpha e^{-i\psi}}h\left((1-t)^{-\frac{1}{2}}z\right) \right),\] _where_ \(h\) _maps the upper half plane onto the complement of infinitely many logarithmic spirals of angle_ \(-\psi\)_, where_ \(\alpha\) _and_ \(\psi\) _depend on the sequences_ \((k_{n})_{n\geq 1}\) _and_ \((b_{n})_{n\geq 1}\)_._
2. _If_ \(P_{\mathbb{H}}\) _has three distinct real roots, in some interval described in (_1.3_), then the Loewner flow is of the form_ \[f(z,t)=h^{-1}\left((1-t)^{\alpha}h\left((1-t)^{-\frac{1}{2}}z\right)\right),\] _where_ \(h\) _is a Schwarz-Christoffel map of the upper half plane, that maps_ \(\mathbb{H}\) _onto_ \(\mathbb{H}\) _minus infinitely many line segments emanating from the origin._
3. _If_ \(P_{\mathbb{H}}\) _has a multiple root, either double or triple, then the Loewner flow is of the form_ \[f(z,t)=h^{-1}\left(\frac{1}{2}\log(1-t)+h\left((1-t)^{-\frac{1}{2}}z\right) \right),\] _where_ \(h\) _is a univalent map of the upper half plane, that maps_ \(\mathbb{H}\) _onto:_ (a) _either a horizontal half-plane, minus infinitely many half-lines parallel to_ \(\mathbb{R}\)_, extending to the point at infinity from the left, if the root is double,_ (b) _or the complement of infinitely many half-lines parallel to_ \(\mathbb{R}\)_, extending to the point at infinity from the left, if the root is triple._
_Furthermore, for all \(n\geq 1\), we have that the trajectories of the driving functions \(\hat{\gamma}^{(n)}:=\{f(k_{n}\sqrt{1-t},t):\ t\in[0,1)\}\), are smooth curves of \(\mathbb{H}\) starting at \(k_{n}\), that spiral about the point \(\beta\) in case \((1)\), intersect \(\mathbb{R}\) at one of the real roots nontangentially in case \((2)\) and intersect \(\mathbb{R}\) at the multiple root tangentially in case \((3a)\) or orthogonally in case \((3b)\)._
The structure of the article is as follows: In Section 2, we collect the preliminary results that will be needed during the course of the proofs. Then, in Section 3, we will proceed to a complete study of the function \(P_{\mathbb{H}}\) and its nature by examining its roots. Finally, in Section 4, we shall state and prove a series of lemmas and propositions, whose combination will lead to Theorem 3.
## 2. Basic tools and preliminaries
### Theory of conformal mappings
We begin by presenting some basic definitions about spirallike domains that will be necessary later. A _logarithmic spiral of angle_ \(\psi\in(-\frac{\pi}{2},\frac{\pi}{2})\) in the complex plane, that joins the origin with infinity and passes through some point \(w_{0}\neq 0\), is defined as the curve with parameterization \(S:w=w_{0}\mathrm{exp}(-e^{i\psi}t)\), \(-\infty\leq t\leq+\infty\).
**Definition 4**.: (1) A simply connected domain \(D\), that contains the origin, is said to be \(\psi\)_-spirallike (with respect to \(0\))_, if for any point \(w_{0}\in D\), the logarithmic spiral \(S:w=w_{0}\exp(-e^{i\psi}t)\), \(0\leq t\leq+\infty\), is contained in \(D\).
(2) A univalent function \(f\in H(\mathbb{H})\), with \(f(\beta)=0\) for some \(\beta\in\mathbb{H}\), is said to be \(\psi\)_-spirallike (with respect to \(\beta\))_, if it maps the upper half plane onto a \(\psi\)-spirallike domain \(D\) (with respect to \(0\)).
For more details on spirallike functions, the interested reader may refer to [3, SS2.7]. Note that \(0\)-spirals are half-lines emanating from the origin and extending to infinity. We refer to \(0\)-spirallike domains/functions as _starlike_ domains/functions. The following theorem gives an analytic characterization of spirallike mappings.
**Theorem 5**.: _Let \(f\in H(\mathbb{H})\), with \(f^{\prime}(\beta)\neq 0\) and \(f(z)=0\) if and only if \(z=\beta\). Then, \(f\) is \(\psi\)-spirallike, if and only if_
\[\text{Im}\left(e^{-i\psi}\frac{(z-\beta)(z-\bar{\beta})f^{\prime}(z)}{f(z)} \right)>0,\]
_for all \(z\in\mathbb{H}\)._
Proof.: By [3, Theorem 2.19], a function \(g\in H(\mathbb{D})\), with \(g^{\prime}(0)\neq 0\) and \(g(z)=0\) if and only if \(z=0\), is \(\psi\)-spirallike, if and only if
\[\text{Re}\left(e^{-i\psi}\frac{zg^{\prime}(z)}{g(z)}\right)>0,\]
for all \(z\in\mathbb{D}\). Considering the Mobius transform \(T(z)=\frac{z-\beta}{z-\bar{\beta}}:\mathbb{H}\to\mathbb{D}\) and applying the preceding result to the function \(g:=f\circ T^{-1}\), the desired inequality follows.
Aside from that and returning to our setting, as we mentioned in the introduction, when \(b_{n}\) is non-zero for finitely many \(n\geq 1\), \(P_{\mathbb{H}}\) is a rational function. As we
Figure 1. The geometric behaviour of the tip points \(f(k_{n}\sqrt{1-t},t)\) for each case of Theorem 3.
shall see in Section 4, in order to solve PDE (1.5), we need to apply partial fraction decomposition on \(\frac{1}{P_{\mathbb{H}}}\), which can be done since this is a rational function as well. However, the same method is not applicable when \(b_{n}>0\), for all \(n\geq 1\). For this reason, we are going to require the following theorem, which is an application of the residue calculus (see [8, Section II.9] for details).
**Theorem 6**.: _[_8_, Theorem II.2.7]_ _Let \(f\) be an analytic function in \(\mathbb{C}\), whose only singularities are poles in the finite plane, say \((\lambda_{n})_{n\geq 1}\), and let \(G_{n}\) be the principal part of \(f\) at each \(\lambda_{n}\), respectively. Let \((L_{n})_{n\geq 1}\) be a sequence of contours which satisfy the following:_
* _each contour_ \(L_{n}\) _contains finitely many poles,_
* \(0\in L_{n}\subset\text{Int}(L_{n+1})\) _for all_ \(n\geq 1\)_,_
* \(r_{n}:=\text{dist}(0,L_{n})\to+\infty\)_._
_If \(\limsup_{n\to\infty}\int_{L_{n}}|f(\zeta)|\,|d\zeta|<\infty\), then_
\[f(z)=\sum_{n=1}^{+\infty}G_{n}(z)\]
_and the convergence is uniform on compacta._
### Harmonic Measure
During the course of several proofs, we are going to utilize one conformal invariant, the _harmonic measure_. An introductory presentation of its rich theory may be found in [12]. For the purposes of the present article, we will only review some basic facts. Let \(\Omega\subsetneq\mathbb{C}\) be a domain with non-polar boundary. Let \(E\) be a Borel subset of \(\partial\Omega\). Then, the harmonic measure of \(E\) with respect to \(\Omega\) is exactly the solution of the generalized Dirichlet problem for the Laplacian in \(\Omega\) with boundary function equal to \(1\) on \(E\) and to \(0\) on \(\partial\Omega\backslash E\). For the harmonic measure of \(E\) with respect to \(\Omega\) and for \(z\in\Omega\), we use the notation \(\omega(z,E,\Omega)\). It is known that for a fixed \(z\in\Omega\), \(\omega(z,\cdot,\Omega)\) is a Borel probability measure on \(\partial\Omega\).
As we already mentioned, the harmonic measure is conformally invariant. In addition, it has a very useful monotonicity property. More specifically, let \(\Omega_{1}\subset\Omega_{2}\subsetneq\mathbb{C}\) be two domains with non-polar boundaries and let \(E\subset\partial\Omega_{1}\cap\partial\Omega_{2}\). Then,
\[\omega(z,E,\Omega_{1})\leq\omega(z,E,\Omega_{2}),\quad\text{for all }z\in \Omega_{1}.\]
Furthermore, later on we will need some formulas for harmonic measure in certain particular domains (see [12, p.100]). Let \([a,b]\subset\mathbb{R}\) and let \(z\in\mathbb{H}\). Then,
\[\omega(z,[a,b],\mathbb{H})=\frac{1}{\pi}\arg\left(\frac{z-b}{z-a}\right).\]
As a consequence, given any curve \(\gamma:[0,+\infty)\to\mathbb{H}\) satisfying \(\lim_{t\to+\infty}\gamma(t)=\infty\), it can be easily calculated that \(\lim_{t\to+\infty}\omega(\gamma(t),[a,b],\mathbb{H})=0\).
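Both facts are easy to check numerically; the short Python sketch below (not part of the original text) evaluates the formula at \(z=i\) and along the curve \(\gamma(t)=it\) as \(t\to+\infty\).

```python
import numpy as np

def omega_halfplane(z, a, b):
    # omega(z, [a, b], H) = (1/pi) * arg((z - b)/(z - a))
    return np.angle((z - b) / (z - a)) / np.pi

print(omega_halfplane(1j, -1.0, 1.0))                 # 0.5 by symmetry
for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, omega_halfplane(1j * t, -1.0, 1.0))      # decays to 0 as t -> +infinity
```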
Finally, for the angular domain \(U_{\alpha,\beta}:=\{w\in\mathbb{C}:\alpha<\arg w<\beta\}\), where \(0\leq\alpha<\beta<2\pi\) (or \(-\pi\leq\alpha<\beta<\pi\)), and for \(z\in U_{\alpha,\beta}\), we have
\[\omega(z,\{w\in\mathbb{C}:\arg w=\alpha\},U_{\alpha,\beta})=\frac{\beta-\arg z }{\beta-\alpha}=1-\omega(z,\{w\in\mathbb{C}:\arg w=\beta\},U_{\alpha,\beta}).\]
**Remark 7**.: Suppose that \(E\subset\partial\mathbb{D}\), where \(\mathbb{D}\) is the unit disk, is a circular arc with endpoints \(a,b\). Then, we know (see e.g. [2, p.155]) that the level set
\[D_{k}=\left\{z\in\mathbb{D}:\omega(z,E,\mathbb{D})=k\right\},\quad k\in(0,1),\]
is a circular arc (or a diameter in case \(k=\frac{1}{2}\) and \(E\) is a half-circle) inside \(\mathbb{D}\) with endpoints \(a,b\) that intersects \(\partial\mathbb{D}\) with angles \(k\pi\) and \((1-k)\pi\).
**Remark 8**.: An important piece of information with regard to harmonic measure is its probabilistic interpretation. In particular, the quantity \(\omega(z,E,\Omega)\) represents the probability of a Brownian motion starting at \(z\) to exit the domain \(\Omega\) for the
first time passing through \(E\). This thought, combined with the previous remark, justifies intuitively our usage of harmonic measure in the study of angles.
**Remark 9**.: In many cases, instead of Borel sets, we use sets of prime ends for the harmonic measure. This is possible through Caratheodory's Theorem concerning the extension of the Riemann mapping theorem to the boundary (see [11, Chapter 9] for details on prime ends and the boundary behavior of conformal mappings). To be more exact, suppose that \(\Omega\subsetneq\mathbb{C}\) is a simply connected domain with non-polar boundary and \(f:\mathbb{D}\to\Omega\) is a corresponding Riemann mapping. Suppose that \(E\subset\partial\mathbb{D}\) is Borel. Then, this set \(E\) corresponds through \(f\) to a set of prime ends \(f(E)\) of \(\Omega\) (if \(\Omega\) is not a Jordan domain, then there might not be an one-to-one correspondence between \(f(E)\) and points of \(\partial\Omega\) that are limit points of \(\lim\limits_{z\to\zeta}f(z)\), \(\zeta\in E\)). Therefore, by the conformal invariance of the harmonic measure and Caratheodory's Theorem, we may write
\[\omega(z,E,\mathbb{D})=\omega(f(z),f(E),\Omega),\quad z\in\mathbb{D}.\]
## 3. Properties of the driving function
Given a summable sequence of non-negative numbers \((b_{n})_{n\geqslant 1}\) and a sequence of real points \((k_{n})_{n\geqslant 1}\), the function
\[P_{\mathbb{H}}(z):=z+\sum_{n=1}^{+\infty}\frac{4b_{n}}{z-k_{n}}\]
is quickly verified to converge locally uniformly in \(\mathbb{C}\backslash\overline{(k_{n})_{n\geqslant 1}}\). As we mentioned previously, the key role with regard to the geometry of the slits is played by the roots of \(P_{\mathbb{H}}\). If the non-zero terms of the sequence \((b_{n})_{n\geqslant 1}\) are finitely many, then \(P_{\mathbb{H}}\) is a rational function. Therefore, we are able to use the fundamental theorem of algebra in order to count its roots and then distinguish the cases when it has complex roots or only real roots. But since we do not have this theorem at our disposal in this case, we need to come up with different techniques. We start with the following lemma.
**Lemma 10**.: _Given the parameters above we have the following expressions:_
\((1)\) _If \(F(z,w):=\frac{P_{\mathbb{H}}(z)}{(z-w)(z-\bar{w})}\), then_
\[F(z,w)=\left(\frac{P_{\mathbb{H}}(w)}{z-w}-\frac{P_{\mathbb{H}}(\bar{w})}{z- \bar{w}}\right)\frac{1}{w-\bar{w}}+\sum_{n=1}^{+\infty}\frac{4b_{n}}{(z-k_{n} )|w-k_{n}|^{2}},\]
_for all \(z,w\in\mathbb{C}\backslash(k_{n})_{n\geqslant 1}\) with \(z\neq w\)._
\((2)\) _If \(H(z,\lambda_{1},\lambda_{2}):=\frac{P_{\mathbb{H}}(z)}{(z-\lambda_{1})(z- \lambda_{2})}\), then_
\[H(z,\lambda_{1},\lambda_{2})=\left(\frac{P_{\mathbb{H}}(\lambda_{1})}{z- \lambda_{1}}-\frac{P_{\mathbb{H}}(\lambda_{2})}{z-\lambda_{2}}\right)\frac{1} {\lambda_{1}-\lambda_{2}}+\sum_{n=1}^{+\infty}\frac{4b_{n}}{(z-k_{n})(\lambda_ {1}-k_{n})(\lambda_{2}-k_{n})},\]
_for all \(z,\lambda_{1},\lambda_{2}\in\mathbb{C}\backslash(k_{n})_{n\geqslant 1}\) with \(z\neq\lambda_{1},\lambda_{2}\) and \(\lambda_{1}\neq\lambda_{2}\)._
\((3)\) _If \(G(z,\lambda):=\frac{P_{\mathbb{H}}(z)}{(z-\lambda)^{2}}\), then_
\[G(z,\lambda)=\frac{P_{\mathbb{H}}^{\prime}(\lambda)}{z-\lambda}+\frac{P_{ \mathbb{H}}(\lambda)}{(z-\lambda)^{2}}+\sum_{n=1}^{+\infty}\frac{4b_{n}}{(z-k_ {n})(\lambda-k_{n})^{2}},\]
_for all \(z,\lambda\in\mathbb{C}\backslash(k_{n})_{n\geqslant 1}\) with \(z\neq\lambda\)._
Proof.: We commence by proving relation (2). By executing consecutive partial fraction decompositions, we have that
\[H(z,\lambda_{1},\lambda_{2})=\frac{z}{(z-\lambda_{1})(z-\lambda_{2})}+\sum_{n=1}^{+\infty}\frac{\frac{4b_{n}}{z-k_{n}}}{(z-\lambda_{1})(z-\lambda_{2})}\] \[=\frac{1}{\lambda_{1}-\lambda_{2}}\left(\frac{\lambda_{1}}{z-\lambda_{1}}-\frac{\lambda_{2}}{z-\lambda_{2}}+\sum_{n=1}^{+\infty}\frac{4b_{n}}{(z-k_{n})(z-\lambda_{1})}-\sum_{n=1}^{+\infty}\frac{4b_{n}}{(z-k_{n})(z-\lambda_{2})}\right)\] \[=\frac{1}{\lambda_{1}-\lambda_{2}}\left[\frac{\lambda_{1}}{z-\lambda_{1}}-\frac{\lambda_{2}}{z-\lambda_{2}}+\sum_{n=1}^{+\infty}\left(\frac{\frac{4b_{n}}{\lambda_{1}-k_{n}}}{z-\lambda_{1}}-\frac{\frac{4b_{n}}{\lambda_{1}-k_{n}}}{z-k_{n}}\right)-\sum_{n=1}^{+\infty}\left(\frac{\frac{4b_{n}}{\lambda_{2}-k_{n}}}{z-\lambda_{2}}-\frac{\frac{4b_{n}}{\lambda_{2}-k_{n}}}{z-k_{n}}\right)\right]\] \[=\frac{1}{\lambda_{1}-\lambda_{2}}\left(\frac{\lambda_{1}+\sum_{n=1}^{+\infty}\frac{4b_{n}}{\lambda_{1}-k_{n}}}{z-\lambda_{1}}-\frac{\lambda_{2}+\sum_{n=1}^{+\infty}\frac{4b_{n}}{\lambda_{2}-k_{n}}}{z-\lambda_{2}}\right)+\frac{1}{\lambda_{1}-\lambda_{2}}\sum_{n=1}^{+\infty}\frac{\frac{4b_{n}}{\lambda_{2}-k_{n}}-\frac{4b_{n}}{\lambda_{1}-k_{n}}}{z-k_{n}}\]
and relation (2) follows. Notice that \(F(z,w)=H(z,w,\bar{w})\) and that \(G(z,\lambda)\) is obtained from \(H(z,\lambda_{1},\lambda_{2})\) by letting \(\lambda_{2}\to\lambda_{1}=\lambda\). Then, relations (1) and (3) follow.
**Definition 11**.: An interval of the form \(I_{n}=(k_{n},k_{n^{\prime}})\) that does not contain any point of the sequence \((k_{n})_{n\geqslant 1}\) is called _bounded interval_ of \(P_{\mathbb{H}}\). If there exists an interval of the form \(I_{-}=(-\infty,k_{n^{\prime}})\) or \(I_{+}=(k_{n^{\prime}},+\infty)\) that does not contain any point of the sequence \((k_{n})_{n\geqslant 1}\), then it is called _left or right unbounded interval_ of \(P_{\mathbb{H}}\), respectively.
The preceding lemma is important, as it reveals the mechanism according to which \(P_{\mathbb{H}}\) can have a unique complex root, or a unique multiple real root, or a unique interval of \(P_{\mathbb{H}}\) (bounded or unbounded) that contains three distinct real roots. Indeed, as we shall see in a subsequent corollary, the left summands in relations \((1)-(3)\) above vanish when \(w,\lambda,\lambda_{j}\) are roots of \(P_{\mathbb{H}}\), and as a result the imaginary parts of \(F,G\) and \(H\) are negative in the upper half plane and positive in the lower half plane, allowing us to deduce uniqueness.
**Lemma 12**.: _A bounded interval of \(P_{\mathbb{H}}\) contains either one or three roots of \(P_{\mathbb{H}}\), counting multiplicity. An unbounded interval of \(P_{\mathbb{H}}\) contains either none or two roots of \(P_{\mathbb{H}}\), counting multiplicity._
Proof.: In a bounded interval of \(P_{\mathbb{H}}\), say \((k_{n},k_{n^{\prime}})\), we observe that \(\lim_{\mathbb{R}\ni x\to k_{n}^{+}}P_{\mathbb{H}}(x)=+\infty\) and \(\lim_{\mathbb{R}\ni x\to k_{n^{\prime}}^{-}}P_{\mathbb{H}}(x)=-\infty\). Thus, there exists some \(\lambda_{n}\in(k_{n},k_{n^{\prime}})\), root of \(P_{\mathbb{H}}\). Because the third derivative of \(P_{\mathbb{H}}\) is always negative in \(\mathbb{R}\), by Rolle's theorem, we can have, counting multiplicity, up to three real roots in \((k_{n},k_{n^{\prime}})\). In particular, if \(P_{\mathbb{H}}\) has exactly two distinct roots, then one of them is necessarily a double root.
Besides, in case there exists a left unbounded interval, then we may compute that \(\lim_{x\to-\infty}P_{\mathbb{H}}(x)=-\infty\), whereas if there exists a right unbounded interval, then \(\lim_{x\to+\infty}P_{\mathbb{H}}(x)=+\infty\). Using similar arguments as above, we may deduce the desired result.
As a direct corollary we may now prove Proposition 2.
Proof of Proposition 2.: (1) Assume that \(\beta\in\mathbb{H}\) is a complex root of \(P_{\mathbb{H}}\). Since the parameters \(b_{n}\) and \(k_{n}\) are real, \(\bar{\beta}\) is also a root. By Lemma 10, the imaginary part of \(F(z,\beta)\) is negative for all \(z\in\mathbb{H}\) and positive for all \(z\in-\mathbb{H}\). This implies that \(\beta,\bar{\beta}\) are simple roots of \(P_{\mathbb{H}}\) and they are the unique non-real roots. Finally, we have that \(F^{\prime}(x,\beta)<0\), for all \(x\in\mathbb{R}\backslash(k_{n})_{n\geqslant 1}\), which means that \(F(\cdot,\beta)\) is decreasing in each interval of \(P_{\mathbb{H}}\) and the result follows.
(2) Assume that there exist two distinct roots \(\lambda_{1},\lambda_{2}\) in some interval of \(P_{\mathbb{H}}\). Again, we have that \(\operatorname{Im}(H(z,\lambda_{1},\lambda_{2}))\neq 0\), for all \(z\in\mathbb{C}\backslash\mathbb{R}\). Hence, \(H\) and by extension \(P_{\mathbb{H}}\) cannot have non-real roots. Since \(\lambda_{1},\lambda_{2}\) lie in the same interval, we
have that \((\lambda_{1}-k_{n})(\lambda_{2}-k_{n})>0\), for all \(n\in\mathbb{N}\). Therefore, \(H^{\prime}(x,\lambda_{1},\lambda_{2})<0\), for all \(x\in\mathbb{R}\backslash(k_{n})_{n\geqslant 1}\) and this completes the proof.
(3) By Lemma 10, the imaginary part of \(G(z,\rho_{0})\) is negative for all \(z\in\mathbb{H}\) and positive for all \(z\in-\mathbb{H}\). Therefore, \(G\) and \(P_{\mathbb{H}}\) have no roots in \(\mathbb{C}\backslash\mathbb{R}\). As before, we have that \(G^{\prime}(x,\rho_{0})<0\) for all \(x\in\mathbb{R}\backslash(k_{n})_{n\geqslant 1}\), which means that \(G(\cdot,\rho)\) is decreasing in each interval of \(P_{\mathbb{H}}\). This monotony implies the desired result.
We recall at this point that by condition (1.3), the terms of \((k_{n})_{n\geqslant 1}\) are chosen in such a way that \(d:=\inf_{n\neq m}|k_{n}-k_{m}|>0\). This allows for the very useful property that for any \(\epsilon\leqslant\frac{d}{4}\), \(P_{\mathbb{H}}\) is convergent uniformly in the domain \(\mathbb{C}\backslash\bigcup_{n\geqslant 1}D(k_{n},\epsilon)\). Under this assumption, we can prove the following result.
**Proposition 13**.: _Consider a sequence of non-negative numbers \((b_{n})_{n\geqslant 1}\) and a sequence of real points \((k_{n})_{n\geqslant 1}\) satisfying condition (1.3) and denote by \((\lambda_{n})_{n\geqslant 1}\) the standard roots of Proposition 2. We distinguish the following cases in accordance with Proposition 2:_
(1) _Let \(\beta\in\mathbb{H}\) be a complex root of \(P_{\mathbb{H}}\) (along with its conjugate \(\bar{\beta}\)) and define \(\psi:=\mathrm{Arg}(P_{\mathbb{H}}^{\prime}(\beta))\). Then,_
\[\frac{1}{P_{\mathbb{H}}(z)}=\frac{\frac{1}{P_{\mathbb{H}}^{\prime}(\beta)}}{z-\beta}+\frac{\frac{1}{P_{\mathbb{H}}^{\prime}(\bar{\beta})}}{z-\bar{\beta}}+\sum_{n=1}^{+\infty}\frac{\frac{1}{P_{\mathbb{H}}^{\prime}(\lambda_{n})}}{z-\lambda_{n}}. \tag{3.1}\]
_Furthermore, we have that \(\psi\in(-\frac{\pi}{2},\frac{\pi}{2})\) and_
\[\sum_{n=1}^{+\infty}\frac{1}{P_{\mathbb{H}}^{\prime}(\lambda_{n})}+\frac{2 \cos(\psi)}{|P_{\mathbb{H}}^{\prime}(\beta)|}=1.\]
(2) _Assume that \(P_{\mathbb{H}}\) has only distinct real roots, and that some interval of \(P_{\mathbb{H}}\) contains either three of them (if the interval is bounded), say \(\lambda_{j}<\rho_{1}<\rho_{2}\), or two of them (if it is unbounded), say \(\rho_{1},\rho_{2}\). Then,_
\[\frac{1}{P_{\mathbb{H}}(z)}=\frac{\frac{1}{P_{\mathbb{H}}^{\prime}(\rho_{1})}} {z-\rho_{1}}+\frac{\frac{1}{P_{\mathbb{H}}^{\prime}(\rho_{2})}}{z-\rho_{2}}+ \sum_{n=1}^{+\infty}\frac{\frac{1}{P_{\mathbb{H}}^{\prime}(\lambda_{n})}}{z- \lambda_{n}}. \tag{3.2}\]
_Furthermore,_
\[\sum_{n=1}^{+\infty}\frac{1}{P_{\mathbb{H}}^{\prime}(\lambda_{n})}+\frac{1}{ P_{\mathbb{H}}^{\prime}(\rho_{1})}+\frac{1}{P_{\mathbb{H}}^{\prime}(\rho_{2})}=1.\]
(3a) _Let \(\rho_{0}\in\mathbb{R}\) be a real double root of \(P_{\mathbb{H}}\). Then,_
\[\frac{1}{P_{\mathbb{H}}(z)}=\frac{\frac{2}{P_{\mathbb{H}}^{(2)}(\rho_{0})}}{(z -\rho_{0})^{2}}+\frac{-\frac{2P_{\mathbb{H}}^{(3)}(\rho_{0})}{3(P_{\mathbb{H} }^{(2)}(\rho_{0}))^{2}}}{z-\rho_{0}}+\sum_{n=1}^{+\infty}\frac{\frac{1}{P_{ \mathbb{H}}^{\prime}(\lambda_{n})}}{z-\lambda_{n}}. \tag{3.3}\]
_Furthermore,_
\[-\frac{2P_{\mathbb{H}}^{(3)}(\rho_{0})}{3\left(P_{\mathbb{H}}^{(2)}(\rho_{0}) \right)^{2}}+\sum_{n=1}^{+\infty}\frac{1}{P_{\mathbb{H}}^{\prime}(\lambda_{n}) }=1\]
(3b) _Let \(\rho_{0}\in\mathbb{R}\) be a real triple root of \(P_{\mathbb{H}}\). Then, \(\rho_{0}=\lambda_{m}\) for some \(m\in\mathbb{N}\) and_
\[\frac{1}{P_{\mathbb{H}}(z)}=\frac{\frac{6}{P_{\mathbb{H}}^{(3)}(\rho_{0})}}{(z-\rho_{0})^{3}}+\frac{-\frac{3P_{\mathbb{H}}^{(4)}(\rho_{0})}{2\left(P_{\mathbb{H}}^{(3)}(\rho_{0})\right)^{2}}}{(z-\rho_{0})^{2}}+2\frac{\left(\frac{P_{\mathbb{H}}^{(4)}(\rho_{0})}{4!}\right)^{2}-\frac{P_{\mathbb{H}}^{(3)}(\rho_{0})P_{\mathbb{H}}^{(5)}(\rho_{0})}{3!5!}}{\left(\frac{P_{\mathbb{H}}^{(3)}(\rho_{0})}{3!}\right)^{3}(z-\rho_{0})}+\sum_{n\neq m}\frac{\frac{1}{P_{\mathbb{H}}^{\prime}(\lambda_{n})}}{z-\lambda_{n}}. \tag{3.4}\]
_Furthermore,_
\[2\frac{\left(\frac{P_{\mathbb{H}}^{(4)}(\rho_{0})}{4!}\right)^{2}-\frac{P_{ \mathbb{H}}^{(3)}(\rho_{0})P_{\mathbb{H}}^{(5)}(\rho_{0})}{3!5!}}{\left(\frac{P_ {\mathbb{H}}^{(3)}(\rho_{0})}{3!}\right)^{3}}+\sum_{n\neq m}\frac{1}{P_{\mathbb{ H}}^{\prime}(\lambda_{n})}=1.\]
Proof.: We first establish the representation formulas (3.1)-(3.4). Assume, without loss of generality, that the sequence \((k_{n})_{n\geq 1}\) accumulates at both \(+\infty\) and \(-\infty\). So, we can extract a subsequence \((k_{m_{n}^{1}})_{n\geq 1}\) that increases to \(+\infty\). Let \(R_{n}\) be the middle points of the intervals \(I_{m_{n}^{1}}\) (see condition (1.3)) and observe that \((R_{n})_{n\geq 1}\) is increasing to \(+\infty\). We, now, construct a sequence of rectangles \((L_{n})_{n\geq 1}\) in the following manner: first we consider \(L_{n}\) to be the square of center \(0\) with sides parallel to the axes, passing from the point \(R_{n}\). If \(-R_{n}\) happens to be the middle point of some interval of \(P_{\mathbb{H}}\), we leave \(L_{n}\) as is. If not, we transform \(L_{n}\) into a rectangle passing from the points \(R_{n},\pm iR_{n}\) and \(-R_{n}-\epsilon_{n}\) (or \(-R_{n}+\epsilon_{n}\)), where \(\epsilon_{n}<\frac{d}{2}\) is picked in such a way that \(|\zeta-k_{m}|\geq\frac{d}{2}\), for all \(\zeta\in L_{n}\) and every \(m\geq 1\). We then have that \(L_{n}\subset\operatorname{Int}(L_{n+1})\) and the perimeter of \(L_{n}\) is less than \(8R_{n}+2\epsilon_{n}\), for all \(n\geq 1\). We may assume that for each case described above, the additional roots \(\beta,\bar{\beta}\) or \(\rho_{j}\), lie in \(\operatorname{Int}(L_{1})\) and that \(R_{1}>\sigma:=\frac{2}{d}\sum_{n=1}^{+\infty}4b_{n}\). We then have that \(|P_{\mathbb{H}}(\zeta)|=\left|\zeta+\sum_{m=1}^{+\infty}\frac{4b_{m}}{\zeta-k_{m}}\right|\geq R_{n}-\frac{d}{2}-\sigma\), for all \(\zeta\in L_{n}\). This implies that \(\limsup_{n\to+\infty}\int_{L_{n}}\frac{1}{|P_{\mathbb{H}}(\zeta)|}\,|d\zeta|<+\infty\). For every \(n\geq 1\), recall that \(\lambda_{n}\) is a simple root of \(P_{\mathbb{H}}\), hence a simple pole of \(\frac{1}{P_{\mathbb{H}}}\) with residue \(\frac{1}{P_{\mathbb{H}}^{\prime}(\lambda_{n})}\). For the cases (1) and (2), the roots \(\beta,\bar{\beta}\) and \(\rho_{j}\), \(j=1,2\), are also simple poles of \(\frac{1}{P_{\mathbb{H}}}\), while in cases (3a) and (3b), the root \(\rho_{0}\) is a pole of second or third order, respectively. Therefore, writing down the principal part of \(\frac{1}{P_{\mathbb{H}}}\) around each pole, we apply Theorem 6 and we conclude relations (3.1)-(3.4). The convergence is locally uniform.
Next, we shall establish the second part of (1). Using Lemma 10, we observe that \(\operatorname{Im}F(\beta,\beta)=\operatorname{Im}\frac{|P_{\mathbb{H}}^{\prime}(\beta)|e^{i\psi}}{2i\operatorname{Im}\beta}<0\) and hence \(\cos(\psi)>0\). To prove the last statement, we have that
\[\frac{iy}{P_{\mathbb{H}}(iy)}=\frac{1}{P_{\mathbb{H}}^{\prime}(\beta)}\frac{ iy}{iy-\beta}+\frac{1}{P_{\mathbb{H}}^{\prime}(\bar{\beta})}\frac{iy}{iy-\bar{ \beta}}+\sum_{n=1}^{+\infty}\frac{1}{P_{\mathbb{H}}^{\prime}(\lambda_{n})}\frac {iy}{iy-\lambda_{n}},\]
for all \(y>0\). At first, we need to show that the sequence \((1/P_{\mathbb{H}}^{\prime}(\lambda_{n}))_{n\geq 1}\) is summable. Taking the real parts in the preceding relation, we have that

\[\sum_{n=1}^{+\infty}\frac{1}{|P_{\mathbb{H}}^{\prime}(\lambda_{n})|}\frac{y^{2}}{y^{2}+\lambda_{n}^{2}}=\operatorname{Re}\left(\frac{1}{P_{\mathbb{H}}^{\prime}(\beta)}\frac{iy}{iy-\beta}+\frac{1}{P_{\mathbb{H}}^{\prime}(\bar{\beta})}\frac{iy}{iy-\bar{\beta}}-\frac{iy}{P_{\mathbb{H}}(iy)}\right)=:\Phi(y)>0\]

for all \(y>0\). Aiming for a contradiction, we assume that \(\sum_{n=1}^{+\infty}\frac{1}{|P_{\mathbb{H}}^{\prime}(\lambda_{n})|}=+\infty\) and consider some natural \(N\geq 1\), such that its \(N\)-th partial sum is larger than \(M:=2\max_{y\geq 0}\Phi(y)<\infty\). Then, we choose \(y_{0}>0\), so that \(\frac{y_{0}^{2}}{y_{0}^{2}+\lambda_{n}^{2}}\geq\frac{1}{2}\), for all \(n=1,\ldots,N\). As a result,
\[\Phi(y_{0})\geq\sum_{n=1}^{N}\frac{1}{|P_{\mathbb{H}}^{\prime}(\lambda_{n})|} \frac{y_{0}^{2}}{y_{0}^{2}+\lambda_{n}^{2}}\geq\frac{1}{2}\sum_{n=1}^{N}\frac{1}{ |P_{\mathbb{H}}^{\prime}(\lambda_{n})|}>\frac{M}{2}\geq\Phi(y_{0}).\]
This contradiction shows that the series is convergent, and finally, letting \(y\to+\infty\), by the uniform convergence of the series, we deduce the second statement of (1). The other cases follow similarly, so we omit their proof.
In the last proposition, we made four different assumptions regarding the roots of \(P_{\mathbb{H}}\). Which of these cases occurs depends, of course, on the choice of the sequences \((b_{n})_{n\geqslant 1}\) and \((k_{n})_{n\geqslant 1}\). The following proposition states that for every such choice one of these four cases does occur and that they are mutually exclusive; thus, exactly one of the four holds.
**Proposition 14**.: _Given a sequence of real points \((k_{n})_{n\geqslant 1}\) and a summable sequence of positive numbers \((b_{n})_{n\geqslant 1}\), then only one of the possible cases of Proposition 2 can occur. Therefore, either \(P_{\mathbb{H}}\) has a complex root or one of the intervals of \(P_{\mathbb{H}}\) has three roots counting multiplicity, if the interval is bounded, or two roots counting multiplicity if it is unbounded._
Proof.: Assume that \(P_{\mathbb{H}}\) has no complex roots. We will prove that there exists an interval (bounded or unbounded) that contains three or two roots counting multiplicity. Assume on the contrary that this is not true and hence each bounded interval \(I_{n}\) contains only the standard root \(\lambda_{n}\) and each unbounded (if it exists) contains no roots. Then, as in the proof of Proposition 13, we have that \(\frac{1}{P_{\mathbb{H}}(z)}=\sum_{n=1}^{+\infty}\frac{1}{P_{\mathbb{H}}^{\prime }(\lambda_{n})}\frac{1}{z-\lambda_{n}}\), which implies that
\[1=\lim_{y\to+\infty}\frac{iy}{P_{\mathbb{H}}(iy)}=\sum_{n=1}^{+\infty}\frac{1 }{P_{\mathbb{H}}^{\prime}(\lambda_{n})}<0,\]
a contradiction. Therefore, there exists some interval that contains more than one root and by Proposition 2, this is the unique interval that will contain three roots, if it is bounded, or two if it is unbounded, counting multiplicity.
On the other hand, if \(P_{\mathbb{H}}\) has a complex root, then by Proposition 2, this root along with its conjugate are the sole complex roots and each bounded interval of \(P_{\mathbb{H}}\) has exactly one real root, whereas if there exists an unbounded one, it has no real roots.
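The dichotomy of Propositions 13 and 14 can also be observed numerically on finite truncations \(P_{N}(z)=z+\sum_{n=1}^{N}4b_{n}/(z-k_{n})\), for which the partial fraction expansion is exact and the residues of \(1/P_{N}\) sum to \(1\). The Python sketch below is an illustration only; the two parameter sets are assumptions, chosen so that the first produces a complex conjugate pair and the second only real roots (the script simply reports what it finds).

```python
# Illustration of Propositions 13-14 on a finite truncation
# P_N(z) = z + sum_{n=1}^N 4 b_n / (z - k_n): the residues of 1/P_N sum to 1,
# and either a complex conjugate pair appears or one interval carries the
# extra real root(s).  All parameter choices are illustrative assumptions.
import numpy as np

def classify(b, k):
    b, k = np.asarray(b, float), np.asarray(k, float)
    # numerator polynomial of P_N after clearing denominators (degree N+1)
    num = np.polymul([1.0, 0.0], np.poly(k))          # z * prod(z - k_n)
    for n in range(len(k)):
        num = np.polyadd(num, 4.0 * b[n] * np.poly(np.delete(k, n)))
    roots = np.roots(num)

    dP = lambda z: 1.0 - np.sum(4.0 * b / (z - k) ** 2)   # derivative P_N'
    print("roots:", np.round(roots, 4))
    print("sum of residues 1/P_N'(root):", np.round(sum(1.0 / dP(r) for r in roots), 6))

    complex_pair = [r for r in roots if abs(r.imag) > 1e-8]
    if complex_pair:
        print("case (1): complex conjugate pair", np.round(complex_pair, 4), "\n")
    else:
        print("all roots real: one interval carries the extra root(s)\n")

classify(b=[1.0, 1e-3, 1e-3], k=[0.0, 10.0, 20.0])   # complex pair near +/- 2i
classify(b=[1.0, 1e-3, 1e-3], k=[5.0, 15.0, 25.0])   # four real roots
```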
## 4. The chordal Loewner flow
Recall, now, that our purpose is to solve PDE (1.5) and describe its solutions geometrically. For this, we consider the corresponding ODE
\[\frac{dw}{dt}(z,t)=\sum_{n=1}^{+\infty}\frac{2b_{n}}{w(z,t)-k_{n}\sqrt{1-t}} \tag{4.1}\]
with initial value \(w(z,0)=z\). This becomes, using the transform \(v=(1-t)^{-\frac{1}{2}}w\),
\[\frac{dv}{dt}(z,t)=\frac{1}{2(1-t)}\left(v(z,t)+\sum_{n=1}^{+\infty}\frac{4b_ {n}}{v(z,t)-k_{n}}\right)=\frac{P_{\mathbb{H}}(v(z,t))}{2(1-t)}. \tag{4.2}\]
This is a separable differential equation and therefore, the initial value problem (1.5) can be solved by integrating the equation
\[\frac{1}{P_{\mathbb{H}}(v)}dv=\frac{1}{2(1-t)}dt. \tag{4.3}\]
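For a finite truncation of \(P_{\mathbb{H}}\), equation (4.2) can also be integrated numerically; this is only an illustration of the flow and plays no role in what follows. In the sketch below the data \((b_{n}),(k_{n})\), the starting point and the time horizon are assumptions, chosen so that the trajectory stays well inside the upper half-plane over the integration interval, and the integration uses a plain fourth-order Runge-Kutta step in complex arithmetic.

```python
# Illustrative sketch: integrate dv/dt = P_N(v) / (2(1-t)), v(z,0) = z, for a
# finite truncation P_N of P_H.  All concrete data below are illustrative
# assumptions; the chosen starting point keeps the trajectory inside H here.
import numpy as np

b = np.array([1.0, 1e-3, 1e-3])
k = np.array([0.0, 10.0, 20.0])

P   = lambda v: v + np.sum(4.0 * b / (v - k))
rhs = lambda t, v: P(v) / (2.0 * (1.0 - t))

def flow(z, t_final=0.5, steps=2000):
    """Return v(z, t_final) for the truncated version of (4.2)."""
    v, t = complex(z), 0.0
    h = t_final / steps
    for _ in range(steps):
        k1 = rhs(t, v)
        k2 = rhs(t + h / 2, v + h * k1 / 2)
        k3 = rhs(t + h / 2, v + h * k2 / 2)
        k4 = rhs(t + h, v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return v

z = 1.0 + 3.0j
v = flow(z)
print("v(z, 0.5) =", v)                        # remains in the upper half-plane here
print("w(z, 0.5) =", np.sqrt(1 - 0.5) * v)     # undoing the substitution v = (1-t)^(-1/2) w
```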
For this reason, fix some \(z_{0}\in\mathbb{H}\) and consider the function \(\phi(z):=\int_{z_{0}}^{z}\frac{d\zeta}{P_{\mathbb{H}}(\zeta)}\), which is well defined and analytic, since \(\mathbb{H}\) is simply connected. The function \(\phi\) is going to play the role of the primitive of \(\frac{1}{P_{\mathbb{H}}}\). Looking at relations (3.1)-(3.4), the following lemma will be needed later.
**Lemma 15**.: _Let \((A_{n})_{n\geqslant 1}\) be an absolutely summable sequence of complex numbers and let \((\lambda_{n})_{n\geqslant 1}\) be a sequence of real points. Fix some \(z_{0}\in\mathbb{H}\). Then, the function_
\[\psi(z):=\sum_{n=1}^{+\infty}A_{n}\log\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}\]
_is analytic in \(\mathbb{H}\) and continuous in \(\overline{\mathbb{H}}\backslash\overline{(\lambda_{n})_{n\geqslant 1}}\)._
Proof.: It suffices to prove that the series converges uniformly in a compact set \(K\subset\overline{\mathbb{H}}\backslash\overline{(\lambda_{n})_{n\geqslant 1}}\). Let \(R>|z_{0}|\) such that \(K\subset D(0,R)\) and let \(d:=\operatorname{dist}(K,\overline{(\lambda_{n})_{n\geqslant 1}})\). Clearly \(d>0\). For all those \(n\geqslant 1\) such that \(|\lambda_{n}|\leqslant R\) and for \(z\in K\), we have that
\[\frac{d}{|z_{0}|+R}\leqslant\left|\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}} \right|\leqslant\frac{|z|+|\lambda_{n}|}{|z_{0}-\lambda_{n}|}\leqslant\frac{2R }{\operatorname{Im}z_{0}}.\]
Thus, there exists some \(M_{1}(K)>0\) such that \(\left|\log\left|\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}\right|\right|\leqslant M _{1}(K)\). On the other hand, for all those \(n\geqslant 1\) such that \(|\lambda_{n}|>R\), we have that
\[\left|\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}\right|=\left|\frac{z-z_{0}+z_{0 }-\lambda_{n}}{z_{0}-\lambda_{n}}\right|\leqslant 1+\frac{R+|z_{0}|}{R-|z_{0}|}= \frac{2R}{R-|z_{0}|}.\]
Now, if \(R<|\lambda_{n}|\leqslant 2R\), then \(\left|\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}\right|\geqslant\frac{d}{|z_{0}| +2R}\) whereas if \(|\lambda_{n}|>2R\), we have that \(\left|\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}\right|\geqslant\frac{|\lambda_ {n}|-R}{|\lambda_{n}|+R}>\frac{1}{3}\). In total, we may find some \(M_{2}(K)>0\), such that \(\left|\log\left|\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}\right|\right|\leqslant M _{2}(K)\). Letting \(M>\max\{M_{1}(K),M_{2}(K)\}\), we have that
\[\left|\sum_{n=1}^{+\infty}A_{n}\log\frac{z-\lambda_{n}}{z_{0}- \lambda_{n}}\right| \leqslant\sum_{n=1}^{+\infty}|A_{n}|\left|\log\left|\frac{z- \lambda_{n}}{z_{0}-\lambda_{n}}\right|\right|+\sum_{n=1}^{+\infty}|A_{n}| \left|\arg\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}\right|\] \[\leqslant(M+2\pi)\sum_{n=1}^{+\infty}|A_{n}|<+\infty\]
and the desired result follows.
The usefulness of the preceding lemma lies in the fact that depending on the choice of the parameters, the series \(\sum_{n=1}^{+\infty}A_{n}\log(z-\lambda_{n})\) might not be convergent. We overcome this problem by fixing some \(z_{0}\) and using the function \(\psi\) above. In order to study equation (1.5), we distinguish the four possible cases of Proposition 13. According to Proposition 14, all cases can arise and hence we study each case separately in the upcoming subsections. We fix some arbitrary point \(z_{0}\in\mathbb{H}\), according to Lemma 15.
### Spirals
At first, assume that \(P_{\mathbb{H}}\) has a complex root \(\beta\in\mathbb{H}\). Utilizing Proposition 13, equation (4.3) may be written as
\[\left(\frac{1}{v-\beta}+\frac{e^{2i\psi}}{v-\bar{\beta}}-\sum_{n=1}^{+\infty}\frac{\left|\frac{P_{\mathbb{H}}^{\prime}(\beta)}{P_{\mathbb{H}}^{\prime}(\lambda_{n})}\right|e^{i\psi}}{v-\lambda_{n}}\right)dv=\frac{P_{\mathbb{H}}^{\prime}(\beta)}{2(1-t)}dt.\]
| **Roots of \(P_{\mathbb{H}}\)** | **Geometry of the slits** |
| --- | --- |
| \(P_{\mathbb{H}}\) has a complex root \(\beta\in\mathbb{H}\). | \(\gamma_{n}\) spirals about \(\beta\), \(\forall n\geqslant 1\). |
| \(P_{\mathbb{H}}\) has a double root \(\rho_{0}\in\mathbb{R}\). | \(\gamma_{n}\) intersects \(\mathbb{R}\) tangentially at \(\rho_{0}\), \(\forall n\geqslant 1\). |
| \(P_{\mathbb{H}}\) has a triple root \(\rho_{0}\in\mathbb{R}\). | \(\gamma_{n}\) intersects \(\mathbb{R}\) orthogonally at \(\rho_{0}\), \(\forall n\geqslant 1\). |
| \(P_{\mathbb{H}}\) has distinct real roots and \(\exists\rho_{1}\in\mathbb{R}\) satisfying \(P_{\mathbb{H}}^{\prime}(\rho_{1})>0\). | \(\gamma_{n}\) intersects \(\mathbb{R}\) non-tangentially at some \(\rho_{1}\), \(\forall n\geqslant 1\). |

Table 1. All options for the roots of \(P_{\mathbb{H}}\) (left column) and the corresponding behaviour of the slits (right column).
Setting \(\alpha_{n}:=\left|\frac{P_{\mathbb{H}}^{\prime}(\beta)}{P_{\mathbb{H}}^{\prime}(\lambda_{n})}\right|\), it is straightforward by Proposition 13 that the sequence \(\left(-\alpha_{n}e^{i\psi}\right)_{n\geqslant 1}\) is absolutely summable. Hence, through a slight reformulation of Lemma 15, the infinite product \(\prod_{n=1}^{+\infty}(\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}})^{-\alpha_{n}e^{i\psi}}\) is well defined. Therefore, the preceding equation gives the implicit solution \(h(v(z,t))=(1-t)^{-\frac{P_{\mathbb{H}}^{\prime}(\beta)}{2}}h(z)\), where
\[h(z)=e^{P_{\mathbb{H}}^{\prime}(\beta)\phi(z)}=\frac{z-\beta}{z_{0}-\beta}\left(\frac{z-\bar{\beta}}{z_{0}-\bar{\beta}}\right)^{e^{2i\psi}}\prod_{n=1}^{+\infty}\left(\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}\right)^{-\alpha_{n}e^{i\psi}} \tag{4.4}\]
for all \(z\in\mathbb{H}\).
**Proposition 16**.: _The function \(h\) defined by (4.4) is analytic and univalent in the upper half-plane. In particular, \(h\) is a \(\psi\)-spirallike function of \(\mathbb{H}\), with \(\psi\in(-\frac{\pi}{2},\frac{\pi}{2})\)._
Proof.: As before, by Lemma 15, the infinite product converges locally uniformly, therefore analyticity follows directly. In general, if a sequence of analytic functions converges locally uniformly, then so does the sequence of the derivatives. Hence, differentiating \(h\), we interchange limit and derivative and then apply Proposition 13 to get
\[\frac{h^{\prime}(z)}{h(z)}=\frac{1}{z-\beta}+\frac{e^{2i\psi}}{z-\bar{\beta}}+ \sum_{n=1}^{+\infty}\frac{-\alpha_{n}e^{i\psi}}{z-\lambda_{n}}=\frac{P_{ \mathbb{H}}^{\prime}(\beta)}{P_{\mathbb{H}}(z)}\]
for all \(z\in\mathbb{H}\). Using Lemma 10 and keeping in mind that \(\operatorname{Im}(F(z,\beta))<0\) for all \(z\in\mathbb{H}\), we have that
\[\operatorname{Im}\left(e^{-i\psi}\frac{(z-\beta)(z-\bar{\beta})h^{\prime}(z)} {h(z)}\right)=\operatorname{Im}\left(\frac{|P_{\mathbb{H}}^{\prime}(\beta)|}{ F(z,\beta)}\right)>0,\]
for all \(z\in\mathbb{H}\). As a consequence, according to Theorem 5, \(h\) is a \(\psi\)-spirallike (and by extension univalent) function of the upper half-plane.
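The identity \(h^{\prime}/h=P_{\mathbb{H}}^{\prime}(\beta)/P_{\mathbb{H}}\) at the heart of the preceding proof can be checked numerically on a finite truncation. The Python sketch below is an illustration only: the parameters are assumptions chosen so that a complex root \(\beta\) appears, the roots are located numerically, the finite analogue of (4.4) is formed, and its logarithmic derivative is compared against \(P^{\prime}(\beta)/P\) by a central finite difference.

```python
# Illustration: for a truncation P_N with a complex root beta in H, build the
# finite analogue of (4.4) and check numerically that h'/h = P_N'(beta) / P_N.
# All concrete numbers (b_n, k_n, z0, the test point) are illustrative assumptions.
import numpy as np

b  = np.array([1.0, 1e-3, 1e-3])
k  = np.array([0.0, 10.0, 20.0])       # chosen so that a complex pair appears
z0 = 1.0 + 1.0j

P  = lambda z: z + np.sum(4.0 * b / (z - k))
dP = lambda z: 1.0 - np.sum(4.0 * b / (z - k) ** 2)

num = np.polymul([1.0, 0.0], np.poly(k))           # numerator polynomial of P_N
for n in range(len(k)):
    num = np.polyadd(num, 4.0 * b[n] * np.poly(np.delete(k, n)))
roots = np.roots(num)

beta  = next(r for r in roots if r.imag > 1e-8)                        # the root in H
lam   = np.array(sorted(r.real for r in roots if abs(r.imag) < 1e-8))  # standard roots
psi   = np.angle(dP(beta))
alpha = np.array([abs(dP(beta) / dP(l)) for l in lam])

def h(z):
    val  = (z - beta) / (z0 - beta)
    val *= np.exp(np.exp(2j * psi) * np.log((z - beta.conjugate()) / (z0 - beta.conjugate())))
    for a, l in zip(alpha, lam):
        val *= np.exp(-a * np.exp(1j * psi) * np.log((z - l) / (z0 - l)))
    return val

z, eps = 2.0 + 1.5j, 1e-6
print("h'/h (finite difference):", (h(z + eps) - h(z - eps)) / (2 * eps * h(z)))
print("P_N'(beta)/P_N(z)       :", dP(beta) / P(z))    # the two should agree closely
```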
Using the preceding proposition, the univalence of \(h\) shows that
\[v(z,t)=h^{-1}\left((1-t)^{-\frac{P_{\mathbb{H}}^{\prime}(\beta)}{2}}h(z) \right),\]
for all \(z\in\mathbb{H}\). In other words, returning back to \(w=(1-t)^{\frac{1}{2}}v\) and then to \(f=w^{-1}\), we have that the PDE (1.5) is satisfied by the function
\[f(z,t)=h^{-1}\left((1-t)^{\frac{P_{\mathbb{H}}^{\prime}(\beta)}{2}}h\left((1-t )^{-\frac{1}{2}}z\right)\right),\]
for all \(z\in\mathbb{H}\) and \(t\in[0,1)\).
**Remark 17**.: Note at this point that \(f\) is independent of the choice of \(z_{0}\in\mathbb{H}\). Indeed, if we denote by \(\phi_{z_{0}}(z)=\int_{z_{0}}^{z}\frac{1}{P_{\mathbb{H}}(\zeta)}d\zeta\), then we have that \(\phi_{z_{0}}^{\prime}=\phi_{z_{0}^{\prime}}^{\prime}\), for any other choice \(z_{0}^{\prime}\). Hence, \(h_{z_{0}}=ch_{z_{0}^{\prime}}\), for some constant \(c\). However, due to the conjugation formula above, we have that
\[f_{z_{0}}(z,t): =h_{z_{0}}^{-1}\left((1-t)^{\frac{P_{\mathbb{H}}^{\prime}(\beta)} {2}}h_{z_{0}}\left((1-t)^{-\frac{1}{2}}z\right)\right)\] \[=h_{z_{0}^{\prime}}^{-1}\left((1-t)^{\frac{P_{\mathbb{H}}^{\prime }(\beta)}{2}}h_{z_{0}^{\prime}}\left((1-t)^{-\frac{1}{2}}z\right)\right)=:f_{z_{0 }^{\prime}}(z,t)\]
and thus \(f\) is well defined.
**Lemma 18**.: _The function \(h\) defined by (4.4) maps the upper half-plane onto the complement of infinitely many logarithmic spirals joining the tip points \(h(k_{n})\) to the point at infinity. Moreover, \(h(x)\to\infty\), as \(x\to\infty\)._
Proof.: By Proposition 16, it suffices to find the image of the real line under \(h\). By (4.4), using the elementary trigonometric identities \(e^{2i\psi}+1=2\cos(\psi)e^{i\psi}\) and \(e^{2i\psi}-1=2i\sin(\psi)e^{i\psi}\), a series of straightforward computations shows that for each \(m\geq 1\), \(h(x)=h(k_{m})C\exp\left(e^{i\psi}S_{m}(x)-ie^{i\psi}\sum_{n=1}^{+\infty}a_{n} \mathrm{Arg}(\frac{x-\lambda_{n}}{k_{m}-\lambda_{n}})\right)\), where
\[S_{m}(x)=2\log\left|\frac{x-\beta}{k_{m}-\beta}\right|\cos(\psi)+2\mathrm{Arg} \left(\frac{x-\beta}{k_{m}-\beta}\right)\sin(\psi)-\sum_{n=1}^{+\infty}a_{n} \log\left|\frac{x-\lambda_{n}}{k_{m}-\lambda_{n}}\right|\]
and \(C\in\mathbb{C}\) is some absolute constant. Assume, for the sake of simplicity, that \(C=1\) (see Remark 17). Now, let \(\lambda_{m^{*}}\in\mathbb{R}\) be the largest \(\lambda_{j}\) below \(k_{m}\), so that \(k_{m}\in(\lambda_{m^{*}},\lambda_{m})\). Restricting to this interval, we see that \(h(x)=h(k_{m})\exp(e^{i\psi}S_{m}(x))\), which implies the first part of the lemma.
For the second part, again by straightforward calculations, we have that
\[\log|h(x)| =\cos(\psi)\left(2\log|x-\beta|\cos(\psi)-\sum_{n=1}^{+\infty}a_{ n}\log\left|\frac{x-\lambda_{n}}{z_{0}-\lambda_{n}}\right|\right)\] \[+2\cos(\psi)\sin(\psi)\mathrm{Arg}(x-\beta)+\sin(\psi)\sum_{n=1}^ {+\infty}a_{n}\mathrm{Arg}\left(\frac{x-\lambda_{n}}{z_{0}-\lambda_{n}}\right).\]
Recall that the only possible accumulation points of the sequence \((\lambda_{n})_{n\geq 1}\) are \(\pm\infty\). Let us assume, without loss of generality, that it accumulates at both \(+\infty\) and \(-\infty\). We will show that \(|h(x)|\to\infty\), as \(x\to+\infty\). Consider large enough \(x>0\), say \(x>\mathrm{Re}\beta+1\). By the first part of Proposition 13 and the fact that \(\cos(\psi)>0\), we get that
\[\log|h(x)| \approx\log|x-\beta|\left|P_{\mathbb{H}}^{\prime}(\beta)\right|+ \sum_{n=1}^{+\infty}a_{n}\log\left|\frac{x-\beta}{x-\lambda_{n}}(z_{0}- \lambda_{n})\right|\] \[\geq\log|x-\beta|\left|P_{\mathbb{H}}^{\prime}(\beta)\right|+ \sum_{n=1}^{+\infty}a_{n}\log\left|\frac{x-\mathrm{Re}\beta}{x-\lambda_{n}}(z _{0}-\lambda_{n})\right|,\]
where the symbol \(\approx\) denotes comparability. We will prove that the preceding sum is bounded below, and therefore the right-hand side tends to infinity. To see this, we split the sum into those indices such that \(\lambda_{n}>\mathrm{Re}\beta+1\) and the rest. It is easy to see that for those indices, we have that \(\left|\frac{x-\mathrm{Re}\beta}{x-\lambda_{n}}\right|\geq\min\{1,\frac{1}{\lambda_{n}-\mathrm{Re}\beta-1}\}=\frac{1}{\lambda_{n}-\mathrm{Re}\beta-1}\), for large \(n\), since \((\lambda_{n})_{n\geq 1}\) accumulates at \(+\infty\). This implies that
\[\sum_{\lambda_{n}\geq\mathrm{Re}\beta+1}a_{n}\log\left|\frac{x-\mathrm{Re} \beta}{x-\lambda_{n}}(z_{0}-\lambda_{n})\right|\geq\sum_{\lambda_{n}\geq \mathrm{Re}\beta+1}a_{n}\log\left|\frac{z_{0}-\lambda_{n}}{\mathrm{Re}\beta+1- \lambda_{n}}\right|.\]
The case of the indices such that \(\lambda_{n}<0\) is easier, and one can see that \(\log|\frac{x-\mathrm{Re}\beta}{x-\lambda_{n}}|\) is positive. Combining everything together, we deduce that \(|h(x)|\to\infty\), as \(x\to+\infty\); arguing similarly as \(x\) tends to \(-\infty\), the result follows.
By Lemma 18 we see that the image of \(h\) is the complement of infinitely many spirals with tip points \(h(k_{n})\), so that \(h(k_{n})\to\infty\), as in Figure 2. One can also show that there exists a _spirallike sector_ of angle \(\psi\) and amplitude \(\frac{1}{\cos(\psi)}\sum_{n=1}^{+\infty}a_{n}\pi\), which does not contain any of the spirals \(\Gamma(s)=h(k_{n})e^{e^{i\psi}s}\), \(s\in\mathbb{R}\) (see Figure 2). Such a sector is defined as the set \(\mathrm{Spir}[\psi,\alpha,\theta_{0}]=\{e^{e^{i\psi}t+i\theta}:t\in\mathbb{R},\theta\in(\theta_{0}-\alpha,\theta_{0}+\alpha)\}\) (see [1], p. 385). The amplitude of the sector is determined by computing the arguments of the points where the boundary spirals \(S_{1}\), \(S_{2}\) of the sector meet the unit circle. Thus, if \(\Gamma(s_{0,n})\in\partial\mathbb{D}\), where \(s_{0,n}=-\frac{1}{\cos(\psi)}\log|h(k_{n})|\), we let \(\zeta_{1}:=\lim_{k_{n}\to+\infty}\Gamma(s_{0,n})\) and \(\zeta_{2}:=\lim_{k_{n}\to-\infty}\Gamma(s_{0,n})\) be the points \(S_{1}\cap\partial\mathbb{D}\) and \(S_{2}\cap\partial\mathbb{D}\), respectively. It is now a matter of straightforward calculations to determine the points \(\zeta_{j}\in\partial\mathbb{D}\), and thus the amplitude of the sector.

Figure 2. The image of \(h\), when \(P_{\mathbb{H}}\) has a complex root.
### Non-tangential intersections
Assume that \(P_{\mathbb{H}}\) has two distinct real roots \(\rho_{1},\rho_{2}\) in some interval of \(P_{\mathbb{H}}\). Relabeling, if necessary, we assume that \(\lambda_{m}<\rho_{1}<\rho_{2}\) if \(\rho_{1},\rho_{2}\) lie in a bounded interval \(I_{m}\), \(\rho_{1}<\rho_{2}\) if they lie in the left unbounded interval, or \(\rho_{2}<\rho_{1}\) if they lie in the right unbounded interval. This way, we ensure that \(P_{\mathbb{H}}^{\prime}(\rho_{1})>0>P_{\mathbb{H}}^{\prime}(\rho_{2})\). For the rest we will cover the first two cases where \(\rho_{1}<\rho_{2}\). Then, the Mobius transform \(T(z)=\frac{z-\rho_{2}}{z-\rho_{1}}\) maps the upper half-plane onto itself. We shall use this fact shortly. For the case where we have a right unbounded interval, thus \(\rho_{2}<\rho_{1}\), the results of this section follow similarly by considering \(-T\). We know that \(P_{\mathbb{H}}^{\prime}(\lambda_{n})<0\) for all \(n\geq 1\). We then work as follows. With the help of Proposition 13, the ODE (4.3) is written as
\[\left(\frac{-1}{v-\rho_{1}}+\frac{\frac{P_{\mathbb{H}}^{\prime}(\rho_{1})}{|P _{\mathbb{H}}^{\prime}(\rho_{2})|}}{v-\rho_{2}}+\sum_{n=1}^{+\infty}\frac{ \frac{P_{\mathbb{H}}^{\prime}(\rho_{1})}{|P_{\mathbb{H}}^{\prime}(\lambda_{n} )|}}{v-\lambda_{n}}\right)dv=\frac{-P_{\mathbb{H}}^{\prime}(\rho_{1})}{2(1-t) }dt.\]
Hence, considering after integration the function
\[h(z)=\frac{(z-\rho_{2})^{b}}{z-\rho_{1}}\prod_{n=1}^{+\infty}\left(\frac{z- \lambda_{n}}{z_{0}-\lambda_{n}}\right)^{a_{n}} \tag{4.5}\]
for all \(z\in\mathbb{H}\), where \(b:=\frac{P_{\mathbb{H}}^{\prime}(\rho_{1})}{|P_{\mathbb{H}}^{\prime}(\rho_{2})|}>0\) and \(a_{n}:=\frac{P_{\mathbb{H}}^{\prime}(\rho_{1})}{|P_{\mathbb{H}}^{\prime}(\lambda_{n})|}>0\), \(n\geq 1\), we deduce the implicit solution \(h(v(z,t))=(1-t)^{\frac{P_{\mathbb{H}}^{\prime}(\rho_{1})}{2}}h(z)\).
**Proposition 19**.: _The function \(h\) defined by (4.5) is analytic and univalent in the upper half-plane._
Proof.: By Lemma 15, \(h\) is well defined and analytic in the upper half-plane. To prove that it is univalent, at first we observe that \(\frac{h^{\prime}(z)}{h(z)}=\frac{-P_{\mathbb{H}}^{\prime}(\rho_{1})}{P_{\mathbb{H}}(z)}\). Consider, now, the Mobius transform \(T(z)=\frac{z-\rho_{2}}{z-\rho_{1}}:\mathbb{H}\to\mathbb{H}\). Taking into account the elementary identity \((T^{-1}(z)-\rho_{1})(T^{-1}(z)-\rho_{2})=(\rho_{2}-\rho_{1})z(T^{-1})^{\prime}(z)\), we then deduce that
\[\frac{(h\circ T^{-1})^{\prime}(z)}{h\circ T^{-1}(z)}=\frac{P_{\mathbb{H}}^{\prime }(\rho_{1})}{\rho_{1}-\rho_{2}}\frac{(T^{-1}(z)-\rho_{1})(T^{-1}(z)-\rho_{2})}{ zP_{\mathbb{H}}(T^{-1}(z))}=\frac{P_{\mathbb{H}}^{\prime}(\rho_{1})}{\rho_{1}- \rho_{2}}\frac{1}{zH(T^{-1}(z),\rho_{1},\rho_{2})}\]
for all \(z\in\mathbb{H}\), where \(H\) is given by Lemma 10. Next, we shall show that the above quotient has constant sign in the upper half-plane. It suffices to show that \(\operatorname{Im}(T(z)H(z,\rho_{1},\rho_{2}))<0\) for all \(z\in\mathbb{H}\). But we observe, again by Lemma 10, that \(T(z)H(z,\rho_{1},\rho_{2})=G(z,\rho_{1})\). As a result, since \(\rho_{1}\) is a root and since \(P_{\mathbb{H}}^{\prime}(\rho_{1})>0\), we have that \(\operatorname{Im}G(z,\rho_{1})<0\). Thus, the imaginary part of the logarithmic derivative of \(h\circ T^{-1}\) has constant sign, so it is univalent and this concludes the proof.
**Lemma 20**.: _The function \(h\) maps the upper half-plane onto \(\mathbb{H}\backslash\bigcup_{n=1}^{\infty}[0,h(k_{n})]\), where \((0,h(k_{n})]\subset\mathbb{H}\) is a line segment emanating from the origin to the tip point \(h(k_{n})\). Moreover, we have that \(h(k_{n})\to 0\), as \(n\to+\infty\)._
Proof.: First of all, we observe that the sequences \((\lambda_{n})_{n\geqslant 1}\) and \((k_{n})_{n\geqslant 1}\) do not have any accumulation points in \(\mathbb{R}\). So, there exists some \(\epsilon>0\) sufficiently small such that \(\lambda_{n},k_{n}\notin[\rho_{1}-\epsilon,\rho_{1}+\epsilon]\), for all \(n\in\mathbb{N}\). Instead, both \((\lambda_{n})_{n\geqslant 1}\) and \((k_{n})_{n\geqslant 1}\) accumulate at \(-\infty\) and/or \(+\infty\). Therefore, we are interested in the limit \(\lim\limits_{\mathbb{R}\ni x\to\infty}h(x)\).
To calculate it, we consider the function \(g(x)=h(T^{-1}(x))=h(\frac{x\rho_{1}-\rho_{2}}{x-1})\), \(x\in\mathbb{R}\backslash\{1\}\). Obviously, \(\lim\limits_{x\to 1}g(x)=\lim\limits_{x\to\pm\infty}h(x)\). After several simple computations, we find that
\[g(x)=x^{b}\frac{(x-1)^{1-b}}{(\rho_{1}-\rho_{2})^{1-b}}\prod_{n=1}^{+\infty} \left(\frac{\rho_{1}-\lambda_{n}}{z_{0}-\lambda_{n}}\frac{x-T(\lambda_{n})}{x- 1}\right)^{a_{n}}.\]
Observe that the sequence \((T(\lambda_{n}))_{n\geqslant 1}\) is a bounded sequence of real points and because the series \(\sum_{n=1}^{+\infty}a_{n}\log\left|\frac{\rho_{1}-\lambda_{n}}{z_{0}-\lambda_{n}}\right|\) converges by Lemma 15, taking the absolute value, we write

\[\log|g(x)| =(b-1)\log|\rho_{1}-\rho_{2}|+\sum_{n=1}^{+\infty}a_{n}\log\left|\frac{\rho_{1}-\lambda_{n}}{z_{0}-\lambda_{n}}\right|+b\log|x|\] \[+(1-b-\sum_{n=1}^{+\infty}a_{n})\log|x-1|+\sum_{n=1}^{+\infty}a_{n}\log|x-T(\lambda_{n})|\,.\]
Now, since \(T(\lambda_{n})\to 1\), there exists some \(N>1\), such that \(|T(\lambda_{n})-1|<\frac{1}{2}\), for all \(n\geqslant N\). So, for all \(x\in(\frac{1}{2},\frac{3}{2})\), we have that \(\log|x-T(\lambda_{n})|\) is negative for all \(n\geqslant N\). Note that \(1-b-\sum_{n=1}^{+\infty}a_{n}=P_{\mathbb{H}}^{\prime}(\rho_{1})>0\), due to Proposition 13. Gathering everything together, we obtain
\[\log|g(x)| =C+b\log|x|+\sum_{n=1}^{N-1}a_{n}\log|x-T(\lambda_{n})|\] \[+P_{\mathbb{H}}^{\prime}(\rho_{1})\log|x-1|+\sum_{n=N}^{+\infty}a _{n}\log|x-T(\lambda_{n})|\to-\infty,\]
as \(x\to 1\). As a result, \(\lim\limits_{x\to 1}g(x)=0\), which leads to \(\lim\limits_{x\to+\infty}h(x)=\lim\limits_{x\to-\infty}h(x)=0\). This implies that \(\lim_{n\to+\infty}h(k_{n})=0\), as well, and we have the desired result.
On another note, let \(M>0\). Then, there exists \(\delta>0\) such that \(|h(x)|<\delta\), for all \(x\in(-\infty,-M]\cup[M,+\infty)\). In addition, due to compactness and the fact that \(\rho_{1}\) is the only singularity of \(h\) in \(\mathbb{R}\), there also exists some \(\delta^{\prime}\) such that \(|h(x)|<\delta^{\prime}\), for all \(x\in[-M,M]\backslash[\rho_{1}-\epsilon,\rho_{1}+\epsilon]\). Picking \(R=\max\{\delta,\delta^{\prime}\}\), we have that \(|z|<R\), for all \(z\in\bigcup_{n=1}^{+\infty}[0,h(k_{n})]\).
Now, regarding the image of \(h\), analyzing \(g(x)\), we find that
\[\mathrm{Arg}(g(x))=C^{\prime}+b\mathrm{Arg}(x)+P^{\prime}_{\mathbb{H}}(\rho_{1}) \mathrm{Arg}(x-1)+\sum_{n=1}^{+\infty}a_{n}\mathrm{Arg}(x-T(\lambda_{n}))\]
for some constant \(C^{\prime}\). As we saw in Remark 17, we may assume that \(C^{\prime}=0\). Because \(T(x)=\frac{x-\rho_{2}}{x-\rho_{1}}\) and because there exists no \(\lambda_{n}\in(\rho_{1},\rho_{2})\), we have that \(T(\lambda_{n})>0\), for all \(n\geqslant 1\) (\(T\) maps the line segment \((\rho_{1},\rho_{2})\) onto the negative line). Therefore, we can see that for \(x<0\),
\[\mathrm{Arg}(g(x))=b\pi+P^{\prime}_{\mathbb{H}}(\rho_{1})\pi+\sum_{n=1}^{+\infty}a_{n}\pi=\pi P^{\prime}_{\mathbb{H}}(\rho_{1})\left(1-\frac{1}{P^{\prime}_{\mathbb{H}}(\rho_{2})}-\sum_{n=1}^{+\infty}\frac{1}{P^{\prime}_{\mathbb{H}}(\lambda_{n})}\right)=\pi,\]
through Proposition 13. Thus, \(g\) maps the negative line onto itself. To see the image of the positive line, we consider the angles
\[\theta_{1}:=P^{\prime}_{\mathbb{H}}(\rho_{1})\pi+\sum_{n:T(\lambda_{n})>1}a_{ n}\pi\quad\text{and}\quad\theta_{2}:=\sum_{n:T(\lambda_{n})>1}a_{n}\pi.\]
We then deduce, similarly as before, that \(g\) maps \([0,1]\) onto the union of line segments of the form \([0,h(k_{n})]\), for all \(n\geqslant 1\), so that \(T(\lambda_{n})<1\). Assume without loss of generality that \(T(\lambda_{n})<1\) for infinitely many indices. Otherwise, we only have finitely many such segments, so we comment no further. Those segments accumulate to the half-line \(L_{\theta_{1}}:=\{xe^{i\theta_{1}}:x\geqslant 0\}\) and by the first part of the proof, the tip points of the segments accumulate at the origin. Similarly, the image of the half-line \([1,+\infty)\) is the union of the line segments of the form \([0,h(k_{n})]\), for all \(n\geqslant 1\), so that \(T(\lambda_{n})>1\), and the half-line \([0,+\infty)\); recall that \((T(\lambda_{n}))_{n\geqslant 1}\) is bounded, so picking \(n_{0}\geqslant 1\) such that \(T(\lambda_{n_{0}})=\max T(\lambda_{n})\), then \(g([T(\lambda_{n_{0}}),+\infty))=[0,+\infty)\). Again, those segments accumulate to the half-line \(L_{\theta_{2}}:=\{xe^{i\theta_{2}}:x\geqslant 0\}\) with the tip points accumulating at zero (see Figure 3).
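The boundary behaviour described in Lemma 20 is also visible numerically on a finite truncation. In the sketch below (an illustration only), the parameters are assumptions chosen so that all roots of the truncation are real; the two roots sharing an interval are identified by the sign of \(P^{\prime}\), the finite analogue of (4.5) is formed, and \(|h|\) is evaluated at real points, approached from within \(\mathbb{H}\), of increasing modulus, exhibiting the decay \(h(x)\to 0\).

```python
# Illustration of Lemma 20 on a finite truncation whose extra roots rho_1 < rho_2
# are real: build the finite analogue of (4.5) and watch |h(x + i*eps)| decay as
# |x| grows.  All concrete numbers below are illustrative assumptions.
import numpy as np

b  = np.array([1.0, 1e-3, 1e-3])
k  = np.array([5.0, 15.0, 25.0])      # chosen so that all roots are real
z0 = 1.0 + 1.0j

dP = lambda z: 1.0 - np.sum(4.0 * b / (z - k) ** 2)

num = np.polymul([1.0, 0.0], np.poly(k))
for n in range(len(k)):
    num = np.polyadd(num, 4.0 * b[n] * np.poly(np.delete(k, n)))
roots = np.sort(np.roots(num).real)   # all four roots are real here

interval = lambda r: np.searchsorted(np.sort(k), r)   # which interval of P_N holds r
rho1 = next(r for r in roots if dP(r) > 0)            # the root with P' > 0
rest = [r for r in roots if not np.isclose(r, rho1)]
rho2 = next(r for r in rest if interval(r) == interval(rho1))  # same interval as rho1
lam  = [r for r in rest if not np.isclose(r, rho2)]            # the standard roots

bexp = dP(rho1) / abs(dP(rho2))                # exponent b of (4.5)
a    = [dP(rho1) / abs(dP(l)) for l in lam]    # exponents a_n of (4.5)

def h(z):
    val = np.exp(bexp * np.log(z - rho2)) / (z - rho1)
    for an, l in zip(a, lam):
        val *= np.exp(an * np.log((z - l) / (z0 - l)))
    return val

eps = 1e-9
for x in (1e2, 1e4, -1e2, -1e4):
    print(f"|h({x:+.0e} + i*eps)| = {abs(h(x + 1j * eps)):.3e}")   # decays as |x| grows
```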
The Loewner flow in this case is given by the conjugation formula
\[f(z,t)=h^{-1}\left((1-t)^{-\frac{P^{\prime}_{\mathbb{H}}(\rho_{1})}{2}}\,h \left((1-t)^{-\frac{1}{2}}\,z\right)\right) \tag{4.6}\]
for all \(z\in\mathbb{H}\) and \(0\leqslant t<1\). In order to study the geometry of the slits as \(t\to 1^{-}\), we follow the formula above: we extend the tip points \(h(k_{n})\) to infinity by applying the mapping \(z\mapsto(1-t)^{-\frac{P_{\mathbb{H}}^{\prime}(\rho_{1})}{2}}z\), and then we consider the preimage of \(\{xh(k_{n}):x\geq 1\}\) under \(h\). The following proposition shows that the trajectory of \(f(k_{n}\sqrt{1-t},t)\) collides with the real line at the point \(\rho_{1}\in\mathbb{R}\), non-tangentially, as \(t\to 1^{-}\).

Figure 3. The image of \(h\), when \(P_{\mathbb{H}}\) has distinct real roots. There exists an angular sector of amplitude \(P^{\prime}_{\mathbb{H}}(\rho_{1})\pi\) that contains no slits.
**Proposition 21**.: _For each \(n\in\mathbb{N}\), the trace \(\hat{\gamma}_{n}:=\{f(k_{n}\sqrt{1-t},t):t\in[0,1)\}\) is a smooth curve intersecting the real line non-tangentially at the root \(\rho_{1}\)._
Proof.: Fix \(n\in\mathbb{N}\) and assume without loss of generality that \(k_{n}>\rho_{1}\). Consider the curve \(\gamma_{n}:[0,1)\to\mathbb{H}\) with \(\gamma_{n}(t)=f(k_{n}\sqrt{1-t},t)\). Surely \(\lim\limits_{t\to 1}\gamma_{n}(t)=\rho_{1}\). We only need to deal with the angle of the convergence and we are going to do this through the use of harmonic measure. By (4.5), \(h\) maps the upper half-plane \(\mathbb{H}\) conformally onto the simply connected domain \(\Omega:=\mathbb{H}\setminus\bigcup\limits_{j=1}^{+\infty}[0,h(k_{j})]\). In addition, the image of \(\gamma_{n}\) through \(h\) is the ray \(\{re^{i\arg h(k_{n})}:r>|h(k_{n})|\}\) and \(\lim\limits_{t\to 1^{-}}h(\gamma_{n}(t))=\infty\). It is easy to see that \(h\circ\gamma_{n}\) separates the prime ends of \(\Omega\) into two connected components. The first consists of the prime ends corresponding to \((-\infty,0]\), to \(\bigcup\limits_{j\in\mathbb{N}}\{[0,h(k_{j})]:\arg h(k_{j})>\arg h(k_{n})\}\) and the prime ends corresponding to \([0,h(k_{n})]\) defined by crosscuts with arguments larger than \(\arg h(k_{n})\). The other is the complement. We denote them by \(\partial\Omega^{+}\) and \(\partial\Omega^{-}\), respectively (see Figure 4).
Our first objective is to prove that \(\lim\limits_{t\to 1}\omega(h(\gamma_{n}(t)),\bigcup\limits_{j=1}^{+\infty}[0,h(k_{j}) ],\Omega)=0\). Indeed, by the previous lemma, there exists some \(R>0\) such that \(|z|<R\), for all \(z\in\bigcup\limits_{j=1}^{+\infty}[0,h(k_{j})]\). Therefore, through a new conformal mapping \(g\) of \(\Omega\) onto \(\mathbb{H}\) which fixes \(\infty\) and sends \(h(k_{n})\) to \(0\), we may map a subset of \(\bigcup\limits_{j=1}^{+\infty}[0,h(k_{j})]\) onto the closed segment \([a,b]\subset\mathbb{R}\). Then, by conformal invariance,
\[\lim\limits_{t\to 1}\omega(h(\gamma_{n}(t)),\bigcup\limits_{j=1}^{+\infty}[0,h(k_ {j})],\Omega)\leq\lim\limits_{t\to 1}\omega(g(h(\gamma_{n}(t))),[a,b], \mathbb{H})=0,\]
in view of Subsection 2.1. Combining this with the domain monotonicity property, we have
\[\lim\limits_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{+},\Omega)=\lim \limits_{t\to 1}\omega(h(\gamma_{n}(t)),(-\infty,0],\Omega)\leq\lim \limits_{t\to 1}\omega(h(\gamma_{n}(t)),(-\infty,0],\mathbb{H}).\]
In a similar fashion,
\[\lim\limits_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{-},\Omega)\leq \lim\limits_{t\to 1}\omega(h(\gamma_{n}(t)),[0,+\infty),\mathbb{H}).\]
However, in the last two relations, both the left-hand and the right-hand sides add up to \(1\). This implies that equality holds in both. Consequently,
\[\lim\limits_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{-},\Omega) = \lim\limits_{t\to 1}\omega(h(\gamma_{n}(t)),[0,+\infty), \mathbb{H})\] \[= \lim\limits_{r\to+\infty}\omega(re^{i\arg h(k_{n})},\{w\in \mathbb{C}:\arg w=0\},U_{0,\pi})\] \[= \lim\limits_{r\to+\infty}\frac{\pi-\arg h(k_{n})}{\pi}.\]
Using a Riemann mapping of \(\Omega\) onto the unit disk, the conformal invariance of the harmonic measure and the preservation of the orientation, Remark 7 reveals that the image of \(h\circ\gamma_{n}\) inside the unit disk converges to some point of the unit circle by angle \(\pi(1-\frac{\pi-\arg h(k_{n})}{\pi})=\arg h(k_{n})\). Finally, through a Mobius transformation which preserves angles in the whole complex plane, we return to our initial setting, to see that \(\gamma_{n}\) intersects the real line at \(\rho_{1}\) by angle \(\pi-\arg h(k_{n})\in(0,\pi)\), thus
non-tangentially. Remember that \(h(k_{n})\in\mathbb{H}\) or equivalently \(0<\arg h(k_{n})<\pi\). The difference between the angles in the unit disk and the upper half-plane derives from the fact that in the unit disk it is customary to measure the angle with the help of the tangent, while in the upper half-plane the angle is usually counted starting from the "positive" semi-axis.

Figure 4. The domain \(\Omega\) and its prime ends.
### Tangential intersections
Assume, now, that \(P_{\mathbb{H}}\) has a double root \(\rho_{0}\in\mathbb{R}\). Fix some \(z_{0}\in\mathbb{H}\) and consider the function \(h(z):=\int_{z_{0}}^{z}\frac{d\zeta}{P_{\mathbb{H}}(\zeta)}\), which is well defined and analytic, since \(\mathbb{H}\) is simply connected. Therefore, by (3.3) and due to the uniform convergence on compacta, we have that
\[h(z)=\sum_{n=1}^{+\infty}A_{n}\log\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}+ \left(1-\sum_{n=1}^{+\infty}A_{n}\right)\log(z-\rho_{0})+\frac{B}{z-\rho_{0}}+ \text{const}. \tag{4.7}\]
for all \(z\in\mathbb{H}\), where the parameters \(A_{n}<0\) and \(B>0\) are given by (3.3), and the implicit solution is \(h(v(z,t))=-\frac{1}{2}\log(1-t)+h(z)\). The following proposition shows that \(h\) is a well defined, analytic and univalent function of the upper half-plane.
**Proposition 22**.: _Let \((A_{n})_{n\geqslant 1}\) be a summable sequence of negative numbers, \((\lambda_{n})_{n\geqslant 1}\) be a sequence of real points and let also \(\rho_{0}\in\mathbb{R}\), \(B\in\mathbb{R}\) and \(C>0\). Fix some \(z_{0}\in\mathbb{H}\). Then, the function_
\[\phi(z)=\sum_{n=1}^{+\infty}A_{n}\log\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}+ (1-\sum_{n=1}^{+\infty}A_{n})\log(z-\rho_{0})+\frac{B}{z-\rho_{0}}+\frac{C}{( z-\rho_{0})^{2}}\]
_is analytic and univalent in \(\mathbb{H}\)._
Proof.: Analyticity follows directly from Lemma 15. To prove that \(\phi\) is univalent, we precompose with the Mobius transform \(T(z)=\frac{1}{\rho_{0}-z}\). Note that \(T:\mathbb{H}\to\mathbb{H}\). By differentiating, we get that
\[(\phi\circ T^{-1})^{\prime}(z)=\sum_{n=1}^{+\infty}\frac{A_{n}}{z-T(\lambda_{n })}-\frac{1}{z}-B+2Cz\]
which has positive imaginary part for all \(z\in\mathbb{H}\). Hence, \(\phi\circ T^{-1}\) and by extension \(\phi\) are univalent in \(\mathbb{H}\), since the upper half-plane is a convex domain.
**Lemma 23**.: _Assume that \(h\) is given by (4.7). Then, it maps the upper half-plane onto a horizontal half-plane, minus infinitely many horizontal half-lines stretching to infinity in the positive direction. In addition, \(\mathrm{Re}h(x)\to+\infty\), as \(x\to\pm\infty\)._
Proof.: We argue as in Lemma 20. Again, the only accumulation points of the sequences \((\lambda_{n})_{n\geq 1}\) and \((k_{n})_{n\geq 1}\) are \(\pm\infty\). We now consider the function \(g(x)=h(T^{-1}(x))=h(\frac{\rho_{0}x-1}{x})\), \(x\in\mathbb{R}\backslash\{0\}\). It is easy to see that \(\lim\limits_{x\to 0}g(x)=\lim\limits_{x\to\pm\infty}h(x)\). After a series of calculations, we see that
\[g(x)=\sum\limits_{n=1}^{+\infty}A_{n}\log\left(\frac{\rho_{0}- \lambda_{n}}{z_{0}-\lambda_{n}}\frac{x-T(\lambda_{n})}{x}\right)+\left(1- \sum\limits_{n=1}^{+\infty}A_{n}\right)\log\left(-\frac{1}{x}\right)-Bx+ \mathrm{const.},\]
where \(B>0\) and \(A_{n}<0\), for every \(n\in\mathbb{N}\). Focusing on the real parts, we are led to
\[\mathrm{Re}g(x) =\sum\limits_{n=1}^{+\infty}A_{n}\log\left|\frac{\rho_{0}- \lambda_{n}}{z_{0}-\lambda_{n}}\frac{x-T(\lambda_{n})}{x}\right|-\left(1-\sum \limits_{n=1}^{+\infty}A_{n}\right)\log|x|-Bx+\mathrm{const.}\] \[=\sum\limits_{n=1}^{+\infty}A_{n}\log\left|\frac{\rho_{0}- \lambda_{n}}{z_{0}-\lambda_{n}}(x-T(\lambda_{n}))\right|-\log|x|-Bx+\mathrm{ const.}\]
We focus on the infinite sum. Since the only possible limit points of \((\lambda_{n})_{n\geq 1}\) are \(\pm\infty\) and since \((T(\lambda_{n}))_{n\geq 1}\) is bounded and converges to \(0\) as \(n\to+\infty\), we have
\[\sum\limits_{n=1}^{+\infty}A_{n}\log\left|\frac{\rho_{0}-\lambda_{n}}{z_{0}- \lambda_{n}}(x-T(\lambda_{n}))\right|\approx\sum\limits_{n=1}^{N}A_{n}\log \left|\frac{\rho_{0}-\lambda_{n}}{z_{0}-\lambda_{n}}(x-T(\lambda_{n}))\right| +\sum\limits_{n=N+1}^{+\infty}A_{n}\log|x|,\]
for some large enough \(N\in\mathbb{N}\), where \(\approx\) denotes comparability for large \(n\). As a result,
\[\mathrm{Re}g(x)\approx\sum\limits_{n=1}^{N}A_{n}\log\left|\frac{\rho_{0}- \lambda_{n}}{z_{0}-\lambda_{n}}(x-T(\lambda_{n}))\right|-\left(1-\sum\limits_ {n=N+1}^{+\infty}A_{n}\right)\log|x|-Bx+\mathrm{const.}\]
Keeping in mind that the sequence \((A_{n})_{n\geq 1}\) is summable with negative terms, we quickly see that \(\lim\limits_{x\to 0}\mathrm{Re}g(x)=+\infty\). Also, taking into account that \(\mathrm{Re}h(x)\to+\infty\) as \(x\to\lambda_{n}\), for every \(n\), we deduce that \(\mathrm{Re}h(x)\to+\infty\), as \(x\to\pm\infty\).
Finally, to see the image of \(h\), we find \(h(\mathbb{R})\). Without loss of generality, we assume that \((k_{n})_{n\geqslant 1}\) accumulates at both \(\pm\infty\) (and hence so does the sequence \((\lambda_{n})_{n\geqslant 1}\)) and let also \(k_{n_{0}}<\lambda_{n_{0}}<\rho_{0}<k_{n_{0}^{\prime}}\). Now, it is direct to see that
\[\mathrm{Im}h(x)=\sum_{n=1}^{+\infty}A_{n}\mathrm{Arg}(x-\lambda_{n})+(1-\sum_{ n=1}^{+\infty}A_{n})\mathrm{Arg}(x-\rho_{0})+\mathrm{const}. \tag{4.8}\]
Again, for simplicity we take the constant to be zero. For any \(m\geqslant 1\) such that \(\lambda_{m}<\lambda_{n_{0}}\), we have \(\mathrm{Im}h(x)=(1-\sum_{\lambda_{n}\leqslant\lambda_{m}}A_{n})\pi\), for all \(x\in(\lambda_{m},\lambda_{m^{\prime}})\). Similarly for \(\lambda_{m}>\rho_{0}\), we get that for all \(x\in(\lambda_{m},\lambda_{m^{\prime}})\), \(\mathrm{Im}h(x)=\sum_{\lambda_{n}>\lambda_{m}}A_{n}\pi\). To conclude, we have that \(\mathrm{Im}h(x)=(1-\sum_{\lambda_{n}\leqslant\rho_{0}}A_{n})\pi\), in \((\lambda_{n_{0}},\rho_{0})\) and because \(\mathrm{Re}h(x)\to-\infty\), as \(x\to\rho_{0}-\), we deduce that \(h\) maps the interval \((\lambda_{n_{0}},\rho_{0})\) onto the line \(\mathbb{R}+i\left(1-\sum_{\lambda_{n}\leqslant\rho_{0}}A_{n}\right)\pi\), as in Figure 5.
By the proof of the lemma above, we see that as \(\lambda_{m}\to-\infty\), then \(\mathrm{Im}h(k_{m})=(1-\sum_{\lambda_{n}\leqslant\lambda_{m}}A_{n})\pi\to\pi\) and as \(\lambda_{m}\to+\infty\), then \(\mathrm{Im}h(k_{m})=\sum_{\lambda_{n}>\lambda_{m}}A_{n}\pi\to 0\). Also, \(\mathrm{Re}h(k_{m})\to+\infty\) and hence, we see that there is a horizontal strip of amplitude \(\pi\) that contains no slits, whereas the tip points of the half-lines accumulate at the boundary of this strip from above and below, while also "disappearing" to the right as we see in Figure 5.
Following similar steps as before, the Loewner flow in this case is given by the conjugation formula
\[f(z,t)=h^{-1}\left(\frac{1}{2}\log(1-t)+h\left((1-t)^{-\frac{1}{2}}z\right) \right).\]
Once again we enquire about the convergence of the corresponding trajectories.
**Proposition 24**.: _For each \(n\in\mathbb{N}\), the trace \(\hat{\gamma}_{n}:=\{f(k_{n}\sqrt{1-t},t):t\in[0,1)\}\) is a smooth curve intersecting the real line tangentially at the root \(\rho_{0}\). In particular, either all curves converge to \(\rho_{0}\) by angle \(0\) or all of them converge by angle \(\pi\)._
Proof.: Again, we will utilize the harmonic measure. Towards this goal, fix \(n\in\mathbb{N}\) and consider the curve \(\gamma_{n}:[0,1)\to\mathbb{H}\) with \(\gamma_{n}(t)=f(k_{n}\sqrt{1-t},t)\). As before, \(\lim_{t\to 1^{-}}\gamma_{n}(t)=\rho_{0}\). This time, \(h\) maps the upper half-plane \(\mathbb{H}\) onto a horizontal half-plane minus a sequence of horizontal half-lines stretching to \(\infty\) in the positive direction. To be more formal, set \(L_{j}=\{z\in\mathbb{C}:\mathrm{Re}z\geqslant\mathrm{Re}h(k_{j}),\mathrm{Im}z=\mathrm{Im}h(k_{j})\}\). In our first case, we assume that there exists some \(a>\mathrm{Im}h(k_{j})\), for all \(j\in\mathbb{N}\) such that \(\Omega:=h(\mathbb{H})=\{z\in\mathbb{C}:\mathrm{Im}z<a\}\backslash\bigcup_{j=1}^{+\infty}L_{j}\). By the previous lemma, we may see that there exists \(R\in\mathbb{R}\) such that \(\mathrm{Re}h(k_{j})>R\), for all \(j\in\mathbb{N}\). Arguing as before, the curve \(h\circ\gamma_{n}\) separates the prime ends of \(\Omega\) into two connected components \(\partial\Omega^{+}\) and \(\partial\Omega^{-}\), where \(\partial\Omega^{+}\) consists of the prime ends corresponding to the horizontal line \(L:=\{z\in\mathbb{C}:\mathrm{Im}z=a\}\), the prime ends corresponding to \(\bigcup_{j\in\mathbb{N}}\{L_{j}:\mathrm{Im}h(k_{j})>\mathrm{Im}h(k_{n})\}\) and the prime ends corresponding to the half-line \(L_{n}\) defined by crosscuts with imaginary parts larger than \(\mathrm{Im}h(k_{n})\). Naturally, \(\partial\Omega^{-}\) consists of all the remaining prime ends. Our objective is to prove that \(\lim_{t\to 1^{-}}\omega(h(\gamma_{n}(t)),\partial\Omega^{+},\Omega)=1\). An important note towards this direction is the fact that \(h\circ\gamma_{n}([0,1))=\{z\in\mathbb{C}:\mathrm{Re}z<\mathrm{Re}h(k_{n}),\mathrm{Im}z=\mathrm{Im}h(k_{n})\}\), with \(\lim_{t\to 1^{-}}\mathrm{Re}h(\gamma_{n}(t))=-\infty\).
The existence of the real number \(R\) that bounds the real parts of all half-lines, allows us to proceed to the following construction: we may find some point \(\zeta\in L\) and a ray \(A\) emanating from \(\zeta\) such that \(A\backslash\{\zeta\}\subset\Omega\) (an obvious example is a point \(\zeta\) that rests sufficiently to the left and the half-line \(A\) that is perpendicular to \(L\) at \(\zeta\)). Set \(L^{+}=\{z\in L:\mathrm{Re}z\leqslant\mathrm{Re}\zeta\}\) and \(L^{-}=L\backslash L^{+}\). Then, the angular simply connected domain \(U\) bounded by \(L^{+}\) and \(A\) is contained inside \(\Omega\) (for the whole construction see Figure 6). Moreover, it is easy to see that \(U\) eventually contains
the curve \(h\circ\gamma_{n}\) or equivalently there exists some \(t_{0}\in[0,1)\) such that \(h(\gamma_{n}(t))\in U\), for all \(t\in[t_{0},1)\). Since \(\partial\Omega^{+}\supset L^{+}\) and \(\Omega\supset U\), we are led to

\[\omega(h(\gamma_{n}(t)),\partial\Omega^{+},\Omega)\geq\omega(h(\gamma_{n}(t)),L^{+},\Omega)\geq\omega(h(\gamma_{n}(t)),L^{+},U),\]
for all \(t\in[t_{0},1)\). By the construction of the ray \(A\), there exists some \(\beta\in(\pi,2\pi)\) such that
\[\lim_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{+},\Omega) \geq \lim_{t\to 1}\omega(h(\gamma_{n}(t)),L^{+},U)\] \[\geq \lim_{t\to 1}\omega(h(\gamma_{n}(t))-\zeta,\{w\in\mathbb{C}:\arg w=\pi\},U_{\pi,\beta})\] \[= \lim_{t\to 1}\frac{\beta-\arg(h(\gamma_{n}(t))-\zeta)}{\beta-\pi}\] \[= 1,\]

since \(\lim_{t\to 1}\arg(h(\gamma_{n}(t))-\zeta)=\lim_{t\to 1}\arg h(\gamma_{n}(t))=\pi\). However, the harmonic measure has \(1\) as an upper bound and as a result

\[\lim_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{+},\Omega)=1=\frac{\pi}{\pi}.\]
Using again a Riemann map of \(\Omega\) onto the unit disk, Remark 7 and a Mobius transformation to return to \(\mathbb{H}\), we see that each \(\gamma_{n}\) converges to \(\rho_{0}\) by angle \(0\) and thus, tangentially. Finally, if at the start, our domain was of the form \(\Omega:=\{z\in\mathbb{C}:\mathrm{Im}z>a\}\backslash\bigcup_{j=1}^{+\infty}L_{j}\), then each curve \(\gamma_{n}\) would converge to \(\rho_{0}\) by angle \(\pi\), again tangentially.

Figure 6. The sets of Proposition 24.
### Orthogonal intersections
In the final case, we assume that \(P_{\mathbb{H}}\) has a triple root \(\rho_{0}\in\mathbb{R}\), which means, according to the preliminary analysis, that \(\rho_{0}\) is a double root coinciding with some \(\lambda_{n_{0}}\), hence a triple root. Integrating in (4.3) and using (3.4), we deduce that \(h(v(z,t))=-\frac{1}{2}\log(1-t)+h(z)\), where
\[h(z)=\sum_{n\neq n_{0}}A_{n}\log\frac{z-\lambda_{n}}{z_{0}-\lambda_{n}}+\left(1-\sum_{n\neq n_{0}}A_{n}\right)\log(z-\rho_{0})+\frac{B}{z-\rho_{0}}+\frac{C}{(z-\rho_{0})^{2}}, \tag{4.9}\]
where the parameters \(A_{n}<0\), \(B\) and \(C>0\) are given by (3.4). By Lemma 15 and Proposition 22, \(h\) is analytic and univalent in the upper half-plane.
**Lemma 25**.: _Assume that \(h\) is given by (4.9). Then it maps the upper half-plane onto the complement of infinitely many horizontal half-lines and we have that:_
1. \(\mathrm{Re}h(x)\to+\infty\)_, as_ \(x\to\pm\infty\) _and_
2. _there exists some_ \(Q>0\)_, so that_ \(|\mathrm{Im}h(k_{n})|<Q\)_, for all_ \(n\in\mathbb{N}\)
Proof.: The proof of the lemma follows an identical procedure to the lemma of the preceding section, albeit with minor modifications. For the sake of brevity, we omit the details and only note some adjustments. We write, without loss of generality, \(k_{n_{0}}<\lambda_{n_{0}}=\rho_{0}<k_{n_{0}^{\prime}}\). Through the fact that \(\mathrm{Re}h(x)\) equals
\[\sum_{n\neq n_{0}}A_{n}\log\left|\frac{x-\lambda_{n}}{z_{0}-\lambda_{n}}\right| +\frac{(1-\sum_{n\neq n_{0}}A_{n})\log|x-\rho_{0}|\,(x-\rho_{0})^{2}+B(x-\rho_ {0})+C}{(x-\rho_{0})^{2}}\]
we see that \(\mathrm{Re}h(x)\to+\infty\), as \(x\to\rho_{0}\), while by (4.8) we see that \(h((k_{n_{0}},\rho_{0}))\) and \(h((\rho_{0},k_{n_{0}^{\prime}}))\) are the half-lines lying above and below all other half-lines, respectively, as we see in Figure 7.
Figure 7. The image of \(h\), when \(\rho_{0}\) is a triple root.

Again, we see that the strip \(\{0<\mathrm{Im}z<\pi\}\) contains no half-lines, while the tip points accumulate on the boundary of the strip and at the point at infinity towards the right. In addition, the Loewner flow is the same as in the previous case. Finally, the following proposition allows us to see that the trajectories of \(f(k_{n}\sqrt{1-t},t)\) collide with the real line orthogonally at the point \(\rho_{0}\), for all \(n\geq 1\).
**Proposition 26**.: _For each \(n\in\mathbb{N}\), the trace \(\hat{\gamma}_{n}:=\{f(k_{n}\sqrt{1-t},t):t\in[0,1)\}\) is a smooth curve intersecting the real line orthogonally at the root \(\rho_{0}\) (i.e. with angle \(\frac{\pi}{2}\))._
Proof.: Fix \(n\in\mathbb{N}\) and set \(L_{j}=\{z\in\mathbb{C}:\mathrm{Re}z\geq\mathrm{Re}h(k_{j}),\mathrm{Im}z=\mathrm{Im}h(k_{j})\}\). This time, (4.9) dictates that \(\Omega:=h(\mathbb{H})=\mathbb{C}\setminus\bigcup_{j=1}^{+\infty}L_{j}\). In a similar fashion as in the previous cases, we denote by \(\partial\Omega^{+}\) the prime ends of \(\Omega\) corresponding to \(\bigcup_{j\in\mathbb{N}}\{L_{j}:\mathrm{Im}h(k_{j})>\mathrm{Im}h(k_{n})\}\) along with the prime ends corresponding to the half-line \(L_{n}\) and defined by crosscuts with imaginary parts larger than \(\mathrm{Im}h(k_{n})\). Once more, we denote by \(\partial\Omega^{-}\) the set of the remaining prime ends. Recall that \(h\circ\gamma_{n}([0,1))=\{z\in\mathbb{C}:\mathrm{Re}z<\mathrm{Re}h(k_{n}),\mathrm{Im}z=\mathrm{Im}h(k_{n})\}\) and \(\lim_{t\to 1^{-}}\mathrm{Re}h(\gamma_{n}(t))=-\infty\). Our aim is to prove that
\[\lim_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{+},\Omega)=\lim_{t\to 1} \omega(h(\gamma_{n}(t)),\partial\Omega^{-},\Omega)=\frac{1}{2}.\]
By the previous lemma, we see that there exist \(n_{1},n_{2}\in\mathbb{N}\) such that \(\mathrm{Im}h(k_{n_{1}})\leq\mathrm{Im}h(k_{j})\leq\mathrm{Im}h(k_{n_{2}})\), for all \(j\in\mathbb{N}\). So, denote by \(L^{+}\subset\partial\Omega^{+}\) the prime ends corresponding to the half-line \(L_{n_{2}}\) and defined by crosscuts with imaginary parts larger than \(\mathrm{Im}h(k_{n_{2}})\). On the other side, denote by \(L^{-}\subset\partial\Omega^{-}\) the prime ends corresponding to \(L_{n_{1}}\) and defined by crosscuts with imaginary parts smaller than \(\operatorname{Im}h(k_{n_{1}})\). As in the tangential case, there exists some \(R\in\mathbb{R}\) such that \(\operatorname{Re}h(k_{j})\geq R\), for all \(j\in\mathbb{N}\). Therefore, through the conformal invariance of the harmonic measure and a mapping of \(\Omega\) onto \(\mathbb{H}\) that maps \(h(k_{n})\) to \(0\) and fixes infinity, we may see that
\[\lim_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{+}\backslash L^{+}, \Omega)=\lim_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{-}\backslash L^{-}, \Omega)=0.\]
We will use two auxiliary "Koebe-like" domains to reach the desired conclusions. We set \(\Omega_{1}=\mathbb{C}\backslash L_{n_{1}}\) and \(\Omega_{2}=\mathbb{C}\backslash L_{n_{2}}\). Trivially, both of them are supersets of \(\Omega\). In particular, in the sense of prime ends, \(L^{-}\subset\partial\Omega\cap\partial\Omega_{1}\) and \(L^{+}=\partial\Omega\cap\partial\Omega_{2}\) (see Figure 8). By the domain monotonicity property of harmonic measure,
\[\lim_{t\to 1}\omega(h(\gamma_{n}(t)),L^{-},\Omega_{1})\geq\lim_{t\to 1} \omega(h(\gamma_{n}(t)),L^{-},\Omega)=\lim_{t\to 1}\omega(h(\gamma_{n}(t)), \partial\Omega^{-},\Omega).\]
Through a chain of conformal mappings, we are going to estimate the harmonic measure with regard to the domain \(\Omega_{1}\). As a matter of fact,
\[\lim_{t\to 1}\omega(h(\gamma_{n}(t)),L^{-},\Omega_{1}) = \lim_{t\to 1}\omega(h(\gamma_{n}(t))-h(k_{n_{1}}),L^{-}-h(k_{n_{1}}),\Omega_{1}-h(k_{n_{1}}))\] \[= \lim_{t\to 1}\omega(-h(\gamma_{n}(t))+h(k_{n_{1}}),-L^{-}+h(k_{n_{ 1}}),\mathbb{C}\backslash(-\infty,0])\] \[= \lim_{t\to 1}\omega(\sqrt{-h(\gamma_{n}(t))+h(k_{n_{1}})},\{z \in\mathbb{C}:\arg z=-\frac{\pi}{2}\},-i\mathbb{H})\] \[= \lim_{t\to 1}\frac{\arg\sqrt{-h(\gamma_{n}(t))+h(k_{n_{1}})}+ \frac{\pi}{2}}{\pi}\] \[= \frac{1}{2}.\]
Combining with the inequality above, we get \(\lim_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{-},\Omega)\leq\frac{1}{2}\). Following an almost identical procedure, but this time with the Koebe domain \(\Omega_{2}\), we may prove that \(\lim_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{+},\Omega)\leq\frac{1}{2}\) as well. However, since \(\partial\Omega=\partial\Omega^{+}\cup\partial\Omega^{-}\), it is obvious that
\[\omega(h(\gamma_{n}(t)),\partial\Omega^{-},\Omega)+\omega(h(\gamma_{n}(t)), \partial\Omega^{+},\Omega)=1,\]
for all \(t\in[0,1)\). So, it follows that
\[\lim_{t\to 1}\omega(h(\gamma_{n}(t)),\partial\Omega^{-},\Omega)=\lim_{t\to 1} \omega(h(\gamma_{n}(t)),\partial\Omega^{+},\Omega)=\frac{1}{2}.\]
Lastly, Remark 7 once again implies the desired result.
Figure 8. The sets of Proposition 26
To conclude, the combination of all the lemmas and propositions of this section provides the proof of Theorem 3.
## Acknowledgements
We would like to thank Alan Sola for his advice and careful consideration during the preparation of the current work. We also thank Dimitrios Betsakos for the helpful conversations.
|
2308.16458 | BioCoder: A Benchmark for Bioinformatics Code Generation with Large
Language Models | Pre-trained large language models (LLMs) have significantly improved code
generation. As these models scale up, there is an increasing need for the
output to handle more intricate tasks and to be appropriately specialized to
particular domains. Here, we target bioinformatics due to the amount of domain
knowledge, algorithms, and data operations this discipline requires. We present
BioCoder, a benchmark developed to evaluate LLMs in generating
bioinformatics-specific code. BioCoder spans much of the field, covering
cross-file dependencies, class declarations, and global variables. It
incorporates 1,026 Python functions and 1,243 Java methods extracted from
GitHub, along with 253 examples from the Rosalind Project, all pertaining to
bioinformatics. Using topic modeling, we show that the overall coverage of the
included code is representative of the full spectrum of bioinformatics
calculations. BioCoder incorporates a fuzz-testing framework for evaluation. We
have applied it to evaluate various models including InCoder, CodeGen,
CodeGen2, SantaCoder, StarCoder, StarCoder+, InstructCodeT5+, GPT-3.5, and GPT-
4. Furthermore, we fine-tuned one model (StarCoder), demonstrating that our
training dataset can enhance the performance on our testing benchmark (by >15%
in terms of Pass@K under certain prompt configurations and always >3%). The
results highlight two key aspects of successful models: (1) Successful models
accommodate a long prompt (> 2,600 tokens) with full context, including
functional dependencies. (2) They contain domain-specific knowledge of
bioinformatics, beyond just general coding capability. This is evident from the
performance gain of GPT-3.5/4 compared to the smaller models on our benchmark
(50% vs. up to 25%). Availability and implementation: Code is available at:
https://github.com/gersteinlab/biocoder and https://biocoder-benchmark.
github.io/. | Xiangru Tang, Bill Qian, Rick Gao, Jiakang Chen, Xinyun Chen, Mark Gerstein | 2023-08-31T04:52:58Z | http://arxiv.org/abs/2308.16458v5 | # BioCoder: A Benchmark for Bioinformatics Code Generation with Contextual Pragmatic Knowledge
###### Abstract
Pre-trained large language models have significantly improved code generation. As these models scale up, there is an increasing need for the output to handle more intricate tasks and to be appropriately specialized to particular domains. Bioinformatics provides an important domain. In this field generating functional programs poses additional notable challenges due to the amount of specialized domain knowledge, the need for complicated data operations, and intricate functional dependencies between the operations. Here, we present BioCoder, a benchmark developed to evaluate existing pre-trained models in generating bioinformatics code. In relation to function-code generation, BioCoder covers potential package dependencies, class declarations, and global variables. It incorporates 1026 functions and 1243 methods in Python and Java from GitHub and 253 examples from the Rosalind Project. BioCoder incorporates a fuzz-testing framework for evaluation, and we have applied it to evaluate many models including InCoder, CodeGen, CodeGen2, SantaCoder, StarCoder, StarCoder+, InstructCodeT5+, GPT-3.5, and GPT-4. The results highlight two key aspects of successful models: 1) that they contain specific domain knowledge of bioinformatics (beyond just coding knowledge); 2) that they accommodate a long prompt with full context (i.e. functional dependencies). Our dataset, benchmark, Docker images, and scripts required for testing are all available at [https://github.com/gersteinlab/biocoder](https://github.com/gersteinlab/biocoder).
## 1 Introduction
Large language models (LLMs) have shown great success in code generation (Chen et al., 2021; Chowdhery et al., 2022; Chen et al., 2023; Barke et al., 2023; Li et al., 2023). The landscape of existing coding benchmarks for large language models is largely populated with simple functions, often limited to a handful of lines (Chen et al., 2021; Austin et al., 2021b; Du et al., 2023; Wong et al., 2023). Combined with a significant lack of closed-domain datasets across diverse fields, this landscape highlights the need for a more robust benchmarking system. Although domain-specific datasets, such as DS1000 (Lai et al., 2022) for data science, have emerged, they fall short of adequately addressing specific tasks in fields like bioinformatics. Open-domain alternatives, including HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021b), and APPS (Hendrycks et al., 2021), offer entry-level programming tasks, but their utility is limited as they lack the ability to test more niche, domain-specific code blocks. This shortfall is largely due to a lack of appropriate fine-tuning and context (Muennighoff et al., 2023b); therefore, a more comprehensive, encompassing approach to benchmarking is clearly needed.
To bridge these gaps, we introduce BioCoder (see Figure 1), a benchmark for code generation incorporating 2269 bioinformatics-specific coding problems. Our BioCoder benchmark mainly targets bioinformatics data analysis, which includes tasks such as managing various biological data
formats, understanding processing workflows, and utilizing APIs of various packages. This area encapsulates the majority of daily tasks a bioinformatician encounters in data analysis. Note, however, that BioCoder also touches upon parts of writing bioinformatics software: when tool development intersects with data analysis (see Appendix O for more details with the topic modeling and statistics regarding the overall topic coverage of the dataset). Further expanding the scope of BioCoder, we included an additional 253 questions from the Rosalind project. This project specializes in generating Python functions addressing key bioinformatics topics such as genetic sequencing and DNA/RNA analysis. BioCoder assures the inclusion of all potential external packages and code that could be utilized by the generated program. This consideration extends to the recognition that real-world functions often necessitate managing multiple external function calls and global variable usage; hence, we included all potentially required class declarations in the input. Lastly, we performed ablation studies to determine whether the models are strictly memorizing the solutions rather than being proficient at generating code (see Appendix N).
The key highlights of our work can be outlined as follows:
* We create a new high-quality dataset for code generation, curated from 1,720 bioinformatics repositories referenced in peer-reviewed bioinformatics articles, aiding in the practical and realistic evaluation of code generation tasks.
* We meticulously processed the data, rephrasing more detailed text descriptions, as well as associated comments and specifications, including considerations needed in coding.
* We provide an extendable parsing tool that can extract all pertinent information associated with the target function in expansive projects. This includes import dependencies, potential usage of global variables, possible invocation of other functions along with their signatures, as well as class definitions replete with their attributes and methods. By integrating this contextual and pragmatic knowledge into the input, we can more thoroughly test the abilities of LLMs to use tools and comprehend the entirety of a project.
\begin{table}
\begin{tabular}{l c c c c c c l} \hline \hline \multirow{2}{*}{**Benchmark**} & \multirow{2}{*}{**Num**} & \multirow{2}{*}{**Language**} & \multicolumn{6}{c}{**Data Statistics**} & \multirow{2}{*}{**Scenario**} \\ \cline{3-3} \cline{5-8} & & & \multicolumn{1}{c}{_Test_} & & \multicolumn{1}{c}{_PC._} & \multicolumn{1}{c}{_PL._} & \multicolumn{1}{c}{_C.C._} & \multicolumn{1}{c}{_CL._} \\ \hline HumanEval (2021) & \(164\) & Python & \(7.8\) & \(450.6\) & \(13.7\) & \(180.9\) & \(6.8\) & Code Exercise \\ MBP (2021a) & \(974\) & Python & \(3.1\) & \(78.6\) & \(1.0\) & \(181.1\) & \(6.7\) & Code Exercise \\ APPS (2021) & \(5,000\) & Python & \(21.0\) & \(1743.4\) & \(41.6\) & \(473.8\) & \(21.4\) & Competitions \\ DS-1000 (2022) & \(1,000\) & Python & \(1.6\) & \(879.1\) & \(31.6\) & \(137.4\) & \(5.0\) & Data Science \\ HumanEval-X (2023b) & \(164^{*}\) & Multi & \(7.8\) & \(468.4\) & \(15.5\) & \(264.6\) & \(12.1\) & Multilingual \\ NumpyEval (2022b) & \(101\) & Python & \(3.5\) & \(222.9\) & \(7.0\) & \(29.9\) & \(1.1\) & Public Library \\ TorchDataEval (2022a) & \(50\) & Python & \(1.1\) & \(329.0\) & \(8.6\) & \(50.7\) & \(1.3\) & Private Library \\ \hline BioCoder (public set) & 460 & Multi. & 1000 & 10465.6 & 243.5 & 706.8 & 26.2 & Bioinformatics \\ BioCoder (hidden set) & 2,269 & Multi. & 1000 & 12296.7 & 298.8 & 919.5 & 26.2 & Bioinformatics \\ BioCoder (similar set) & 460 & Multi. & 1000 & 9885.6 & 240.8 & 767.5 & 26.8 & Bioinformatics \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the statistics of BioCoder to previous benchmarks. Num. is benchmark size. P.C. and P.L. (also C.C. and C.L.) define the average count of characters and lines in the prompt (along with the code solution). This table is derived from Zan et al. (2023), please refer to Zan et al. (2023) for a more comprehensive survey.
Figure 1: Overview of the BioCoder benchmark for code generation. BioCoder is designed for challenging, practical bioinformatics scenarios with an extensible evaluation framework.
* We offer a library for code LLMs, similar to Bui et al. (2023), furnishing a seamless interface for both training and inferencing in code generation tasks.
* We provide a fuzzer testing tool capable of scaling to handle substantial datasets. Our benchmark results, derived from 1000 iterations, are particularly reliable, indicating the Pass@K rate.
## 2 Related Work
BioCoder is a code generation benchmark designed for challenging, practical bioinformatics scenarios, offering an extensible testing framework for evaluating the performance of LLMs. We provide a brief overview of the related work in both code generation models and benchmarks.
### Code Generation with LLMs
LLMs have truly demonstrated astounding performances across various domains (Askell et al., 2021; Bai et al., 2022; Biderman et al., 2023; Bommasani et al., 2022; Gao et al., 2022; Patil et al., 2023; Xu et al., 2023; Qin et al., 2023; Zhang et al., 2023a). And LLMs trained with code data have shown promising results in generating code, exhibiting impressive zero-shot performance on several benchmarks (Zhang et al., 2023b; Olausson et al., 2023; Li et al., 2023; Fried et al., 2023; Wang et al., 2021; Allal et al., 2023). A proven strategy to improve model performance involves increasing both the model parameters and the volume of training data (Radford et al., 2019; Brown et al., 2020; Mitchell et al., 2023), while a lot of large-scale LLMs have been developed (Chowdhery et al., 2022; Thoppilan et al., 2022; Hoffmann et al., 2022). These models have proven their code generation prowess (Brown et al., 2020; Chen et al., 2021; OpenAI, 2023), and the field has also seen the release of several open-source code LLMs, such as bilingual GLM-130B (Zeng et al., 2022), CodeGeeX-13B (Zheng et al., 2023a), OctoPack (Muenighoff et al., 2023a), WizardCoder (Luo et al., 2023), SantaCoder (Allal et al., 2023), and StarCoder (Li et al., 2023). Salesforce's CodeGen (Nijkamp et al., 2023b;a), Huawei's PanguCoder (Christopoulou et al., 2022; Shen et al., 2023), Meta's LLaMA (Touvron et al., 2023), and CMU's InCoder model (Fried et al., 2022) also contribute to the field. To adopt code LLMs in real scenarios, researchers have further explored methods to integrate dependencies of relevant code in the prompt (Shrivastava et al., 2023; Zhang et al., 2023a).
### Code Generation Datasets and Benchmarks
Early work on code generation benchmarks used lexical exact match, data flow, and abstract syntax tree (AST) methods. However, these measures proved to be unreliable due to their sensitivity to inconsequential differences in the generated code. In response, execution-based evaluation approaches have become more prevalent (Chen et al., 2021; Athiwardtun et al., 2023; Li et al., 2022; Wang et al., 2022b; Lai et al., 2022; Khlaaf et al., 2022). These approaches execute tests on the generated code to verify its functional correctness, ensuring unbiased evaluations irrespective of implementation method or style variations.
Figure 2: **A diagram** of the BioCoder construction process involving custom GitHub repository cleaning, parsing, and function selection, as well as context and test case creation and a massively dockerized testing framework.
As a result, the field of code generation has seen a burgeoning number of execution-based benchmarks (Table 1) (Yuan et al., 2023; Lee et al., 2023; Pan et al., 2023; Wong et al., 2023; Zan et al., 2023), each presenting unique properties in terms of size, language coverage (Orlanski et al., 2023), complexity (Du et al., 2023; Zhuo, 2023), and practical applicability (Yu et al., 2023). For instance, HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021b) are frequently used code generation benchmarks that consist of 164 and 974 simple Python functions respectively, representing a small sample size. These benchmarks also overlook the multi-language coding scenarios gap, which is partially bridged by benchmarks like HumanEval-X (Zheng et al., 2023b) and MCoNaLa (Wang et al., 2023b). See Zan et al. (2023) for a more comprehensive survey on the previous benchmarks of code generation.
However, all datasets discussed above share the same shortcoming of only benchmarking generic functions, rather than domain-specific ones. DS-1000 (Lai et al., 2022) represents a more domain-specific dataset, featuring 1,000 data science workflows extracted from Python functions. Li et al. (2023) reported that the performance on HumanEval and MBPP benchmarks do not invariably align with those on DS-1000 benchmark. This discrepancy underscores the need for benchmarks that more accurately emulate real-world, domain-specific code generation.
In addition, the context supplied greatly influences the performance of existing LLMs (Wang et al., 2022a). While DS-1000 includes eight packages, it fails to fully reflect a typical coding environment. This gap is partially bridged through benchmarks such as CoderEval (Yu et al., 2023), which incorporate some dependencies and function calls; however, these benchmarks are rudimentary in nature, and once again consist primarily of domain-agnostic functions. As LLMs continue to develop, however, we are now beginning to see repository-level benchmarks that provide a high-amount of context, but these remain new and untried, such as RepoBench (Liu et al., 2023).
Our work shares common ground with CoderEval. Both our approach and CoderEval can evaluate models beyond the simple generation of standalone functions. Given the necessity to handle context-dependent code, both methodologies employ Docker-based testing. However, our approach contrasts with that of CoderEval by placing a specific emphasis on bioinformatics. We ensure each function demands a certain level of domain expertise in bioinformatics. Moreover, our dataset surpasses the scale of CoderEval, which only consists of 230 functions from 43 Python projects and 230 methods from 10 Java projects. In contrast, we source 2,522 functions from over two thousand repositories, offering a broader and more challenging context for code generation tasks. We further compare our benchmark to CoderEval in Appendix H.
## 3 The BioCoder Benchmark
In this section, we introduce four parts of BioCoder: the cleaning and inspection of the dataset, benchmark collection, metrics used, and testing construction.
### Dataset Filtering
Our dataset begins with an initial web scrape of 1,743 bioinformatics-adjacent GitHub repositories (see Figure 2). Specifically, we used the list of 1740 bioinformatics-adjacent repositories in Russell et al. (2018) as the initial base for BioCoder, which contains a curated list of 1720 bioinformatics repositories from the literature. The collection includes code in languages such as C, C++, PHP,
\begin{table}
\begin{tabular}{l r r r r r r r r r} \hline \hline & \multicolumn{3}{c}{Public} & \multicolumn{3}{c}{Hidden} & \multicolumn{3}{c}{Similar} \\ \cline{2-10} & Py & Java & Overall & Py & Java & Overall & Py & Java & Overall \\ \hline
**Avg. Comment Lines** & 4.96 & 2.66 & 4.40 & 8.77 & 4.90 & 6.65 & 5.75 & 3.14 & 5.12 \\
**Avg. Tokens of G.T.** & 189.25 & 106.54 & 169.28 & 353.67 & 107.88 & 219.02 & 216.62 & 100.92 & 188.68 \\
**Avg. Lines of G.T.** & 24.30 & 11.10 & 21.11 & 43.28 & 12.19 & 26.25 & 26.50 & 10.32 & 22.59 \\
**Avg. Parameters of G.T.** & 2.39 & 1.70 & 2.23 & 2.92 & 1.25 & 2.00 & 2.48 & 1.10 & 2.15 \\
**Avg. Classes/Function Decl.** & 20.25 & 2.52 & 15.97 & 19.45 & 32.96 & 26.85 & 20.20 & 1.16 & 15.60 \\
**Avg. Global Variables** & 1.90 & - & - & 2.26 & - & - & 1.87 & - & - \\
**Avg. Imports** & 11.91 & 1.52 & 9.40 & 10.37 & 5.00 & 7.43 & 11.63 & 1.16 & 9.10 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary statistics for the BioCoder dataset. **G.T.** stands for the ground truth function. "Public data" represents datasets with test cases. "Hidden data" encompasses a wider array of intricate issues. "Similar data" is a subset of the hidden data, mimicking the distribution of the public data (Appendix Z).
Python, R, Ruby, SQL, Perl, Java, Matlab, and C#, although for now, we only explore Python and Java, with plans to scale up to other languages in the future. Our decision to include Java and Python was based on an empirical investigation into the prevalence of different programming languages across bioinformatics repositories, for a more detailed discussion, please refer to Appendix Q.
Those repositories were then filtered based on popularity and community ratings, as well as a manual round of review, resulting in 28 high-quality, highly domain-specific repositories that are commonly used in the field of bioinformatics. After determining the set of 28 high-quality, highly domain-specific repositories, we then wrote separate custom Python and Java parsers to automatically parse all the selected GitHub repositories. These parsers generated an AST of each code file in each repository and then scraped all the relevant data, including function content, function signature, important imports, and cross-file dependencies for each function in each code file. After parsing all repositories, we were left with a large set of over 20,000 Python functions and over 50,000 Java functions. Given the large baseline of functions, we initiated two rounds of automatic filtering (see Appendix T), resulting in a final count of 1,026 Python functions and 1,243 Java functions (see Table 2). More details on the filtering process can be found in Appendix T.
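To make the parsing step concrete, the sketch below shows how function signatures and imports might be pulled from a single Python file with the standard `ast` module. It is a simplified stand-in for the actual BioCoder parsers (which also handle Java, class definitions, global variables, and cross-file dependencies); the helper name `extract_functions_and_imports` and the demo snippet are purely illustrative.

```python
import ast

def extract_functions_and_imports(source: str):
    """Collect top-level imports and per-function metadata from one Python file."""
    tree = ast.parse(source)
    imports, functions = [], []
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            # Keep the original import statement text for later prompt construction.
            imports.append(ast.unparse(node))
        elif isinstance(node, ast.FunctionDef):
            functions.append({
                "name": node.name,
                "args": [a.arg for a in node.args.args],
                "docstring": ast.get_docstring(node),
                "num_lines": node.end_lineno - node.lineno + 1,
            })
    return imports, functions

demo = (
    "import numpy as np\n\n"
    "def gc_content(seq):\n"
    "    \"\"\"Fraction of G/C bases in a sequence.\"\"\"\n"
    "    return (seq.count('G') + seq.count('C')) / len(seq)\n"
)
print(extract_functions_and_imports(demo))
```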
### Benchmark Construction
**BioCoder-Py and BioCoder-Java.** For each function that passes all rounds of filtering in Section 3.1, we manually wrote custom code context, inclusive of necessary imports, cross-file dependencies, and pertinent fuzzer test cases (explained in more detail in Section 3.4). We then crafted custom prompts based on the parsed function data and function summaries, ensuring the inclusion of any necessary imports and cross-file dependencies (see Appendix R). Imports and classes are predefined and included in the context because, as we are testing function-level code generation, we are not prompting the model nor expecting the model to generate the classes it needs to pass the tests. Instead, we are testing the model's ability to extract the pertinent imports and classes from the context to use in the generated function. More prompt statistics can be found in Table 7. Finally, we presented the model with a prompt to generate the function, offering the function signature as a starting point. Examples of the different prompt types can be found in Appendix C. Prompts were generated partly using GPT-3.5, which was used to generate function summaries for all the functions in the public dataset. These function summaries were used as part of the prompt in order to describe the functions efficiently. More details about this method can be found in Appendix F.
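As a rough sketch of how such a prompt could be assembled from the parsed pieces, the helper below concatenates imports, global variables, dependency declarations, a natural-language summary, and the target function signature. The section ordering follows Figure 3, but the instruction wording, field names, and example values are assumptions rather than the verbatim BioCoder template.

```python
def build_prompt(imports, global_vars, dependencies, summary, signature):
    """Assemble a function-generation prompt: imports, globals, dependencies, instruction, signature."""
    sections = []
    if imports:
        sections.append("\n".join(imports))
    if global_vars:
        sections.append("\n".join(global_vars))
    if dependencies:
        # Signatures of other functions and class declarations the target function may rely on.
        sections.append("\n\n".join(dependencies))
    sections.append(f"# Instruction: {summary}\n# Complete the function below.")
    sections.append(signature)
    return "\n\n".join(sections)

prompt = build_prompt(
    imports=["import numpy as np"],
    global_vars=["CODON_TABLE = {...}"],
    dependencies=["def read_fasta(path):\n    ..."],
    summary="Translate a DNA sequence into a protein string.",
    signature="def translate(dna_sequence):",
)
print(prompt)
```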
**BioCoder-Rosalind.** To compile the Rosalind portion of the benchmark, we began by scraping the problem descriptions from the Rosalind website, identifying problems with available solutions, and gathering all possible solutions. Subsequently, we developed a custom scraper to assemble ten test cases for each Rosalind problem. Using these test cases, we crafted a script to automatically assess whether the available solutions successfully ran against the collected test cases.
Figure 3: Sample prompts for code generation. Our prompts follow the same general outline. First, imports are declared at the top of the prompt, then global variables (if there are any), then function declarations, then class dependencies, and finally, our actual instruction regarding the function to be generated. We end the prompt with the function signature for the LLM to generate to streamline the testing process.
Solutions that were successfully executed against all test cases formed the 'golden code' section of the Rosalind benchmark. Each Rosalind benchmark context is custom-made, incorporating the scraped test cases and injecting them into the generated code. The prompts for the Rosalind problems are constructed using the scraped problem descriptions, supplemented with a brief section outlining the context into which the generated code would be integrated. This rigorous filtering process resulted in 253 functions meeting all our criteria. Selected examples for the Rosalind dataset are shown in Appendix D. Statistics of token counts, comment lines per function, and parameters per function can be found in Appendix B.
### Metric
We use the Pass@K metric to measure the functional accuracy (Chen et al., 2021, 2022; Cassano et al., 2023). The metric Pass@K evaluates the efficiency of a model's code generation ability. Specifically, for a given model, this metric quantifies the probability that the model can solve a particular problem. A problem is deemed "solved" if, among the k function samples produced by the model, at least one sample passes all the provided test cases. The mathematical estimation of Pass@K for a particular problem is articulated as follows: \(\text{Pass@K}:=\operatorname*{\mathbb{E}}_{\text{Problems}}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right]\), where \(n\) is the number of samples generated by the model, and \(c\) is the number of samples that pass all test cases (Chen et al., 2021).
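In practice, this estimator is typically computed per problem with a small numerically stable routine (following the formulation of Chen et al., 2021) and then averaged over problems; the sketch below is one such implementation.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimate for one problem: n samples drawn, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 20 samples generated, 3 of them pass all test cases.
print(round(pass_at_k(n=20, c=3, k=5), 4))
```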
### Testing Framework
Our testing framework starts with a manual review of selected functions, leading to the creation of a context file and a golden code file for each problem (see Figure 3), as discussed in 3.2.
For Python and Java functions, in the context file, we employ a custom syntax to indicate the insertion points for custom randomly generated test cases. Through this syntax, we cater to four types of random generation: integers, floats, strings, and Boolean values. During the runtime, each of these insertion points is replaced with language-specific code to insert a dynamically generated test case. The tester can be run for any number of iterations, depending on the number of fuzzer tests desired.
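The exact placeholder syntax of the context files is not reproduced here, so the sketch below uses hypothetical markers such as `<<INT>>` only to illustrate how each insertion point could be replaced with a freshly generated random literal on every fuzzing iteration.

```python
import random
import re
import string

GENERATORS = {
    "INT":   lambda: str(random.randint(-1000, 1000)),
    "FLOAT": lambda: repr(random.uniform(-1000.0, 1000.0)),
    "STR":   lambda: repr("".join(random.choices(string.ascii_letters, k=8))),
    "BOOL":  lambda: random.choice(["True", "False"]),
}

def instantiate_fuzz_case(context: str) -> str:
    """Replace every <<TYPE>> marker in a context file with a random literal of that type."""
    return re.sub(r"<<(INT|FLOAT|STR|BOOL)>>",
                  lambda m: GENERATORS[m.group(1)](),
                  context)

template = "result = target_function(<<INT>>, <<FLOAT>>, flag=<<BOOL>>)"
print(instantiate_fuzz_case(template))
```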
For Rosalind functions, the process is simpler and more efficient as the functions are less complex. The golden code's output is generated and cached ahead of time. During testing, the tester executes the generated code within the corresponding context, and the output of this execution is compared with the cached golden code output.
For every fuzzer test case and Rosalind test case, we ran the golden output against itself, to ensure that the golden output passes each test with one hundred percent reliability. Furthermore, to ensure system security and test reliability, we ran our tests in Docker environments. We constructed a system using Amazon Web Services, coordinating tasks across multiple nodes to accelerate the process without compromising the validity of the results. After creating a generalized Docker image, equipped with all necessary Python requirements, we summarized our testing framework in Appendix L. We also addressed potential concerns about testing issues due to changes in packages in Appendix Y.
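The following is a simplified sketch of how one generated sample might be executed inside a disposable, network-isolated container; the image name `biocoder-test-image`, the mount layout, and the timeout are assumptions, and the real framework (summarized in Appendix L) additionally coordinates many such runs across AWS nodes.

```python
import subprocess

def run_in_docker(workdir: str, timeout_s: int = 60):
    """Execute tester.py inside a disposable container with no network access."""
    cmd = [
        "docker", "run", "--rm", "--network", "none",
        "-v", f"{workdir}:/workspace", "-w", "/workspace",
        "biocoder-test-image",            # hypothetical pre-built image with all Python requirements
        "python", "tester.py",
    ]
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
        return proc.returncode, proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return -1, "function timed out"
```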
## 4 Models and Results
To test BioCoder, we opted to benchmark StarCoder-15B (Li et al., 2023), StarCoder+-15B (Li et al., 2023), InCoder (Fried et al., 2023), SantaCoder (Allal et al., 2023), CodeGen (6B-mono and 16B-mono) (Nijkamp et al., 2023b), CodeGen2-7B (Nijkamp et al., 2023a), InstructCodeT5+ (Wang et al., 2023a), GPT3.5-Turbo and GPT-4 (OpenAI, 2023) through Azure OpenAI Service. Full details of the model context lengths and model sizes can be found in Table 3. Our prompts were fed directly into the LLMs, and we preserved the output for subsequent analyses. We utilized similar parameters to make model testing consistent across all the tested models. This approach allowed for a more unbiased comparison of how each model performs on our benchmark.
Aiming to accurately represent the performance of the LLM outputs, we implemented basic correction mechanisms to rectify minor syntax and style errors that did not impact functionality. For instance, all StarCoder outputs were appended with a post-script. Consequently, each LLM output was passed
through these correction mechanisms before being sent to the testing framework for evaluation (see Table 4 and 5). Furthermore, to empirically evaluate the hypothesis regarding the efficacy of smaller, specialized LLMs in closed-domain code generation, as opposed to large open-domain pre-trained models like GPT-3.5 and GPT-4, we also fine-tuned StarCoder and documented the resulting performance. Our aim is to use StarCoder as a representative sample of currently popular models. Due to computing restraints, we are unable to fine-tune all the models but we also encourage the contribution from the broader community. We ran inference on HPC clusters with 8x A100 GPUs.
The results in Table 4 and Table 5 align with our initial hypothesis, which proposed that larger models would likely outperform their smaller counterparts. However, the significant performance gap between GPT-3.5, GPT-4, and all other code-generation models was surprising. This stark contrast underscores the crucial role of both the dataset size and parameter size of the base models in accomplishing closed-domain code generation prompts. Java performance improved considerably after fine-tuning, likely because the structure of the training and testing sets is more similar. Interestingly, despite the rudimentary nature of our fine-tuning on StarCoder, the results still highlighted a significant improvement compared with the non-fine-tuned model. This contrast in performance bolsters our original assertion: achieving success in closed-domain tasks can be realized either through large open-domain LLMs, such as GPT-3.5 and GPT-4, or via fine-tuning smaller models. These smaller models could potentially achieve comparable performance but with significantly reduced computational and memory requirements. Furthermore, Table 4 demonstrates that the performance of models improves with the inclusion of dependencies in prompts, indicating that including dependencies is an important part of prompting.
Without additional training, ChatGPT models performed notably better than other models. Their performance underscores the crucial role of both the dataset scale and model size. That said, the performance of other models (e.g. StarCoder) could be improved by fine-tuning.
## 5 Analysis and Discussion
Looking more closely at the results in Table 4, it is clear that the larger models with more parameters generally perform better than the smaller models. It is well known that the GPT-4 model includes trillions of parameters and dwarfs the other models in this study in both size and performance. However, it is clear that BioCoder remains a challenge, as GPT-3.5 and GPT-4, the best-performing models, only achieved an accuracy of slightly under 60%.
Looking at the other models, it is interesting to note that while InstructCodeT5+, CodeGen, and CodeGen2 are all larger than InCoder and SantaCoder, they perform far worse. This is likely due to the former being trained for single-line completions rather than function completion. Furthermore, InstructCodeT5+, CodeGen, and CodeGen2 have relatively small context limits (Mikolov et al., 2013; MOI et al., 2022), which likely hurts their performance. As for the remaining model, SantaCoder notably performs impressively well for being only a roughly 1B parameter model, which is an indication of aggressive fine-tuning on Python code.
\begin{table}
\begin{tabular}{c c c} \hline \hline Model & Context limit & \# Parameters \\ \hline InCoder (Fried et al., 2023) & _2048_ & _6B_ \\ SantaCoder (Allal et al., 2023) & _2048_ & _1.1B_ \\ StarCoder (Li et al., 2023) & _8192_ & _15.5B_ \\ StarCoderPlus (Li et al., 2023) & _8192_ & _15.5B_ \\ InstructCodeT5+ (Wang et al., 2023a) & _2048_ & _16B_ \\ CodeGen-6B (Nijkamp et al., 2023b) & _2048_ & _6B_ \\ CodeGen-16B (Nijkamp et al., 2023b) & _2048_ & _7B*_ \\ CodeGen2 (Nijkamp et al., 2023a) & _2048_ & _7B*_ \\ GPT-3.5-Turbo & _8192_ & _Unknown_ \\ GPT-4 & _8192_ & _Unknown_ \\ \hline \hline \end{tabular}
\end{table}
Table 3: Context length limits and sizes of different code LLMs.
We also note that the context length limit has a very large impact on how different models perform on different prompts. Except for GPT-3.5 and GPT-4, models performed the best on the _Summary Only_ prompt style, likely because of its shorter length. This is especially pronounced for InCoder and SantaCoder, as they both have small context limits of 2,048 tokens. Their Pass@K performance for Python decreases dramatically when switching from short _Summary Only_ prompts to longer _Summary at Top/Bottom_. As shown by the scatterplots in Appendix K, on models with an average Pass@K score of at least 2%, there is an inverse relationship between the number of tokens in the prompt and
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Prompt} & \multicolumn{3}{c}{Java} & \multicolumn{3}{c}{Python} \\ \cline{3-10} & & \multicolumn{2}{c}{Pass@1 Pass@5 Pass@10 Pass@20} & \multicolumn{2}{c}{Pass@1 Pass@5 Pass@10 Pass@20} & \multicolumn{2}{c}{Pass@1 Pass@5 Pass@10 Pass@20} \\ \hline \multirow{4}{*}{InCoder-6B} & _Summary at Top_ & 0 & 0 & 0 & 0 & 0.828 & 2.016 & 3.006 & 4.459 \\ & _Uncommented_ & 0 & 0 & 0 & 0 & 0.032 & 0.159 & 0.318 & 0.637 \\ & _Summary Only_ & 0 & 0 & 0 & 0 & 1.688 & 5.320 & 8.332 & 12.006 \\ & _Necessary Only_ & 0 & 0 & 0 & 0 & 0.032 & 0.159 & 0.318 & 0.637 \\ \hline \multirow{4}{*}{SantaCoder-1.1B} & _Summary at Top_ & 0 & 0 & 0 & 0 & 0.637 & 1.338 & 1.844 & 2.548 \\ & _Uncommented_ & 0 & 0 & 0 & 0 & 0.287 & 0.764 & 0.955 & 1.274 \\ & _Summary Only_ & 0 & 0 & 0 & 0 & 2.965 & 9.848 & 14.227 & 18.181 \\ & _Necessary Only_ & 0 & 0 & 0 & 0 & 0.032 & 0.159 & 0.318 & 0.637 \\ \hline \multirow{4}{*}{StarCoder-15.5B} & _Summary at Top_ & 0 & 0 & 0 & 0 & 3.694 & 13.197 & 19.359 & 24.554 \\ & _Uncommented_ & 0 & 0 & 0 & 0 & 0.318 & 1.062 & 1.591 & 2.548 \\ & _Summary Only_ & 0 & 0 & 0 & 0 & 4.682 & 15.225 & 21.200 & 27.166 \\ & _Necessary Only_ & 0 & 0 & 0 & 0 & 0.127 & 0.603 & 1.123 & 1.911 \\ \hline \multirow{4}{*}{StarCoder-15.5B} & _Summary at top_ & 0 & 0 & 0 & 0 & 5.818 & 16.562 & 21.091 & 27.048 \\ & _Uncommented_ & 0 & 0 & 0 & 0 & 3.312 & 9.073 & 12.574 & 17.536 \\ & _Summary Only_ & 0.200 & 1.000 & 2.000 & 4.000 & 7.295 & 20.838 & 26.143 & 39.570 \\ & _Necessary Only_ & 3.300 & 12.097 & 19.545 & 30.000 & 0.597 & 1.173 & 1.813 & 2.611 \\ \hline \multirow{4}{*}{StarCoder+} & _Summary at Top_ & 0 & 0 & 0 & 0 & 2.675 & 9.133 & 14.019 & 19.650 \\ & _Uncommented_ & 0 & 0 & 0 & 0 & 0.510 & 0.955 & 1.274 & 1.911 \\ & _Summary Only_ & 1.300 & 5.031 & 8.042 & 12.000 & 2.548 & 8.279 & 12.864 & 18.057 \\ & _Necessary Only_ & 0 & 0 & 0 & 0 & 0.127 & 0.457 & 0.609 & 0.637 \\ \hline \multirow{4}{*}{InstructCodeT5+} & _All prompt types_ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \multirow{4}{*}{CodeGen-6B-mono} & _Summary at Top_ & 0 & 0 & 0 & 0.637 & 0.637 & 0.637 & 0.637 \\ & _Uncommented_ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & _Summary Only_ & 0 & 0 & 0 & 0.637 & 0.637 & 0.637 & 0.637 \\ & _Necessary Only_ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \multirow{4}{*}{CodeGen-16B-mono} & _Summary at Top_ & 0 & 0 & 0 & 0.637 & 0.637 & 0.637 & 0.637 \\ & _Uncommented_ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & _Summary Only_ & 0 & 0 & 0 & 0 & 0.637 & 0.637 & 0.637 \\ & _Necessary Only_ & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \multirow{4}{*}{GPT-3.5-Turbo} & _Summary at Top_ & 4.100 & 7.235 & 8.989 & 11.600 & 22.771 & 33.461 & 36.551 & 39.490 \\ & _Uncommented_ & 6.300 & 11.563 & 14.436 & 18.000 & 11.019 & 19.075 & 21.680 & 24.204 \\ & _Summary Only_ & 17.400 & 33.199 & 37.878 & 42.000 & 24.682 & 33.997 & 37.132 & 40.127 \\ & _Necessary Only_ & 43.500 & 52.582 & 53.995 & 55.400 & 28.758 & 39.529 & 44.029 & 47.771 \\ \hline \multirow{4}{*}{GPT-4} & _Summary at top_ & 1.100 & 5.500 & 11.000 & 22.000 & 10.701 & 25.500 & 32.910 & 39.490 \\ & _Uncommented_ & 6.367 & 11.234 & 15.897 & 18.562 & 12.654 & 20.129 & 24.387 & 27.932 \\ \cline{1-1} & _Summary Only_ & 19.483 & 24.721 & 29.634 & 2.543 & 13.172 & 24.578 & 28.394 & 31.938 \\ \cline{1-1} & _Necessary Only_ & **45.011** & **55.350** & **57.616** & **60.000** & **38.439** & **48.491** & **50.619** & **52.229** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Zero-shot and finetuned performance with five prompt versions of BioCoder. For detailed explanations of prompt versions see Appendix I. For all settings, we performed trials twice for Pass@K. Results are in %. We only finetuned StarCoder for 2000 steps, all others are zero-shot results. Additional results can be found in Appendix U.
the Pass@K score. Furthermore, for models such as SantaCoder and GPT models, the performance fell sharply after around 500 tokens.
Focusing on Java's performance, it is clear that most of the publicly available LLMs have not been fine-tuned for Java, resulting in the near 0% Pass@K values. Finally, Rosalind's performance results in Table 5 are roughly in line with Python's performance in Table 4.
Table 6 provides an overview of the error statistics collected from our test runs. For more information about what each error means, see Appendix S. We also report error statistics per model in Appendix V. Looking at Appendix S, it appears that the models struggle the most at writing code that will successfully compile or run. The number of generated samples that produced wrong output is relatively small compared to those that failed to compile or run, indicating that when a model does generate runnable code, that code is generally accurate and rarely produces the wrong output. Therefore, it seems that models have the most trouble generating syntactically correct code rather than understanding the logic required to complete the problems outlined in the prompts. Further discussion on the results of each model can be found in Appendix J.
Despite these challenges, we firmly believe that this dataset holds pivotal importance for benchmarking future models, especially ones with larger context limits. For instance, models like GPT-4, with a 32k context limit, fall comfortably within our token range. Furthermore, other non-code language models, such as Claude, support context lengths of up to 100k tokens or more.
## 6 Conclusions and Future Work
Our study underscores the challenges in code generation, emphasizing the shortcomings of current models in the face of complex tasks. We present highly challenging natural-language-to-code tasks, providing input rich with dependencies and imports. Existing models struggle to comprehend the application of these imported toolkits or functions contained in other files. Our tasks are marked by extensive input and a high level of specialization. These programs are closer to real-world scenarios, requiring professional-level code-writing skills, rather than merely catering to beginners. This suggests that the code in question can typically be produced only by professional programmers.
| Statistics | Count |
| --- | --- |
| Failure reason: different output | 8661 |
| Failure reason: invalid syntax | 117665 |
| Failure reason: runtime error | 55351 |
| Failure reason: function timed out | 4 |
| Passed tests | 7982 |

Table 6: Error distribution across all test runs.

| Prompt | Mean | Median | STDev |
| --- | --- | --- | --- |
| Java | 2278.82 | 2599.00 | 1331.81 |
| Python | 2790.75 | 2194.00 | 2539.79 |
| Rosalind | 564.49 | 509.00 | 286.47 |
| Overall | 1510.66 | 812.50 | 1882.80 |

Table 7: Prompt token-count statistics (mean, median, and standard deviation of tokens per prompt).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Model}} & \multirow{2}{*}{Prompt} & \multicolumn{4}{c}{Rosalind} \\ \cline{3-5} & & Pass@1 & Pass@5 & Pass@10 & Pass@20 \\ \hline InCoder & _Description_ & 0.020 & 0.099 & 0.198 & 0.395 \\ \hline SantaCoder & _Description_ & 0.158 & 0.658 & 1.075 & 1.581 \\ \hline StarCoder & _Description_ & 0.534 & 2.042 & 3.228 & 4.743 \\ \hline StarCoderPlus & _Description_ & 0.356 & 1.313 & 1.978 & 2.767 \\ \hline StarCoder (fine-tuned) & _Description_ & 1.623 & 3.109 & 5.328 & 7.036 \\ \hline InstructCodeT5+ & _Description_ & 0.059 & 0.296 & 0.593 & 1.186 \\ \hline CodeGen & _Description_ & 0.692 & 2.088 & 3.055 & 3.953 \\ \hline CodeGen2 & _Description_ & 0.059 & 0.296 & 0.593 & 1.186 \\ \hline GPT-3.5 Turbo & _Description_ & 23.671 & 31.953 & 36.702 & 40.725 \\ \hline GPT-4 & _Description_ & **24.308** & **39.551** & **44.864** & **50.198** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of Rosalind. In this table, we have omitted the percentage symbol (%), although these figures represent the Pass@K in the form of percentages. For all settings, n=20.
As a novel benchmark within the field of bioinformatics, BioCoder leaves a multitude of areas open for future exploration. Currently, we have covered most of the existing models (as of August 2023). Additionally, we will move beyond function-level code generation, as current models do not have the token capacity to generate file-sized code. We included only a few well-established bioinformatics repositories; in future work, given more annotation resources, we could include additional repositories that span more niche sub-fields of bioinformatics as well as other languages.
## 7 Ethics Statement
We understand the importance of discussing the potential impacts of our specific work. In the context of our benchmark, one potential concern is the accuracy of the benchmark across all data points. There is a risk that the benchmark may reward incorrect outputs, which users might then use to test their LLMs. This concern is especially significant in research settings, where deploying incorrect code could lead to inaccurate conclusions and initiate a snowball effect of misinformation. Beyond the immediate implications for LLMs and research outcomes, our benchmark and dataset could potentially be misused. For example, malicious users might use these data to train models that generate harmful biomedical experiments, such as designing dangerous molecules. While our intention is to advance knowledge and use it in a beneficial manner, there must be a level of caution and responsibility in employing the benchmark and dataset we provide.
|
2309.07668 | CoRF : Colorizing Radiance Fields using Knowledge Distillation | Neural radiance field (NeRF) based methods enable high-quality novel-view
synthesis for multi-view images. This work presents a method for synthesizing
colorized novel views from input grey-scale multi-view images. When we apply
image or video-based colorization methods on the generated grey-scale novel
views, we observe artifacts due to inconsistency across views. Training a
radiance field network on the colorized grey-scale image sequence also does not
solve the 3D consistency issue. We propose a distillation based method to
transfer color knowledge from the colorization networks trained on natural
images to the radiance field network. Specifically, our method uses the
radiance field network as a 3D representation and transfers knowledge from
existing 2D colorization methods. The experimental results demonstrate that the
proposed method produces superior colorized novel views for indoor and outdoor
scenes while maintaining cross-view consistency than baselines. Further, we
show the efficacy of our method on applications like colorization of radiance
field network trained from 1.) Infra-Red (IR) multi-view images and 2.) Old
grey-scale multi-view image sequences. | Ankit Dhiman, R Srinath, Srinjay Sarkar, Lokesh R Boregowda, R Venkatesh Babu | 2023-09-14T12:30:48Z | http://arxiv.org/abs/2309.07668v1 | # CoRF : Colorizing Radiance Fields using Knowledge Distillation
###### Abstract
Neural radiance field (NeRF) based methods enable high-quality novel-view synthesis for multi-view images. This work presents a method for synthesizing colorized novel views from input grey-scale multi-view images. When we apply image or video-based colorization methods on the generated grey-scale novel views, we observe artifacts due to inconsistency across views. Training a radiance field network on the colorized grey-scale image sequence also does not solve the 3D consistency issue. We propose a distillation-based method to transfer color knowledge from colorization networks trained on natural images to the radiance field network. Specifically, our method uses the radiance field network as a 3D representation and transfers knowledge from existing 2D colorization methods. The experimental results demonstrate that the proposed method produces superior colorized novel views for indoor and outdoor scenes while maintaining better cross-view consistency than the baselines. Further, we show the efficacy of our method on applications such as colorization of radiance field networks trained from 1.) Infra-Red (IR) multi-view images and 2.) old grey-scale multi-view image sequences.
## 1 Introduction
Colorization is an important and well-studied problem [17, 2, 15, 42] in computer graphics where the objective is to add color to a monochromatic signal. This monochromatic signal can either be obtained from special sensors such as IR sensor or it can be in the form of legacy content. Recently, NeRF-based methods have become popular to generate novel views of a scene while learning the underlying geometry of the 3D scene implicitly using multi-view input images. Our research focuses on a precise scenario: generating colorized novel views in a 3D consistent manner from monochromatic input multi-view images. Fig. 1 illustrates our approach.
Colorization is a well-studied problem in the image [17, 2, 15, 42] and video domain [14, 19, 34]. However, it is not well addressed for the novel view synthesis task. Solving this problem is essential because it requires the radiance field to generate colorized novel views with limited resources i.e., only grey-scale views are available. Colorizing grey-scale multi-view image sequences holds tremendous potential in augmented reality (AR) and virtual reality (VR) applications, especially in restoring legacy content. Also, the proposed approach has applications in other modalities, such as infra-red sensors, which capture shapes and objects in scenes but do not capture color information.
Colorization is an ill-posed problem. Recovering the true color from a grey-scale observation is not trivial. For example, given a grey-scale image of a flower, predicting if the flower is red or blue, or pink is impossible. Hence, given a grey-scale observation, there can be multiple possibilities of color. The objective here is to find a color which looks natural and aesthetically pleasing. Another problem is that the entire image should be colorized consistently maintaining spatial consistency. The color assigned to an object in a scene should not leak into its surrounding. Similarly, the radiance field colorization should be 3D consistent i.e. the color assigned to an object or a region should not change drastically with the change in camera movement. Image and video colorization methods fail to model this aspect during colorization as shown in Fig. 1.
Colorizing monochromatic signals such as black-and-white images has been thoroughly investigated in the literature [17, 2, 42, 15]. Traditional methods solved an objective function to colorize the images using sparse inputs such as scribbles [17, 25]. Recently, deep learning methods [10, 45, 2, 42] have been used to solve the colorization task in videos and images and have proven to be very effective. This is because colorization requires a rich understanding of the content of the video, such as the objects, their temporal and spatial relationships, and global temporal context. Deep learning methods are well known to acquire this understanding by learning from large-scale real-world video datasets.
We can apply image colorization methods to the input grey-scale images and train a radiance field network, but the generated novel views will not be 3D consistent. Similarly, we can apply video colorization methods on the generated novel-view sequence, which may be temporally consistent but does not guarantee 3D consistency as shown in Fig. 1. Another approach is to use generative capability for 3D aware colorized view synthesis using techniques
such as GSN [5], GRAF [29]. These methods suffer from low-quality novel view synthesis and are category specific. Hence, it's impractical to train these methods on multiple scenes for the colorization task as it loses the capability of generating photo-realistic novel views for a single scene.
We propose a distillation-based method to leverage existing deep image colorization methods. This strategy incurs no additional cost for training a separate colorization module for the radiance field networks. We divide our training process into two stages. In stage 1, we train a radiance field network on input grey-scale multi-view images. In stage 2, we distill knowledge from a teacher colorization network into the radiance field network trained in stage 1. We also regularize the model using a multi-scale self-regularization technique to mitigate any spatial color inconsistency. We show the effectiveness of our approach on various grey-scale image sequences generated from existing datasets such as LLFF [20] and Shiny [37]. We also show results on two downstream tasks: 1.) colorizing multi-view IR images and 2.) colorizing in-the-wild grey-scale content. Our main contributions are:
* We propose a novel approach _CoRF_ for colorizing radiance field networks to produce 3D consistent colorized novel views from input grey-scale multi-view images.
* We propose a multi-scale self-regularization to reduce spatial inconsistencies.
* We demonstrate our approach on two real-world applications for novel view synthesis: input multi-view IR images and input grey-scale legacy content.
## 2 Related Work
**Image Colorization.** One of the earliest deep-learning-based methods was proposed by [11] which estimates the color of the grey-scale images by jointly learning global and local features through a CNN. [15] trains the model to predict per-pixel color histograms by leveraging pre-trained networks for high and low-level semantics. [43] also colorizes a grey-scale image using a CNN network. GANs have also been used for the image colorization task. [33] uses a generator to produce the chromaticity of an image from a given grey-scale image which is conditioned on semantic cues. GAN methods have good generalization on new images.
Many methods [4, 15, 42, 11] colorize the image automatically i.e. just with a grey-scale input. As there can be multiple plausible colorized images for a grey-scale input, [3, 18, 38, 12] look into generating diverse colorization. Some of these methods use generative priors for diverse colorization. These methods [33, 30, 44] use semantic information for better plausible colorization which is semantically consistent.
Figure 1: (a) Overview of our method. Given input multi-view grey-scale views, the proposed approach βCoRFβ is able to generate colorized views which are 3D consistent. Two colorized novel-views (b) and (e) by I. Image-colorization baseline, II. Video-colorization baseline, and III. our approach on βplaygroundβ scene from LLFF [20] dataset. State-of-the-art colorization baselines generate 3D inconsistent novel-views as shown in zoomed-in regions in (c) and (d).
**Video Colorization.** Compared to image colorization, video colorization is more challenging as it has to color an entire sequence while maintaining temporal consistency along with spatial consistency. [16] introduces an automatic approach for video colorization with self-regularization and diversity without using any label data. [39] presents an exemplar-based method that is temporally consistent and remains similar to the reference image. They use a recurrent framework using semantic correspondence and color propagation from the previous step.
**Knowledge Distillation.** [8] trained a smaller network to imitate the soft targets generated by a larger network. Since then, a lot of work has been done in this area. Some common approaches include distillation based on the activations of hidden layers in the network [7], distillation based on the intermediate representations generated by the network [1], and distillation using an adversarial loss function to match the distributions of activations and intermediate representations in the two networks [36]. This knowledge transfer reduces the reliance on large-scale datasets in real-world problems.
## 3 Method
### Preliminaries
**NeRF.** NeRF [21] represents the implicit 3D geometry of a scene by learning a continuous function whose input is 3D location \(x\) and a viewing direction \(d\) and outputs are color \(c\) and volume density \(\sigma\) which is parameterized by a multi-layer perceptron (MLP) network. During rendering, a ray is cast from the camera center along the viewing direction \(d\) and is sampled at different intervals. Then, the color of the pixel is determined by performing a weighted average of the color at each of the sampled 3D points using volumetric rendering [21] with \(f\). Finally, the MLP is learned by optimizing the squared error between the rendered pixels and the ground truth pixels from multiple input views:
\[L_{photo}=||I(x,y)-f(r)||_{2}^{2} \tag{1}\]
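For concreteness, this weighted average is the standard volume-rendering quadrature; a minimal NumPy sketch for a single ray is given below, where the per-sample densities `sigmas`, colors `colors`, and spacings `deltas` along the ray are assumed inputs.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite per-sample (density, color) pairs along one ray into a pixel color."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                              # opacity of each segment
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))       # transmittance up to each sample
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                       # final RGB (or luma) value

sigmas = np.array([0.0, 0.5, 2.0, 4.0])
colors = np.array([[0.1, 0.1, 0.1], [0.3, 0.3, 0.3], [0.8, 0.8, 0.8], [1.0, 1.0, 1.0]])
deltas = np.full(4, 0.25)
print(render_ray(sigmas, colors, deltas))
```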
**Hybrid Representations.** Recently, hybrid representations like InstantNGP [22], Plenoxels [6], DVGO [31] have become popular as they use grid-based representation which is much faster than the traditional NeRF representations. We develop upon Plenoxels [6] which represents a 3D scene with sparse voxel grids and learns spherical harmonics and density for each voxel grid. Spherical harmonics are estimated for each of the color channels. For any arbitrary 3D location, density, and spherical harmonics are trilinearly interpolated from the nearby voxels. Plenoxels also use the photometric loss described in NeRF [21] (Eq. 1). Additionally, they also use total variation (TV) regularization on the voxel grid. Final loss function is described as :
\[L_{tot}=L_{recon}+\lambda_{TV}L_{TV} \tag{2}\]
### Overview
Given a set of multi-view grey-scale images of a scene \(X=\{X_{1},...,X_{n}\}\) and corresponding camera poses \(P=\{P_{1},...,P_{n}\}\), we learn a radiance field network \(f_{\theta}\) which predicts density \(\sigma\) and color \(c\) along a camera ray \(r\). To achieve this we propose a two-stage learning framework. Even though the input to the radiance field network is multi-view grey-scale images, we can still learn the underlying geometry and luminance of the scene. This is _"Luma Radiance Field Stage"_ in our method. Next, we distill the knowledge from a colorization network trained on natural images to the learned radiance field network in the previous stage. This is _"Color Distillation Stage"_ in our method. Fig. 2 illustrates the overall pipeline of our method. We discuss_"Luma Radiance Field Stage"_ in Section 3.3 and _"Color Distillation Stage"_ in Section 3.4.
### Luma Radiance Field Stage
We train a neural radiance field network using Plenoxels [6], \(f_{\theta}\), to learn the implicit 3D function of the scene. As our method does not have access to the color images, we take the photometric loss w.r.t. the ground-truth grey-scale image following Eq. 1. We show that the radiance field network has no issues in learning the grey-scale images, both qualitatively and quantitatively, in Section C.1 of the supplementary material.
Figure 2: Overall architecture of our method. First, we train a radiance field network from input multi-view grey-scale images in the "Luma Radiance Field Stage". Next, we distill knowledge from a teacher colorization network trained on natural images to the radiance field network trained in the previous stage.
```
Input:  trained radiance field f_theta (from multi-view grey-scale images),
        colorization teacher network T
Output: colorized radiance field network

for each image i = 1, 2, ..., N do
    L_i   <- 0
    I_i^C <- T(X_i)              # teacher-colorized training view
    I_i^R <- f_theta(P_i)        # view rendered at pose P_i
    L_i   <- L_i + L_distill(I_i^C, I_i^R)
    update f_theta with L_i
```
**Algorithm 1** Color Distillation Algorithm
### Color Distillation Stage
From the previous stage, we have a trained radiance field \(f_{\theta}\) which has learned the implicit 3D function of the scene but generates grey-scale novel views. However, image colorization is a generative task that requires a large number of diverse training images to produce photo-realistic color images. This is difficult to do in the case of radiance field networks because there are often only a few training images per scene. Hence, we strongly believe that the best strategy for colorizing a radiance field network is to distill knowledge from colorization networks already trained on a large number of natural images.
We propose a color distillation strategy that transfers color details to a 3D scene parameterized by \(f_{\theta}\) from any image colorization network \(\mathcal{T}\) trained on natural images. More precisely, given a set of multi-view grey-scale images of a scene \(\hat{X}=\{X_{1},...,X_{n}\}\), we pass them through the colorization network \(\mathcal{T}\) to obtain set of colorized images \(I^{C}=\{I_{1}^{C},I_{2}^{C},...,I_{n}^{C}\}\). Corresponding to the camera poses of these images, we obtain rendered images \(I^{R}=\{I_{1}^{R},I_{2}^{R},...,I_{n}^{R}\}\) from the radiance field network trained in the previous stage on \(X\). We convert both \(I_{i}^{C}\) and \(I_{i}^{R}\) to _Lab_ color space and distill knowledge from the color network \(\mathcal{T}\). Then, our distillation loss can be written as :
\[\mathcal{L}_{distill}(I_{i}^{C},I_{i}^{R})=||L_{i}^{C}-L_{i}^{R}||^{2}+||a_{i}^ {C}-a_{i}^{R}||+||b_{i}^{C}-b_{i}^{R}|| \tag{3}\]
To summarize, we minimize MSE loss between the luma channel and use L1 loss for \(a\) and \(b\) channels. MSE loss between luma channels preserves the content of the original grey-scale images and L1 loss on the chroma channels distills information from the colorization network.
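A minimal PyTorch sketch of Eq. 3 is given below; it assumes RGB tensors in \([0,1]\) of shape (B, 3, H, W) and uses kornia for the RGB-to-Lab conversion, which is one possible choice rather than necessarily the implementation used in the paper.

```python
import torch
import torch.nn.functional as F
import kornia

def distill_loss(rendered_rgb: torch.Tensor, teacher_rgb: torch.Tensor) -> torch.Tensor:
    """MSE on luma (L) plus L1 on chroma (a, b) between rendered and teacher-colorized views."""
    lab_r = kornia.color.rgb_to_lab(rendered_rgb)   # (B, 3, H, W): channels are L, a, b
    lab_t = kornia.color.rgb_to_lab(teacher_rgb)
    loss_L = F.mse_loss(lab_r[:, 0], lab_t[:, 0])
    loss_a = F.l1_loss(lab_r[:, 1], lab_t[:, 1])
    loss_b = F.l1_loss(lab_r[:, 2], lab_t[:, 2])
    return loss_L + loss_a + loss_b
```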
**Multi-scale regularization.** As image colorization is performed individually on each ground-truth grey-scale image, it often leads to different colorizations across multiple views. Hence, we further introduce losses to regularize this inconsistency. In multi-scale regularization, we analyze an image at different scales by constructing image pyramids that correspond to different scales of an image. The lowest level of the pyramid contains the image structure and dominant features, while the finer levels, as the name indicates, contain finer features such as texture. We create an image pyramid by progressively sub-sampling an image. Then we start color distillation at the coarsest scale as discussed in the previous section. For subsequent scales, we regularize the predicted chroma channels with the prediction from the previous scale. We provide details of this algorithm in Algorithm 2. \(\mathcal{P}_{a}\) and \(\mathcal{P}_{b}\) are placeholders that keep the interpolated predicted chroma channels from the previous scale. We use bilinear interpolation to upsample the chroma channels.
```
Input:  trained radiance field f_theta (from multi-view grey-scale images)
Output: colorized radiance field network

for each image i = 1, 2, ..., N do
    L_i <- 0;  P_a <- empty;  P_b <- empty
    for each scale s = 1, 2, ..., K do                    # coarse to fine
        I_s^C <- downsample(I_i^C, s)
        I_s^R <- f_theta(P_i, s)
        L_i   <- L_i + L_distill(I_s^C, I_s^R)
        if s != K then
            L_i <- L_i + ||P_a - a_s^R|| + ||P_b - b_s^R||
            P_a <- interpolate(a_s^R, 2s)                 # bilinear upsampling of predicted chroma
            P_b <- interpolate(b_s^R, 2s)
    update f_theta with L_i
```
**Algorithm 2** Color Distillation with Multi-Scale Regularization
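A simplified PyTorch sketch of the multi-scale chroma regularization follows; it assumes the per-scale rendered and teacher chroma channels (a and b stacked as two channels) are already available as coarse-to-fine lists, and it regularizes each scale toward the bilinearly upsampled prediction from the coarser scale, which is a slight simplification of Algorithm 2.

```python
import torch
import torch.nn.functional as F

def multiscale_chroma_loss(rendered_chroma, teacher_chroma):
    """rendered_chroma / teacher_chroma: coarse-to-fine lists of (B, 2, H_s, W_s) a/b tensors."""
    loss, prev = 0.0, None
    for rendered, teacher in zip(rendered_chroma, teacher_chroma):
        loss = loss + F.l1_loss(rendered, teacher)        # distillation term at this scale
        if prev is not None:
            up = F.interpolate(prev, size=rendered.shape[-2:], mode="bilinear", align_corners=False)
            loss = loss + F.l1_loss(rendered, up)         # consistency with the coarser scale
        prev = rendered.detach()
    return loss
```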
### Implementation Details
As described in Section 3.3, we use Plenoxel [6] as our radiance field network representation. We use the suggested setting for the datasets used in our experiments. During the Color Distillation stage, we estimate the loss in _Lab_ color space. We use the deferred back propagation technique proposed by ARF [40] to backpropagate the loss. In this stage, we train only for \(10\) epochs.
## 4 Experiments
In this section, we present quantitative (Section 4.1) and qualitative (Section 4.2) experiments to evaluate our method. Our method's effectiveness is demonstrated with two image colorization teacher networks [43] and [12]. To summarize, our method takes a set of grey-scale posed images of a given scene and learns to generate colorized novel views. We compare our approach with two trivial baselines: 1.) colorize input multi-view grey-scale images
and then train a radiance field network, and 2.) colorize the generated novel-view grey-scale image sequence using a video colorization method. To quantitatively evaluate, we use a cross-view consistency metric using a state-of-the-art optical flow network RAFT [32] used in SNeRF [23] and Stylized-NeRF [9]. Additionally, we conduct a user study to qualitatively evaluate the colorization results. We also present ablations on the critical design choices in our proposed approach in Appendix C.3 in the supplementary material. Finally, we show the effectiveness of our approach on two real-world downstream applications - colorization of radiance field networks trained on 1.) Infra-Red (IR) and 2.)In-the-wild Grey-Scale images. Our experiments show that our distillation approach outperforms the baseline methods, producing colorized novel views while maintaining 3D consistency. Our distillation strategy can be used to achieve 3D consistent colorization of NeRFs by incorporating advancements in image colorization networks. We encourage readers to watch the supplementary video to assess our work better.
**Datasets.** We conduct experiments on two types of real-scenes: i) forward-facing real scenes LLFF [20] and Shiny [37] dataset; and ii) \(360^{\circ}\) unbounded real-scenes Tanks & Temples (TnT) [13] dataset. LLFF [20] dataset provides \(24\) scenes captured using a handheld cellphone, and each scene has \(20-30\) images. The camera poses are extracted through COLMAP [28]. Shiny [37] has \(8\) scenes with multi-view images. Tanks & Temples (TnT) [13] also has \(8\) scenes which are captured in realistic settings with an industry-quality laser scanner for capturing the ground truth. These datasets have a variety in terms of objects, lighting, and scenarios. The supplementary material contains more details about the dataset. For experimentation purposes, we convert the images in the dataset to grey scale using a well-known image-format converter. We use the resolution size per the recommended configuration files in Plenoxel [6].
**Baselines.** We compare CoRF with the following baselines:
1. **Image Colorization \(\rightarrow\) Novel View Synthesis**: Train Plenoxels [6] on the input grey-scale images colorized by state-of-the-art image colorization methods [42, 12].
2. **Novel View Synthesis \(\rightarrow\) Video Colorization**: Obtain colorized novel views by applying state-of-the-art video colorization methods [10, 26] to the novel-view image sequence obtained from the Plenoxel [6] model trained on the grey-scale multi-view images.
All baselines use the same radiance field representation: Plenoxel [6]. For baseline 1, we use [43] and [12] for colorizing the input views, thus creating two versions for this baseline. Similarly, for baseline 2, we create two versions using DeepRemaster [10] and DeOldify [26]. We did not use image colorization techniques on the rendered grey-scale views because they do not consider temporal and multi-view consistency. Similarly, we did not apply video-colorization techniques to the multi-view grey-scale images because different input views could lead to different sequences for the video-colorization network.
### Qualitative Results
**Image Colorization \(\rightarrow\) Novel View Synthesis.** We compare our method with both versions of this baseline in
Figure 3: **Qualitative comparison of our method against the baselines for the "Pasta" and "Truck" scenes.** We display two novel views rendered from different viewpoints, with rows 1 and 3 at the original resolution and rows 2 and 4 zoomed in on the highlighted regions. Even the video-based baselines (columns 2 and 3) exhibit inconsistencies. Note the color change in the highlighted regions of the "Truck" scene.
Fig. 4. We generate novel views from two different viewpoints to facilitate a better comparison of the 3D consistency. The baselines exhibit color variation in the "Cake" scene, while our strategy produces results without color variation. Similarly, in the "Leaves" and "Pasta" scenes, color variations can be observed in the highlighted leaf and pasta. We also observe similar 3D consistency in the TnT [13] dataset, as shown in Fig. 4 in the bottom two sets. Our method visually demonstrates better 3D consistency in the generated novel views.
**Novel View Synthesis \(\rightarrow\) Video Colorization.** We compare with the video-colorization-based baseline in Fig. 3 for the "Pasta" scene from LLFF [20] dataset and the "Truck" scene from TnT [13] dataset. The video-based
Figure 4: **Qualitative results of our method with image-colorization baselines.** We display two rows of each scene, each rendered from a different viewpoint. The first four columns depict the original resolution results, while the last four columns show zoomed-in regions of the highlighted areas in the first four columns. The image-based baselines have color inconsistencies in their results, whereas our distillation strategy (columns 3, 4, 7, 8) maintains color consistency across different views.
baseline versions exhibit better consistency than the image-based baselines but still generate inconsistent colorization, whereas our method preserves consistency due to explicit modeling in 3D. Specifically, we can observe a color change on the plate in the DeOldify [26] baseline version. Similarly, in the "Truck" scene, we can observe color consistency on the truck body across the two views for our method.
**Comparison with NeRF-Stylization methods.** We also compare our method with a popular NeRF-stylization method ARF [41] by giving a color image as a style image. We show results in Fig. 5 and we clearly observe artifacts in results from ARF. The stylization task involves transferring the overall style of one image to another image or video. For instance, a prominent loss function used in stylization work is LPIPS, which primarily penalizes differences in overall texture rather than local color values. On the other hand, the colorization task prioritizes achieving plausible colors, focusing on accurately representing local color values. Hence, stylization works cannot be utilized for the colorization task for radiance fields.
**Novel View Synthesis.** We show additional results in Appendix C.2 of the supplementary material. Our method maintains 3D consistency across all views despite challenging lighting conditions and scenes.
Consistent with the qualitative comparisons shown in Fig. 4, Fig. 6 shows the distribution of the consistency metric over the entire novel-view sequence for both teachers in a scene; our error curve is consistently lower and smoother than those of the baselines, validating our claim of consistency in the novel views obtained from our distillation method.
**User Study.** To compare our method with the baseline techniques, we provided users with \(12\) colorized sequences from LLFF [20], Shiny [37], Shiny Extended [37], and Tanks & Temples (TnT) [13]. The users were asked to select the scene with the best view consistency and relevant colors without spilling into neighboring regions. We invited \(30\) participants and asked them to select the best video satisfying the aforementioned criteria. Fig. 7 shows that the proposed distillation method was preferred \(52\%\) of the time, indicating the 3D consistency of our method.
## 5 Applications
**Multi-View IR images.** Our method is particularly relevant for modalities that do not capture color information. One such popular modality is IR imaging. For this experiment, we obtain data from [24]. This dataset is generated from a custom rig consisting of IR and multi-spectral (MS) sensors and an RGB camera, and contains \(16\) scenes with \(30\) views per modality. We show novel views in Fig. 8. We observe that a teacher trained on natural images works well for colorizing the scene. Also, as our approach is invariant to the choice of teacher, we could also use a colorization network trained on IR images as the teacher network.
**In-the-wild grey-scale images.** We show a real-world scenario where our approach can be used to restore old videos through colorization. We extract an image sequence from an old video of "Cleveland in the 1920s", extract the frames, and pass them through COLMAP [27] to obtain camera poses. We then use our framework to generate colorized novel views from this grey-scale legacy content. Similarly, we generate novel views for the "Mountain" sequence. We can observe in Fig. 9 that our method obtains 3D-consistent novel views for such in-the-wild sequences.
## 6 Conclusion
We present CoRF, a novel method for colorizing radiance field networks trained on multi-view grey-scale input images. We propose a novel distillation framework that leverages pre-trained colorization networks trained on natural images and produces results that are more 3D consistent than the baseline methods. We also propose a multi-scale self-regularization that prevents color de-saturation during distillation. Through our experiments, we show that this distillation is invariant to the choice of color teacher network and hence can adapt to advancements in the image colorization domain. Our method outperforms all the baselines both qualitatively and quantitatively, and the generated novel views are more 3D consistent than those of the baselines. We also conduct a user study in which our method was preferred by the participants. Further, we demonstrate the application of our approach to multi-view IR sensors and legacy image sequences. In future work, we would like to explore real-world applications in more detail.
Figure 8: (Column 1) Input multi-view IR Sequence. (Columns 2 and 3) Colorized multi-views from Our method. Our approach yields consistent novel-views for a different input modality.
Figure 7: User Study. Our result maintains view consistency after colorization and performs better than the baselines.
| | **Cake** | **Pasta** | **Three Buddha** | **Leaves** |
| --- | --- | --- | --- | --- |
| **Ours (RGB)** | 0.034 | 0.027 | 0.023 | 0.021 |
| **Ours (Lab)** | **0.033** | **0.025** | **0.023** | **0.019** |

Table 3: Ablation results show that using the distillation strategy in the "Lab" color space leads to superior cross-view consistency performance across various scenes.
## Appendix A Introduction
We present additional results and other details related to our proposed method, CoRF. We present training details in Appendix B.1, explain the downstream applications in Appendices B.2 and B.3, and present additional experimental results in Appendix C.
## Appendix B Implementation Details
### Training Details
We use Plenoxels [6] as the neural radiance field representation in our experiments. This representation uses a sparse 3D grid with spherical harmonic (SH) coefficients. For the first stage, the luma radiance field, we use the default Plenoxel grid recommended for the type of dataset, with a batch size of 5000 and RMSProp as the optimizer. In this stage, we use both the photometric loss and the total-variation (TV) loss proposed in Plenoxels [6]. In the distillation stage, we first obtain the colorized images from the teacher network. In our experiments, we present results with two image-colorization teachers: 1.) Zhang [42] and 2.) BigColor [12]. These colorized images are then used for distillation, during which we convert the colorized images to the "Lab" color space.
### Infra-Red Multi-Views
Multi-spectral or Infra-red (IR) sensors are more sensitive to the fine details in a scene than RGB sensors. Poggi [24] proposed Cross-spectral NeRF (X-NeRF) to model a scene using different spectral sensors. They built a custom rig with a high-resolution RGB camera and two low-resolution IR and MS cameras and captured 16 forward-facing scenes for their experiments. We extracted the IR multi-view images and camera poses from this dataset. We naively normalize the IR views between 0 and 1, thus treating them as a grey-scale multi-view input sequence, and then apply our method to colorize these views. Our method is effective in colorizing views from different modalities.
### In-the-wild Grey-Scale Multi-Views
Besides different multi-spectral sensors, there exists a lot of in-the-wild grey-scale content, either in the form of legacy videos or footage from monochromatic cameras. We extract these multi-view image sequences and then pass the images through COLMAP [27] to extract camera poses. For legacy grey-scale image sequences, there are many artefacts which affect the performance of COLMAP [27], so we first pass the sequence through the video restoration method proposed in [35]. We use the extracted camera poses and the grey-scale multi-view image sequence as input to the proposed method and obtain 3D-consistent colorized views. This downstream task has many applications in Augmented Reality (AR)/Virtual Reality (VR).
## Appendix C Experimental Results
### Grey-Scale Novel Views
We present quantitative results for the grey-scale novel views generated by the "Luma Radiance Field" stage (Stage 1) in Table 4. We also compare the generated novel views with the ground-truth grey-scale views in Figs. 10 and 11. We observe that the generated novel views are of good quality, which shows that learning a monochromatic signal using a radiance field representation is achievable.
### Ablations
We performed ablation studies on the choice of color space and the impact of multi-scale regularization. When distilling color at the original resolution only, some areas appeared de-saturated, as seen in the highlighted regions in Fig. 13(a) & (c). To overcome this issue, we employed multi-scale regularization, which mitigated the color de-saturation during the distillation process. This is evident in the improved color on the grass in the playground and on top of the cake, as seen in Fig. 13(b) & (d). One can also observe that the bluish patch is no longer present with the proposed multi-scale technique. These results demonstrate that our regularization method effectively addresses the color de-saturation problem in the generated views.
|
2309.00107 | Unsupervised evaluation of GAN sample quality: Introducing the TTJac
Score | Evaluation metrics are essential for assessing the performance of generative
models in image synthesis. However, existing metrics often involve high memory
and time consumption as they compute the distance between generated samples and
real data points. In our study, the new evaluation metric called the "TTJac
score" is proposed to measure the fidelity of individual synthesized images in
a data-free manner. The study first establishes a theoretical approach to
directly evaluate the generated sample density. Then, a method incorporating
feature extractors and discrete function approximation through tensor train is
introduced to effectively assess the quality of generated samples. Furthermore,
the study demonstrates that this new metric can be used to improve the
fidelity-variability trade-off when applying the truncation trick. The
experimental results of applying the proposed metric to StyleGAN 2 and StyleGAN
2 ADA models on FFHQ, AFHQ-Wild, LSUN-Cars, and LSUN-Horse datasets are
presented. The code used in this research will be made publicly available
online for the research community to access and utilize. | Egor Sevriugov, Ivan Oseledets | 2023-08-31T19:55:50Z | http://arxiv.org/abs/2309.00107v1 | # Unsupervised evaluation of GAN sample quality:
###### Abstract
Evaluation metrics are essential for assessing the performance of generative models in image synthesis. However, existing metrics often involve high memory and time consumption as they compute the distance between generated samples and real data points. In our study, the new evaluation metric called the "TTJac score" is proposed to measure the fidelity of individual synthesized images in a data-free manner. The study first establishes a theoretical approach to directly evaluate the generated sample density. Then, a method incorporating feature extractors and discrete function approximation through tensor train is introduced to effectively assess the quality of generated samples. Furthermore, the study demonstrates that this new metric can be used to improve the fidelity-variability trade-off when applying the truncation trick. The experimental results of applying the proposed metric to StyleGAN 2 and StyleGAN 2 ADA models on FFHQ, AFHQ-Wild, LSUN-Cars, and LSUN-Horse datasets are presented. The code used in this research will be made publicly available online for the research community to access and utilize.
Footnote 1: Skolkovo Institute of Science and Technology, Moscow, Russia 121205; [email protected], [email protected]
## Introduction
Advancements in Generative Adversarial Networks (GANs) [11] have led to a wide range of applications, including image manipulation [14, 15, 16], domain translation [13, 14, 15, 17, 18, 19], and image/video generation [12, 13, 14]. GANs have demonstrated high-quality results in these tasks, as validated by standard evaluation metrics such as Frechet inception distance (FID) [10], kernel inception distance (KID) [15], Precision [16], and recall [17]. These metrics are typically based on clustering real data points using the k-nearest neighbours algorithm. Initially, real images are passed through a feature extractor network to obtain meaningful embeddings, and pairwise distances to other real images are computed for the algorithm. In the evaluation stage, the fidelity of an individual sample is determined by computing its distance to the clusters of real manifold. However, this procedure can be computationally expensive and memory-intensive, particularly for large datasets, as all real embeddings need to be stored.
In order to overcome these challenges, this research introduces a novel metric for evaluating the quality of individual samples. Instead of assessing sample fidelity relative to the real manifold, this metric directly calculates the density of a sample by utilizing only the trained generator. The computation process involves evaluating the model Jacobian, which can be particularly demanding for high-resolution models. To mitigate the memory and time costs associated with this computation, feature extractors are employed to reduce the size of the Jacobian.
Furthermore, the proposed metric function is approximated on a discrete grid using a tensor train decomposition. This approximation provides a significant reduction in inference time, since a batch of sample scores is required only to compute the tensor decomposition; the evaluation procedure then simply entails obtaining the value of the decomposed tensor at the specified indices.
The proposed metric function also has an application in the sampling procedure, particularly as an enhancement for the truncation trick. The truncation trick [11] operates by sorting samples based on the norm of the input vector, which can be effectively replaced by the TTJac score. This upgrade provides a better trade-off between fidelity and variability compared to the standard technique.
To evaluate the effectiveness of the proposed metric function, standard GAN models like StyleGAN 2 [14] and StyleGAN 2 ADA [14] were considered, using various datasets such as Flickr-Faces-HQ Dataset (FFHQ) [15], AFHQ Wild [18], LSUN Car [16], and LSUN Horse [17]. In summary, the contributions of this paper include:
1. Introducing a new metric for sample evaluation that does not rely on dataset information.
2. Presenting a methodology for effective usage of the proposed metric, involving feature extractors and tensor train approximation.
3. Proposing a metric-based upgrade for the truncation
trick, enabling a better trade-off between fidelity and variability.
## Method
The primary component of a GAN model is the generator network, denoted as \(G\), which generates an image \(x\) from a given latent code (network input) \(z\). Typically, the evaluation of individual sample quality \(x=G(z)\) is performed using a realism score or the truncation trick.
The realism score requires access to a dataset to compute distances for the k-nearest neighbors algorithm, while the truncation trick assesses sample fidelity based on the norm of the corresponding input vector, allowing latent codes that lie outside a chosen radius to be resampled.
In contrast, we propose a metric function that defines the score based on the density of the generator output. We use the generalized change-of-variable formula [1]:
\[\rho(z)=\rho(x)\mathrm{Vol}(J)\]
where \(J=dG(z)/dz\) represents the generator Jacobian. By taking the logarithm and making the appropriate substitution, the final expression for the score function is derived:
\[s(x)=\log\rho(x)=\log(\rho(z))-\log(\mathrm{Vol}(J))\]
where \(\log(\mathrm{Vol}(J))=\frac{1}{2}\log(\det(J^{T}J))=\sum\limits_{i=1}^{N}\log(\sigma_{i}(J))\), and \(\sigma_{i}(J)\) represents the \(i\)-th singular value of the Jacobian. Finally, the score function turns into:
\[s(x)=\log(\rho(z))-\sum\limits_{i=1}^{N}\log(\sigma_{i}(J))\]
At this stage, we considered the image density in pixel representation. Nevertheless, the proposed idea can be applied to an arbitrary representation of the image. This aspect is discussed in the next part.
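To make the computation concrete, the sketch below estimates the score for a single latent code with PyTorch autograd. The mapping `g` (a generator, optionally composed with a feature extractor), the latent dimensionality, and the standard-normal prior on \(z\) are assumptions on our part, and the dense Jacobian evaluated here is exactly the expensive step that the following sections mitigate with feature extractors and a tensor-train approximation.

```python
import math
import torch
from torch.autograd.functional import jacobian

def ttjac_score(g, z):
    """Data-free log-density score: s(x) = log rho(z) - sum_i log sigma_i(J),
    where J = d g(z) / dz and z is drawn from a standard normal prior.
    `g` maps a latent vector to a flattened output (pixels or features)."""
    J = jacobian(lambda v: g(v).flatten(), z)        # (output_dim, latent_dim)
    sigmas = torch.linalg.svdvals(J)                 # singular values of the Jacobian
    log_vol = torch.log(sigmas).sum()                # log Vol(J) = sum_i log sigma_i(J)
    log_pz = -0.5 * (z ** 2).sum() - 0.5 * z.numel() * math.log(2 * math.pi)
    return (log_pz - log_vol).item()
```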
### Feature Density Scoring
High-resolution images can be generated by high-quality GANs (Generative Adversarial Networks). For instance, the StyleGAN2 model trained on the FFHQ dataset can generate images of size \(1024\times 1024\) with a latent space size of \(512\). However, computing the Jacobian matrix, which contains \(10^{9}\) values in this case, can be challenging in terms of both time and memory consumption.
To mitigate these costs, we propose to use feature extraction. Instead of evaluating sample quality based on pixel values, a score function can be used that assesses the density of features. The score function is defined as:
\[s(x)=\log\rho(f(x))=\log(\rho(z))-\log(\mathrm{Vol}(J))\]
Here, \(J=\frac{df(G(z))}{dz}\) represents the Jacobian matrix, and \(f\) denotes the feature extraction network. In Figure 1, we present the whole pipeline of our work. A detailed explanation of the last step is given in the next section.
In our work, we considered two options for the feature extraction network: VGG19 [13] and Dino [1]. VGG networks are based on convolutional layers and have demonstrated high efficiency in classification tasks; they are widely used for extracting meaningful information from image data. Dino, on the other hand, is a transformer-based network, which is known to be more accurate but has longer inference times compared to convolution-based networks.
Figure 1: The general pipeline of the presented work involves several steps. Firstly, latent code samples \(z\) are generated from a normal distribution, then the generated latent codes are passed through the generator network, which produces corresponding images \(x\). The VGG feature extractor is employed to extract meaningful features \(f\). After obtaining the features, the computation of feature density is carried out using generalized change of variables formula [1]. Finally, the metric score samples are approximated using the Tensor Train (TT) algorithm.
To compare the performance of these feature extraction networks, we generated 100 images with the StyleGAN 2 ADA model trained on the FFHQ dataset and evaluated the proposed metric. The experiments were conducted on a Tesla V100-SXM2 GPU with 16 GB of memory. Three types of output were considered: X (pixel-based density), VGG (VGG19 feature-based density), and Dino (Dino feature-based density).
After sorting by score, we selected three images with the lowest, middle, and highest scores for each output type. Figure 2 illustrates that the VGG-based density produces results comparable to the pixel-based and Dino-based densities, while computing the VGG feature-based density requires significantly less time per sample. For further experiments, we therefore used VGG features for metric computation. Table 1 provides a comprehensive comparison of the time consumption for the different output types.
### Inference Time Acceleration through Tensor Train
The reduction in inference time presented in the previous section is still insufficient for computing scores for a large number of samples: the computation of 50,000 scores takes around 2 weeks on a single GPU, even when using VGG-based features. A potential solution to this problem is to discretize the metric on a grid and compress the resulting tensor using a tensor decomposition. The proposed solution pipeline consists of two stages:
1. Score computation for a large number of samples
2. Computation of the logarithm density approximation using the samples obtained from the previous stage
To evaluate a sample \(\hat{x}\) within this pipeline, the following steps can be followed:
1. Find the closest point to the latent code \(\hat{z}\) in the discrete latent space \(z[i_{1},...,i_{d}]\). This can be achieved by minimizing the Euclidean distance between the discrete latent codes and \(\hat{z}\): \[(\hat{i}_{1},...,\hat{i}_{d})=\arg\min_{(i_{1},...,i_{d})}\|z[i_{1},...,i_{d}]-\hat{z}\|\]
2. Compute the score value at this point using the density tensor \(\rho[i_{1},...,i_{d}]\) stored in compressed format: \[s(\hat{x})=\rho[\hat{i}_{1},...,\hat{i}_{d}]\]
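A minimal sketch of this evaluation step is given below. It assumes that every latent dimension shares the same 1-D grid (as in our experiments with 32 grid points), so the joint arg-min factorizes into per-dimension arg-mins, and that `tt_value` (sketched in the tensor-train section below) returns the stored tensor entry from its TT cores; the function names are ours.

```python
import numpy as np

def evaluate_sample(z_hat, grid, tt_cores):
    """Score a new latent code: snap it to the nearest grid point in every
    dimension, then read the pre-computed score tensor stored in TT format."""
    # With a tensor-product grid, the joint arg-min over Euclidean distance
    # factorizes into independent per-dimension arg-mins.
    idx = np.array([np.argmin(np.abs(grid - z_k)) for z_k in z_hat])
    return tt_value(tt_cores, idx)
```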
### Non uniform grid
One crucial component in our scheme is the grid used for discretization. We found that a uniform grid is not suitable for GANs when latent codes are sampled from a normal distribution: certain latent regions may lack sufficient data, posing challenges for the computation of the tensor train decomposition.
To address this challenge, we opted for a grid where the integrals of the normal density over each grid interval are equal. This approach ensures a uniform distribution of samples along each grid index and works effectively in our case. More details can be found in Appendix A.
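One plausible construction of such a grid, using the inverse CDF of the standard normal, is sketched below; the placement of grid points at the mid-quantiles of the equal-mass intervals is our own choice for illustration.

```python
import numpy as np
from scipy.stats import norm

def equal_mass_grid(n_points=32):
    """1-D grid whose intervals carry equal probability mass under N(0, 1),
    so standard-normal samples distribute uniformly over the grid indices.
    Grid points are placed at the mid-quantiles of the equal-mass intervals."""
    centers = (np.arange(n_points) + 0.5) / n_points
    return norm.ppf(centers)        # inverse CDF of the standard normal
```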
| **Feature extractor** | Original | VGG | Dino |
| --- | --- | --- | --- |
| **Time per sample (s)** | 450 | 25 | 90 |

Table 1: Comparison of time consumption for metric inference. Three types of output density are considered: Original - pixel-based density, VGG - VGG feature-based density, Dino - Dino feature-based density.
| **Domain** | FFHQ | Wild | Car | Horse |
| --- | --- | --- | --- | --- |
| **MSE** | 0.018 | 0.026 | 0.017 | 0.018 |

Table 2: Quantitative evaluation of the TT approximation of the TTJac score on a discrete grid for different domains: FFHQ, AFHQ-Wild, LSUN Car, LSUN Horse.
Figure 2: Qualitative comparison of different image features. Three types of output considered: original image (X), VGG19 features (VGG), Dino features (DINO). For each type the sample density was computed. Three images with low, middle, and high scores are presented for each output type.
**Calculation of Tensor Train approximation.** The TT decomposition of a tensor represents the element at position \([i_{1},...,i_{N}]\) as the product of matrices:
\[T[i_{1},...,i_{N}]=G_{1}[i_{1}]...G_{N}[i_{N}]\]
Here, \(G_{1},...,G_{N}\) are the TT cores. The low-rank pairwise dependency between tensor components allows for an efficient approximation of the proposed metric function in discrete form using the well-known Tensor Train decomposition [16]. It has shown impressive results on different tasks [15, 17, 18, 19] owing to its storage efficiency, ability to capture complex dependencies, and fast inference. Obtaining an element of a tensor stored in TT format requires only \(511\) matrix-by-vector multiplications in our case. This can also be easily accelerated for a batch of elements using multiprocessing tools; in fact, the batch size can be very large in the TT format, since at each step of the output computation the algorithm stores a matrix of size \((N,r)\), where \(N\) is the number of elements to compute and \(r\) is the rank of the decomposition. This aspect has a significant impact on the use of the proposed metric for the truncation trick, where we need to evaluate a huge number of samples.
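A sketch of this element-wise evaluation is given below; the core layout \((r_{k-1},n_{k},r_{k})\) with \(r_{0}=r_{N}=1\) is the usual TT convention and is assumed here.

```python
import numpy as np

def tt_value(tt_cores, idx):
    """Read one entry T[i_1, ..., i_N] of a tensor stored in TT format.

    `tt_cores` is a list of arrays of shape (r_{k-1}, n_k, r_k) with
    r_0 = r_N = 1; the entry is the product G_1[i_1] ... G_N[i_N],
    evaluated left-to-right as matrix-by-vector multiplications."""
    vec = tt_cores[0][:, idx[0], :]                  # shape (1, r_1)
    for core, i in zip(tt_cores[1:], idx[1:]):
        vec = vec @ core[:, i, :]                    # (1, r_{k-1}) x (r_{k-1}, r_k)
    return float(vec[0, 0])
```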
However, it should be noted that the computation of samples for the core computation is time-consuming. This renders standard iterative methods for tensor train calculation ineffective, as they tend to overfit to the given tensor samples and fail to capture the underlying dependencies. In such a case, it is more suitable to use explicit methods for tensor train decomposition, such as the ANOVA decomposition [20]. The authors of [15] presented an effective method for computing the ANOVA decomposition in TT format, which we found accurate enough for our purposes. See Table 2, where we present the results of the TT approximation of the discretized metric in different domains.
**Upgraded Truncation Trick.** The truncation trick, initially proposed in [1], offers a means to adjust the balance between variability and fidelity. It involves two practical steps: evaluating the norm of the latent codes of generated samples and resampling those with a norm exceeding a specified threshold. This threshold determines the balance between variability and fidelity. By removing samples with high norms, we effectively reduce the number of
Figure 3: Qualitative comparison of metrics for individual image evaluation: TTJac score, Realism score, Rarity score, Truncation Trick. For each metric we presented 8 images with lowest, middle, and highest score values. For the TTJac and Realism scores, we arranged the images in increasing order from low to high scores. Conversely, for the Rarity score and Truncation Trick, we intentionally reversed the order to ensure ease of comparison between the metrics.
samples with low latent code density and, possibly, lower visual quality. This approach can enhance the overall fidelity of the generated samples, albeit at the expense of variability.
Instead of evaluating the latent code density, we suggest assessing the image feature density using a modern feature extractor such as VGG. In the next section, we provide evidence that this replacement criterion achieves a more desirable trade-off between fidelity and variability.
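The replacement itself is straightforward, as the sketch below illustrates: candidate latents are ranked by their (approximated) TTJac scores and only the top fraction is kept, with the kept fraction playing the role of the truncation threshold. The function name and the keep-fraction parameterization are our own.

```python
import numpy as np

def truncate_by_score(latents, scores, keep_fraction=0.7):
    """Upgraded truncation trick (sketch): rank candidate latents by their
    TTJac score and keep only the top fraction; decreasing keep_fraction
    trades variability for fidelity, like tightening the truncation radius."""
    order = np.argsort(scores)[::-1]                     # highest score first
    keep = order[: int(len(scores) * keep_fraction)]
    return latents[keep]
```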
## Experiment
To evaluate the proposed metric, we conducted experiments on various widely-used datasets for image generation, including FFHQ [14], AFHQ-Wild [15], LSUN-Cars [23], and LSUN-Horse [23]. Additionally, we considered the method effectiveness on StyleGAN 2 [14] and StyleGAN 2 ADA [14] models, known for their high performance in generating images in standard domains where data may be limited or noisy.
For the learning process of the metric, we computed \(60k\) samples for the FFHQ dataset and \(30k\) samples for the AFHQ-Wild, LSUN-Cars, and LSUN-Horse datasets. Feature extraction was performed using the VGG19 network [20].
To discretize the metric, we utilized a grid with a size of \(32\). The metric was then approximated by applying ANOVA decomposition of order 1, which was later converted to the tensor train format.
All experiments were carried out using a 3 Tesla V100-SXM2 GPU with 16 GB of memory.
### Comparison with other metrics
To demonstrate the effectiveness of the TTJac score in evaluating individual samples, we compared it with similar metrics such as the Realism score [22], Truncation trick [15], and Rarity score [16]. We randomly selected \(10k\) latent samples and presented the comparison results in Figure 3.
In order to facilitate the comparison process, we reversed the order for the Rarity score and Truncation. The Realism score showed high results in evaluating sample fidelity but sometimes failed on visually appealing images. It is important to note that high realism values often correspond to images with low variability. Similarly, the Truncation trick allows for manipulation of image quality at the expense of variability. However, samples with the highest variability tend to have lower realism compared to other metrics. On the other hand, the Rarity score measures the uniqueness of the given image, effectively identifying images with distinct features, but some of them may appear less realistic.
The proposed TTJac score functions similarly to the realism score but does not require any real data for processing. It effectively extracts samples with high fidelity, reflected by a high score. Conversely, low scores often indicate the presence of visual artifacts. However, the TTJac score does have a limitation - images with high scores tend to have fewer unique features, while visually appealing images can be found among those with low scores.
We also computed the correlation between the presented metrics. The results confirm a fairly high similarity between the Realism and TTJac scores, as shown in Figure 4. Furthermore, when the correlation is measured only on the samples with the highest and lowest TTJac scores, the agreement between the proposed metric and the Realism score becomes even stronger; see Table 3 for confirmation. Thus, the data-free TTJac metric is able to filter out very low-quality images as effectively as the Realism score, which requires a dataset of real images.
### Fidelity-variability trade-off evaluation
In the method section, we discussed the potential use of the proposed metric to enhance the performance of the truncation trick. In this section, we compare the trade-off provided by the standard criterion, based on the latent code norm, with that provided by the TTJac score.
Figure 4: Correlation matrix for metrics measuring individual sample quality: TTJac score, Realism score, Rarity score, and Truncation.
| **Number of border samples** | **Rarity** | **Realism** | **Truncation** |
| --- | --- | --- | --- |
| 3000 | -0.051 | 0.42 | 0.007 |
| 2000 | -0.059 | 0.472 | 0.156 |
| 1000 | -0.048 | 0.54 | 0.006 |
| 500 | -0.048 | 0.584 | 0.049 |
| 100 | -0.043 | 0.651 | 0.135 |

Table 3: Quantitative results of the correlation between the TTJac score and the other considered metrics: Rarity score, Realism score, Truncation. The computation was done for a given number of samples representing the higher and lower extremes of the TTJac score (border samples).
To demonstrate the effectiveness, we plotted precision-recall curves for the FFHQ, AFHQ-Wild, LSUN-Car, and LSUN-Horse domains.
For the FFHQ, LSUN Horse, and LSUN Car domains, the use of the TTJac score allows for a better balance between precision and recall than the standard tool: in Figure 5, the curve for the TTJac score consistently lies above the curve for the Truncation trick. The benefit is not significant for the AFHQ-Wild domain. This can be attributed to the fact that the GAN model already exhibits a high level of precision in this specific domain, and the evaluation of the TT approximation accuracy showed the least favorable outcome compared to the other domains. On the FFHQ domain, after an improvement in precision of about 5%, a decline begins and the curve falls below that of the standard truncation trick, which can be attributed to the error in the metric approximation. For the LSUN-Car domain, however, the TTJac score proves to be highly effective, providing a significant improvement in precision with a negligible loss in recall.
It is important to note that due to the presence of errors in the approximation of the metric score, it is not possible to achieve the maximum possible precision with minimal recall value, as seen in standard precision-recall curves.
Overall, this comparison highlights the potential of the TTJac score in achieving a better trade-off between precision and recall in the evaluated domains, with notable improvements observed in the LSUN-Car domain.
Figure 5: Quantitative comparison of fidelity-variability trade-off computed using TTJac score and Truncation trick. Four domains were examined: FFHQ, AFHQ-Wild, LSUN Car, LSUN Horse. For each domain \(50k\) samples were generated for precision and recall calculation. The results were averaged along 3 random seeds. The higher - the better.
### Domain wise metric evaluation
In this part, we conducted a qualitative evaluation of the TTJac score for various domains. In Figure 6, it is demonstrated that the TTJac score efficiently detects visual artifacts in the evaluated domains.
For the FFHQ domain, the TTJac score assigns low scores to images that do not contain a face or have unrealistic prints. Additionally, images with visually poor backgrounds are also marked with low score values. Similarly, in the LSUN Car domain, images that lack key elements of a car are identified as lower quality by the TTJac score.
The TTJac score exhibits similar behavior in the AFHQ-Wild and LSUN-Horse domains. It effectively detects unrealistic horses, such as instances where two horses are merged into one image. However, in the AFHQ-Wild domain, the metric faces a more challenging situation. While it accurately identifies images where different species are merged, such as an image with half wolf and half lion, the overall fidelity of the images in this domain is quite close. This observation is connected to the fact that the model trained on the AFHQ-Wild dataset has a higher precision compared to the other considered models.
In summary, the TTJac score demonstrates high efficiency in evaluating sample fidelity while sacrificing some variability. It effectively detects visual artifacts and accurately identifies unrealistic elements in the evaluated domains.
## Conclusion
We have proposed a new approach for evaluating image quality without using real data, based on the density of meaningful features extracted from the image. A method was also proposed for efficient storage and inference of the metric using a TT approximation. We compared the TTJac score with other metrics and found that it performs similarly to the realism score. It effectively detects visual artifacts and identifies unrealistic elements in different domains such as FFHQ, AFHQ-Wild, LSUN Car, and LSUN Horse. We also evaluated the trade-off between fidelity and variability using precision-recall curves. The TTJac score showed a better balance, especially in the LSUN Car domain, where it significantly improved precision with minimal loss in recall. In the qualitative evaluation, the TTJac score successfully detected missing key elements or unrealistic features in images across various domains. Overall, the TTJac score demonstrates high efficiency in evaluating sample fidelity and can be a valuable tool for assessing image generation models.
Figure 6: Qualitative comparison of metric evaluation capabilities four domains were examined: FFHQ, AFHQ-Wild, LSUN Car, LSUN Horse. For each domain \(30k\) samples were sorted based on their scores and selected three images to represent the samples with the lowest, middle, and highest score values.
|
2309.16765 | Using low-frequency scatter-broadening measurements for precision
estimates of dispersion measures | A pulsar's pulse profile gets broadened at low frequencies due to dispersion
along the line of sight or due to multi-path propagation. The dynamic nature of
the interstellar medium makes both of these effects time-dependent and
introduces slowly varying time delays in the measured times-of-arrival similar
to those introduced by passing gravitational waves. In this article, we present
a new method to correct for such delays by obtaining unbiased dispersion
measure (DM) measurements by using low-frequency estimates of the scattering
parameters. We evaluate this method by comparing the obtained DM estimates with
those, where scatter-broadening is ignored using simulated data. A bias is seen
in the estimated DMs for simulated data with pulse-broadening with a larger
variability for a data set with a variable frequency scaling index, $\alpha$,
as compared to that assuming a Kolmogorov turbulence. Application of the
proposed method removes this bias robustly for data with band averaged
signal-to-noise ratio larger than 100. We report, for the first time, the
measurements of the scatter-broadening time and $\alpha$ from analysis of PSR
J1643$-$1224, observed with upgraded Giant Metrewave Radio Telescope as part of
the Indian Pulsar Timing Array experiment. These scattering parameters were
found to vary with epoch and $\alpha$ was different from that expected for
Kolmogorov turbulence. Finally, we present the DM time-series after application
of the new technique to PSR J1643$-$1224. | Jaikhomba Singha, Bhal Chandra Joshi, M. A. Krishnakumar, Fazal Kareem, Adarsh Bathula, Churchil Dwivedi, Shebin Jose Jacob, Shantanu Desai, Pratik Tarafdar, P. Arumugam, Swetha Arumugam, Manjari Bagchi, Neelam Dhanda Batra, Subhajit Dandapat, Debabrata Deb, Jyotijwal Debnath, A Gopakumar, Yashwant Gupta, Shinnosuke Hisano, Ryo Kato, Tomonosuke Kikunaga, Piyush Marmat, K. Nobleson, Avinash K. Paladi, Arul Pandian B., Thiagaraj Prabu, Prerna Rana, Aman Srivastava, Mayuresh Surnis, Abhimanyu Susobhanan, Keitaro Takahashi | 2023-09-28T18:01:46Z | http://arxiv.org/abs/2309.16765v1 | # Using low-frequency scatter-broadening measurements for precision estimates of dispersion measures
###### Abstract
A pulsar's pulse profile gets broadened at low frequencies due to dispersion along the line of sight or due to multi-path propagation. The dynamic nature of the interstellar medium makes both of these effects time-dependent and introduces slowly varying time delays in the measured times-of-arrival similar to those introduced by passing gravitational waves. In this article, we present a new method to correct for such delays by obtaining unbiased dispersion measure (DM) measurements by using low-frequency estimates of the scattering parameters. We evaluate this method by comparing the obtained DM estimates with those, where scatter-broadening is ignored using simulated data. A bias is seen in the estimated DMs for simulated data with pulse-broadening with a larger variability for a data set with a variable frequency scaling index, \(\alpha\), as compared to that assuming a Kolmogorov turbulence. Application of the proposed method removes this bias robustly for data with band averaged signal-to-noise ratio larger than 100. We report, for the first time, the measurements of the scatter-broadening time and \(\alpha\) from analysis of PSR J1643\(-\)1224, observed with upgraded Giant Metrewave Radio Telescope as part of the Indian Pulsar Timing Array experiment. These scattering parameters were found to vary with epoch and \(\alpha\) was different from that expected for Kolmogorov turbulence. Finally, we present the DM time-series after application of the new technique to PSR J1643\(-\)1224.
keywords: (stars:) pulsars: general - (stars:) pulsars: individual (PSR J1643\(-\)1224) - ISM : general
## 1 Introduction
The precision in the time of arrival (ToA) of a pulsar's radio pulse is determined in part by how bright and sharp the received pulse is. Both of these quantities, namely the signal-to-noise ratio (S/N) and the pulse width, are affected by the propagation of the pulsed signal through the ionised interstellar medium (IISM). The IISM can impose a frequency-dependent delay on the pulses, which, when added together without proper correction, will make the pulse appear smeared. This dispersion is mainly caused by the integrated column density of electrons along the line of sight and is quantified by the Dispersion Measure (DM). In addition, electron density inhomogeneities in the IISM encountered along the line of sight lead to multi-path propagation of radio waves, which also broadens the pulse (Rickett, 1977). This pulse broadening can be mathematically described as a convolution of the intrinsic pulse profile with a pulse broadening function, such as \(\exp(-\phi/\tau_{sc})\), where \(\phi\) is the pulse phase and \(\tau_{sc}\) is the scatter-broadening time scale in the case of a thin scattering screen. Both of these phenomena are time-variable due to the dynamic nature of IISM. This variation induces a slowly varying chromatic time delay in the ToA measurements. The timescale of this stochastic delay is similar to that of the gravitational
wave (GW) signature arising from an isotropic stochastic gravitational wave background (SGWB) formed by the random superposition of GWs emitted by an ensemble of super-massive black hole binaries (Burke-Spolaor et al., 2019). Hence, an incorrect characterisation of this chromatic delay, i.e. of the individual pulsar chromatic noise, can lead to a false detection of the SGWB (Zic et al., 2022).
The measurement and characterization of this IISM noise is therefore crucial for experiments that use a collection of pulsars to observe the GW signal from the SGWB (Srivastava et al., 2023). These experiments are called pulsar timing arrays (PTAs). There are four PTAs, which pool their data as part of the International Pulsar Timing Array consortium (IPTA : Hobbs et al., 2010; Verbiest et al., 2016) : the European Pulsar Timing Array (EPTA : Desvignes et al., 2016; Kramer & Champion, 2013), the Indo-Japanese pulsar timing array (InPTA : Joshi et al., 2018, 2022; Tarafdar et al., 2022), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav : McLaughlin, 2013) and the Parkes Pulsar Timing Array (PPTA : Manchester et al., 2013). Recently, the MeerKAT Pulsar Timing Array (MPTA : Miles et al., 2023; Bailes et al., 2020) and the Chinese Pulsar Timing Array (CPTA: Lee, 2016) have also started pulsar timing experiments.
The estimates of DM in these PTA experiments are usually obtained from simultaneous or quasi-simultaneous observations, or even observations separated by a few days, at two or three different observing frequencies (Arzoumanian et al., 2018; Tarafdar et al., 2022). The alignment of the fiducial point of the pulse at different observing frequencies is critical in such measurements. The scatter-broadening can introduce a systematic phase shift in the pulse's fiducial point, which needs to be accounted for in the measurement procedure to avoid a systematic bias in the measured DMs. Furthermore, slow variations in \(\tau_{\rm sc}\) over long periods of time can introduce corresponding variations in the measured DM values. Lastly, timing events, such as the ones reported in PSR J1713+0747 (Lam et al., 2018; Goncharov et al., 2020; Singha et al., 2021), produce a discontinuity in the Gaussian process DM models if accompanied by changes in \(\tau_{\rm sc}\). These epoch-dependent systematic errors in the DM estimates induce time-varying delays in the ToAs, which act as a chromatic noise with respect to the SGWB signal. This noise, introduced by scatter-broadening variations, needs to be accounted for to allow a reliable characterisation of the SGWB signal in PTA experiments. The correction of scatter-broadening in order to obtain robust estimates of DMs and remove this noise is the primary motivation of this study.
The characterisation of scatter-broadening noise can be achieved with wide-band observations of millisecond pulsars (MSPs). Recently, wide-band receivers have been employed by the uGMRT (300\(-\)500 MHz : Gupta et al., 2017; Tarafdar et al., 2022), by the Parkes radio telescope (Hobbs et al., 2020, 800\(-\)5000 MHz) and by CHIME (Amiri et al., 2021, 400\(-\)800 MHz) for higher precision DM measurements. The scatter-broadening noise can be well characterized with such wide-band receivers. However, the dispersive delay due to the IISM varies as \(f^{-2}\), whereas the pulse scatter-broadening evolves as \(f^{-4.4}\) if Kolmogorov turbulence is assumed in the IISM, where \(f\) is the observing frequency (Rickett, 1977). This makes these propagation effects dominant at frequencies below 800 MHz, necessitating low-frequency measurements. If the scatter-broadening variations estimated from such observations can be removed from the data, robust and precise DM measurements can be obtained even for moderately high-DM pulsars. In this paper, we present a new technique to achieve this and evaluate its efficacy using simulated data as well as data on a pulsar with significant pulse broadening.
The paper is arranged as follows. A new technique to remove the effect of pulse scatter-broadening is described in Section 2. The technique was tested first with simulated data with a known injection of DM and scatter-broadening variations, and the results are presented in Section 3. Results obtained by applying the technique on the InPTA data for PSR J1643\(-\)1224 are discussed in Section 4 followed by our conclusions in Section 5.
## 2 Description of the new technique
Our new technique, which we call DMscat, makes use of the measurements of pulse broadening obtained using data between 300\(-\)500 MHz in order to recover the original pulse shape as accurately as possible. The procedure used in the technique is shown schematically in Figure 1. The pulse broadening measurements were obtained as follows. We use the frequency-resolved integrated pulse profiles with a chosen number of sub-bands between 300\(-\)500 MHz. The number of sub-bands was selected to obtain a pulse profile with an S/N of at least 50 in each sub-band. Then, a template profile is generated from a high-S/N pulse profile by collapsing the data at 1260\(-\)1460 MHz, where the pulse broadening is negligible. Next, this template is convolved with a pulse broadening function, \(\exp(-\phi/\tau_{\rm sc})\). The convolved template is given by:
\[\mathcal{F}(\phi)=a\times s(\phi-b)*\exp(-\phi/\tau_{\rm sc})\,, \tag{1}\]
where \(s(\phi)\) is a high frequency template with amplitude \(a\), \(\phi\) is the pulse phase with peak at phase \(b\), \(\tau_{\rm sc}\) is the pulse broadening time scale and \(*\) denotes convolution. \(\mathcal{F}(\phi)\) is then fitted to the observed pulse profile at each sub-band, keeping \(\tau_{\rm sc}\) as a fitted parameter, by minimizing the sum-of-squares of residuals obtained by subtracting the observed profile from \(\mathcal{F}(\phi)\). This fit is carried out for each sub-band between 300\(-\)500 MHz data obtained using the InPTA observations and provides measurements of \(\tau_{\rm sc}\) as a function of observing frequency. The estimated \(\tau_{\rm sc}\) is then fitted to a power law model of the following form:
\[\tau_{\rm sc}(f)=\tau_{0}f^{\alpha} \tag{2}\]
Here, \(\tau_{0}\) is the pulse broadening at a reference frequency (e.g., 300 MHz) and \(\alpha\) is the frequency scaling index of the scattering medium. This fit provides a measurement of \(\alpha\) for each epoch.
Thereafter, these \(\alpha\) measurements can be used to obtain the pulse profiles without scatter-broadening and therefore, provide more reliable measurements of DM. We use the same high-frequency template convolved with the scattering function, \(\exp(-\phi/\tau_{\rm sc})\) but with the values of \(\tau_{\rm sc}\) estimated from the previous step to obtain a convolved profile, \(T\). The sum-of-squared difference between the convolved profile and the observed scatter-broadened profile is given by:
\[R^{2}=\sum(P_{i}-T_{i})^{2}\,, \tag{3}\]
where \(P_{i}\) and \(T_{i}\) are the \(i\)-th bin amplitudes of the observed scatter-broadened profile and convolved profile respectively. \(R^{2}\) is minimized (least square minimization), keeping \(\tau_{\rm sc}\) fixed to the parameter estimated in the previous step and allowing the amplitude (\(a\)) and peak position (\(b\)) of the convolved pulse profile to vary. The residuals after the fitting are given by:
\[R_{i}=P_{i}-T_{i}. \tag{4}\]
For a good fit, \(R_{i}\) is normally distributed and represents the noise in the profile.
Thus, the template profile, \(s_{i}\), scaled by the amplitude at the fitted position provides a good representation of the pulse profile
without scatter-broadening. This method is applied to all the sub-bands, and the obtained profiles are written back to a new PSRFITS file after adding the residuals, \(R_{i}\) (noise), for each of the sub-bands. These profiles can then be used for estimating the DMs with conventional methods. It is important to note that the main assumption in these steps is that the profile of the pulsar does not evolve with frequency. While this may not hold true for most pulsars, a few of the MSPs monitored by PTAs do not show a strong frequency dependence.
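A sketch of this descattering step, reusing the `broadened_template` helper from the previous sketch, is given below; again, the function signature and fitting details are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def descatter_profile(profile, template, tau):
    """Rebuild a scatter-free sub-band profile: fit only the amplitude a and
    peak position b of the template convolved with the fixed, pre-fitted tau_sc
    (Eq. 3), then return the scaled, shifted clean template plus the fit
    residuals R_i (Eq. 4), which carry the original noise."""
    n = len(template)
    phi = np.linspace(0.0, 1.0, n, endpoint=False)
    model = lambda phi, a, b: broadened_template(phi, a, b, tau, template)  # tau held fixed
    (a, b), _ = curve_fit(model, phi, profile, p0=[max(profile.max(), 1e-3), 0.0])
    residuals = profile - model(phi, a, b)
    clean = a * np.roll(template, int(round(b * n)))        # descattered profile
    return clean + residuals
```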
## 3 Tests on Simulated Data
### Simulations
We simulated frequency-resolved PSRFITS (Hotan et al., 2004) files using the parameter file of PSR J1643\(-\)1224 obtained from InPTA DR1 (Tarafdar et al., 2022). The primary objectives of our simulations were:
1. To gain an understanding of the impact of scatter-broadening on the DM estimation. Here, we explored two scenarios: one involved a scattering process characterised by the Kolmogorov turbulence spectrum (\(\alpha=-4.4\)), and the other involved a scattering process with varying \(\alpha\).
2. To validate and assess the efficacy of the DMscat software.
First, a single component pulse profile was simulated by generating a Gaussian placed at the middle of the pulse phase with a chosen width. For a given S/N across the band, the root mean square (RMS) of the required normally distributed noise was obtained by dividing the area under the pulse by the required S/N adjusted by the number of sub-bands. Noise with this RMS was then generated from a random number generator. This noise was added to each sub-band profile after convolving the pulse with the scatter-broadening function as described below. Data were simulated with S/N varying between 10 to 2000 (10, 20, 30, 50, 100, 400 and 2000).
We assumed a thin-screen model of the IISM (Williamson, 1972) to describe the scatter-broadening of the intrinsic pulse from the pulsar. The scattering timescale (\(\tau_{\rm sc}\)) is then calculated using
\[\log(\tau_{\rm sc})=\log(\tau_{ref})+\alpha\times\log(f)-\alpha\times\log(300)\,, \tag{5}\]
where \(f\) is the frequency, and \(\tau_{\rm ref}\) is the pulse broadening at the reference frequency of 300 MHz. As explained later, we used both a constant \(\alpha\) (-4.4) assuming the Kolmogorov spectrum as well as a variable \(\alpha\). The simulated pulse was then convolved with the pulse broadening function, \(\exp(-\phi/\tau_{\rm sc})\) for each sub-band. Next, we generated the required noise for a given S/N as explained earlier and added this to the scattered pulse.
Then, we injected epoch to epoch DM variations using a DM time-series given as below:
\[DM(t)=DM_{0}+\delta DM(t-t_{0})^{3}\,, \tag{6}\]
where \(DM_{0}\) is the fiducial DM at \(t_{0}\), chosen as the first epoch, over an observation interval spanning 10 years, sampled once every month. Three data sets with different amplitudes of DM variations, namely 0.01 (DMe-2), 0.001 (DMe-3), and 0.0001 (DMe-4) \(\rm pc\,cm^{-3}\), were generated. A phase delay corresponding to the simulated DM at a given epoch was calculated with phase predictors using TEMPO2 (Hobbs et al., 2006) for each sub-band, and the simulated and scattered pulse was placed at this phase delay by shifting it by the calculated delay. Finally, these frequency-resolved simulated data were written to an output PSRFITS file.
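The sketch below illustrates the simulation of a single sub-band profile under these prescriptions. Note that the actual simulations compute the dispersive phase delays with TEMPO2 phase predictors; here, for illustration only, we use the standard cold-plasma dispersion delay \(\Delta t\simeq 4.15\times 10^{3}\,{\rm s}\times{\rm DM}/f_{\rm MHz}^{2}\), an assumed spin period close to that of PSR J1643\(-\)1224, and a simplified per-sub-band noise scaling.

```python
import numpy as np

K_DM = 4.1488e3        # dispersion constant in s MHz^2 cm^3 pc^-1 (standard value)
P_SPIN = 4.62e-3       # assumed spin period in s, roughly that of PSR J1643-1224

def simulate_subband(freq_mhz, dm, tau_ref_ms, alpha, snr, nbin=1024, width=0.02):
    """Simulate one sub-band profile: Gaussian pulse -> exponential scatter-
    broadening with tau_sc from Eq. (5) -> dispersive phase shift -> noise."""
    phi = np.linspace(0.0, 1.0, nbin, endpoint=False)
    pulse = np.exp(-0.5 * ((phi - 0.5) / width) ** 2)
    tau_ms = tau_ref_ms * (freq_mhz / 300.0) ** alpha           # Eq. (5)
    pbf = np.exp(-phi / (tau_ms * 1e-3 / P_SPIN))               # broadening in phase units
    pbf /= pbf.sum()
    scattered = np.real(np.fft.ifft(np.fft.fft(pulse) * np.fft.fft(pbf)))
    delay_phase = (K_DM * dm / freq_mhz ** 2) / P_SPIN          # dispersive delay in turns
    shifted = np.roll(scattered, int(round(delay_phase * nbin)) % nbin)
    noise_rms = scattered.sum() / snr                           # simplified noise scaling
    return shifted + np.random.normal(0.0, noise_rms, nbin)
```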
For each amplitude of the DM variation, three sets of simulated data were produced. The first set of simulated data had only the DM variation with no scatter-broadening (NS case). In the second set of simulated data, along with the DM variations, we also incorporated scatter-broadening effect with a constant value of the frequency scaling index, \(\alpha=-4.4\), assuming a Kolmogorov turbulence (CS case). The value of \(\tau\) at 300 MHz was chosen to be 0.7 ms. In the third set, along with the DM variations, we incorporated a variation in the frequency scaling index, \(\alpha\) (VS case). Here, we used the measurements of frequency scaling index, \(\alpha\) for PSR J1643\(-\)1224 as the injected \(\alpha\). The value of \(\tau\) at 300 MHz
Figure 1: Schematic diagram of the technique for obtaining pulse profile by removing scatter-broadening using low-frequency observations.
was fixed for all the profiles and scaled accordingly with the frequency. Thus, we simulated 21 data-sets, each with 120 epochs, for the three different cases.
First, the simulated data-sets were used to understand the effect of scatter-broadening on the estimates of DM. Then, our new technique was tested and evaluated on the simulated data for CS and VS cases. The results of these analyses are presented in the following sections.
### Effect of scatter-broadening on DM Estimates
We used DMCalc (Krishnakumar et al., 2021) on these simulated pulsar profiles to estimate the DMs for all the cases. In order to run DMCalc, we selected a high-S/N template (from the S/N = 2000 case) for each case. The DMs were estimated for the simulated data-sets spanning the range of S/N for all three cases: NS, CS and VS. The results are presented in Figure 2, where the plots of estimated DMs are shown after subtracting the injected DMs for simulated data-sets with S/N equal to 20 and 400, and the amplitude of DM variations equal to 0.0001. The mean difference between the estimated and the injected DMs over all epochs and its standard deviation are also listed in the third and fifth columns of Table 1, respectively. The DM errors are plotted in the top panel of Figure 4.
As the pulse is without scatter-broadening in the NS case, the estimated DMs were consistent with the injected DMs for the full range of S/N, with the mean difference smaller than the DM error. In the CS and VS cases, where the simulated data-set consists of scatter-broadened pulse, the DMs were estimated with a bias, seen as offsets in Figure 2 and significant mean difference in Table 1. The bias is smaller for the VS case than for the CS case. While these results hold for different S/N, the median uncertainty in the estimated DMs varies with S/N as expected. This variation is shown in Figure 4. While the median DM error increases with decreasing S/N below S/N of 50, the median DM error is almost the same for S/N greater than 50. The median DM error was larger for VS as compared to the CS case for S/N less than 50.
The standard deviation in Table 1 gives an idea of the variability in the DM estimates over all the epochs. The estimated DMs had larger variability for cases with S/N less than 50. Another interesting feature in our results is that the variability was larger for the VS case as compared to the CS case, suggesting a larger fluctuation of DM estimates for pulsars showing variable scatter-broadening with observation epochs. These trends were consistent for all cases of DM variations.
### Testing DMscat on simulated data
We used the simulated data sets in order to demonstrate and test our new method of removing the effect of scatter-broadening on the DM estimates. We tested DMscat on the CS and VS datasets to generate new pulse profiles. First, we compared the recovered profiles for each sub-band against the injected profiles by subtracting the recovered profile from the injected profile. The obtained residuals were normally distributed and consistent with the noise injected in the simulated data demonstrating that the technique works well on the simulated data, particularly for S/N greater than 100. The technique worked for both CS and VS cases for different DM variations.
Then, we used DMCalc on these new profiles to estimate the DMs. The results are shown in Figure 3, where plots of the estimated DMs are shown after subtracting the injected DMs for the simulated data-set with S/N equal to 20 and 400 and amplitude of DM variations equal to 0.0001. The mean difference between the estimated and the injected DMs over all epochs and their standard deviations are also listed in the fourth and sixth columns of Table 1. Broadly, the means of the estimated DMs for the CS and VS cases were consistent with those obtained for the NS case, while the variability, reflected by the standard deviation, was larger for the CS and VS cases as compared to the NS case.
Figure 2: The difference between injected and estimated DMs (\(\Delta\) DM) for three cases: no scattering (NS), constant scattering (CS), and variable scattering (VS) for the set of files generated with S/N = 20 (upper panel) and 400 (lower panel) with injected DM variations of the order 0.0001 cm\({}^{-3}\) pc. These measurements were carried out on the simulated data before the application of DMscat.
Figure 3: The difference between the injected and estimated DMs (\(\Delta\) DM) for three cases: no scattering (NS), constant scattering (CS), and variable scattering (VS) for the set of files generated with S/N = 20 (upper panel) and 400 (lower panel) with injected DM variations of the order 0.0001 cm\({}^{-3}\) pc. These measurements were carried out on the simulated data after the application of DMscat.
The estimated and the injected DMs were consistent within the DM errors for both CS and VS cases for all S/N cases, as is evident from Table 1, indicating that the technique is able to recover the injected DMs without the bias seen in the scatter-broadened data. Moreover, the variability of DM estimates over epochs is reduced by about half for S/N above 100, whereas the variability is the same or worse for S/N below 100. This validates DMscat and suggests that the technique will be useful in reducing the scatter-broadening noise for S/N larger than 100.
The two panels of Figure 4 compare the median DM errors before and after the application of DMscat. After the application of DMscat, the median DM error is similar for NS, CS, and VS cases with S/N larger than 100, whereas for data-sets with S/N lower than 100, the median error does not seem to improve. This again suggests that the new technique will work optimally for the datasets with S/N higher than 100.
## 4 Application of DMscat on PSR J1643\(-\)1224
After validating DMscat, we applied this technique to PSR J1643\(-\)1224 data observed with the uGMRT as part of the InPTA observations. PSR J1643\(-\)1224 is a pulsar in the PTA ensemble that exhibits prominent scatter-broadening. This pulsar is observed in the InPTA experiment simultaneously at two different frequency bands, namely Band 3 (300\(-\)500 MHz) and Band 5 (1260\(-\)1460 MHz), using the upgraded Giant Metrewave Radio Telescope (Gupta et al., 2017; Reddy et al., 2017). These simultaneous observations at two different bands allow us to estimate the DMs with high precision. Negligible scatter-broadening is seen in the Band 5 data, whereas the pulsar shows significant pulse broadening in Band 3, as can be seen in Fig. 5. We used the observations over two years between 2019 and 2021, which also formed part of InPTA Data Release 1 (InPTA-DR1: Tarafdar et al., 2022). We only used the data observed with 200 MHz bandwidth (MJD 58781\(-\)59496). The DM time series of this pulsar, obtained with DMCalc using data without accounting for scatter-broadening, was presented in InPTA-DR1 and is shown in Figure 8.
First, the Band 5 data for PSR J1643\(-\)1224 were collapsed across the band to obtain a template for the highest S/N epoch (MJD 59032). Band 3 data were collapsed to 16 sub-bands. Then, we obtained the estimates of \(\tau_{\rm{sc}}\) for each of the 16 sub-bands and
Figure 4: The variation of median error in the DM estimation with respect to S/N before (upper panel) and after (lower panel) the application of DMscat.
Figure 5: Plots showing the scatter-broadening in PSR J1643\(-\)1224 in Band 3 (lower panel) and a sharp profile with negligible scattering in Band 5 (upper panel). Here \(\mathcal{A}\) is the amplitude (in arbitrary units) and \(\phi\) is the pulse phase.
Figure 6: Upper panel: The estimated scatter-broadening time (\(\tau_{\rm sc}\) at 406 MHz, near the band centre) for PSR J1643\(-\)1224 is shown as a function of observing epoch. Lower panel: The frequency scaling index (\(\alpha\)) over Band 3 is plotted for PSR J1643\(-\)1224 from 2019 to 2021.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**S/N value**} & \multirow{2}{*}{**Cases**} & \multicolumn{2}{c}{**Mean (10\({}^{-3}\)) cm\({}^{-3}\) pc**} & \multicolumn{2}{c}{**Standard Deviation (10\({}^{-3}\)) cm\({}^{-3}\) pc**} \\ & & Before & After & Before & After \\ \hline \multirow{3}{*}{**10**} & NS & 0.015 & & 0.37 & \\ & CS & 4.8 & -0.46 & 4.5 & 4.6 \\ & VS & 5.7 & -2.4 & 10.5 & 11.6 \\ \hline \multirow{3}{*}{**20**} & NS & -0.10 & & 0.41 & \\ & CS & 4.4 & 0.16 & 1.8 & 1.7 \\ & VS & 2.5 & -0.31 & 2.5 & 2.1 \\ \hline \multirow{3}{*}{**30**} & NS & -0.11 & & 0.30 & \\ & CS & 4.3 & -0.016 & 1.1 & 1.0 \\ & VS & 2.4 & -0.42 & 1.7 & 1.2 \\ \hline \multirow{3}{*}{**50**} & NS & -0.11 & & 0.23 & \\ & CS & 4.3 & -0.0032 & 0.69 & 0.63 \\ & VS & 2.5 & -0.24 & 1.0 & 0.79 \\ \hline \multirow{3}{*}{**100**} & NS & -0.13 & & 0.22 & \\ & CS & 4.5 & -0.0097 & 0.48 & 0.35 \\ & VS & 2.6 & -0.17 & 0.68 & 0.43 \\ \hline \multirow{3}{*}{**400**} & NS & -0.13 & & 0.21 & \\ & CS & 4.5 & -0.026 & 0.46 & 0.23 \\ & VS & 2.8 & -0.11 & 0.51 & 0.25 \\ \hline \multirow{3}{*}{**2000**} & NS & -0.11 & & 0.21 & \\ & CS & 4.5 & -0.018 & 0.48 & 0.22 \\ \cline{1-1} & VS & 2.8 & -0.083 & 0.5 & 0.22 \\ \hline \end{tabular}
\end{table}
Table 1: The mean and standard deviation of the differences between the injected and estimated DMs for various cases with different S/N, for the simulations with DM variation of the order 0.0001 cm\({}^{-3}\) pc.
Figure 7: The figure shows the comparison of the reconstructed profile in Band 3 with respect to the Band 5 template which was used to descatter the profile. The solid curve in yellow indicates the Band 5 template, the red dashed curve shows the Band 3 profile after removing scattering, with the assumption that the profile does not evolve with frequency. The blue dotted curve indicates the residuals, i.e., the difference between the template and the Band 3 profile at every frequency channel.
\(\alpha\) as described in Section 2. These are presented in Figure 6. Significant variations are seen in both parameters over the 2 year time-scale of the data, which suggests that DM estimates are likely to have a time-varying bias due to scatter-broadening. This, coupled with the epoch-dependent time delays due to scatter-broadening itself, needs to be accounted for in this pulsar for a meaningful GW analysis. Further, the median frequency scaling index was estimated to be -2.84, which is different from that expected for Kolmogorov turbulence (-4.4).
We used the estimates of \(\tau_{\rm sc}\) and \(\alpha\) presented in Figure 6 to remove scatter-broadening in the pulse using DMscat as explained in Section 2. We show a comparison of the Band 3 reconstructed profiles at different frequency channels with the Band 5 template for MJD 59015 in Figure 7. The residuals obtained by subtracting the two profiles at every sub-band are also shown in this figure. Application of the Anderson-Darling test (Anderson and Darling, 1952) shows that these residuals were normally distributed. Therefore, DMscat is able to recover the profile without scatter-broadening.
The resultant PSRFITS files were analysed with DMCalc to estimate the DMs. The estimated DMs after the application of DMscat are shown in Figure 8 along with the DM obtained in InPTA-DR1.
## 5 Conclusions
In this paper, we have demonstrated that the pulse-broadening in pulsar data can affect the estimates of DM obtained from wide-band observations. Using simulated data, we show that a bias is seen in the DM estimates in scatter-broadened data. This bias depends on the spectral index of turbulence. The variability of the DM estimates over different epochs was found to be larger for scattering with a variable \(\alpha\), suggesting that the DM noise estimates may be less reliable for scattering with a variable \(\alpha\). A new technique, DMscat, for removing the pulse-broadening due to multi-path propagation in the IISM is presented in this paper to correct the observed bias. The technique was validated with tests on simulated data, where it was shown that the estimated DMs are consistent with the injected ones. The median DM error on the recovered DMs for the scattered data was shown to be similar to that for the data without scattering for S/N larger than 100. This suggests that the technique will be useful in reducing the scattering noise for S/N larger than 100. The measurements of the frequency scaling index, \(\alpha\), and scatter-broadening time, \(\tau_{\rm sc}\), were presented for the first time for PSR J1643\(-\)1224 observed using the uGMRT as part of the InPTA project. Both \(\alpha\) and \(\tau_{\rm sc}\) were found to vary with observational epoch, and \(\alpha\) was measured to be different from that expected for a medium with Kolmogorov turbulence. DMscat was applied to PSR J1643\(-\)1224 to obtain a DM time-series from profiles without pulse-broadening. Thus, we have demonstrated the applicability of DMscat both on simulated data-sets and on observed pulsar data under the assumption that there is negligible frequency evolution of the profile.
A few pulsars amongst the PTA sample, such as PSRs J1643\(-\)1224 and J1939+2134, show significant DM variations as well as scatter-broadening at low frequencies. While these are bright pulsars with a high potential for precision timing, the variation in the ToA delays due to scattering most likely limits their contribution to a PTA experiment. Typically, such IISM variations are removed from timing residuals by modeling these chromatic noise sources as Gaussian processes (GP). In most of the recent PTA work, the IISM noise is modeled as a DM GP process with a \(\nu^{-2}\) dependence (Srivastava et al., 2023). The presence of scattering can lead to a leakage of the IISM noise into achromatic noise models, which can introduce subtle systematics in decade-long PTA data-sets, particularly when the time scale of such chromatic variations is similar to achromatic or deterministic variations. An analysis after the application of DMscat can potentially help in the robust determination of these models, at least for PSR J1643\(-\)1224. We intend to carry out such an analysis as follow-up work.
The main limitation of the method is that it may not work when frequency evolution of the profiles is present. This work motivates the development of techniques to address this limitation. Possibilities include a modification of wide-band techniques (Pennucci et al., 2014; Nobleson et al., 2022; Paladi et al., 2023) or CLEAN-based techniques (Bhat et al., 2003). Such developments are intended in the near future and could be tested on the simulated data as well as actual observations.
With the recently announced evidence for a GW signal from the GWB (Agazie et al., 2023; Antoniadis et al., 2023; Reardon et al., 2023; Xu et al., 2023), the development of methods such as DMscat is important not only to increase the significance of the signal in the upcoming IPTA Data Release 3, but also to constrain new physics, which can be investigated through this ultra-low frequency window of GW astronomy.
## Acknowledgements
InPTA acknowledges the support of the GMRT staff in resolving technical difficulties and providing technical solutions for high-precision work. We acknowledge the GMRT telescope operators for the observations. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research, India. JS acknowledges funding from the South African Research Chairs Initiative of the Department of Science and Technology and the National Research Foundation of South Africa. BCJ acknowledges support from Raja Ramanna Chair (Track - I) grant from the Department of Atomic Energy, Government of India, under project number 12-R&D-TFR-5.02-0700. SD acknowledges the grant T-641 (DST-ICPS). YG acknowledges support from the Department of Atomic Energy, Government of India, under project number 12-R&D-TFR-5.02-0700. TK is supported by the Terada-Torahiko Fellowship and the JSPS Overseas Challenge Program for Young Researchers. AKP is supported by CSIR fellowship Grant number 09/0079(15784)/2022-EMR-I. AmS is supported by CSIR fellowship Grant number 09/1001(12656)/2021-EMR-1 and DST-ICPS T-641. AS is supported by the NANOGrav NSF Physics Frontiers Center (Awards
Figure 8: The DM time series for the InPTA dataset of PSR J1643\(-\)1224 before and after the application of the new technique. Here, dDM is the offset between the estimated and the fiducial DM used to align the template.
No 1430284 and 2020265). KT is partially supported by JSPS KAKENHI Grant Numbers 20H00180, 21H01130, 21H04467 and JPJSBP 120237710, and the ISM Cooperative Research Program (2023-ISMCRP-2046).
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
## Software
DSPSR (van Straten & Bailes, 2011), PSRCHIVE (Hotan et al., 2004), RFIClean (Maan et al., 2021), pinta (Susobhanan et al., 2021), TEMPO2 (Hobbs et al., 2006; Edwards et al., 2006), DMCALC (Krishnakumar et al., 2021), lmfit (Newville et al., 2014), matplotlib (Hunter, 2007), astropy (Price-Whelan et al., 2018).
|
2309.09645 | Scaling the time and Fourier domains to align periodically and their
convolution | This note shows how to align a periodic signal with its Fourier transform
by means of frequency or time scaling. This may be useful in developing new
algorithms, e.g. for pitch estimation. This note also convolves the signals and
the frequency time convolution is denoted fxt. | Matthew R. Flax, W. Harvey Holmes | 2023-09-18T10:30:16Z | http://arxiv.org/abs/2309.09645v1 | # Scaling the time and Fourier domains to align periodically and their convolution
###### Abstract
This note shows how to align a periodic signal with its Fourier transform by means of frequency or time scaling. This may be useful in developing new algorithms, for example pitch estimation. This note also convolves the signals and the frequency time convolution is denoted "fxt".1
Footnote 1: This is a recasting of Flax's original version [1]. The following also includes some additional explanatory and other material. However, the effects of noise or uncertain parameters have not yet been investigated; intuition tells us that noise which is localised in the time domain will not be localised in the Fourier domain and vice versa, which is an advantage in overcoming certain types of noise.
19 September 2023
## 1 Introduction
Suppose \(x\left(t\right)\) is a periodic signal with period \(t_{p}\) and fundamental frequency
\[f_{p}=1/t_{p} \tag{1}\]
In the frequency domain there will be sinusoidal components at integer multiples of the fundamental \(f_{p}\); i.e. at
\[f=n\,f_{p},\ n\in\mathbb{Z} \tag{2}\]
These correspond to impulses in the Fourier transform \(X\left(f\right)\), defined as
\[X(f)\triangleq\int_{-\infty}^{\infty}x\left(t\right)e^{-j2\pi ft}dt \tag{3}\]
For this signal we can state the following:
_Important fact_: \(X\left(f\right)\) has non-zero frequency domain samples only at multiples of the frequency \(f_{p}\).
_General case_: The non-zero samples at multiples of \(f_{p}\) will generally not be of equal amplitude.
_Special case_: However, if \(x\left(t\right)\) consists of a train of identical impulses at multiples of \(t_{p}\), then \(X\left(f\right)\) also consists of a train of identical impulses spaced at multiples of \(f_{p}\).
The general case of periodic \(x\left(t\right)\) can be reduced to the special case as follows. Suppose the shape of a single period of \(x\left(t\right)\) is
\[h\left(t\right)=\left\{\begin{array}{cc}x\left(t\right),&t\in\left[-t_{p}/2,\,t_{p}/2\right]\\ 0,&t\notin\left[-t_{p}/2,\,t_{p}/2\right]\end{array}\right. \tag{4}\]
and that a train of unit impulses is
\[s\left(t\right)=\sum_{n=-\infty}^{\infty}\delta\left(t+nt_{p}\right) \tag{5}\]
with\({}^{2}\)
Footnote 2: [http://www.dsprelated.com/freebooks/sasp/Impulse_Trains.html](http://www.dsprelated.com/freebooks/sasp/Impulse_Trains.html)
\[S\left(f\right)=f_{p}\sum_{m=-\infty}^{\infty}\delta\left(f+mf_{p}\right) \tag{6}\]
This is also a train of equal impulses in the frequency domain. This gives us a convolutional representation of \(x\left(t\right)\):
\[x\left(t\right)=s\left(t\right)*h\left(t\right) \tag{7}\]
with
\[X\left(f\right)=S\left(f\right)H\left(f\right)\]
Hence the special case above applies if we compare \(x\left(t\right)\) with \(S\left(f\right)\) instead of \(X\left(f\right)\). That is,
_General case_: If the periodic signal \(x\left(t\right)\) is decomposed as in (7) with \(s\left(t\right)\) consisting of a train of identical impulses at multiples of \(t_{p}\), then \(S\left(f\right)\) is also periodic and consists of a train of identical impulses spaced at multiples of \(f_{p}\). However, \(X\left(f\right)\) still consists of a train of impulses spaced at multiples of \(f_{p}\), but these impulses are not in general identical.
This establishes a sort of duality between the time and frequency domains that is only valid for periodic signals. This fact is exploited in this paper to scale \(S\left(f\right)\) so that it is aligned with \(x\left(t\right)\). Alternatively, a dual scaling can be used to scale \(s\left(t\right)\) so that it is aligned with \(X\left(f\right)\).
As a result it is hoped that we can possibly extract extra information from the signal by comparing the two domains in new ways, for example to enhance pitch estimation.
## 2 Theory in the sampled finite length case
In the following we consider only finite length sampled data signals, so that \(X\left(f\right)\) will be the discrete Fourier transform (DFT [2]) of \(x\left(t\right)\). Suppose \(N\) samples are taken at a sampling rate \(f_{s}\) (Hz). These samples are therefore spaced apart at
\[\delta_{t}\triangleq f_{s}^{-1}\left(\text{seconds}\right) \tag{8}\]
and the total duration of the signal is
\[T=\left(N-1\right)\delta_{t}\left(\text{seconds}\right) \tag{9}\]
If we write
\[x_{n}\triangleq x\left(n\delta_{t}\right),\ n=0:N-1 \tag{10}\]
where \(N\) is the total number of samples, then the DFT is defined as the \(N\)-vector \(X\) with the elements
\[X_{k} \triangleq \sum_{n=0}^{N-1}x_{n}e^{-j2\pi kn/N},\ k=0:N-1\]
_Note on frequency scaling_: The index \(k\) represents frequency in the following way\({}^{a}\). If \(x\) is the complex sinusoid \(x\left(t\right)=e^{j2\pi ft}\) at frequency \(f\), then \(x_{n}=e^{j2\pi fn\delta_{t}}=e^{j2\pi nf/f_{s}}\) and
Footnote a: There are probably better ways of showing this.
\[X_{k}=\sum_{n=0}^{N-1}e^{j2\pi fn\delta_{t}}e^{-j2\pi kn/N}=\sum_{n=0}^{N-1}e^{j2\pi n\left(f/f_{s}-k/N\right)}\]
The amplitude \(\left|X_{k}\right|\) is maximized at\({}^{b}\)
Footnote b: Rounding is needed because \(X_{k}\) is only defined for integer values of \(k\).
\[k\approx\text{round}\left(N\frac{f}{f_{s}}\right)\equiv\text{round}\left( \frac{f}{\delta_{f}}\right)\]
which implies that \(k\) may be considered to be a scaled frequency representation. Note that \(k_{\max}=N-1\) corresponds to \(f=\frac{N-1}{N}f_{s}\approx f_{s}\). Also, each increment of \(k\) corresponds to a frequency increment of \(\delta_{f}=f_{s}/N\). These facts help in scaling Matlab frequency plots. See testdft.m.
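As a quick numerical check of this index-to-frequency relationship (an assumed numpy stand-in for testdft.m; the tone frequency and lengths below are arbitrary):

```python
import numpy as np

# Assumed numpy stand-in for testdft.m: the DFT of a pure tone peaks at k ~ round(N*f/fs).
fs, N, f = 48000.0, 4096, 1234.5        # arbitrary example values
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * f * n / fs)
k_peak = int(np.argmax(np.abs(np.fft.fft(x))))
print(k_peak, round(N * f / fs))        # both give 105 for these values
```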
In the rest of this note we ignore the fact that the period \(t_{p}\) may not really be an exact multiple of \(\delta_{t}\), and assume that it is (however, it may be desirable in
Matlab to enforce this condition). Then the number of samples in a fundamental period is
\[N_{t} = \frac{t_{p}}{\delta_{t}} \tag{11}\] \[= t_{p}f_{s}\]
where \(N_{t}\in\mathbb{Z}\). The only non-zero DFT frequency components are spaced at \(f_{p}\) Hz, given by (1).
_Note_: Because of the sampling, we must assume that the maximum frequency present is \(f_{s}/2\) (the Nyquist frequency) - i.e. we make the assumption that \(f_{s}\) is large enough for there to be no aliasing. That is, from (2) \(nf_{p}\leq f_{s}/2\), \(n\in\mathbb{Z}\), so that the maximum number of harmonics that we can consider in the sample is
\[n_{\max}=\left\lfloor\frac{1}{2}\frac{f_{s}}{f_{p}}\right\rfloor \tag{12}\]
However, the DFT of the sampled signal will have \(N\) Fourier components in the full range \(f\in[0,\,f_{s}]\). These are spaced at intervals of
\[\delta_{f}=\frac{f_{s}}{N}\left(\mathrm{Hz}\right) \tag{13}\]
(Unless \(f_{s}\) is an exact multiple of \(f_{p}\), or equivalently that the period \(t_{p}\) is an exact multiple of \(\delta_{t}\), none of these Fourier samples will exactly coincide with the actual harmonic frequencies \(nf_{p},\ n\in\mathbb{Z}\).)
In the DFT, the harmonics are spaced at intervals of \(N_{f}\) samples, with
\[N_{f} = \frac{f_{p}}{\delta_{f}} \tag{14}\] \[= N\frac{f_{p}}{f_{s}}\]
### Key variables
| Symbol | Meaning |
| --- | --- |
| \(t_{p}=1/f_{p}\) | Signal period (s) |
| \(f_{p}=1/t_{p}\) | Signal frequency (Hz) |
| \(f_{s}\) | Sampling frequency (Hz) |
| \(\delta_{t}=1/f_{s}\) | Sample interval (s) |
| \(\delta_{f}=f_{s}/N\) | Spacing of DFT frequency points (Hz) |
| \(N\) | Number of samples in signal (and in its DFT) |
| \(N_{t}=t_{p}/\delta_{t}=t_{p}f_{s}\) | Number of time samples in a signal period |
| \(N_{f}=f_{p}/\delta_{f}\) | Spacing of harmonics in the DFT (samples) |
### Resampling of the frequency domain signal
We wish to resample to equalize the number of samples between major components in the time and frequency domains. There are two cases, depending on which signal (\(X\left(f\right)\) or \(x\left(t\right)\)) is resampled. In each case we wish to have the same total number \(N\) of samples after resampling, so that they can be compared.
First, we will resample the DFT signal \(X\left(f\right)\) so that the spacing \(N_{f}\) of the harmonics in \(X\left(f\right)\) is the same as the number of samples \(N_{t}\) in a period of \(x\left(t\right)\). That is, we will change \(N_{f}\) to \(N_{f}^{\prime}\triangleq aN_{f}\) such that \(aN_{f}=N_{t}\). Hence the scale factor required is
\[a=\frac{N_{t}}{N_{f}} \tag{15}\]
The total number of frequency samples would then be \(aN\) instead of \(N\). To retain the same total number of samples, this means that the frequency increments must change from \(\delta_{f}\) to \(\delta_{f}^{\prime}\triangleq\frac{\delta_{f}}{a}\). Hence the new frequency increment is
\[\delta_{f}^{\prime} = N_{f}\frac{1}{N_{t}}\delta_{f} \tag{16}\] \[= \frac{f_{p}}{\delta_{f}}\frac{\delta_{t}}{t_{p}}\delta_{f}\] \[= f_{p}^{2}\delta_{t}\] \[= \frac{f_{p}^{2}}{f_{s}}\] \[\equiv \frac{1}{t_{p}^{2}f_{s}} \tag{17}\]
It follows that the range of frequencies in the resampled DFT (still of length \(N\)) will change from \(\left[0,\,f_{s}\right]\) to
\[\left[0,\,(N-1)\,\frac{f_{p}^{2}}{f_{s}}\right] \tag{18}\]
#### 2.2.1 Interpolation of \(X\left(f\right)\)
It will be necessary to interpolate the DFT to produce \(N\) values over the above frequency range. Knowing the index \(n_{\text{end}}\) (in the vector \(X\)) of the new end frequency
\[f_{\text{end}}=\left(N-1\right)\frac{f_{p}^{2}}{f_{s}} \tag{19}\]
is useful when doing the interpolation using interp1.m in Matlab. Allowing for the fact that Matlab indices start from 1 instead of 0, this index is given by \(\frac{n_{\mathrm{end}}-1}{N}=\frac{f_{\mathrm{end}}}{f_{s}}\); i.e.
\[n_{\mathrm{end}} = \frac{Nf_{\mathrm{end}}}{f_{s}}+1 \tag{20}\] \[= N\left(N-1\right)\frac{f_{p}^{2}}{f_{s}^{2}}+1\]
This is the same as Flax's formula [1]
\[\left(N-1\right)Mf+1 = \left(N-1\right)\frac{N}{f_{s}^{2}t_{p}^{2}}+1\] \[= N\left(N-1\right)\frac{f_{p}^{2}}{f_{s}^{2}}+1\]
in Flax's code (fxtEx21.m).
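A minimal numpy sketch of this frequency-domain resampling (using np.interp in place of interp1.m; it assumes \(t_{p}\) is known and interpolates only the DFT magnitude):

```python
import numpy as np

# Sketch of Section 2.2: rescale the DFT bin spacing so N_f matches N_t.
# np.interp stands in for MATLAB's interp1; only |X| is interpolated here.
def resample_spectrum(x, fs, tp):
    N = len(x)
    X = np.abs(np.fft.fft(x))
    delta_f = fs / N                       # original bin spacing, Eq. (13)
    delta_f_new = 1.0 / (tp**2 * fs)       # new bin spacing, Eq. (17)
    f_old = np.arange(N) * delta_f
    f_new = np.arange(N) * delta_f_new     # covers [0, (N-1) fp^2 / fs], Eq. (18)
    return np.interp(f_new, f_old, X)      # still N samples, now aligned with x(t)
```

The time-domain resampling of the next subsection can be sketched in the same way with the roles of \(\delta_{t}\) and \(\delta_{f}\) exchanged.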
### Resampling of the time domain signal
In this case we will resample \(x\left(t\right)\) so that the number of samples in a period \(N_{t}\) is the same as the number \(N_{f}\) of samples between harmonics in \(X\left(f\right)\). That is, we will change \(N_{t}\) to \(N_{t}^{\prime}\triangleq bN_{t}\) such that \(bN_{t}=N_{f}\). Hence the scale factor required is
\[b=\frac{N_{f}}{N_{t}}\equiv 1/a \tag{21}\]
The total number of time samples will then be \(bN\) instead of \(N\). To retain the same total number of samples, this means that the time increments must change from \(\delta_{t}\) to \(\delta_{t}^{\prime}\triangleq\frac{\delta_{t}}{b}\). Hence the new time increment is
\[\delta_{t}^{\prime} = N_{t}\frac{1}{N_{f}}\delta_{t} \tag{22}\] \[= \frac{t_{p}}{\delta_{t}}\frac{\delta_{f}}{f_{p}}\delta_{t}\] \[= t_{p}^{2}\delta_{f}\] \[= \frac{t_{p}^{2}f_{s}}{N}\] \[\equiv \frac{f_{s}}{Nf_{p}^{2}} \tag{23}\]
It follows that the time range in the resampled signal (still of length \(N\)) will change from \(\left[0,\,(N-1)\,\delta_{t}\right]\) to
\[\left[0,\,(N-1)\,\frac{t_{p}^{2}f_{s}}{N}\right] \tag{24}\]
#### 2.3.1 Interpolation of \(x\left(t\right)\)
It will be necessary to interpolate \(x\left(t\right)\) to produce \(N\) values over the above time range. Knowing the index \(m_{\mathrm{end}}\) (in the vector \(x\)) of the new final time
\[t_{\mathrm{end}}=\left(N-1\right)\frac{t_{p}^{2}f_{s}}{N} \tag{25}\]
is useful when doing the interpolation using interp1.m in Matlab. Allowing for the fact that Matlab indices start from 1 instead of 0, this index is given by \(\frac{m_{\mathrm{end}}-1}{N}=\frac{t_{\mathrm{end}}}{N\delta_{t}}\); i.e.
\[m_{\mathrm{end}} = \frac{t_{\mathrm{end}}}{\delta_{t}}+1 \tag{26}\] \[= \left(N-1\right)\frac{t_{p}^{2}f_{s}^{2}}{N}+1\] \[\equiv \frac{N-1}{N}\frac{f_{s}^{2}}{f_{p}^{2}}+1 \tag{27}\]
This is the same as Flax's formula [1]
\[\left(N-1\right)Mt+1 = \left(N-1\right)\frac{f_{s}^{2}t_{p}^{2}}{N}+1\]
in Flax's code (fxtEx21.m).
### Flax's original version [1] (using this new notation)
We wish to rescale \(f_{p}\) so that there is an equivalent number of samples between both the time period and the Fourier harmonics. Let the frequency scaling coefficient be \(a\); then
\[t_{p} = af_{p} \tag{28}\] \[N_{t}\delta_{t} = aN_{f}\delta_{f}\] \[t_{p}f_{s}\delta_{t} = af_{p}\frac{N}{f_{s}}\delta_{f}\] \[af_{p} = t_{p}\frac{f_{s}^{2}}{N}\frac{\delta_{t}}{\delta_{f}} \tag{29}\]
\[a=\frac{t_{p}^{2}f_{s}^{2}}{N}\frac{\delta_{t}}{\delta_{f}}\]
In classical signal processing \(a=1\) and there is a well-known inverse relationship between time period and harmonic distance (\(t_{p}=f_{p}^{-1}\)), which when combined with Equation 29 yields a constrained relationship between time and frequency sample duration/distance respectively, which is
\[t_{p} = \frac{1}{f_{p}} \tag{30}\] \[t_{p} = \frac{\delta_{f}}{\delta_{t}} \frac{N}{f_{s}^{2}t_{p}}\] \[\frac{\delta_{t}}{\delta_{f}} = \frac{N}{f_{s}^{2}t_{p}^{2}} \tag{31}\]
## 3 Conclusion
Prior to this article the only commonly known equivalence between time duration and Fourier distance was the inverse relation (1) between period and frequency for a periodic signal. For sampled data signals this article goes further.
Using the above theory, it is now possible to resample the signals in either the frequency or time domains so that the sample count between Fourier harmonics in the frequency domain is the same as the number of samples in a period in the time domain.
The possible uses of this theory (for example pitch detection) have still to be explored. An important issue is that the above theory assumes the period \(t_{p}\) is known, which means that in practice this parameter must often first be estimated.
Similarly, the effect of noise or inexact \(t_{p}\) has to be evaluated, as we will rarely have an uncontaminated periodic signal with exactly known \(t_{p}\).
The same approach defined in this article can be used to derive scaling coefficients for any other linear transformation.
|
2309.04361 | Learning from Power Signals: An Automated Approach to Electrical
Disturbance Identification Within a Power Transmission System | As power quality becomes a higher priority in the electric utility industry,
the amount of disturbance event data continues to grow. Utilities do not have
the required personnel to analyze each event by hand. This work presents an
automated approach for analyzing power quality events recorded by digital fault
recorders and power quality monitors operating within a power transmission
system. The automated approach leverages rule-based analytics to examine the
time and frequency domain characteristics of the voltage and current signals.
Customizable thresholds are set to categorize each disturbance event. The
events analyzed within this work include various faults, motor starting, and
incipient instrument transformer failure. Analytics for fourteen different
event types have been developed. The analytics were tested on 160 signal files
and yielded an accuracy of ninety-nine percent. Continuous, nominal signal data
analysis is performed using an approach coined as the cyclic histogram. The
cyclic histogram process will be integrated into the digital fault recorders
themselves to facilitate the detection of subtle signal variations that are too
small to trigger a disturbance event and that can occur over hours or days. In
addition to reducing memory requirements by a factor of 320, it is anticipated
that cyclic histogram processing will aid in identifying incipient events and
identifiers. This project is expected to save engineers time by automating the
classification of disturbance events and increase the reliability of the
transmission system by providing near real time detection and identification of
disturbances as well as prevention of problems before they occur. | Jonathan D. Boyd, Joshua H. Tyler, Anthony M. Murphy, Donald R. Reising | 2023-09-08T14:41:21Z | http://arxiv.org/abs/2309.04361v1 | Learning from Power Signals: An Automated Approach to Electrical Disturbance Identification Within a Power Transmission System
###### Abstract
As power quality becomes a higher priority in the electric utility industry, the amount of disturbance event data continues to grow. Utilities simply do not have the required personnel to analyze each event by-hand. This work presents an automated approach for the analysis of power quality events recorded by digital fault recorders and power quality monitors operating within a power transmission system. The automated approach leverages rule-based analytics to examine the time and frequency domain characteristics of the voltage and current signals, and customizable thresholds are set to categorize each disturbance event. The events analyzed within this work include: various faults, motor starting, and incipient instrument transformer failure. Analytics for fourteen different event types have been developed. The analytics were tested on 160 signal files and yielded an average accuracy of 99%. Continuous, nominal signal data analysis is performed using an approach coined as the cyclic histogram. The cyclic histogram process will be integrated into the digital fault recorders themselves to facilitate detection of subtle signal variations that are too small to trigger a disturbance event and that can occur over the course of hours or days. In addition to reducing memory requirements by a factor of 320, it is anticipated that cyclic histogram processing will aid in identification of incipient events and identifiers. This project is expected to save engineers time by automating the classification of disturbance events as well as increase the reliability of the transmission system by providing near real-time detection and identification of disturbances as well as prevention of problems before they occur.
Digital Fault Recorder (DFR), Power Quality (PQ), Electrical Disturbance, Identification, Machine Learning
## I Introduction
The continued and increasing deployment of "smart" devices (e.g., switches, relays, etc.) within power utility generation, transmission, and distribution infrastructure has led to the recording and storage of an ever-growing amount of event data. Processing and analysis of this event data has been traditionally conducted by power utility personnel using "by-hand" approaches. By-hand approaches rely heavily upon the knowledge, experience, and expertise of the person or persons conducting the analysis and severely limits the number of events that can be analyzed within a given period of time. These limitations are exacerbated when considering that: (i) power utilities are unable to dedicate personnel solely to the task of event processing and analysis as well as (ii) that analysis is often conducted hours if not days after the event has occurred, thus limiting its value.
The work in [1] details a rule-based approach for categorizing Power Quality (PQ) events using the S Transform (ST). The data used in this approach is a mix of simulated data and real-world data from the power system. The Fourier Transform (FT) and the Short-Time Fourier Transform (STFT) have not proven to be effective in extracting unique features of each signal. The Wavelet Transform (WT) has been used as it can extract time and frequency domain characteristics simultaneously, but it is also somewhat vulnerable to noise and computationally expensive. The ST can be thought of as a hybrid between the STFT and WT since it has the time and frequency domain characteristics, but it also uses a variable window length to provide information at different resolutions. The ST has been shown to provide better noise immunity. Finally, categorization of the PQ events was performed using Artificial Neural Networks (ANNs), fuzzy logic, decision trees, and others. The ST contours highlight the distinctive features present within the original PQ event signal, such as a voltage sag. A set of rules is then defined to set the thresholds needed to trigger certain event types. These rules rely heavily upon the knowledge of PQ experts and a data set containing distorted signals is used to determine the corresponding threshold values. The rules are designed to separate the events into three categories: magnitude disturbances, transients, and signal distortion. The tests performed on the signals include positive tests and negative tests for an extra layer of classification. This approach is also very portable to other applications due the normalization of the voltage to one to facilitate use of any voltage level. The results of the work in [1] heavily favor the rule-based ST approach. This approach classified the disturbances with 98% accuracy while a traditional ANN method achieved an accuracy of 92%. The rule-based method can also withstand a considerable level of noise in the signal. One reason for this superior accuracy is that the rule-based approach is more specialized to each type of disturbance than
the ANN approach.
The approach in [2] used a machine learning approach that is augmented through the inclusion of the Kullback-Leibler (KL) divergence measure and standard deviation. The KL divergence is very efficient as it can be applied to a single cycle of the signal. The KL divergence calculates the probabilities that a particular cycle is a member of two or more events. Standard deviation is also used as it is very effective in the detection of PQ disturbances. These two methods are used for each cycle of the disturbed signal and compared with an ideal sinusoidal signal to capture the disturbance. After the detection phase, the classification phase is performed using a Support Vector Machine (SVM) to determine a decision boundary between event types. This method proved very effective in differentiating between events such as voltage sag and swell. However, voltage flicker and swell are more similar than sag and swell, so this approach likely will not function as well. Overall, this method achieved an accuracy of 94.02%.
The approach in [3] provides a novel PQ disturbance classification method. The method extracts features from the cross-correlogram of the PQ disturbances. The positive peak and two adjacent negative peaks are used as the classification features. Those three values are then fed into a fuzzy-based classification system. One drawback to the work in [3] is its use of simulated data that was generated using MATLAB(r), thus classification accuracy may change when real-world data is used. The two types of correlation are cross-correlation and auto-correlation. Cross-correlation measures the strength of similarity between two signals, while auto-correlation is the cross-correlation of a signal with itself. The work in [3] calculates the cross-correlation response between an ideal signal with a disturbed one to detect the disturbance. A fuzzy logic classifier is used to allow for uncertainty in a logic system. The rules in the fuzzy system are designed by human experts, so the system is only as good as those who designed it. The system used in this approach is the Mamdani-type inference system with three inputs and one output. Eighteen linguistic variables are used for the output membership function to determine the PQ event classification. This classifier was tested using seventy MATLAB(r) generated signals and achieved an accuracy of 100%. The accuracy remained 100% even when noise was added to the test signals.
The work presented herein uses a series of algorithms developed in MATLAB(r) R2020b that classify various PQ events into one or more categories. The developed algorithms are rule-based in nature with customizable thresholds based on engineers' expertise. Each PQ event's signal data is stored in a Comma Separated Values (CSV) file-generated by the field device-containing: a time vector, three voltage phases, and three current phases. A MATLAB(r) executable is initiated to read each CSV file into a working directory then categorize them as a particular PQ event type or types. The latter accounts for the case of multiple PQ event types occurring and being recorded within the same CSV file. A CSV file is then generated with the classification results as well as analytic outputs such as current magnitude. Below are several differentiating factors that make the presented work unique and preferable to other methods:
* The automated process was developed and tested using real-world data rather than simulated data. All data was recorded by smart field devices-PQ monitors and Digital Fault Recorders (DFRs)-operating in a high-voltage transmission system.
* The rule-based methods mimic the expertise of an engineer in an effort to ease interpretation and understanding of the classification results by power system personnel.
* The developed algorithms contain very few MATLAB(r) specific functions. This reduces the need for expensive MATLAB(r) Toolbox licenses while allowing the algorithms to be translated into other programming languages and software based upon the specific needs of the power utility. This approach is adopted to facilitate widespread use of the developed algorithms across the power industry.
* The rule-based nature of the developed process allows every threshold to be changed as needed by power utility personnel based on performance or system specifics. In this paper, empirical thresholds are designated as \(\tau\) in equations and as **bold** lettering in sentences.
* The methods used are very detailed and will predict the actual disturbance (e.g., ferroresonance) that occurred on the power system rather than simple signal characteristics like voltage sag and swell.
Another aspect of the project was to analyze continuous oscillography data that is stored on the DFRs. Each day of data can be as much as twenty to fifty gigabytes (GB), which is far too much data for an engineer to analyze manually. Due to on-board memory constraints, each DFR stores two weeks of continuous oscillography data before it is overwritten. The approach in this work uses a method known as a cyclic histogram [4] to reduce an average day's thirty-five GB worth of continuous oscillography data down to seventy-two megabytes (MB). This memory reduction not only increases the time window of how long the data can be stored-from two to roughly 1,000 weeks-but also allows engineers to monitor for trends and subtle deviations in continuous signal data that has not produced a disturbance large enough to trigger a DFR event.
The remainder of this paper is as follows. Section II presents the methodology including general calculations, continuous waveform analysis, and the various disturbance event types. Section III provides the results of each event type and continuous waveform analysis. Section IV provides a summary and lists some opportunities for future work.
## II Methodology
This section first presents descriptions of calculations, analyses, and tests that are used in the categorization of multiple events. A specific event may require the threshold of one or more of these general calculations, analyses, or tests to be changed; such changes are detailed under the specific event being categorized. The remainder of this section describes the methodologies developed and employed for the categorization
of specific events and continuous signal processing using the cyclic histogram.
### _General Calculations, Analyses, and Tests_
#### Ii-A1 Calculating Nominal Values
The first task in the processing of a voltage or current signal is to calculate nominal values from the data itself. The sampling frequency is calculated by,
\[F_{s}=\frac{N}{t_{e}-t_{1}}, \tag{1}\]
where \(F_{s}\) is the sampling frequency in Hertz (Hz), \(N\) is the number of samples in the time vector, and \(t_{1}\) and \(t_{e}\) are the first and last values of the time vector, respectively. After the sampling frequency is known, the nominal number of samples in each cycle is determined by,
\[N_{c}=\frac{F_{s}}{F_{n}}, \tag{2}\]
where \(N_{c}\) represents the number of samples per cycle, \(F_{s}\) is the sampling frequency, and \(F_{n}\) is the nominal frequency of the power system, which is assumed to be 60 Hz.
Generally, PQ event records capture several cycles of the voltage or current signal that occur before a disturbance begins. The DFRs that recorded the data used in this work are normally set to record fifteen cycles of data before a disturbance. The nominal peak values of voltage and current signals are determined using these "pre-event" cycles for each processed signal. For this work, the first cycle in the event record is used to determine these nominal scalar values denoted as: (i) \(\hat{V}_{q}\) for nominal peak voltage, (ii) \(\hat{I}_{q}\) for nominal peak current, (iii) \(\bar{V}_{q}\) for nominal Root Mean Square (RMS) voltage, and (iv) \(\bar{I}_{q}\) for nominal RMS current. The magnitudes of voltage and current signals are compared to these nominal values to normalize the data with respect to the particular voltage or current level of the power system. This allows for more flexibility for these tools to be used at a different scale on the system.
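A minimal sketch of these calculations (the analytics in this work are implemented in MATLAB(r); the numpy version below is an illustrative translation, and the use of only the first cycle for the nominal values follows the text):

```python
import numpy as np

# Illustrative numpy translation of Eqs. (1)-(2) plus the first-cycle nominal values.
def nominal_values(t, v, f_nom=60.0):
    Fs = len(t) / (t[-1] - t[0])            # Eq. (1): sampling frequency
    Nc = int(round(Fs / f_nom))             # Eq. (2): samples per nominal cycle
    first_cycle = np.asarray(v[:Nc], dtype=float)   # pre-event cycle used as the reference
    v_peak_nom = np.max(np.abs(first_cycle))
    v_rms_nom = np.sqrt(np.mean(first_cycle**2))
    return Fs, Nc, v_peak_nom, v_rms_nom
```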
#### Ii-A2 Root Mean Square
The RMS of a signal is another characteristic used in the classification of electrical disturbance events. A signal's RMS is given by,
\[\bar{x}=\sqrt{\frac{1}{N_{w}}\sum_{i=1}^{N_{w}}|x[i]|^{2}}, \tag{3}\]
where \(x\) is the analog signal, \(N_{w}\) is the size of the RMS window, and \(\bar{x}\) is the RMS calculation of the analog signal [5]. Unless otherwise stated, the size of the RMS window was set at the nominal number of samples in each cycle, \(N_{c}\).
One use of RMS is in determining if the signal value is non-zero. In the instantaneous case, the sinusoidal signal will cross zero every half-cycle, so it is more difficult to tell whether the value remains near zero. A signal's RMS is used in events such as motor starting where the current increases over time.
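A sliding-window RMS implementing (3), with the default window of \(N_{c}\) samples, might look like the following sketch:

```python
import numpy as np

# Sliding RMS per Eq. (3); the window defaults to one nominal cycle (Nw = Nc).
def sliding_rms(x, Nw):
    x = np.asarray(x, dtype=float)
    return np.array([np.sqrt(np.mean(x[i:i + Nw] ** 2))
                     for i in range(len(x) - Nw + 1)])
```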
#### Ii-A3 Differentiation
One of the most common calculations used is a signal's derivative. The equation in (4) represents the first derivative with respect to the number of samples.
A positive first derivative indicates that the signal is increasing, and a negative first derivative indicates the signal is decreasing. This fact is used to detect the presence of peaks or spikes within a signal. The maximum or minimum of a peak or spike corresponds to the first derivative changing sign (i.e., going from positive to negative or vice versa). A change in the first derivative's sign is calculated by,
\[x^{\prime}(n_{1})\times x^{\prime}(n_{2})<0 \tag{4}\]
where \(x^{\prime}\) is the first derivative of the analog signal, \(n_{1}\) is the sample before the first derivative's sign changes, and \(n_{2}\) is the sample after the sign changes. Multiple sign changes over a short time interval provide a strong indication that a transient disturbance is present within the signal being processed.
The second derivative is used to determine the change in the slope of the curve. A sudden increase in the second derivative shows as a sudden increase in slope and can indicate the point at which a fault begins. Fig. 1 provides a representative illustration showing the use of the second derivative in determining the start of a fuse fault. The red circle shown is where the second derivative is higher than an empirical threshold, thus indicating a sudden increase in the slope of the curve. The third derivative is used to detect a shift in the slope of a curve.
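A hedged sketch of these derivative tests is shown below; the threshold argument is a placeholder, not a value used in this work:

```python
import numpy as np

# Sketch of the derivative tests; tau_d2 is a placeholder threshold, not a value
# from this work. Derivatives are taken with respect to sample number.
def derivative_tests(x, tau_d2):
    x = np.asarray(x, dtype=float)
    d1 = np.diff(x)                                      # first derivative
    d2 = np.diff(d1)                                     # second derivative
    sign_changes = np.where(d1[:-1] * d1[1:] < 0)[0]     # Eq. (4): peak/spike locations
    exceed = np.where(np.abs(d2) > tau_d2)[0]            # abrupt slope increase (fault start)
    fault_start = int(exceed[0]) if exceed.size else None
    return sign_changes, fault_start
```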
#### Ii-A4 Harmonic Ratios
Harmonics can be key indicators of particular events within a transmission system (e.g., current transformer saturation, harmonic resonance, etc.). Harmonic analysis is facilitated through the calculation of the harmonic ratio, which is useful in determining the dominant frequency components within a signal. The \(n^{\text{th}}\) harmonic ratio is calculated by,
\[H_{n}=\frac{|X_{n}|}{|X_{1}|}, \tag{5}\]
where \(X\) is the Fast Fourier Transform (FFT) of \(x\), \(|X_{1}|\) is the magnitude of the fundamental frequency (i.e., 60 Hz), and \(|X_{n}|\) is the magnitude of the \(n^{\text{th}}\) multiple of the fundamental frequency [6].
Fig. 1: Fuse fault showing second derivative test
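Assuming the FFT window spans exactly one nominal cycle (so that bin 1 falls on 60 Hz; this window choice is an assumption rather than a statement from the text), the harmonic ratio of (5) reduces to:

```python
import numpy as np

# Harmonic ratio of Eq. (5); assumes `cycle` holds exactly Nc samples (one 60 Hz cycle)
# so that FFT bin n corresponds to the n-th harmonic.
def harmonic_ratio(cycle, n):
    X = np.fft.rfft(np.asarray(cycle, dtype=float))
    return np.abs(X[n]) / np.abs(X[1])
```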
#### Ii-A5 First Cycle Comparison
The CSV files generally store at least fifteen cycles of the voltage and current signals that occur prior to the disturbance event, thus a useful disturbance detection approach is to compare the signal's first cycle with each of its remaining cycles within the CSV file. After the first cycle is selected, it is replicated to construct an ideal signal that is of the same length as that of the recorded signal from which the first cycle was extracted. The generated ideal signal is then subtracted from the recorded signal. The time indices where this difference is very high indicates the start of a disturbance. Fig. 2 illustrates the application of this approach in detecting the start of a capacitor switching event within a recorded voltage signal. Fig. 2 shows the voltage signal with the capacitor switching disturbance portion of the signal highlighted and the result of the difference calculation overlaid. Where the difference calculation is highest corresponds with the start of the capacitor switching event, which is assigned a start time of zero milliseconds.
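A compact sketch of this first-cycle comparison (the threshold is a placeholder):

```python
import numpy as np

# First-cycle comparison: replicate the first (pre-event) cycle into an ideal
# signal and flag the first sample where the difference exceeds a placeholder tau.
def disturbance_start(v, Nc, tau):
    v = np.asarray(v, dtype=float)
    ideal = np.tile(v[:Nc], int(np.ceil(len(v) / Nc)))[:len(v)]
    idx = np.where(np.abs(v - ideal) > tau)[0]
    return int(idx[0]) if idx.size else None   # sample index where the disturbance begins
```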
### _Continuous Signal Processing_
In addition to disturbance event classification this work makes use of the cyclic histogram in an attempt to reduce the memory storage requirements associated with a DFR's continuously recorded signal data. This work extends the cyclic histogram by also generating the residual and frequency histograms. The cyclic histogram was first proposed in [4] to significantly reduce the size of continuously recorded oscillography data. This reduction in size allows for data to be stored for much longer than the OSG file, and allows for PQ analysts to pull data from each DFR without putting strain on the telecommunications network. A Python(r) script performs the following tasks:
* Read the most recent configuration (CFG) file and extract the necessary data to read and correctly interpret the matching oscillography (OSG) file.
* Time-synchronize each cycle reliably to generate the cyclic and residual histograms.
* A custom "maximum frequency" calculation is developed to generate the frequency histogram that is faster and less computationally intense than traditional FFT processing.
* Generation of six CSV files. For each of the three histogram types, there is a CSV that contains the histogram and an accompanying metadata file that stores the bin values and record dates.
Signals are analyzed based on a sine representation, thus the continuous signal data is processed using a negative-to-positive transition in the cycle. This negative-to-positive transition is designated the beginning and end of each cycle. This helps in cases of signal disturbance as the disturbance is typically a magnitude disturbance and not additive. Current signal cyclic histograms are not generated due to transformers' inductive nature, which makes current an effect and more prone to its sinusoidal activity being negatively impacted to the point where cyclic analysis is not possible. Voltage is source driven, thus making it less susceptible to drift.
The most recent CFG file is loaded and the OSG metadata extracted. The OSG metadata provides the number of channels, sampling rate, and timestamp in accordance with the IEEE COMmon format for TRAnsient Data Exchange (COMTRADE) Standard 2013 [7].
#### Ii-B1 Time Synchronization
Due to the physical properties of the transmitted voltage, the signal is never exactly 60 Hz and the sampling Data Acquisition (DAQ) device will never sample the signal at the exact point of \(x(t)=0\). At the transformer, the frequency can drift by as much as \(\pm 0.03\) Hz, so the exact time in between cycles is not consistent. Due to this inconsistency, the position of \(x(t)=0\) must be estimated to synchronize each cycle before generating the cyclic histogram. If this frequency drift is not accounted for, then it is impossible to generate the cyclic histogram for one hour of continuous oscillography data. Each cycle is detected by finding two consecutive negative-to-positive transitions in the sampled waveform \(x[n]\). A window is collected starting with the sample before the first transition and ending with the sample directly after the second transition, and is then processed for time synchronization. An ideal time vector \(t_{I}\) is created as a reference where \(t\in[0,1/F_{n}]\) in steps of \(\Delta t\). A relative time \(t_{r}\) vector is generated based on the slope estimated from the windowed signal. The first slope is,
\[m_{1}=\frac{x[2]-x[1]}{\Delta t},\ \ \text{and}\ b_{1}=x[2]-m_{1}t[2], \tag{6}\]
where \(m_{1}\) is the slope between the first two sampled points and \(b_{1}\) is the estimated position of the first zero-crossing. The first entry of the relative time vector is,
\[t_{r}[1]=t_{I}[1]+\frac{b_{1}}{m_{1}}. \tag{7}\]
The end of the windowed signal is used to find the second slope characteristics,
\[m_{2}=\frac{x[N_{c}+1]-x[N_{c}]}{\Delta t},\quad b_{2}=x[N_{c}]-m_{2}\,t_{I}[N_{c}]. \tag{8}\]
Fig. 2: Voltage signal showing disturbance during capacitor switching.
The last entry of the relative time vector is,
\[t_{r}[N_{c}+1]=t_{I}[N_{c}]-\left(\frac{-b_{2}}{m_{2}}-\frac{1}{F_{n}}\right). \tag{9}\]
The rest of the relative time vector is,
\[\Delta t_{r}=\frac{t_{r}[N_{c}+1]-t_{r}[1]}{N_{c}+1}. \tag{10}\]
Now that the relative time vector has been calculated, the values of \(x(t)=0\) line up with \(t_{r}=[0,1/F_{n}]\). Linear interpolation is used to generate a representation of the sampled waveform \(x[n]\) from the relative time vector \(t_{r}\) and synchronize it onto the ideal time vector \(t_{I}\). Once a cycle has been collected and synchronized, it is then stored to generate the cyclic and residual histograms.
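One plausible numpy reading of Equations (6)-(10) is sketched below (this is not the deployed Python script; the window is assumed to run from the sample before the first negative-to-positive transition to the sample after the second):

```python
import numpy as np

# Sketch of per-cycle time synchronization (one plausible reading of Eqs. 6-10,
# not the deployed script). `window` spans one cycle plus its bounding samples.
def synchronize_cycle(window, Fs, F_nom=60.0):
    window = np.asarray(window, dtype=float)
    dt = 1.0 / Fs
    Nc = int(round(Fs / F_nom))
    t = np.arange(len(window)) * dt                  # raw sample times
    m1 = (window[1] - window[0]) / dt                # leading slope, Eq. (6)
    t_start = t[0] - window[0] / m1                  # estimated first zero-crossing
    m2 = (window[-1] - window[-2]) / dt              # trailing slope, Eq. (8)
    t_end = t[-2] - window[-2] / m2                  # estimated second zero-crossing
    t_r = (t - t_start) / (t_end - t_start) / F_nom  # relative times: crossings at 0 and 1/F_nom
    t_I = np.arange(Nc) / (Nc * F_nom)               # ideal time vector over one cycle
    return np.interp(t_I, t_r, window)               # synchronized cycle on t_I
```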
#### Iii-B2 Histogram Generation
The cyclic histogram is a combination of per-sample histograms concatenated to show the quality of the signal over time. For the case of \(N_{c}=16\), sixteen histograms are generated for each sample in the nominal cycle and stored in a matrix that represents the cyclic histogram. The global minimum and maximum of all of the synchronized cycles are used as the bin limits of all histograms to maintain a consistent scale for the cyclic histogram. Each histogram is generated using the \(n^{\text{th}}\) sample of each of the synchronized cycles. By default, there are 1,024 bins per histogram, but this resolution can be increased or decreased as desired by utility personnel. A large number of bins will increase the size of the generated, output file. The cyclic histogram is generally unexciting as seen in Fig. 20. A residual histogram is generated by subtracting the first cycle from the remaining cycles in the record. Subtracting the first cycle accentuates any abnormal behavior(s) present within the processed signal at a per cycle resolution. The residual histogram-corresponding to the cyclic histogram in Fig. 20-is presented in Fig. 20. The voltage in Fig. 21 is within the range of approximately \(\pm 135\) kV while the voltage range is \(\pm 4\) kV in the residual histogram of Fig. 21. This is almost a 40-times increase in activity resolution for no additional data cost.
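A sketch of the histogram construction follows; the bin count and the orientation of the output matrix are assumptions:

```python
import numpy as np

# Cyclic and residual histograms: one histogram per sample position, shared bin
# edges from the global extrema. Bin count and matrix orientation are assumptions.
def cyclic_and_residual(cycles, n_bins=1024):
    cycles = np.asarray(cycles, dtype=float)         # shape: (num_cycles, Nc)
    def per_sample_hist(data):
        edges = np.linspace(data.min(), data.max(), n_bins + 1)
        hists = [np.histogram(data[:, i], bins=edges)[0] for i in range(data.shape[1])]
        return np.stack(hists, axis=1), edges        # histogram matrix of shape (n_bins, Nc)
    cyclic, cyc_edges = per_sample_hist(cycles)
    residual, res_edges = per_sample_hist(cycles - cycles[0])   # subtract the first cycle
    return cyclic, cyc_edges, residual, res_edges
```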
#### Iii-B3 Frequency Histograms
The dominant frequency is calculated using the FFT. Typically, the FFT is calculated over all frequencies within the range of \(\pm F_{s}/2\). Calculating the FFT over this entire range of frequencies is inefficient, because the power grid's frequency is very stable with an expected maximum deviation of \(\pm 0.03\) Hz with respect to the 60 Hz fundamental frequency. Based upon a sampling frequency of 960 Hz, a high resolution (i.e., a small step size between consecutive frequency values) frequency representation requires a significant number of zeros (e.g., 1.2 million) to be appended to the end of the time signal. Since the power grid's frequency is so stable, most of the actionable information is contained within a very small range of frequencies, thus most of the resulting frequency response can be "thrown out" without loss of information. Zero padding the time signal, only to remove most of the resulting frequency response, represents a waste of computational resources and time. This problem is addressed by generating a support vector of frequencies centered at 60 Hz and with a Process BandWidth (PBW) of 0.2 Hz. The PBW can be changed based upon the specifics of the DFR or equivalent device, as well as utility personnel preferences or standards. The DFT of sixty cycles is calculated for only the frequencies specified in the support vector and a step size of thirty cycles between consecutive calculations. This results in the dominant frequency being calculated per second with an overlap of half a second. The DFT is calculated by,
\[X[f]=\sum_{n=1}^{N_{x}}x[n]\exp{[-j2\pi ft[n]]}, \tag{11}\]
where
\[f\in\left[F_{n}\pm\frac{PBW}{2}\right],\]
and \(N_{x}\) is the total number of samples in the waveform over which the DFT is calculated [8]. The dominant frequency is selected by,
\[F_{d}(t)=\operatorname*{arg\,max}_{f}|X[f]|. \tag{12}\]
The dominant frequency is calculated for a sliding 60-cycle window of the recorded waveform and is stored and used to generate the frequency histogram. The support of the histogram is the same frequency vector over which the DFT is calculated. The number of cycles per evaluation can be adjusted in the head of the code.
This process is accelerated using Python's numba library, which allows Just-In-Time (JIT) compilation directly into machine code. Currently, numba's JIT does not support the FFT algorithm; however, it does support the calculation of the described custom DFT. The result is not only faster, but also requires far fewer computational resources than the zero-padded FFT.
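A sketch of the JIT-compiled narrowband DFT of (11) and the dominant-frequency selection of (12) is shown below; the function names, the 201-point frequency grid, and the defaults are assumptions, with only the 60 Hz center and 0.2 Hz PBW taken from the text.

```python
import numpy as np
from numba import njit

@njit(cache=True)
def narrowband_dft(x, t, freqs):
    """Evaluate (11) only at the frequencies in `freqs` (the PBW support)."""
    mags = np.zeros(freqs.size)
    for i in range(freqs.size):
        re = 0.0
        im = 0.0
        for n in range(x.size):
            ang = -2.0 * np.pi * freqs[i] * t[n]
            re += x[n] * np.cos(ang)
            im += x[n] * np.sin(ang)
        mags[i] = np.sqrt(re * re + im * im)
    return mags

def dominant_frequency(x, t, f_nominal=60.0, pbw=0.2, n_points=201):
    """Pick the dominant frequency per (12) over a +/- PBW/2 support."""
    freqs = np.linspace(f_nominal - pbw / 2, f_nominal + pbw / 2, n_points)
    return freqs[np.argmax(narrowband_dft(x, t, freqs))]
```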
### _Event Types_
#### II-C1 Current Transformer Saturation
The first PQ event analyzed is Current Transformer (CT) saturation. A CT is commonly used in relaying or metering applications in high-voltage circuits by producing an alternating current in its secondary winding that is proportional to the current that it is measuring on the high-voltage system. These low-voltage, low magnitude currents are then used as input signals to various instrumentation [9]. CT saturation occurs when the primary current is so high that its core cannot handle any more flux. This results in inaccurate replication of the current signal on the secondary winding, which can cause protection relays to operate improperly. A key indicator of CT saturation is a change of slope as the current crosses zero each half-cycle. This change in slope is commonly referred to as "kneeing". Fig. 3 shows a representative illustration of "kneeing"-between 280 ms and 320 ms-within a CT's current signal.
In this work, the following criteria are used to determine the occurrence of CT saturation. These criteria are: (i) current exceeding fifteen times the continuous current rating of the CT, (ii) presence of DC offset, (iii) the DC offset returning to normal (i.e., 0 Hz) during the fault, (iv) inconsistent spacing between zero crossings, (v) high third derivative of the current, (vi) high second harmonic current, and (vii) high third harmonic within the current. A mix of these criteria is used to determine the likelihood of CT saturation as described at the end of this section.
The first step is to determine the presence of a fault. Processing continues if a fault is detected; otherwise, the process moves on to the next event. For the purposes of this work, a fault means that an abnormal flow of current has occurred, causing the protective relay(s) to operate and trip the breaker(s). The presence of a fault is determined using the CT ratio defined in the COMTRADE configuration file. The CT ratio is,
\[R_{\text{CT}}=\frac{I_{P}}{I_{S}}, \tag{13}\]
where \(R_{\text{CT}}\) is the turns ratio of the CT, \(I_{P}\) is the rated continuous primary current, and \(I_{S}\) is the rated continuous secondary current. The CTs in this work used a continuous rated current of 5 Amperes (A) on the secondary side of the CT. For instance, if the CT ratio \(R_{\text{CT}}=240\), then the rated continuous current would be 1,200 A on the primary side and 5 A on the secondary side.
If the current exceeds fifteen times the continuous current rating of the CT, then the detected fault current is high enough to be considered CT saturation. This threshold was empirically selected based upon recommendations of PQ engineers to ensure only abnormally high faults are selected, since extremely high currents are generally indicative of CT saturation. Faults that do not meet this threshold will have a lower chance of being CT saturation. The threshold is given by,
\[\frac{I(n)}{I_{P}}>\tau_{\text{CT}}, \tag{14}\]
where \(I\) is the instantaneous current being analyzed, \(I_{P}\) is the rating of the CT on the primary side, \(\tau_{\text{CT}}=15\) is the CT saturation threshold, and \(n=1,2,\ldots,N\). The CT saturation threshold was set based upon inputs from power utility personnel, but can be changed based upon local criteria.
The presence of DC offset is also an indicator of CT saturation [9]. For this particular event, DC offset is determined by first calculating the peak value of each cycle of the faulted section of the waveform. The peaks of the positive and negative half-cycles are then averaged together to give a value for the offset above or below 0 A. If the maximum of this value exceeds a threshold compared to the nominal peak current, then DC offset is detected in the fault as given by,
\[\frac{|I_{\text{DC}}|}{\hat{I}_{q}}>\tau_{\text{DC}}, \tag{15}\]
where \(I_{\text{DC}}\) is the maximum DC offset detected during the fault, \(\hat{I}_{q}\) is the nominal peak current extracted from the first cycle, and \(\tau_{\text{DC}}=3\) is the empirically selected threshold for the ratio of DC offset magnitude to nominal peak current. A loss of DC offset is detected if the offset magnitude is lower at the end of the fault than the beginning.
The number of samples between zero crossings is then compared to half the nominal number of samples in each cycle calculated using (2) as described in Sect. II-A1. The zero crossing points are calculated as the indices at which the waveform changes sign (i.e., from negative to positive or vice versa). The number of samples between each zero crossing is calculated for every cycle by subtracting the indices accordingly. This number of samples is compared to the nominal value and is given by,
\[\max\left|N_{\text{Z}}(k)-\frac{N_{c}}{2}\right|>\tau_{\text{Z}},\ k=(1,2,3, \ldots,N_{\text{F}}) \tag{16}\]
where \(N_{\text{Z}}\) is the number of samples between zero crossings, \(N_{c}\) is the nominal number of samples in each cycle, \(k\) is the index of each cycle, \(N_{\text{F}}\) is the total number of cycles in the faulted portion of the waveform, and \(\tau_{\text{Z}}=10\) is the empirically selected threshold for the difference from nominal in the number of zero crossings.
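A possible implementation of the zero-crossing spacing check in (16) is sketched below; the variable names and the treatment of samples that are exactly zero are simplifying assumptions.

```python
import numpy as np

def inconsistent_zero_crossing_spacing(i_fault, n_per_cycle, tau_z=10):
    """Check (16): zero-crossing spacing deviating from N_c / 2 samples.

    i_fault : current samples of the faulted portion of the waveform.
    Returns True if any spacing deviates from the nominal half-cycle
    spacing by more than tau_z samples.
    """
    crossings = np.where(np.diff(np.sign(i_fault)) != 0)[0]   # sign-change indices
    if crossings.size < 2:
        return False
    spacing = np.diff(crossings)                              # samples between crossings
    return np.max(np.abs(spacing - n_per_cycle / 2)) > tau_z
```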
The "kneeing" present in the waveform is detected using a third derivative test. The maximum third derivative present in the first cycle of the waveform (i.e., before the fault) is used as the nominal value. The maximum third derivative of the faulted portion of the waveform is compared to the nominal value and will be "flagged" if it exceeds a certain threshold as given by,
\[\frac{\max|I^{\prime\prime\prime}_{f}(n)|}{\max|I^{\prime\prime\prime}_{c}(n)| }>\tau_{\text{D3}} \tag{17}\]
where \(I^{\prime\prime\prime}_{f}(n)\) is the third derivative of the faulted current waveform, \(I^{\prime\prime\prime}_{c}(n)\) is the third derivative of the first cycle of the current signal, and \(\tau_{\text{D3}}=5\) is the empirically selected threshold for the ratio of the fault third derivative with the nominal one.
Finally, the harmonic ratios of the entire current waveform are calculated using equation (5) as described in Sect. II-A4. A very good indicator of CT saturation is when the second and third harmonic currents exceed the thresholds of **15%** and **5%** of the fundamental, respectively.
Fig. 3: A representative illustration of "kneeing" within a current signal during a CT saturation event.
All these criteria are combined to give a confidence level for CT saturation as given by:
* _High confidence:_ The thresholds are exceeded for the current rating of the CT and the second harmonic current. The thresholds must also be exceeded for _three_ of the following: DC offset, loss of DC offset, inconsistent spacing between zero crossings, third derivative, or third harmonic current.
* _Medium confidence:_ The threshold is exceeded for the current rating of the CT, but the second harmonic threshold is not exceeded. The thresholds must then be exceeded for _three_ of the following: DC offset, loss of DC offset, inconsistent spacing between zero crossings, third derivative, or third harmonic current.
* _Low confidence:_ The threshold is exceeded for the current rating of the CT, but the second harmonic threshold is not exceeded. The thresholds must then be exceeded for _two_ of the following: DC offset, loss of DC offset, inconsistent spacing between zero crossings, third derivative, or third harmonic current.
* _Low confidence (alternative):_ The threshold is not exceeded for the current rating of the CT but is for the second and third harmonics. The thresholds must then be exceeded for _two_ of the following: DC offset, loss of DC offset, inconsistent spacing between zero crossings, or third derivative.
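One way these criteria could be combined into the listed confidence levels is sketched below; the boolean flag names are assumptions, and each flag is presumed to have already been computed from the corresponding threshold test.

```python
def ct_saturation_confidence(over_rating, high_h2, high_h3, dc_offset,
                             dc_offset_lost, bad_zero_crossings, high_d3):
    """Combine the boolean criteria into a CT saturation confidence label."""
    secondary = [dc_offset, dc_offset_lost, bad_zero_crossings, high_d3, high_h3]
    n_secondary = sum(secondary)

    if over_rating and high_h2 and n_secondary >= 3:
        return "high"
    if over_rating and not high_h2 and n_secondary >= 3:
        return "medium"
    if over_rating and not high_h2 and n_secondary >= 2:
        return "low"
    # alternative low confidence: rating not exceeded, but 2nd and 3rd harmonics are high
    if (not over_rating) and high_h2 and high_h3:
        if sum([dc_offset, dc_offset_lost, bad_zero_crossings, high_d3]) >= 2:
            return "low"
    return "none"
```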
#### II-C2 Analog-to-Digital Converter Clipping
An analog-to-digital (A/D) converter is a device that converts continuously varying analog signals into a binary or digitized sequence. Many electronic devices in substations (e.g., relays and DFRs) utilize A/D converters to record voltage and current signals in a binary format. The range of the digitized scale is restricted by the power supply rail voltage. If the analog value results in a digitized sequence that exceeds the rail voltage, then the digitized sequence will appear "clipped" or "flat-topped" at its minimum and maximum values. For substation devices, clipping often appears in current signals during fault events. This results in inaccurate replication of the current signals, which can result in relaying mis-operation. Fig. 4 shows the visible clipping at the minimum and maximum values of a current signal's digitized sequence.
Clipping is indicated by the repetition of equal magnitude samples within the digitized sequence. First, the index of the absolute maximum of the signal is calculated. The section of the waveform **ten** samples before and **ten** samples after the maximum is then extracted for analysis. If the first derivative of this section of the signal is equal to zero for more than **four** consecutive samples, then A/D converter clipping is present within the signal.
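A sketch of this clipping test is given below: a window is extracted around the absolute maximum and searched for a run of zero first differences. The window size follows the text; interpreting "more than four consecutive samples" as at least four zero differences (i.e., five or more equal samples) is an assumption.

```python
import numpy as np

def has_adc_clipping(signal, window=10, min_flat_run=4):
    """Detect A/D clipping near the absolute maximum of the signal."""
    signal = np.asarray(signal, dtype=float)
    peak = int(np.argmax(np.abs(signal)))
    segment = signal[max(peak - window, 0):min(peak + window + 1, signal.size)]

    run = 0
    for d in np.diff(segment):          # zero difference => two equal consecutive samples
        run = run + 1 if d == 0 else 0
        if run >= min_flat_run:         # >= 4 zero differences => >= 5 equal samples
            return True
    return False
```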
#### II-C3 Induced Transient Noise due to Switching
When high voltage devices-such as air-break switches-are opened to de-energize a bus section, the resulting arcing can induce high-frequency noise upon the voltage or current signals of the electronic monitoring equipment (e.g., PQ monitor). Identification of this induced transient noise is used to determine where signal chokes may need to be installed or where shielding and ground bonding integrity may need to be checked. Fig. 5 provides a representative illustration of this transient noise within a voltage signal.
This event is characterized by the presence of small random spikes (i.e., noise) throughout the voltage or current signals. Switching induced transient noise is identified by its: (i) overall difference from an ideal waveform, (ii) harmonic content below **5%** of the fundamental, (iii) sudden spikes determined by the first derivative exceeding **10%** of the nominal peak value, (iv) persistence over **five** cycles or more, (v) occurrence averaging **once** per cycle, (vi) instances totaling **twenty** or more, and (vii) presence causing individual sample values to exceed the nominal peak signal value occurring at least **five** times.
The first criterion is determined using the approach described in Sect. II-A5 in which a voltage signal is compared to a reference signal, which is made up of replications of the first cycle. The condition in which the difference between the actual voltage and the reference voltage exceeds a threshold
Fig. 4: A representative current signal showing Analog-to-Digital Converter (A/D) clipping.
Fig. 5: A representative voltage signal showing transient noise due to switching.
is given by,
\[\frac{\bar{V}_{\Delta}}{N}>\tau_{\text{N}} \tag{18}\]
where \(\bar{V}_{\Delta}\) is the mean value of the voltage difference between the actual and ideal signals, \(N\) is the total number of samples in the waveform, and \(\tau_{\text{N}}=30\) is the empirically chosen threshold for this ratio. If the first six criteria are met, then induced transient switching is classified with _medium_ confidence. If all seven criteria are met, then this event type is classified with _high_ confidence.
#### II-C4 High-Speed Reclosing with Tapped Motor Loads
A common practice is to employ high-speed instantaneous reclosing on faulted transmission lines. Sometimes there may be a large or significant motor load served from stations tapped on the line. For this work, a motor load is considered significant if it is directly served from a high-voltage transmission line (e.g., 161 kV). In such cases, the line voltage may be supported by the motors-as they spin down-so that residual voltage remains on the line by the time of a high-speed breaker reclose operation. The residual voltage may require up to five seconds to decay in large machines [10]. Since this residual voltage is unlikely to be in phase with the system voltage, the result can be a failed reclose attempt by the line breakers as well as damage to the motors. Thus, it is important to identify lines where high-speed reclosing needs to be delayed to allow the voltage to sufficiently decay before carrying out the reclosing operation. Fig. 6 shows a voltage signal in which sufficient time has passed to allow the voltage signal to decay to a point after which the reclosing operation was successfully completed.
For identification of this event, it must be determined whether the reclosing operation is a high-speed reclosing operation. For this work, the reclosing operation is a high-speed one if it is "blind" (i.e., without any supervision or checks) and occurs within thirty cycles of the initial current interruption by the breaker [10]. Identification of the reclosing with tapped motor loads is achieved by determining the sample points at which the: (i) voltage signal begins to decay, (ii) voltage signal reaches zero, and (iii) reclosing operation occurred. The time between these three points determines whether the reclosing is a high-speed operation. In this work and as shown in Fig. 6, these three sample points are designated as \(t_{1}\) (magenta circle), \(t_{2}\) (black square), and \(t_{3}\) (blue triangle), respectively. The location of these three sample points is determined using the RMS signal, which is calculated using equation (3) as described in Sect. II-A2 and is shown in Fig. 6 as a broken, red line. For this event, the RMS window was set to half the number of samples in each cycle (i.e., \(N_{c}/2\)).
The point \(t_{1}\) is the time at which the RMS voltage first decays below a threshold and is determined by,
\[\frac{\bar{V}(t)}{\bar{V}_{q}(t)}<\tau_{\text{S}}, \tag{19}\]
where \(\bar{V}\) is the RMS of the voltage, \(\bar{V}_{q}\) is the nominal RMS voltage as determined from the first cycle, and \(\tau_{\text{S}}=0.9\) is the empirically selected threshold for the sag in voltage indicating the start of a decay. The point \(t_{2}\) is determined as the time at which the voltage decays low enough to be considered approximately zero. An empirical threshold of \(\tau_{0}=0.01\) was used as the threshold below which the RMS voltage must reach to be considered zero. If this condition is not met, then \(t_{2}\) is the time at which the RMS voltage is at its minimum. The RMS voltage must decay to below **50%** of the nominal value for the process to continue.
The voltage decay portion is the RMS voltage between times \(t_{1}\) and \(t_{2}\) and is designated here as \(\bar{V}_{\text{D}}\). The median (i.e., middle value) of \(\bar{V}_{\text{D}}\) must be lower in magnitude than the voltage at time \(t_{1}\) and higher than the voltage at time \(t_{2}\). The mean of the first derivative of \(\bar{V}_{\text{D}}\) must also be negative to indicate a downward slope or decrease in voltage. The maximum first derivative of the voltage decay must also be less than a threshold to ensure that the voltage decay was not sudden. This condition is given by,
\[\frac{\max|\bar{V}_{\text{D}}^{\prime}|}{\bar{V}_{q}}<\tau_{l} \tag{20}\]
where \(\bar{V}_{\text{D}}^{\prime}\) is the first derivative of the decaying portion of the RMS voltage, \(\bar{V}_{q}\) is the nominal RMS voltage, and \(\tau_{l}=0.5\) is the empirically selected threshold for the maximum first derivative of the decaying voltage. The point \(t_{3}\) is the time at which the RMS voltage increases by **30%** of nominal value in one RMS sample. This condition is determined by the first derivative of the RMS signal as given by,
\[\frac{\max|\bar{V}_{\text{S}}^{\prime}|}{\bar{V}_{q}}>\tau_{\text{U}} \tag{21}\]
where \(\bar{V}_{\text{S}}^{\prime}\) is the first derivative of the portion of the RMS voltage after time \(t_{2}\), \(\bar{V}_{q}\) is the nominal RMS voltage, and \(\tau_{\text{U}}=0.3\) is the empirically selected threshold for the minimum first derivative of the reclosing voltage. Time \(t_{3}\) is the point when reclosing occurs and the voltage is restored.
Fig. 6: A representation of the case in which the voltage signal _does_ decay sufficiently prior to a successful reclosing operation in the presence of a tapped motor load.
The criteria given thus far serve to classify the event as normal reclosing with a tapped motor load. Fig. 6 is a normal event in which there was sufficient time between \(t_{2}\) and \(t_{3}\). If there is not sufficient time between these two points, then the event is "flagged" as needing attention. The condition for a sufficiently delayed (i.e., not high-speed) reclosing operation is given by,
\[t_{3}-t_{2}>\tau_{\text{HS}} \tag{22}\]
where \(t_{2}\) is the time at which the voltage first decays to zero, \(t_{3}\) is the time at which the voltage is restored, and \(\tau_{\text{HS}}=30\) cycles is the threshold for the minimum time the voltage must be zero before reclosing, as recommended in [10]. Fig. 7 shows a case in which the minimum time for which the voltage needs to be zero is not satisfied.
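The location of \(t_{1}\), \(t_{2}\), and \(t_{3}\) on the RMS voltage, together with the dead-time check, might be sketched as follows. The thresholds follow the text; treating the RMS trace as having one value per waveform sample, and the fallback behavior when a point cannot be located, are assumptions.

```python
import numpy as np

def reclosing_check(v_rms, v_rms_nominal, n_per_cycle,
                    tau_s=0.9, tau_0=0.01, tau_u=0.3, tau_hs_cycles=30):
    """Locate t1, t2, t3 on an RMS voltage trace and flag insufficient dead time.

    Returns (t1, t2, t3, flagged) as sample indices, or None if the
    sag/restoration pattern is not found.
    """
    ratio = v_rms / v_rms_nominal
    sag = np.where(ratio < tau_s)[0]
    if sag.size == 0:
        return None
    t1 = int(sag[0])                                       # start of the decay

    near_zero = np.where(ratio[t1:] < tau_0)[0]
    t2 = t1 + (int(near_zero[0]) if near_zero.size else int(np.argmin(ratio[t1:])))

    rise = np.where(np.diff(v_rms[t2:]) / v_rms_nominal > tau_u)[0]
    if rise.size == 0:
        return None
    t3 = t2 + int(rise[0]) + 1                             # voltage restored

    flagged = (t3 - t2) < tau_hs_cycles * n_per_cycle      # not enough dead time
    return t1, t2, t3, flagged
```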
#### II-C5 DC Offset
DC offsets in analog channels are a common issue, and when they are large enough, they can negatively impact RMS calculations. A large DC offset is accounted for by re-calibration of the corresponding monitoring or recording device. Automated calculation of DC offset affords utility personnel the ability to prioritize re-calibration of those devices associated with the largest amounts of DC offset. The DC offset event is characterized by an asymmetry between the positive and negative half-cycles of a voltage or current signal.
The presence and amount of DC offset is determined using both time and frequency domain analysis. In the frequency domain, a DC offset is present if the magnitude of the 0 Hz frequency component is greater than 50% of the magnitude at the fundamental frequency component (i.e., 60 Hz in the United States). Mathematically this condition can be expressed as,
\[\frac{X_{0}}{X_{1}}>\tau_{f} \tag{23}\]
where \(X_{0}\) is the magnitude of the 0 Hz frequency component, \(X_{1}\) is the magnitude at the fundamental frequency component, and \(\tau_{f}=0.5\) is empirically selected as the minimum ratio with respect to the fundamental frequency. Fig. 8 provides a representative illustration of a current signal in which a large amount of DC offset is present from 40 ms to 90 ms. Fig. 9 shows the magnitude of the zeroth through fifth harmonic of the current signal shown in Fig. 8. In this case, the 0 Hz frequency component is over two times larger than that of the fundamental frequency component (i.e., the first harmonic) and would be "flagged" as a DC offset event. Interestingly, the presence of the third harmonic indicates that another disturbance is also present within the recorded signal of Fig. 8.
If the frequency domain analysis results in the identification of a DC offset event, then time domain analysis is performed as a validation step. Time domain analysis is conducted by computing the mean over each cycle within the recorded signal. If a given cycle's mean value is zero, then there is no DC offset present within that cycle. This is because the area under the positive and negative portions of the cycle would negate each other. However, if the selected cycle's mean exceeds 50% of the nominal signal's peak value, then the DC
Fig. 8: Representative illustration of a large DC offsetβfrom 40 ms to 90 msβwithin a current signal.
Fig. 7: A representation of the case in which the voltage signal _does not_ decay sufficiently prior to a successful reclosing operation in the presence of a tapped motor load.
Fig. 9: Illustration of the zeroth through fifth harmonic ratios of current signal shown in Fig. 8.
offset event "flag" is set once more. The amount of DC offset-returned by the automated process-is,
\[\max_{i}\left|\mu_{i}\right|, \tag{24}\]
where \(\mu_{i}\) is the mean value of the \(i^{\text{th}}\) cycle within the signal being processed.
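A sketch of the frequency-domain test (23) with the time-domain validation step is shown below; the FFT-based estimate of the 0 Hz and fundamental components and the reshaping into whole cycles are implementation assumptions.

```python
import numpy as np

def dc_offset_check(x, n_per_cycle, f_s, f_nominal=60.0, tau_f=0.5, tau_t=0.5):
    """Return (flagged, offset_amount) for the DC offset event."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / f_s)
    x0 = spectrum[0]                                        # 0 Hz component
    x1 = spectrum[np.argmin(np.abs(freqs - f_nominal))]     # fundamental component
    if x0 / x1 <= tau_f:
        return False, 0.0

    # time-domain validation: per-cycle means compared to the nominal peak
    n_cycles = len(x) // n_per_cycle
    cycles = np.reshape(x[:n_cycles * n_per_cycle], (n_cycles, n_per_cycle))
    means = cycles.mean(axis=1)
    peak_nominal = np.max(np.abs(cycles[0]))
    offset = float(np.max(np.abs(means)))
    return offset > tau_t * peak_nominal, offset
```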
#### II-C6 Motor Starting
Instantaneous increases in current may be due to faults, motor starts, transformer energizations, or other events. Signatures present within the recorded signals can be used to distinguish and classify each of these events. PQ disturbances can then be correlated by event classification. In the case of motor starting, the voltage sags and the current can increase to five to six times its rated value [11]. It is challenging to set protective relays in such a way to enable recognition of a motor starting event rather than recognizing the event as a fault on the system. The automated process described in this section is developed under the assumption that the corresponding relays are properly set so they do not trip open when motor inrush current is present. Fig. 10a and Fig. 10b show representative illustrations of motor starting voltage and current signals, respectively.
The automated process checks for a voltage sag below 95% of the signal's nominal RMS value and a current spike to twice the CT's rated value determined by (13). If both of these conditions persist for at least ten consecutive cycles, then the first indicator of motor starting is identified. The persistence of both conditions-for ten or more consecutive cycles-distinguishes motor starting events from a fault condition, which typically occurs for only several cycles before the relay trips open the breaker. Motor starting events are also associated with a frequency response that is low in harmonic content. Thus, if none of the voltage or current signals' harmonics exceed **15%** of the fundamental frequency component's magnitude, then the second indicator of motor starting is identified. The final indicator for motor starting is that all three conditions (i.e., voltage sag, current spike, and harmonics below 15% of the fundamental) occur on all three phases, because motors are three-phase devices.
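The sag/spike persistence condition for one phase could be checked as sketched below; the RMS traces are assumed to be computed sample-by-sample, and the harmonic and three-phase checks are assumed to be applied separately.

```python
import numpy as np

def motor_start_persistence(v_rms, i_rms, v_nominal_rms, i_ct_rated,
                            n_per_cycle, min_cycles=10):
    """True if the voltage sag (< 95 % nominal) and current spike (> 2x CT
    rating) hold simultaneously for at least `min_cycles` consecutive cycles."""
    condition = (v_rms < 0.95 * v_nominal_rms) & (i_rms > 2.0 * i_ct_rated)
    run, best = 0, 0
    for ok in condition:
        run = run + 1 if ok else 0
        best = max(best, run)
    return best >= min_cycles * n_per_cycle
```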
#### II-C7 Variable Frequency Drive Motor Starting
Some motors utilize electronic starting (e.g., Variable Frequency Drives - VFDs) to bring the motor up to speed in a controlled manner to limit voltage supply disturbance(s). VFDs produce unique harmonic patterns, which allows these events to be easily identified by our automated process. When a VFD motor starts it creates a very distinct current signal. A representative illustration of this distinct current signal can be seen in Fig. 11.
In Fig. 11, each phase has two pulses per half-cycle. The number of pulses per half-cycle indicates the type of VFD (e.g., six-pulse, twelve-pulse, etc.), thus VFD motor starting events are identified by counting the number of times the current signal drops below 50% of each cycle's maximum value. Two pulses in each half cycle of a current signal for each phase (e.g., Fig. 11) would indicate a six-pulse VFD. The number of pulses for the drive is given by,
\[N_{p}=\frac{3}{2}\times mode(K),\ K>2 \tag{25}\]
where \(K\) is number of times the current crosses 50% of each cycle's maximum value every half-cycle, and \(mode(K)\) refers to the most often occurring value of \(K\). The current must cross the threshold more than two times for at least **eight** cycles during the event to be considered VFD motor starting. After \(N_{p}\) is calculated, harmonic analysis is conducted, because VFD motor starting events result in dominant harmonics on either side of an integer multiple of \(N_{p}\). Fig. 12 shows the harmonics for the six-pulse (i.e., \(N_{p}=6\)) VFD motor starting event illustrated in Fig. 11. The fifth and seventh harmonics are the two most dominant harmonics and occur on either side of the sixth harmonic, which is equal to that of \(N_{p}=6\). The value of \(N_{p}\) is validated by ensuring that the dominant
Fig. 11: An illustration showing the distinct current signal generated during a six-pulse VFD motor start event.
Fig. 10: Voltage and current signals showing signal characteristics associated with a motor starting event.
harmonics are at least **five** times larger than the value of the harmonics at integer multiples of the \(N_{p}\). This validation check is performed by,
\[\frac{H_{kN_{p}\pm 1}}{H_{kN_{p}}}>\tau_{\mathrm{V}},\ (k=1,2,3,4) \tag{26}\]
where \(N_{p}\) is the number of pulses in the VFD, \(H_{kN_{p}}\) is the harmonic at an integer multiple of \(N_{p}\), \(k\) is an integer, and \(\tau_{\mathrm{V}}=5\) is the empirically determined threshold for the ratio of the dominant harmonics to those at integer multiples of \(N_{p}\). If equation (26) is satisfied, then the number of predicted pulses is deemed correct. Finally, the event is identified as VFD motor starting so long as all three currents (i.e., phase A, B, and C) increase over the event's duration.
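A sketch of the pulse-count estimate in (25) is given below. The text does not state whether \(K\) counts only downward crossings or all crossings of the 50% level, so all crossings are counted here; the persistence check and variable names are likewise assumptions.

```python
import numpy as np

def vfd_pulse_count(i_cycles, min_cycles=8):
    """Estimate N_p = 1.5 * mode(K) from per-half-cycle threshold crossings.

    i_cycles : 2-D array (n_cycles, N_c) of current samples, one row per cycle.
    Returns None if the pattern is not persistent enough to be a VFD start.
    """
    ks = []
    for cycle in i_cycles:
        thresh = 0.5 * np.max(np.abs(cycle))
        half_len = len(cycle) // 2
        for half in (cycle[:half_len], cycle[half_len:]):
            below = (np.abs(half) < thresh).astype(int)
            ks.append(int(np.sum(np.diff(below) != 0)))    # crossings of the 50 % level
    ks = np.array(ks)
    if np.sum(ks > 2) < 2 * min_cycles:                    # persistence over >= 8 cycles
        return None
    vals, counts = np.unique(ks[ks > 2], return_counts=True)
    return int(round(1.5 * vals[np.argmax(counts)]))
```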
#### II-C8 Melting Fuse
Unlike a breaker, a blown (a.k.a., melted) fuse requires utility personnel to physically replace it, so it is helpful to distinguish fuse faults from breaker faults. These two faults are distinguished from one another by the speed at which the fault is cleared. Breakers require two or more cycles to clear a fault while fuses require less than two cycles. Fig. 13 shows an example of a fuse melting event that is cleared in a little more than one cycle.
The key to automated identification of fuse melting events is accurate determination of the fault's inception and clearing points. A fuse melting event occurs if the total clearing time was less than one and a half cycles and is determined by,
\[|t_{I}-t_{C}|<\tau_{c} \tag{27}\]
where \(t_{I}\) is the inception point, \(t_{C}\) is the clearing point, and \(\tau_{c}=1.5\) cycles is the threshold for the maximum fuse clearing time.
Automated identification of a fuse melting event is initialized by determining if the event persisted for at least a quarter of a cycle and the current reaches at least twice its nominal value over the event's duration. The cycle before and just after the portion associated with these two conditions is then analyzed one half-cycle at a time to determine the fault inception and clearing points. The three possible approaches used to determine these points are: (i) a sign change in the first derivative, (ii) a sudden increase in the second derivative, and (iii) the current signal's zero crossings.
The first derivative approach is implemented using equation (4) as described in Sect. II-A3. A sign change in the first derivative before or after the spike in current indicates the fault inception and clearing points. This approach is used to determine the inception and clearing points of the fuse melting event shown in Fig. 13 where the red circle indicates the fault inception point, and the black square indicates the fault clearing point.
If the first derivative approach is unsuccessful (i.e., a sign change in the first derivative does not exist), then the second derivative is used as described in Sect. II-A3. The condition for a large second derivative is given by,
\[\frac{\max|I^{\prime\prime}(n)|}{\hat{I}_{q}}>\tau_{\mathrm{D2}} \tag{28}\]
where \(I^{\prime\prime}(n)\) is the second derivative of the current signal, \(\hat{I}_{q}\) is the nominal peak current, and \(\tau_{\mathrm{D2}}=0.02\) is the empirically selected threshold for the minimum ratio of the second derivative of the current to the nominal value. This approach was used to determine the fault inception point of Fig. 1 as described in Sect. II-A3.
If the second derivative approach is also unsuccessful (i.e., the minimum threshold is not met), then the fault inception and clearing points are assumed to be the zero crossings just before and just after the current spike, respectively. After the fault inception and clearing points are determined, equation (27) is used to determine whether the fault was short enough in duration to be a melted fuse.
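A simplified sketch of the inception/clearing-point search and the check in (27) follows; it keeps only the first-derivative sign-change test with a fallback to the edges of the current spike, so the second-derivative and zero-crossing fallbacks described above are omitted, and all names are assumptions.

```python
import numpy as np

def is_melted_fuse(i_signal, n_per_cycle, i_nominal_peak, tau_c_cycles=1.5):
    """Apply (27): a clearing time shorter than 1.5 cycles suggests a melted fuse."""
    above = np.where(np.abs(i_signal) > 2 * i_nominal_peak)[0]
    if above.size == 0:
        return False
    start, stop = int(above[0]), int(above[-1])
    d1 = np.diff(i_signal)

    # inception: last first-derivative sign change before the spike
    lo = max(start - n_per_cycle, 0)
    changes = np.where(np.diff(np.sign(d1[lo:start])) != 0)[0]
    t_inception = lo + int(changes[-1]) if changes.size else start

    # clearing: first sign change after the spike
    changes = np.where(np.diff(np.sign(d1[stop:stop + n_per_cycle])) != 0)[0]
    t_clearing = stop + int(changes[0]) if changes.size else stop

    return (t_clearing - t_inception) / n_per_cycle < tau_c_cycles
```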
#### II-C9 Ferroresonance
Ferroresonance is electric circuit resonance that occurs when a circuit containing a nonlinear inductance is fed from a source that has a series capacitance connected to it. In a transmission system, ferroresonance can occur when a breaker-with grading capacitors-is used to de-energize a bus that has magnetic Voltage Transformers (VTs)
Fig. 12: The harmonic ratios calculated from the current signal of the six-pulse VFD motor start event shown in Fig. 11.
Fig. 13: A current signal during a fuse melting event that lasts just over one cycle.
connected to it. The described scenario presents a serious safety risk to utility personnel and damage risk to equipment, because severe overvoltages can occur despite the breaker being in an open state. Ferroresonance manifests in the voltage signals and causes the signals to take on a square-wave-like appearance. Fig. 14 provides a representative illustration of the square-wave appearance that a voltage signal can take on due to ferroresonance. Another characteristic of ferroresonance events is that the current is normally zero during the event. This is due to the line being de-energized; however, depending on the recording device's location, the current can be recorded as a nominal signal.
Ferroresonance events are identified using four criteria: (i) a large difference between discrete samples in the voltage signal, (ii) this behavior continuing for a certain number of cycles and often enough during that time, (iii) significant harmonic content present in the voltage signal, and (iv) the current signal being recorded as zero or a nominal waveform.
The first criterion is met if the first derivative of the voltage signal exceeds 50% of nominal peak voltage as given by,
\[\frac{|V^{\prime}(n)|}{\hat{V}_{q}}>\tau_{\text{F}} \tag{29}\]
where \(V^{\prime}(n)\) is the first derivative of the voltage signal, \(\hat{V}_{q}\) is the nominal peak voltage, and \(\tau_{\text{F}}=0.5\) is the empirically selected threshold for the minimum ratio of the first derivative of the voltage to the nominal value. The second criterion is met if this threshold is exceeded a minimum of **five** times, occurs at least every **three** cycles, and occurs for a length of at least **five** cycles. The third criterion is met if one of the harmonic currents is greater than **5%** of the fundamental. Finally, the fourth criterion is met if the RMS current is recorded as zero or the current signal is nominal, which is characterized by a small number of first derivative sign changes. This nominal condition is given by,
\[\frac{N_{\text{I}}}{N}<\tau_{\text{I}} \tag{30}\]
where \(N_{\text{I}}\) is the number of first derivative sign changes in the current as calculated using equation (4), \(N\) is the total number of samples in the waveform, and \(\tau_{\text{I}}=0.3\) is the empirically selected threshold for the ratio of sign changes to total samples.
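Two of the four ferroresonance criteria, the large sample-to-sample voltage steps and the zero-or-nominal current test of (30), might be screened as sketched below; the recurrence ("at least every three cycles") and harmonic checks are left to the caller, and the names are assumptions.

```python
import numpy as np

def ferroresonance_screen(v, i, v_nominal_peak, n_per_cycle,
                          tau_f=0.5, tau_i=0.3, min_hits=5, span_cycles=5):
    """Return (large_steps_ok, current_nominal_or_zero)."""
    dv = np.abs(np.diff(v)) / v_nominal_peak
    hits = np.where(dv > tau_f)[0]
    large_steps_ok = (
        hits.size >= min_hits
        and (hits[-1] - hits[0]) >= span_cycles * n_per_cycle
    )

    sign_changes = np.sum(np.diff(np.sign(np.diff(i))) != 0)
    current_nominal_or_zero = bool(
        np.allclose(i, 0.0) or sign_changes / len(i) < tau_i
    )
    return large_steps_ok, current_nominal_or_zero
```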
#### II-C10 Capacitor Bank Switching
One of the most common events on a power system is capacitor bank switching. Capacitor bank switching induces temporary voltage transients that can create PQ events. A typical capacitor bank switching transient is characterized by a quick depression of the voltage signal toward zero, followed by an overshoot and subsequent transient disturbance-lasting approximately one cycle-as the system returns to steady state. These voltage transients may be recorded by devices that are connected to the same bus as the capacitor bank as well as those connected to a different bus. Based upon this fact, the presented automated process is designed to identify capacitor switching for both recording device connection scenarios. Fig. 15 shows an example of capacitor bank switching in which a broken, red line highlights the portion of the recorded signal associated with the event.
In a power transmission system, capacitor banks are simultaneously switched in on all three phases. Although Fig. 15 shows only a single phase, the other two phase voltage signals are similar in appearance, but will not be identical due to the 120\({}^{\circ}\) phase difference between each of the three signals (i.e., the switching event occurs at different points of the corresponding phase's sinusoidal signal). The disturbance is located within the signal using the first cycle as a reference as described in Sect. II-A5 and shown in Fig. 2. The condition for the difference between the actual and ideal voltage signals is given by,
\[\frac{|V_{\Delta}|}{\hat{V}_{q}}>\tau_{\Delta} \tag{31}\]
where \(V_{\Delta}\) is the difference between the actual and ideal voltage signals, \(\hat{V}_{q}\) is the nominal peak voltage value, and \(\tau_{\Delta}=0.02\) is the threshold empirically selected for this ratio.
Fig. 14: Illustration of a voltage signal collected during a ferroresonance event.
Fig. 15: Illustration of a voltage signal collected during capacitor switching event.
Once the presence and location of the disturbance has been determined, the disturbance's duration is calculated to ensure that it does not exceed **two** cycles. The voltage signal's peak values must satisfy one of these two criteria: (i) one peak **2%** above nominal value and no more than one peak **10%** above nominal value; (ii) exactly two peaks **10%** above nominal value occurring in neighboring cycles.
The next step is to determine the three characteristic points highlighted on the waveform of Fig. 15, which are designated as \(t_{1}\) (red circle), \(t_{2}\) (green square), and \(t_{3}\) (black triangle). These points are indicative of a capacitor switching event. First, the portion of the voltage signal one half-cycle before and one half-cycle after the highest peak value is extracted and designated as \(V_{\text{O}}\). The point \(t_{1}\) is determined as the first point in which the voltage signal's first derivative exceeded a certain threshold as given by,
\[\frac{|V_{\text{O}}^{\prime}(n)|}{\hat{V}_{q}}>\tau_{\text{O}} \tag{32}\]
where \(V_{\text{O}}^{\prime}(n)\) is the first derivative of the overvoltage cycle of the voltage signal, \(\hat{V}_{q}\) is the nominal peak voltage value, and \(\tau_{\text{O}}=0.02\) is the threshold empirically selected for this ratio. The first occurrence of this condition is determined to be \(t_{1}\). The point \(t_{2}\) occurs at the lowest point of the signal at which the magnitude of the voltage signal has dropped below **90%** of the nominal peak value. The point \(t_{3}\) is then determined as the time index of the highest peak of the voltage signal \(V_{\text{O}}\).
The location of these three characteristic points is then validated using the following three checks: (i) the voltage magnitudes at these points match the expected values, (ii) a nominal number of samples separates the overvoltage peak and the peak prior to it, and (iii) the waveform slope is reversed at \(t_{1}\). For the first check, the expected voltage magnitudes at \(t_{1}\), \(t_{2}\), and \(t_{3}\) must follow the inequality given by,
\[|V_{t_{2}}|<|V_{t_{1}}|<|V_{t_{3}}|, \tag{33}\]
where \(|V_{t_{1}}|\), \(|V_{t_{2}}|\), and \(|V_{t_{3}}|\) are the voltage magnitudes at times \(t_{1}\), \(t_{2}\), and \(t_{3}\), respectively. The second check is that the prior peak must occur approximately \(N_{c}/2\) samples before the overvoltage peak, as determined by,
\[\frac{N_{\text{PB}}-N_{c}/2}{N_{c}}<\tau_{\text{P}} \tag{34}\]
where \(N_{\text{PB}}\) is the number of samples between the overvoltage peak and the peak before it, \(N_{c}\) is the number of samples in each cycle, and \(\tau_{\text{P}}=0.1\) is the threshold empirically selected for this ratio. Finally, the third check is validated using (4) as described in Sect. II-A3. If the first derivative of the voltage signal leading up to \(t_{1}\) is of opposite sign to the first derivative of the voltage between \(t_{1}\) and \(t_{2}\), then the third check is met. After all these criteria are met for one of the three voltage phases, the other two phases are analyzed to ensure that some form of disturbance is present.
#### II-C11 Lightning Strikes
Transient overvoltages due to lightning strikes on a transmission line are typically impulses with a rise and decay time in the microseconds. Due to limitations of instrument transformers to pass these high frequencies and instrumentation sampling rates, lightning strike events are not readily identified. A representative voltage signal that includes a lightning strike event is shown in Fig. 16.
First, the automated identification process attempts to identify the event as a capacitor bank switching event (Sect. II-C10) and then as a melting fuse event (Sect. II-C8). These steps are taken to ensure that a lightning strike event is not incorrectly identified as either of these two events, which, although similar to a lightning strike, are easily distinguished from it as well as from one another. If the event is not identified as a capacitor bank switching or melting fuse event, then the disturbance is isolated from the overall signal using the exact same method as that given in equation (31) for the isolation of the capacitor bank switching event's disturbance. The disturbance isolation process is repeated for each lightning strike, and the longest strike duration is checked to ensure that it does not exceed **one** cycle. If more than **five** disturbances are isolated, then the event is not identified as a lightning strike. In all of the processed data, lightning did not strike more than three times during a single recording. So long as no more than three lightning strike disturbances are isolated, then the automated process identifies the event as a lightning strike and returns the number of strikes along with the disturbance's duration in seconds.
#### II-C12 Harmonic Resonance
Power systems have natural frequencies that are a function of the system's inductive and capacitive impedance. When a nonlinear load on the power system-such as a VFD-generates a frequency that is a natural frequency (i.e., a multiple of the fundamental frequency) of the power system, then a resonance condition can result. This resonance can subject equipment to overvoltages or currents, which can result in equipment failure or misoperation. Thus, it is important to detect harmonic resonance conditions quickly,
Fig. 16: Illustration of a voltage signal collected during a lightning strike event.
so that appropriate and necessary actions can be taken to correct the problem(s). Fig. 17 shows an example case of harmonic resonance on an operationally recorded voltage signal.
Harmonic resonance is characterized by the presence of high frequency content in the voltage signals. Based upon this information, the automated identification process first calculates the Total Harmonic Distortion (THD) of the voltage signal by,
\[V_{\text{THD}}=\frac{\sqrt{\sum\limits_{i=2}^{M}|H_{i}|^{2}}}{H_{1}}, \tag{35}\]
where \(H_{i}\) is the \(i^{\text{th}}\) harmonic, \(H_{1}\) is the fundamental frequency, \(M=100\) is the total number of harmonics used for the calculation, and \(|\bullet|\) denotes the magnitude [12]. If the THD is greater than **8%**, then the process continues; otherwise, it moves on to the next event category. A value of 8% was empirically selected, but can be adjusted as more data becomes available or based upon power system specifics.
If the THD threshold is satisfied, then the automated identification process determines whether the sixth or one of the higher harmonics is more than 5% of the fundamental frequency's magnitude. If this is the case, then the sign changes in the first derivative are calculated for each cycle using (4) as described in Sect. II-A3. The number of first derivative sign changes in each cycle must be at least **10%** of the number of samples in each cycle \(N_{c}\) and must also occur across **three** cycles. If all these criteria are satisfied, then the automated process identifies the event as harmonic resonance.
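The THD of (35) and the first two screening steps could be computed as sketched below, assuming the harmonic magnitudes \(H_{1},\ldots,H_{M}\) have already been extracted; indexing and names are assumptions.

```python
import numpy as np

def total_harmonic_distortion(h):
    """Compute (35) from harmonic magnitudes h = [H_1, H_2, ..., H_M]."""
    h = np.abs(np.asarray(h, dtype=float))
    return np.sqrt(np.sum(h[1:] ** 2)) / h[0]

def harmonic_resonance_screen(h, thd_threshold=0.08, high_harmonic_ratio=0.05):
    """THD above 8 % and a sixth-or-higher harmonic above 5 % of the fundamental."""
    h = np.abs(np.asarray(h, dtype=float))
    thd_ok = total_harmonic_distortion(h) > thd_threshold
    high_ok = bool(np.any(h[5:] > high_harmonic_ratio * h[0]))   # h[5] is the 6th harmonic
    return thd_ok and high_ok
```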
#### II-C13 Improper Voltage Transformer Secondary Grounding
It is good design practice to use a single and solid grounding point on an instrument VT's secondary [13]. Otherwise, the result may be incorrect secondary voltage signals in both magnitude and angle, which can lead to the misoperation of protective relays. This can be exacerbated when faults occur on the lines protected by these relays.
A key indicator of improper VT secondary grounding is when one voltage phase is sagged while another one is swelled. Fig. 18 provides a representative example of this indicator in which the Phase B-C voltage signal is experiencing a sag from 250 ms to 300 ms (Fig. 18a) while the Phase C-A voltage signal experiences a swell over the same time period (Fig. 18b). Automated identification of improper VT secondary grounding is facilitated by determining if a voltage sag and swell simultaneously exists on two of the three voltage phases. In this work, a sag occurs when one of the voltage signal's peaks _falls_ below the nominal peak voltage by more than **5%**, and a swell occurs when one of the voltage peaks _rises_ above the nominal peak by more than **5%**. The phase angle between the sagged and swelled voltage phases is calculated by,
\[\theta=\cos^{-1}\left(\frac{\mathbf{V}_{\alpha}\cdot\mathbf{V}_{\beta}}{|V_{ \alpha}||V_{\beta}|}\right), \tag{36}\]
where \(\mathbf{V}_{\alpha}\) and \(\mathbf{V}_{\beta}\) are two of the three faulted voltage phasors, \(\cdot\) denotes dot product, and \(\theta\) is the phase angle between \(\mathbf{V}_{\alpha}\) and \(\mathbf{V}_{\beta}\). The phase angle is calculated between phases: A to B, B to C, and A to C. In a balanced system, the nominal angle between two voltage phases is 120\({}^{\circ}\)[14]. If the phase angle deviates from this 120\({}^{\circ}\) nominal angle by more than **5\({}^{\circ}\)**, then the event is identified as an improper VT secondary grounding event.
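The angle test of (36) reduces to a few lines; representing the phasors as two-element [real, imaginary] vectors is an assumption, and complex inputs with numpy.angle would work equally well.

```python
import numpy as np

def phase_angle_deg(v_alpha, v_beta):
    """Angle between two voltage phasors per (36), in degrees."""
    v_alpha = np.asarray(v_alpha, dtype=float)
    v_beta = np.asarray(v_beta, dtype=float)
    cos_theta = np.dot(v_alpha, v_beta) / (np.linalg.norm(v_alpha) * np.linalg.norm(v_beta))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def improper_vt_grounding(theta_deg, tol_deg=5.0):
    """Flag the event if the angle deviates from the 120-degree nominal by more than 5."""
    return abs(theta_deg - 120.0) > tol_deg
```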
#### II-C14 Incipient Capacitive Voltage Transformer Failure
Capacitive Voltage Transformers (CVTs) supply voltage to protective relays, so it is very important that the CVT is measuring voltage accurately. If a catastrophic CVT failure results in a complete loss of this voltage, then the affected relays detect the loss using Loss of Potential (LOP) logic and act accordingly [15]. However, relays are not equipped to detect a CVT that is showing early signs of failure by
Fig. 17: Illustration of a voltage signal collected during a harmonic resonance event.
Fig. 18: Voltage signals indicating improper VT secondary grounding due to the simultaneous presence of a voltage sag and swell on two phases.
providing incorrect data, but has not yet failed to provide the supply voltage. The developed automated identification process is designed to detect early indicators of impending CVT failures to facilitate proper actions by utility personnel or equipment. Additionally, a CVT failure poses a significant safety risk to any utility personnel who happen to be nearby when it fails. The voltage signal shown in Fig. 19 provides a representative illustration of the early indicators of an impending (a.k.a., incipient) CVT failure.
The first indicator of an incipient CVT failure event is that one of the voltage signal's peaks will contain a rise or fall of more than **10%** of the nominal peak value, and this behavior must persist for at least **three** cycles. The second incipient CVT failure indicator is that the disturbance portion of the voltage signal will differ from its corresponding nominal signal by more than \(\tau_{\Delta}=0.02\) as introduced in Sect. II-A5 and implemented in (31). Since CVTs are single-phase devices, incipient CVT failure would also only occur in one phase, which is a differentiating factor from other events. Finally, the current signal is analyzed to ensure that no disturbance is present since this event type is specific to voltage signals.
## III Results
The performance of the developed rule-based, automated electrical disturbance identification process is assessed using a data set comprised of 160 total event records that were collected by field devices operating in a power utility's transmission system. This data set contains approximately ten records for each of the discussed events. The data set also contains events with undisturbed voltage and current signals as well as single-phase and multi-phase events. Each phase of every single-phase event is processed, thus tripling the size of the associated event's data set. False positive and false negative event identifications are counted as incorrect or mis-identifications. If a signal did not contain one of the listed electrical disturbances and the automated process did not identify it as a disturbance, then it was counted as a correct result. Overall automated identification results are presented in Table I for each of the fourteen event types. Table I provides the: number of events analyzed, number correctly identified, and the percent correct accuracy for each event type.
### _Results: Current Transformer Saturation_
The accuracy of the automated process in determining CT saturation is 96.67% (i.e., correctly identifying 464 out of 480 total signals processed). The test for CT saturation proved to be challenging due to the complexity of this event. The range of criteria used may not always be met for each CT saturation event. For example, the A/D clipping waveform of Fig. 4 appears to contain CT saturation based on the characteristic "kneeing" in the first two cycles of the fault. However, DC offset is not present and the rating of the CT was likely not exceeded, so this event could be incorrectly classified. Also, for most of this testing, a CT ratio of 1,200:5 is used for each event type regardless of the actual CT ratio. This was done for simplicity, but actual CT ratios from COMTRADE configuration files will be used when these tools are implemented in a production environment. When the actual CT ratio is known, then the rated current of the CT will be known and the automated process is able to accurately determine whether this rating was exceeded.
### _Results: Analog-to-Digital Converter Clipping_
The accuracy in detecting the A/D converter clipping event is very high, achieving an accuracy of 99.27% (i.e., correctly identifying 953 out of 960 total signals processed). The threshold for the number of consecutive repeated samples is set to four samples. There are some events where clipping looks obvious to the human eye, but the samples that look repeated are slightly different. Those results are counted as incorrect, even though the automated process functioned properly. Each utility's personnel could decide whether events like these actually are a problem with the A/D converter. The A/D clipping detection methods should return proper results 100% of the time if the repeated samples have the exact same value.
| Event Type | # Events | # Correct | % Correct |
| --- | --- | --- | --- |
| CT Saturation | 480 | 464 | 96.67% |
| A/D Clipping | 960 | 953 | 99.27% |
| Induced Transient Noise | 480 | 477 | 99.38% |
| High-Speed Reclosing | 160 | 160 | 100% |
| DC Offset | 960 | 956 | 99.58% |
| Motor Starting | 160 | 160 | 100% |
| VFD Starting | 160 | 160 | 100% |
| Blown Fuse | 160 | 159 | 99.38% |
| Ferroresonance | 480 | 476 | 99.17% |
| Capacitor Switching | 160 | 159 | 99.38% |
| Lightning | 480 | 477 | 99.38% |
| Harmonic Resonance | 480 | 480 | 100% |
| VT Secondary Grounding | 160 | 159 | 99.38% |
| CVT Failure | 160 | 154 | 96.25% |

TABLE I: Automated electrical disturbance event identification performance results.
Fig. 19: Illustration of a voltage signal showing an incipient CVT failure event.
If they do not, then a very small tolerance (e.g., 10 V or 1 A) could be allowed between the magnitudes of samples that appear to be the same value.
### _Results: Induced Transient Noise from Switching_
Initial identification performance for this event was poor at roughly 70%. In an effort to improve automated identification of induced transient noise from switching events, the automated process was modified by incorporating a rule in which the presence of ferroresonance is checked first, then harmonic resonance, and finally induced transient noise from switching, under the assumption that the three events do not take place at the same time. The reason for this ordering is purely the similarity between these events and the lack of distinguishing characteristics in this one. Also, a change was made to use the first cycle as a reference to isolate the disturbance as described in Sect. II-A5. These changes result in an improved accuracy of 99.38% (i.e., correctly identifying 477 out of 480 total signals processed).
### _Results: High-Speed Reclosing with Tapped Motor Loads_
The accuracy of this event was 100% in the tests that were conducted. However, there were only two events in which the voltage did not sufficiently decay before reclosing since these events do not often occur if utilities are aware of special settings that are needed for reclosers on such lines with tapped motor loads. Thus, a larger data set will be needed to determine the accuracy of this algorithm.
### _Results: DC Offset_
The DC offset algorithm is one that is well-suited for rule-based analytics as shown by its accuracy of 99.58% (i.e., correctly identifying 956 out of 960 total signals processed). The frequency analysis method combined with the cycle mean method are very accurate at identifying DC offset. A few signals were falsely classified as DC offset. Signals such as the CT saturation example in Fig. 3 contain a very steep spike at the fault inception, so DC offset will be seen in that first half-cycle. Further logic could be added in future work to account for these faults so that DC offset is not detected in the first half-cycle.
### _Results: Motor Starting_
Motor starting events were very straightforward to identify. 160 out of 160 total signals were correctly identified. One reason for the 100% accuracy is that the other events analyzed did not have many similarities with motor starting. Transformer inrush would produce a similar signal signature, but the differentiating factor is that motor starting is not as rich in harmonics. Motor inrush is also different from single-phase (i.e., the most often occurring) faults in that the elevated current always occurs across all three phases. For these reasons, the motor inrush classification process should be one of the most robust.
### _Results: Variable Frequency Drive Motor Starting_
This event type also produced a 100% accuracy when tested (i.e., correctly identifying 160 out of 160 total signals processed). However, the 10 VFD starting events used were all from the same motor on the transmission system since these devices are not extremely common. More data will be needed to test the accuracy of the process for this event type.
### _Results: Melted Fuse_
The accuracy in classifying melted fuse events is 99.38% as it correctly identified 159 out of 160 total signals. Melted fuse events are relatively straightforward to identify due to their short duration. One incorrect classification stemmed from an event containing a minor fault that was incorrectly labeled as a fuse fault. Although the fault lasted several cycles, the part of the current that exceeded the threshold was short enough to be classified as a blown fuse. The process of finding the fault inception and clearing points is very nuanced, and it may not always be 100% accurate in determining the clearing time, especially for faults that do not greatly (e.g., two times the rated current) exceed the predefined threshold.
### _Results: Ferroresonance_
Ferroresonance is a unique event that was classified with 99.17% accuracy by these analytics (i.e., correctly identifying 476 out of 480 total signals processed). In most of the data studied, the signals contain large gaps between samples (i.e., at least 50% of nominal peak value). A few signals did not have such large gaps, possibly due to the ferroresonance being less severe. These events were not identified as ferroresonance, so new methods will need to be developed in the identification of these events. One such method could be incorporating breaker statuses (i.e., open or closed) into the process since ferroresonance usually occurs with the breaker(s) in the open state.
### _Results: Capacitor Switching_
The capacitor switching classification process correctly identified 159 out of 160 total signals resulting in an accuracy of 99.38%. The methods employed for this event type are very detailed and are much more likely to generate false negatives than false positives. As long as the characteristic three points on the signal of Fig. 15 are present, the results should be accurate. The only capacitor switching event that was missed was one in which the voltage transient occurred on the first cycle. Since the nominal peak value is taken using the first cycle as a reference, the rest of the processing becomes incorrect. This issue could be solved by using a predefined nominal peak value for each voltage level from an external data source.
### _Results: Lightning_
The automated process correctly identified whether or not lightning was present for 477 out of 480 signals for an accuracy of 99.38%. Originally, many capacitor switching events were characterized as lightning. To remedy this, the process
was updated such that the presence of lightning would only be checked if capacitor switching returned negative. The lightning detection process relies on an accurate determination of the duration of the disturbance. A short disturbance distinguishes lightning from other events. A few events were discovered in which the algorithm determined the disturbance to be longer than it was, which could possibly be due to an outside disturbance unrelated to lightning. This phenomenon results in a few mis-classifications.
### _Results: Harmonic Resonance_
Harmonic resonance is difficult to distinguish from ferroresonance, so a modification was made to only run the harmonic resonance algorithm if ferroresonance has not occurred. This resulted in an accuracy of 100% with 480 out of 480 signals being correctly identified. There are 5 different harmonic resonance event records in the data set for a total of 15 voltage signals, so more data will be needed to test the robustness of this algorithm. One future improvement that could be made is to detect resonance below the 5th harmonic since resonance conditions can sometimes develop at those frequencies.
### _Results: Voltage Transformer Secondary Grounding_
The classification for this event was very successful with an accuracy of 99.38% on the 160 signals studied. There are a large number of events in which there is improper VT secondary grounding. Many of the CT saturation faults are not exactly 120\({}^{\circ}\) apart in their voltage phase angles, which would indicate improper grounding. This event is straightforward to classify by rule-based techniques. The only issue that may occur is if inaccurate data is fed into the automated process.
### _Results: Incipient Capacitive Voltage Transformer Failure_
The results for this event are not as accurate as the others studied. 154 out of 160 signals (i.e., 96.25%) were correctly classified as demonstrating incipient CVT failure or not. The lower accuracy is due to the inconsistency in CVT failure events. CVTs could be in different stages of their incipient failure, so the signal signatures will not look the same. The differentiating factor though is that these events are assumed to only occur on one phase at a time, which improves the results.
### _Results: Cyclic Histogram-based Continuous Signal Analysis_
For continuous signal analysis, the developed Python script successfully generated the time and frequency-based cyclic histograms and associated residual histograms from a DFR generated CSV file that contains a twenty-four hour period of continuously recorded signals, which includes all three voltage and current signals. After continuous signal processing, the required storage space was reduced by a factor of 320 (i.e., 35 GB to 72 MB). Fig. 20 and Fig. 21 provide representative examples of the time-based and residual cyclic histograms for one hour, respectively. The same hour of continuous data associated with the cyclic histogram in Fig. 20 is used to generate the frequency-based histogram in Fig. 22. Current efforts are focused on integrating the developed Python script into a power utility's DFR. Part of this integration involves reducing the amount of DFR compute and memory resources needed to generate the frequency-based histogram and its residual representation. The overarching goal is to use the cyclic histograms to detect deviations within the corresponding signal-that would not ordinarily result in an electrical disturbance event-for incipient prediction, detection, identification,
Fig. 21: Residual voltage magnitude histogram for one hour of operational data from a 161 kV transformer DFR. The range of magnitude has changed from 270 kV in the cyclic histogram to 7 kV in the residual.
Fig. 22: Voltage frequency histogram for one hour of operational data from a 161 kV transformer DFR where \(f\in[59.9,60.1]\)Hz. It is observed that the frequency tends to operate 0.03 Hz below the intended 60 Hz, but no fault has occurred.
Fig. 20: Voltage magnitude cyclic histogram for one hour of operational data from a 161 kV transformer DFR.
or analysis. Ongoing work is focused on determining the best method of presenting the cyclic histograms so they are informative to PQ engineers.
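As a rough illustration of the continuous-signal processing described above, the sketch below folds a recorded waveform onto the nominal 60 Hz cycle and accumulates a time-based cyclic histogram together with a residual histogram of the deviation from the mean cycle shape. The binning, the residual definition, and the sampling rate are assumptions for illustration; the frequency-based histogram is handled analogously and is omitted.

```python
import numpy as np

def cyclic_histograms(v, fs, f0=60.0, n_phase=128, n_bins=200):
    """Fold a continuously recorded waveform onto the nominal power cycle and
    accumulate a (position-in-cycle, value) histogram, plus a residual
    histogram of the deviation from the mean cycle shape."""
    t = np.arange(v.size) / fs
    pos = np.minimum(((t * f0) % 1.0 * n_phase).astype(int), n_phase - 1)

    cyc, _, _ = np.histogram2d(pos, v,
                               bins=[n_phase, np.linspace(v.min(), v.max(), n_bins + 1)])

    counts = np.bincount(pos, minlength=n_phase).astype(float)
    mean_cycle = np.bincount(pos, weights=v, minlength=n_phase) / np.maximum(counts, 1.0)
    resid = v - mean_cycle[pos]
    res, _, _ = np.histogram2d(pos, resid,
                               bins=[n_phase, np.linspace(resid.min(), resid.max(), n_bins + 1)])
    return cyc, res

# Example on one minute of a synthetic 161 kV-class waveform (peak ~131 kV line-to-ground).
fs = 7_680                                   # assumed 128 samples per 60 Hz cycle
t = np.arange(60 * fs) / fs
v = 131e3 * np.sin(2 * np.pi * 60.0 * t) + 1e3 * np.random.randn(t.size)
cyc_hist, res_hist = cyclic_histograms(v, fs)
```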
## IV Conclusion
In this work, an approach was presented for automated identification of electrical disturbances in a power system. Fourteen different disturbance event types were successfully classified with an average accuracy of 99.13%, and continuous waveform data was processed and stored using a technique known as a cyclic histogram, which reduced the file storage size by a factor of 320. The developed processes will result in time savings for utility personnel as well as increase awareness of disturbances occurring on the power system. This process can categorize events in a matter of minutes rather than hours or days, thus providing utility engineers, operators, and managers with actionable intelligence that will enable immediate and decisive corrective action. Impending (or incipient) device failures will also be detected to enable corrective action before complete failure and so that safety hazards can be removed. This work serves to increase the overall reliability of the transmission system. One goal of future work is to increase the number of disturbance event types that can be classified as well as to further test the process using more data. For the continuous waveform analysis portion, future work will involve optimizing the process to reduce computing hardware requirements and further developing the presentation of the data in a useful manner.
|
2309.09123 | Conditional Mutual Information Constrained Deep Learning for
Classification | The concepts of conditional mutual information (CMI) and normalized
conditional mutual information (NCMI) are introduced to measure the
concentration and separation performance of a classification deep neural
network (DNN) in the output probability distribution space of the DNN, where
CMI and the ratio between CMI and NCMI represent the intra-class concentration
and inter-class separation of the DNN, respectively. By using NCMI to evaluate
popular DNNs pretrained over ImageNet in the literature, it is shown that their
validation accuracies over ImageNet validation data set are more or less
inversely proportional to their NCMI values. Based on this observation, the
standard deep learning (DL) framework is further modified to minimize the
standard cross entropy function subject to an NCMI constraint, yielding CMI
constrained deep learning (CMIC-DL). A novel alternating learning algorithm is
proposed to solve such a constrained optimization problem. Extensive experiment
results show that DNNs trained within CMIC-DL outperform the state-of-the-art
models trained within the standard DL and other loss functions in the
literature in terms of both accuracy and robustness against adversarial
attacks. In addition, visualizing the evolution of learning process through the
lens of CMI and NCMI is also advocated. | En-Hui Yang, Shayan Mohajer Hamidi, Linfeng Ye, Renhao Tan, Beverly Yang | 2023-09-17T01:16:45Z | http://arxiv.org/abs/2309.09123v1 | # Conditional Mutual Information Constrained Deep Learning for Classification
###### Abstract
The concepts of conditional mutual information (CMI) and normalized conditional mutual information (NCMI) are introduced to measure the concentration and separation performance of a classification deep neural network (DNN) in the output probability distribution space of the DNN, where CMI and the ratio between CMI and NCMI represent the intra-class concentration and inter-class separation of the DNN, respectively. By using NCMI to evaluate popular DNNs pretrained over ImageNet in the literature, it is shown that their validation accuracies over ImageNet validation data set are more or less inversely proportional to their NCMI values. Based on this observation, the standard deep learning (DL) framework is further modified to minimize the standard cross entropy function subject to an NCMI constraint, yielding CMI constrained deep learning (CMIC-DL). A novel alternating learning algorithm is proposed to solve such a constrained optimization problem. Extensive experiment results show that DNNs trained within CMIC-DL outperform the state-of-the-art models trained within the standard DL and other loss functions in the literature in terms of both accuracy and robustness against adversarial attacks. In addition, visualizing the evolution of learning process through the lens of CMI and NCMI is also advocated.
Alternating minimization, concentration and separation, conditional mutual information, cross entropy, deep learning.
## I Introduction
In recent years, deep neural networks (DNNs) have been applied in a wide range of applications, revolutionizing fields like computer vision, natural language processing, and speech recognition [1, 2]. Typically, a DNN consists of cascaded non-linear layers that progressively produce multi-layers of representations with increasing levels of abstraction, starting from raw input data and ending with a predicted output label. The success of DNNs is largely attributable to their ability to learn these multi-layers of representations as features from the raw data through a deep learning (DL) process.
Putting its neural architecture aside, a classification DNN is, mathematically, a mapping from raw data \(x\in\mathbb{R}^{d}\) to a probability distribution \(P_{x}\) over the set of class labels, predicting an output label \(\hat{y}\) with probability \(P_{x}(\hat{y})\). Given a pair of random variables \((X,Y)\), the distribution of which governs either a training set or testing set, where \(X\in\mathbb{R}^{d}\) represents the raw data and \(Y\) is the ground truth label of \(X\), the prediction performance of the DNN is often measured by its error rate
\[\epsilon=\Pr\{\hat{Y}\neq Y\},\]
where \(\hat{Y}\) is the label predicted by the DNN with probability \(P_{X}(\hat{Y})\) in response to the input \(X\). The accuracy of the DNN is equal to \(1-\epsilon\). The error rate is further upper bounded by the average of the cross entropy between the conditional distribution of \(Y\) given \(X\) and \(P_{X}\) (see Section II). To have better prediction performance, a DL process is then applied to minimize the error rate \(\epsilon\) or its cross entropy upper bound [1, 2].
Although the error rate of a DNN is its most important performance metric as far as its prediction is concerned, focusing entirely on the error rate is not enough, and can actually lead to several problems. First, the error rate of a DNN depends not only on the DNN itself, but also on the governing joint distribution of \((X,Y)\). When a DNN has a small error rate for one governing joint distribution of \((X,Y)\), it does not necessarily imply that it would have a small error rate for another governing joint distribution of \((X,Y)\), especially when the two distributions are quite different. This is essentially related to the well-known overfitting and robustness problems [2, 3, 4, 5]. Second, even when a DNN works well across different governing distributions of \((X,Y)\), it remains a black box to us, especially when its architecture is huge. We do not know why or how it works. Its error rate does not reveal any useful information about the intrinsic mapping structure such as the intra-class concentration and inter-class separation of the DNN in its output probability distribution space.
To gain deep insights into the intrinsic mapping structure of a DNN as a mapping from \(x\in\mathbb{R}^{d}\) to \(P_{x}\), in this paper we introduce information quantities from information theory [6] to measure intra-class concentration and inter-class separation of the DNN. Specifically, we propose to use the conditional mutual information (CMI) \(I(X;\hat{Y}|Y)\) between \(X\) and \(\hat{Y}\) given \(Y\) as the measure for the intra-class concentration of the DNN as a mapping \(x\in\mathbb{R}^{d}\to P_{x}\). For each class label \(y\), the conditional mutual information \(I(X;\hat{Y}|Y=y)\) between \(X\) and \(\hat{Y}\) given \(Y=y\) tells how all output probability distributions \(P_{X}\) given \(Y=y\) are concentrated around its "centroid", the conditional probability distribution \(P_{\hat{Y}|Y=y}\). The smaller \(I(X;\hat{Y}|Y=y)\) is, the more concentrated all output probability distributions \(P_{X}\) given \(Y=y\) are around its centroid. We further introduce another information quantity (see Section II) to measure the inter-class separation of the
DNN as a mapping \(x\in\mathbb{R}^{d}\to P_{x}\). Define the ratio between \(I(X;\hat{Y}|Y)\) and the inter-class separation as the normalized conditional mutual information (NCMI) between \(X\) and \(\hat{Y}\) given \(Y\). One may interpret CMI and NCMI as certain mapping structure traits of the DNN. Then in addition to its error rate, the DNN can also be evaluated in terms of its CMI and NCMI.
Equipped with our new concepts of CMI and NCMI, we further evaluate popular DNNs pretrained in the literature over ImageNet in terms of their respective CMI and NCMI. It turns out that their validation accuracies over the ImageNet validation data set are more or less inversely proportional to their NCMI values. In other words, even though these DNNs have different architectures and different sizes, their error rates and NCMI values have more or less a positive linear relationship. Indeed, the correlation between the error rate and NCMI is above \(0.99\). This implies that given a DNN architecture, one may be able to further improve the effectiveness of DL by simultaneously minimizing the error rate (or cross entropy upper bound) and NCMI of the DNN during the learning process, where the error rate and NCMI represent the prediction performance and the concentration/separation mapping structure performance of the DNN, respectively. This in turn motivates us to modify the standard DL framework to minimize the standard cross entropy function subject to an NCMI constraint, yielding CMI constrained deep learning (CMIC-DL). To this end, a novel alternating learning algorithm is further proposed to solve such a constrained optimization problem. Extensive experiment results show that DNNs trained within CMIC-DL outperform the state-of-the-art models trained within the standard DL and other loss functions in the literature in terms of both accuracy and robustness against adversarial attacks.
The remainder of this paper is organized as follows. In Section II, we formally introduce the concepts of CMI and NCMI to measure intra-class concentration and inter-class separation structure performance of a DNN when it is viewed as a mapping from \(x\in\mathbb{R}^{d}\) to \(P_{x}\). In Section III, we use NCMI to evaluate and compare popular DNNs pretrained in the literature over ImageNet. These DNNs have different architectures and different sizes. Section IV is devoted to the full development of CMIC-DL. In Section V, extensive experiment results are presented and compared with the prior art in the literature; visualizing the evolution of learning process through the lens of CMI and NCMI is also advocated. Finally, conclusions are drawn along with some open problems in Section VI.
## II Performance of DNNs: Concentration and Separation
A DNN can be described either by its neural architecture along with its connection weights, the number of which can be in billions, or by its mathematical mapping from \(x\in\mathbb{R}^{d}\) to \(P_{x}\). Both perspectives are useful. In this and next sections, we will take the second perspective and regard a DNN simply as a mapping \(x\in\mathbb{R}^{d}\to P_{x}\). Before formally introducing CMI and NCMI, we set up notation to be used throughout the paper.
### _Notation_
For a positive integer \(K\), let \([K]\triangleq\{1,\ldots,K\}\). Assume that there are \(C\) class labels with \([C]\) as the set of class labels. Let \(\mathcal{P}([C])\) denote the set of all probability distributions over \([C]\). For any two probability distributions \(P_{1},P_{2}\in\mathcal{P}([C])\), the cross entropy of \(P_{1}\) and \(P_{2}\) is defined as
\[H(P_{1},P_{2})=\sum_{i=1}^{C}-P_{1}(i)\ln P_{2}(i), \tag{1}\]
where \(\ln\) denotes the logarithm with base \(e\); the Kullback-Leibler (KL) divergence (or relative entropy) between \(P_{1}\) and \(P_{2}\) is defined as
\[D(P_{1}||P_{2})=\sum_{i=1}^{C}P_{1}(i)\ln\frac{P_{1}(i)}{P_{2}(i)}. \tag{2}\]
For any \(y\in[C]\) and \(P\in\mathcal{P}([C])\), write the cross entropy of the one-hot probability distribution corresponding to \(y\) and \(P\) as
\[H(y,P)=-\ln P(y). \tag{3}\]
Given a DNN: \(x\in\mathbb{R}^{d}\to P_{x}\), let \(\theta\) denote its weight vector consisting of all its connection weights; whenever there is no ambiguity, we also write \(P_{x}\) as \(P_{x,\theta}\), and \(P_{x}(y)\) as \(P(y|x,\theta)\) for any \(y\in[C]\).
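For concreteness, the definitions in (1)-(3) translate directly into a few lines of NumPy; the sketch below is only illustrative and assumes 0-indexed labels in code.

```python
import numpy as np

def cross_entropy(p1, p2):
    """H(P1, P2) in (1), with the natural logarithm."""
    return -np.sum(p1 * np.log(p2))

def kl_divergence(p1, p2):
    """D(P1 || P2) in (2)."""
    return np.sum(p1 * np.log(p1 / p2))

def cross_entropy_onehot(y, p):
    """H(y, P) = -ln P(y) in (3); y is a 0-indexed label here."""
    return -np.log(p[y])

# Sanity check of the identity H(P1, P2) = H(P1, P1) + D(P1 || P2).
p1, p2 = np.array([0.7, 0.2, 0.1]), np.array([0.5, 0.3, 0.2])
assert np.isclose(cross_entropy(p1, p2), cross_entropy(p1, p1) + kl_divergence(p1, p2))
```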
### _Error Rate_
Fix a DNN: \(x\in\mathbb{R}^{d}\to P_{x}\). As before, let \((X,Y)\) be a pair of random variables representing the raw input data and the corresponding ground truth label; let \(\hat{Y}\) be the label predicted by the DNN with probability \(P_{X}(\hat{Y})\) in response to the input \(X\), that is, for any input \(x\in\mathbb{R}^{d}\) and any \(\hat{y}\in[C]\)
\[P(\hat{Y}=\hat{y}|X=x)=P_{x}(\hat{y})=P(\hat{y}|x,\theta). \tag{4}\]
Note that \(Y\to X\rightarrow\hat{Y}\) forms a Markov chain in the indicated order. Therefore, given \(X=x\), \(Y\) and \(\hat{Y}\) are conditionally independent.
The error rate of the DNN for \((X,Y)\) is equal to
\[\epsilon=\Pr\{\hat{Y}\neq Y\}\]
which can be upper bounded by the average of the cross entropy of the conditional probability distribution of \(Y\) given \(X\), \(P_{Y|X}=P_{Y|X}(\cdot|X)\), and \(P_{X}\), as shown in the following theorem.
**Theorem 1**.: _For any DNN: \(x\in\mathbb{R}^{d}\to P_{x}\) and any \((X,Y)\),_
\[\epsilon\leq\mathbf{E}_{X}\left[H(P_{Y|X},P_{X})\right] \tag{5}\]
_where \(\mathbf{E}_{X}\) denotes the expectation with respect to \(X\)._
Proof:: Let \(I_{\{\hat{Y}\neq Y\}}\) denote the indicator function of the event \(\{\hat{Y}\neq Y\}\). Then
\[\epsilon = \Pr\{\hat{Y}\neq Y\}\] \[= \mathbf{E}[I_{\{\hat{Y}\neq Y\}}]\] \[= \mathbf{E}_{X}\left[\mathbf{E}[I_{\{\hat{Y}\neq Y\}}|X]\right]\] \[= \mathbf{E}_{X}\left[1-\sum_{i=1}^{C}P_{Y|X}(i|X)P_{X}(i)\right] \tag{6}\] \[= \mathbf{E}_{X}\left[\sum_{i=1}^{C}P_{Y|X}(i|X)(1-P_{X}(i))\right]\] \[\leq \mathbf{E}_{X}\left[\sum_{i=1}^{C}-P_{Y|X}(i|X)\ln P_{X}(i)\right] \tag{7}\] \[= \mathbf{E}_{X}\left[H(P_{Y|X},P_{X})\right] \tag{8}\]
where (6) follows from the fact that \(Y\) and \(\hat{Y}\) are conditionally independent given \(X\), and (7) is due to the inequality \(\ln z\leq z-1\) for any \(z>0\). This completes the proof of Theorem 1.
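The bound in Theorem 1 is easy to verify numerically; the following sketch draws random conditional distributions \(P_{Y|X}\) and random DNN outputs \(P_{X}\) for a set of synthetic inputs (assumed equally likely) and checks (5) via (6).

```python
import numpy as np

rng = np.random.default_rng(0)
n, C = 1000, 5
p_y_given_x = rng.dirichlet(np.ones(C), size=n)     # P_{Y|X}(.|x) for n synthetic inputs
p_x = rng.dirichlet(np.ones(C), size=n)             # DNN outputs P_x

eps_rate = np.mean(1.0 - np.sum(p_y_given_x * p_x, axis=1))   # Pr{Yhat != Y}, via (6)
bound = np.mean(np.sum(-p_y_given_x * np.log(p_x), axis=1))   # E_X[H(P_{Y|X}, P_X)]
assert eps_rate <= bound                                       # the bound (5) holds
```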
Given \(X=x\), what happens if the DNN outputs instead the top one label \(\hat{Y}^{*}\)
\[\hat{Y}^{*}=\operatorname*{arg\,max}_{i\in[C]}P_{x}(i)?\]
In this case, the error rate of the DNN for \((X,Y)\) is equal to
\[\epsilon^{*}=\Pr\{\hat{Y}^{*}\neq Y\}\]
which can also be upper bounded in terms of \(\mathbf{E}_{X}\left[H(P_{Y|X},P_{X})\right]\).
**Corollary 1**.: _For any DNN: \(x\in\mathbb{R}^{d}\to P_{x}\) and any \((X,Y)\),_
\[\epsilon^{*}\leq C\epsilon\leq C\mathbf{E}_{X}\left[H(P_{Y|X},P_{X})\right]. \tag{9}\]
Proof::
\[\epsilon^{*} = \Pr\{\hat{Y}^{*}\neq Y\}\] \[= \mathbf{E}_{X}\left[1-P_{Y|X}(\hat{Y}^{*}|X)\right]\] \[\leq C\mathbf{E}_{X}\left[P_{X}(\hat{Y}^{*})(1-P_{Y|X}(\hat{Y}^{*}|X))\right] \tag{10}\] \[\leq C\mathbf{E}_{X}\left[\sum_{i=1}^{C}P_{X}(i)(1-P_{Y|X}(i|X))\right]\] \[= C\mathbf{E}_{X}\left[1-\sum_{i=1}^{C}P_{Y|X}(i|X)P_{X}(i)\right]\] \[= C\epsilon\leq C\mathbf{E}_{X}\left[H(P_{Y|X},P_{X})\right], \tag{11}\]
where (10) follows from the fact that \(P_{X}(\hat{Y}^{*})\geq 1/C\), and (11) is due to (6) and (8).
In view of Theorem 1 and Corollary 1, no matter which form of error rate \(\epsilon\) or \(\epsilon^{*}\) is used, minimizing the average of the cross entropy \(\mathbf{E}_{X}[H(P_{Y|X},P_{X})]\) would have an effect to reduce \(\epsilon\) and \(\epsilon^{*}\). This provides mathematical justifications for the use of the average of the cross entropy \(\mathbf{E}_{X}[H(P_{Y|X},P_{X})]\) as an objective function or a major component thereof in DL and knowledge distillation, where \(P_{Y|X}\) is approximated by the one-hot probability vector corresponding to \(Y\) in DL [1, 2], and by the output probability distribution of the teacher in knowledge distillation [7, 8, 9].
### _Concentration_
The error rates \(\epsilon\) and \(\epsilon^{*}\) of the DNN: \(x\in\mathbb{R}^{d}\to P_{x}\) for \((X,Y)\) do not provide any useful information on the intrinsic mapping structure of the DNN in the probability distribution space \(\mathcal{P}([C])\). Two important mapping structure properties the DNN: \(x\in\mathbb{R}^{d}\to P_{x}\) possesses, are its intra-class concentration and inter-class separation in the space \(\mathcal{P}([C])\). In this and next subsections, we formally introduce information quantities to quantify these two mapping structure properties, respectively.
Visualize the DNN: \(x\in\mathbb{R}^{d}\to P_{x}\) according to Fig. 1. Given \(Y=y\), \(y\in[C]\), the input data \(X\) is conditionally distributed according to the conditional distribution \(P_{X|Y}(\cdot|y)\) and then mapped into \(P_{X}\), a random point in the space \(\mathcal{P}([C])\). The instances (or realizations) of this random point \(P_{X}\) form a cluster in the space \(\mathcal{P}([C])\). The centroid of this cluster is the average of \(P_{X}\) with respect to the conditional distribution \(P_{X|Y}(\cdot|y)\), which is exactly the conditional distribution of \(\hat{Y}\) given \(Y=y\)
\[P_{\hat{Y}|y}=\mathbf{E}[P_{X}|Y=y]. \tag{12}\]
Measure the "distance" between each \(P_{X}\) and the centroid \(P_{\hat{Y}|y}\) by their KL divergence \(D(P_{X}||P_{\hat{Y}|y})\). Then the average of KL divergence \(D(P_{X}||P_{\hat{Y}|y})\) with respect to the conditional distribution \(P_{X|Y}(\cdot|y)\) is equal to
\[\mathbf{E}\left[D(P_{X}||P_{\hat{Y}|y})|Y=y\right] = \mathbf{E}\left[\left(\sum_{i=1}^{C}P_{X}(i)\ln\frac{P_{X}(i)}{P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right)|Y=y\right] \tag{13}\] \[= \sum_{x}P_{X|Y}(x|y)\left[\sum_{i=1}^{C}P(\hat{Y}=i|x)\ln\frac{P(\hat{Y}=i|x)}{P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right] = I(X;\hat{Y}|y), \tag{14}\]
where \(I(X;\hat{Y}|y)\) is the conditional mutual information between \(X\) and \(\hat{Y}\) given \(Y=y\). (Please refer to [6] for the notions of mutual information and conditional mutual information.) In (13), \(X\) is assumed to be discrete; if \(X\) is continuous, then the average \(\sum_{x}P_{X|Y}(x|y)\) should be replaced by the integral
\[\int_{x}dP_{X|Y}(x|y).\]
Note that (14) is due to the fact that \(Y\to X\rightarrow\hat{Y}\) forms a Markov chain.
The information quantity \(I(X;\hat{Y}|y)\) quantifies the concentration of the cluster formed by the instances of the random point \(P_{X}\) given \(Y=y\) around its centroid \(P_{\hat{Y}|y}\). Averaging \(I(X;\hat{Y}|y)\) with respect to the distribution \(P_{Y}(y)\) of \(Y\), we get
the conditional mutual information \(I(X;\hat{Y}|Y)\) between \(X\) and \(\hat{Y}\) given \(Y\):
\[I(X;\hat{Y}|Y) = \sum_{y\in[C]}P_{Y}(y)I(X;\hat{Y}|y) = \mathbf{E}\left[D(P_{X}||P_{\hat{Y}|Y})\right] \tag{15}\] \[= \sum_{y}\sum_{x}P(x,y)\left[\sum_{i=1}^{C}P(\hat{Y}=i|x)\ln\frac{P(\hat{Y}=i|x)}{P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right].\]
The CMI \(I(X;\hat{Y}|Y)\) can then be regarded as a measure for the intra-class concentration of the DNN: \(x\in\mathbb{R}^{d}\to P_{x}\) for \((X,Y)\).
In practice, the joint distribution \(P(x,y)\) of \((X,Y)\) may be unknown. To compute the CMI \(I(X;\hat{Y}|Y)\) in this case, one may approximate \(P(x,y)\) by the empirical distribution of a data sample \(\{(x_{1},y_{1}),(x_{2},y_{2}),\cdots,(x_{n},y_{n})\}\). For any \(y\in[C]\), let
\[n_{y}=|\{(x_{j},y_{j}):y_{j}=y,1\leq j\leq n\}|, \tag{16}\]
where \(|S|\) denotes the cardinality of a set \(S\), and
\[P_{y}=\frac{1}{n_{y}}\sum_{(x_{j},y_{j}):y_{j}=y}P_{x_{j}}. \tag{17}\]
Then \(I(X;\hat{Y}|Y)\) can be computed as follows
\[I(X;\hat{Y}|Y) =\sum_{y\in[C]}\sum_{(x_{j},y_{j}):y_{j}=y}\frac{1}{n}D(P_{x_{j}}|| P_{y})\] \[=\frac{1}{n}\sum_{j=1}^{n}D(P_{x_{j}}||P_{y_{j}}). \tag{18}\]
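A minimal NumPy rendering of (16)-(18) is given below; it assumes that every class label occurs at least once in the sample and that labels are 0-indexed, and it adds a small constant inside the logarithm for numerical stability.

```python
import numpy as np

def empirical_cmi(P, labels, num_classes, eps=1e-12):
    """Empirical CMI I(X; Yhat | Y) per (16)-(18).

    P      : (n, C) array whose row j is the DNN output distribution P_{x_j}.
    labels : (n,) array of 0-indexed ground-truth labels y_j.
    Assumes every class occurs at least once in the sample."""
    centroids = np.stack([P[labels == c].mean(axis=0) for c in range(num_classes)])  # (17)
    Q = centroids[labels]                                   # P_{y_j} for every sample j
    kl = np.sum(P * (np.log(P + eps) - np.log(Q + eps)), axis=1)
    return kl.mean()                                        # (18)
```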
### _Separation and NCMI_
Let \((U,V)\) be a pair of random variables independent of \((X,Y)\), and having the same joint distribution as that of \((X,Y)\). With reference to Fig. 1, we define the following information quantity1
Footnote 1: Other information quantities can also be defined and used as a measure for the inter-class separation of the DNN: \(x\in\mathbb{R}^{d}\to P_{x}\), which will be explored in Appendix B. Although they are more or less equivalent, the information quantity \(\Gamma\) defined here is more convenient for the selection of hyper parameters in our proposed CMIC deep learning.
\[\Gamma=\mathbf{E}\left[I_{\{Y\neq V\}}H(P_{X},P_{U})\right], \tag{19}\]
and use \(\Gamma\) as a measure for the inter-class separation of the DNN: \(x\in\mathbb{R}^{d}\to P_{x}\). It is clear that the larger \(\Gamma\) is, the further apart different clusters are from each other on average.
Ideally, we want \(I(X;\hat{Y}|Y)\) to be small while keeping \(\Gamma\) large. This leads us to consider the ratio between \(I(X;\hat{Y}|Y)\) and \(\Gamma\):
\[\hat{I}(X;\hat{Y}|Y)\equiv\!\frac{I(X;\hat{Y}|Y)}{\Gamma}. \tag{20}\]
We call \(\hat{I}(X;\hat{Y}|Y)\) the normalized conditional mutual information between \(X\) and \(\hat{Y}\) given \(Y\).
In the case where the joint distribution \(P(x,y)\) of \((X,Y)\) is unknown, it can be approximated by the empirical distribution of a data sample \(\{(x_{1},y_{1}),(x_{2},y_{2}),\cdots,(x_{n},y_{n})\}\). In parallel with (18), \(\Gamma\) can be computed in this case as follows:
\[\Gamma=\frac{1}{n^{2}}\sum_{j=1}^{n}\sum_{k=1}^{n}I_{\{y_{j}\neq y_{k}\}}H(P_ {x_{j}},P_{x_{k}}), \tag{21}\]
from which and (18), \(\hat{I}(X;\hat{Y}|Y)\) can be computed accordingly.
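The separation measure (21) and the NCMI (20) can be computed in the same empirical fashion; the sketch below vectorises the double sum over ordered pairs (for large \(n\) one would evaluate it over mini-batches), reusing the CMI value from the sketch above.

```python
import numpy as np

def empirical_gamma_ncmi(P, labels, cmi, eps=1e-12):
    """Empirical separation Gamma per (21) and NCMI per (20).

    P : (n, C) output distributions P_{x_j}; labels : (n,) ground-truth labels;
    cmi : the empirical CMI from (18). Same-label pairs contribute zero through
    the indicator, and the average is over all n^2 ordered pairs."""
    n = P.shape[0]
    H = -P @ np.log(P + eps).T                       # H(P_{x_j}, P_{x_k}) for all pairs
    diff = (labels[:, None] != labels[None, :])      # indicator of y_j != y_k
    gamma = (H * diff).sum() / (n * n)               # (21)
    return gamma, cmi / gamma                        # NCMI (20)
```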
### _Related Works_
In the literature, intra-class concentration and inter-class separation of a DNN have been mainly investigated in the feature space corresponding to the penultimate layer of the DNN, and largely treated in an ad-hoc manner in a deep learning process or algorithm. Specifically, it was observed numerically in [10, 11, 12] that DNNs concentrate features of each class around their separated mean. This observation was further analyzed in [13] under the Gaussian mixture model assumption about features. In [14, 15, 16, 17, 18] and references therein, different loss functions including the so-called center loss, contrastive center loss, orthogonal projection loss, constrained center loss, and their variants, all of which are defined in the feature space, were proposed and used in the respective learning processes to improve the intra-class concentration and inter-class separation of such trained DNNs.

Fig. 1: The mappings from the label space to the input space, and from the input space to the output space of a DNN. Here caricatures are used to depict label and input spaces, where each of the three instances in the label space are mapped to two instances in input space according to \(P_{X|Y}(\cdot|Y=y_{i})\), for \(i\in\{1,2,3\}\). On the other hand, the figure for the output space is obtained from a real example, where for the ResNet56 model trained on CIFAR-100 dataset, the output probability vectors corresponding to all validation sample instances from three randomly-picked classes are projected over the two-dimensional probability simplex.
In contrast, in this paper we investigate the intra-class concentration and inter-class separation of a DNN in its output probability distribution space \(\mathcal{P}([C])\), where the DNN is viewed as a mapping from \(x\in\mathbb{R}^{d}\) to \(P_{x}\). This perspective allows us to introduce information quantities, CMI, \(\Gamma\), and NCMI, to quantify the intra-class concentration and inter-class separation of each DNN. In addition, our introduced CMI and NCMI can also be regarded as additional performance metrics for any DNN, which are in parallel with the error rate performance metric, are independent of any learning process, and represent mapping structure properties of a DNN. As additional performance metrics, they can be used to evaluate and compare different DNNs regardless of the architectures and sizes of DNNs.
Another related work in the sense of introducing information theoretic ideas into DL is the so-called coded deep learning (CDL) [19], where information theoretic coding ideas are embedded into the inner workings of DL. The purposes of CDL are to essentially eliminate floating-point operations of a coded DNN during its inference time and to efficiently compress the coded DNN while maintaining or even improving the error rate of the coded DNN.
In the next section, CMI and NCMI \(\hat{I}(X;\hat{Y}|Y)\) will be used to evaluate and compare popular DNNs pre-trained over ImageNet in the literature.
## III NCMI Vs. Accuracy
The popular DNNs we selected for evaluation according to their respective CMI and NCMI are ResNet-\(\{18,34,50,101,152\}\)[20], VGG-\(\{11,13,16,19\}\)[21], EfficientNet-\(\{\text{B0},\text{B1},\text{B2},\text{B3}\}\)[22], Wide-ResNet-\(\{50,101\}\)[23], MobileNet-V3-\(\{\text{small},\text{large}\}\)[24], and AlexNet [25]. They are all pre-trained on ImageNet dataset and obtained from the Pytorch official website2.
Footnote 2: [https://pytorch.org/vision/stable/models.html](https://pytorch.org/vision/stable/models.html).
Table I lists the values of CMI, \(\Gamma\), and NCMI of the selected DNNs, which are calculated, according to (18), (21), and (20), over the ImageNet validation set, along with their respective error rate \(\epsilon^{*}\). From Table I, it is clear that within the same family, as the model size increases, the CMI value decreases. This shows that larger models have more compact clusters in the output probability space \(\mathcal{P}([C])\). For the \(\Gamma\) value, although the general trend is that within the same family, the \(\Gamma\) value increases as the model size gets larger, there does exist an exception. Note that for the EfficientNet family, the smallest model EfficientNet-B0 has the largest \(\Gamma\) value.
Now turn our attention to the NCMI value. From Table I, it follows that as the model size within the same family increases, the NCMI value decreases as well. Even more interesting is the relationship between the NCMI and error rate \(\epsilon^{*}\). Across all models evaluated, as the NCMI value decreases, so does the error rate \(\epsilon^{*}\). To make the relationship between the NCMI and error rate \(\epsilon^{*}\) more transparent, Figure 2 illustrates the relationship graphically. From Figure 2, it seems that the NCMI and error rate \(\epsilon^{*}\) have a positive linear relationship; indeed, the Pearson correlation coefficient \(\rho\)[26] between them is \(\rho=0.9929\), strongly supporting the former statement. As such, the NCMI value of a DNN can be used to gauge the prediction performance of the DNN.
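The reported correlation is a direct Pearson coefficient between the two columns; the snippet below shows the computation on hypothetical placeholder values rather than the Table I numbers.

```python
import numpy as np

# Hypothetical NCMI values and error rates for a set of models (placeholders,
# not the Table I numbers).
ncmi = np.array([0.46, 0.41, 0.37, 0.33, 0.30, 0.27])
err  = np.array([0.33, 0.30, 0.27, 0.25, 0.23, 0.21])
rho = np.corrcoef(ncmi, err)[0, 1]        # Pearson correlation coefficient
print(f"Pearson correlation: {rho:.4f}")
```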
To conclude this section, let us draw some analogies. If a DNN is analogized with a student, then the error rate and NCMI of the DNN can be analogized with the testing score of the student in an exam and certain trait of the student, respectively. In a way similar to using the trait of the student to predict the student's testing performance, one can also use the NCMI value of the DNN to predict the DNN's testing performance.
## IV CMIC Deep Learning
The discussions in the above section suggest a new way of learning. In the learning process, instead of minimizing the average of cross entropy \(\mathbf{E}_{X}\left[H(P_{Y|X},P_{X})\right]\) alone, one also needs to look after the NCMI \(\hat{I}(X;\hat{Y}|Y)\). This leads to a new form of learning framework dubbed CMI constrained deep learning (CMIC-DL), which is described next.
### _Optimization Problem Formulation_
In CMIC-DL, the optimization problem to be solved is as follows:
\[\min_{\theta} \mathbf{E}_{X}\left[H(P_{Y|X},P_{X,\theta})\right]\] s.t. \[\hat{I}(X;\hat{Y}|Y)=r, \tag{23}\]
where \(r\) is a positive constant. By interpreting \(\hat{I}(X;\hat{Y}|Y)\) as a rate, and \(\mathbf{E}_{X}\left[H(P_{Y|X},P_{X,\theta})\right]\) as a distortion, the above optimization problem resembles the rate distortion problem in information theory [6, 27, 28]. By rewriting the constraint in (23), and using the Lagrange multiplier method, the constrained optimization problem in (23) could be formulated as the following unconstrained one
\[\min_{\theta} \mathbf{E}_{X}\left[H(P_{Y|X},P_{X,\theta})\right]\] \[+\lambda I(X;\hat{Y}|Y)-\beta\mathbf{E}\left[I_{\{Y\neq V\}}H(P_ {X,\theta},P_{U,\theta})\right], \tag{24}\]
where \(\lambda>0\) is a scalar, and \(\beta=\lambda r\).
Note that in view of (15), the CMI \(I(X;\hat{Y}|Y)\) in (24) depends on \(P_{\hat{Y}|Y}\), which, for \(Y=y\), is the average of \(P_{X,\theta}\) with respect to the conditional distribution \(P_{X|Y}(\cdot|y)\) (see (12)). As such, the unconstrained optimization problem in its form (24) is not amenable to numerical solutions. To overcome this, we first convert it into a double unconstrained minimization problem by introducing a dummy distribution \(Q_{y}\in\mathcal{P}([C])\) for each \(y\in[C]\), as shown in the following theorem, which will be proved in Appendix A.
**Theorem 2**.: _For any \(\lambda>0\) and \(\beta>0\),_
\[\min_{\theta} \left\{\mathbf{E}_{X}\left[H(P_{Y|X},P_{X,\theta})\right]\right. \tag{25}\] \[\left.+\lambda I(X;\hat{Y}|Y)-\beta\mathbf{E}\left[I_{\{Y\neq V\}} H(P_{X,\theta},P_{U,\theta})\right]\right\}\] \[= \min_{\theta}\min_{\{Q_{c}\}_{c\in[C]}}\;\left\{\mathbf{E}[H(P_{ Y|X},P_{X,\theta})+\lambda D(P_{X,\theta}||Q_{Y})]\right.\] \[\left.-\beta\mathbf{E}[I_{\{Y\neq V\}}H(P_{X,\theta},P_{U,\theta} )]\right\}.\]
In practice, the joint distribution \(P(x,y)\) of \((X,Y)\) may be unknown. In this case, to solve (25) numerically, one may approximate \(P(x,y)\) by the empirical distribution of a data sample (such as a mini-batch in the DL process) \(\mathcal{B}=\{(x_{i_{1}},y_{i_{1}}),(x_{i_{2}},y_{i_{2}}),\cdots,(x_{i_{n}},y _{i_{m}})\}\), and \(P_{Y|X}\) by the one-hot probability distribution corresponding to \(Y\). Accordingly, the objective function in the double minimization (25) can be approximated by \(J_{\mathcal{B}}\left(\lambda,\beta,\theta,\{Q_{c}\}_{c\in[C]}\right)\) shown in (22) (on the top of the page).
### _Algorithm for Solving the Optimization in (25)_
Having addressed how to approximate the objective function in the double minimization (25), we are now ready to present an algorithm for solving (25). In fact, by reformulating the single minimization problem as a double minimization problem, Theorem 2 lends us an alternating algorithm that optimizes \(\theta\) and \(\{Q_{c}\}_{c\in[C]}\) alternately to minimize the objective function in (25), given that the other is fixed.
Given \(\{Q_{c}\}_{c\in[C]}\), \(\theta\) can be updated using the same strategy as in the conventional DL through stochastic gradient descent iterations over mini-batches, where the training set is divided into \(B\) mini-batches \(\{\mathcal{B}_{b}\}_{b\in[B]}\) with each batch of size \(|\mathcal{B}|\). Given \(\theta\), how is \(\{Q_{c}\}_{c\in[C]}\) updated? This is where differences arise. In view of (12) and (32), the optimal \(\{Q_{c}\}_{c\in[C]}\) given \(\theta\) is equal to
\[Q_{c}=P_{Y|y=c}=\sum_{x}P(x|y=c)P_{x,\theta}, \tag{26}\]
for any \(c\in[C]\). Therefore, to update \(\{Q_{c}\}_{c\in[C]}\) given \(\theta\), we construct, at each iteration, \(C\) mini-batches \(\{\mathfrak{B}_{c}\}_{c\in[C]}\) in the following manner: to make \(\mathfrak{B}_{c}\), \(\forall c\in[C]\), we randomly sample \(|\mathfrak{B}_{c}|\) instances from the training samples whose ground truth labels are \(c\). It then follows from (26) that for any \(c\in[C]\), \(Q_{c}\) is updated as3
Footnote 3: To update \(\{Q_{c}\}_{c\in[C]}\), we may use momentum to make the update more stable and less noisy.
\[Q_{c}=\frac{\sum_{x\in\mathfrak{B}_{c}}P_{x,\theta}}{|\mathfrak{B}_{c}|}. \tag{27}\]
The procedure for solving the optimization problem (25) is now summarized in Algorithm 1, where we use \((\cdot)_{c,b}^{t}\) to indicate class \(c\) at the \(b\)-th batch update during the \(t\)-th epoch. We also use \((\cdot)_{c,B}^{t}\) as \((\cdot)_{c}^{t}\) whenever necessary, and set \((\cdot)_{c,0}^{t}=(\cdot)_{c}^{t-1}\).
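A hedged PyTorch sketch of one step of this procedure is given below. It approximates the objective (22)/(24) on a mini-batch and updates the centroids as in (27) with momentum; the class-pure batch plumbing and the exact normalisation of the separation term are assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def cmic_loss(logits, targets, Q, lam, beta, eps=1e-12):
    """Mini-batch approximation of the CMIC-DL objective (24): label cross
    entropy, plus lam * KL(P_x || Q_y), minus beta * the average pairwise cross
    entropy between samples carrying different labels. Q is a (C, C) tensor of
    the current centroids Q_c."""
    P = F.softmax(logits, dim=1)                       # P_{x, theta}
    logP = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(logP, targets)                     # E[H(y, P_{x, theta})]
    kl = (P * (logP - torch.log(Q[targets] + eps))).sum(dim=1).mean()
    pair_H = -(P @ logP.t())                           # H(P_{x_j}, P_{x_k}) for all pairs
    diff = (targets.unsqueeze(1) != targets.unsqueeze(0)).float()
    sep = (pair_H * diff).sum() / float(targets.numel() ** 2)
    return ce + lam * kl - beta * sep

@torch.no_grad()
def update_centroids(Q, model, class_batches, momentum=0.9999):
    """Update each centroid Q_c from a class-pure mini-batch as in (27),
    smoothed with momentum; class_batches[c] is assumed to be a tensor of
    inputs whose ground-truth label is c."""
    for c, x in enumerate(class_batches):
        Q[c] = momentum * Q[c] + (1.0 - momentum) * F.softmax(model(x), dim=1).mean(dim=0)
```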
## V Experiment Results
To demonstrate the effectiveness of CMIC-DL and compare it with some state-of-the-art alternatives, we have conducted a series of experiments. Specifically, we have performed experiments on two popular image classification datasets, namely CIFAR-100 [29] and ImageNet [25]. In Subsections V-A and V-B, we present their respective accuracy results. In Subsection V-C, we explore how to visualize the concentration and separation of a DNN, which is made possible by viewing the DNN as a mapping from \(x\in\mathbb{R}^{d}\) to \(P_{x}\); using such a visualization method, the concentration and separation of ResNet-56 trained within our CMIC-DL framework are then compared with those of ResNet-56 trained within the standard DL framework.

Fig. 2: The error rate vs NCMI value over the validation set of popular pre-trained models on ImageNet dataset. The sizes of the circles represent the sizes of respective models in terms of the number of model parameters; the larger the circle, the larger the model.
In the literature, a deep learning process is typically analyzed experimentally through the evolution curve of its error rate. With our newly introduced performance metrics, CMI, \(\Gamma\) (separation), and NCMI, the learning process can also be analyzed through the evolution curves of CMI, \(\Gamma\), and NCMI, which show interestingly how the mapping structure in terms of CMI, \(\Gamma\), and NCMI evolves over the course of learning process. In Subsection V-D, we use ResNet-56 as an example, and illustrate and compare the evolution curves of CMI, \(\Gamma\), NCMI, and error rate within our CMIC-DL framework vs within the standard DL framework. Lastly, in Subsection V-E, we evaluate the robustness of models trained within our CMIC-DL framework against two different adversarial attacks, and show that in comparison with the standard DL, CMIC-DL improves the robustness of DNNs as well.
### _Experiments on CIFAR-100_
CIFAR-100 dataset contains 50K training and 10K test colour images of size \(32\times 32\), which are labeled for 100 classes.
\(\bullet\)**Models**: To show the effectiveness of CMIC-DL, we have conducted experiments on three different model architectural families. Specifically, we have selected (i) three models from ResNet family [20], namely ResNet-\(\{32,56,110\}\); (ii) VGG-13 from VGG family [21]; and (iii) Wide-ResNet-28-10 from Wide-ResNet family [23].
\(\bullet\)**Benchmarks**: We evaluate the performance of the DNNs trained via CMIC-DL against those trained by conventional cross entropy loss (CE), center loss (CL) [16] which promotes clustering the features, focal loss (FL) [30] which uses regularization, large-margin Gaussian Mixture (L-GM) loss [31] which imposes margin constraints, and orthogonal projection loss (OPL) [18] which imposes orthogonality in the feature space.
\(\bullet\)**Training settings**: We have deployed an SGD optimizer with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 64. We have trained the models for 200 epochs, and adopted an initial learning rate of 0.1, which is further divided by 10 at the 60-th, 120-th and 160-th epochs. To have a fair comparison, we have reproduced the results of all the benchmark methods using their respective best hyper-parameters reported in their original papers. In addition, in Algorithm 1, we set \(\{Q_{c}^{0}(i)\}_{c\in[C]}=\frac{1}{C}\), for \(i\in[C]\), use \(|\mathfrak{B}_{c}|=8\), \(\forall c\in[C]\), and also update \(Q_{c,b}^{t}\) using the momentum of 0.9999.
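For reference, these training settings correspond to the following standard PyTorch optimiser and schedule (a sketch only; the ResNet-18 here merely stands in for the CIFAR-100 architectures listed above).

```python
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=100)   # stand-in for the models in Table II
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
# Initial lr 0.1, divided by 10 at the 60-th, 120-th and 160-th epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[60, 120, 160], gamma=0.1)
```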
The results are summarized in Table II. As seen, the models trained within our CMIC-DL framework outperform those trained by the benchmark methods. Importantly, the improvement is consistent across the models from different architectural families, showing that CMIC-DL can effectively train DNNs from different families. In particular, compared to the CE method, CMIC-DL yields DNNs with almost 1.3% higher validation accuracy for the ResNet models.
Furthermore, in Table III we report the NCMI values \(\hat{I}(X;\hat{Y}|Y)\), over the validation set, for the models we trained in Table II, where we use the notation \(\hat{I}_{\text{Loss}}\) to denote the NCMI value when the underlying DNN is trained using "Loss" method. As observed, \(\hat{I}_{\text{CMIC}}\) has the smallest value compared to the other counterparts.
In addition, in Table IV, we report the \(\lambda^{*}\) and \(\beta^{*}\) values for which we obtained the best validation accuracies. As observed, the \(\lambda^{*}\) and \(\beta^{*}\) values are almost the same for all the models.
### _Experiments on ImageNet_

\(\bullet\)**Models**: We have conducted experiments on two models from the ResNet family, namely ResNet-18 and ResNet-50.
\(\bullet\)**Benchmarks**: We evaluate the performance of CMIC-DL against CE and OPL.
\(\bullet\)**Training settings**: We have deployed an SGD optimizer with a momentum of 0.9, a weight decay of 0.0001, and a batch size of 256. We have trained the models for 90 epochs, and adopted an initial learning rate of 0.1, which is further divided by 10 at the 30-th and 60-th epochs. In Algorithm 1, we set \(\{Q^{0}_{c}(i)\}_{c\in[C]}=\frac{1}{C}\), for \(i\in[C]\), use \(|\mathcal{B}_{c}|=8\), \(\forall c\in[C]\), and also update \(Q^{t}_{c,b}\) using the momentum of 0.9999.
The top-\(\{1,5\}\) accuracies are reported in Table V. As seen, in comparison with the CE method, CMIC-DL increases the top-1 validation accuracy for ResNet-18 and ResNet-50 by 0.56% and 0.37%, respectively. The improvement is also consistent for the top-5 validation accuracy.
The hyper parameters \((\lambda^{*},\beta^{*})\) used in CMIC-DL for ResNet-18 and ResNet-50 are \((0.6,0.1)\) and \((0.6,0.2)\), respectively. The corresponding NCMI values are \(\hat{I}_{\text{CE}}=0.110\) and \(\hat{I}_{\text{CMIC}}=0.102\) for ResNet-18, and \(\hat{I}_{\text{CE}}=0.091\) and \(\hat{I}_{\text{CMIC}}=0.088\) for ResNet-50.
### _Concentration and Separation Visualization_
In this subsection, we explore how to visualize concentration and separation of a DNN. Consider the data set CIFAR-100. To visualize concentration and separation of a DNN in a dimension-reduced probability space, we randomly select three class labels. Restrict ourselves to a subset consisting of all validation sample instances whose labels are among the three selected labels. Given a DNN, feed each validation sample instance from the subset into the DNN, keep only the three logits corresponding to the three selected labels, and then convert these three logits into a three-dimensional probability vector through the softmax operation. Following these steps in the indicated order, the DNN then maps each validation sample instance from the subset into a three-dimensional probability vector. Further project the three-dimensional probability vector onto the two-dimensional simplex. Then the concentration and separation properties of the DNN for the three selected classes can be more or less visualized through the projected two-dimensional simplex.
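A minimal sketch of this projection is given below; the placement of the three simplex corners is an assumed plotting convention, and any affine embedding of the 2-simplex would serve equally well.

```python
import numpy as np

def project_three_classes(logits, labels, picked):
    """Project validation samples from three picked classes onto the
    two-dimensional probability simplex, following the steps above.

    logits : (n, C) array of DNN logits; labels : (n,) ground-truth labels;
    picked : sequence of three class indices. Returns the 2-D points and the
    corresponding labels."""
    mask = np.isin(labels, picked)
    z = logits[mask][:, picked]                         # keep only the three logits
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                   # softmax over the three classes
    corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])
    return p @ corners, labels[mask]
```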
Using the above visualization method, Fig. 3 compares the concentration and separation properties of ResNet-56 trained within our CMIC-DL framework with those of ResNet-56 trained within the standard CE framework. From Fig. 3, it is clear that the three clusters in the case CMIC-DL are more concentrated than their counterparts in the case of CE, and also further apart from each other than their counterparts in the case of CE. Again, this is consistent with the NCMI values reported in Table III.
### _Evolution of CMI, \(\Gamma\), NCMI, and error rate_
In this subsection, we analyze and visualize a learning process within either our CMIC-DL framework or the conventional CE-based DL framework through the lens of CMI, \(\Gamma\), NCMI, and error rate. Fig. 4 shows the evolution curves of CMI, \(\Gamma\), NCMI, and error rate over the validation set during the course of training ResNet-56 on CIFAR-100 dataset in each case, where the training setup is the same as that used in Subsection V-A, and we use \(\lambda=0.7\) and \(\beta=0.4\) in the case of CMIC-DL.
As seen in Fig. 4(a), the CMI value in both CE and CMIC-DL cases is small at the beginning of the training (epoch zero). This is because at the beginning, all clusters in the output probability distribution space \(\mathcal{P}([C])\) stick around together, as shown from the separation distance curve (see Fig. 4(b)), and probability distributions within each cluster are not separated at all. After the training starts and for the first few epochs, the clusters move away from each other; during the course of movement, probability distributions within each cluster move at different speeds, and become separated. As such, both the values of CMI and \(\Gamma\) increase. Indeed, this is shown in Fig. 4(a) and Fig. 4(b). Hereafter, the clusters continue to move away from each other, while at the same time, probability distributions within each cluster tend to move together. Thus the \(\Gamma\) value continues to increase, while the CMI value decreases, as shown again in Fig. 4(a) and Fig. 4(b).

Fig. 3: Visualization and comparison of concentration and separation: ResNet56 trained via CE vs ResNet56 trained via CMIC, where different shapes indicate different classes.
The above summarizes the general behaviour of the CMI and \(\Gamma\) evolution curves in both CE and CMIC-DL cases. Let us now examine the differences between them. From Fig. 4(a), it is clear that the CMI evolution curve in the case of CMIC-DL always remains below its counterpart in the CE case. On the other hand, as shown in Fig. 4(b), although initially the \(\Gamma\) value increases faster in the CE case than in the CMIC-DL case, after the first few epochs, the rate of increase in \(\Gamma\) value is consistently higher in the CMIC-DL case than in the CE case to the extent that the \(\Gamma\) value in the CMIC-DL case surpasses its counterpart in the CE case in the late stage of the learning process.
From Fig. 4(c) and Fig. 4(d), we can see that once the learning process is more or less stabilized, both the NCMI value and error rate in the CMIC-DL case are consistently smaller than their counterparts in the CE case. Once again, this is consistent with our observation in Fig. 2: the smaller the NCMI value, the lower the error rate. In conjunction with the visualization method discussed in Subsection V-C, we have created a video available at [https://youtu.be/G0fDwv609Ek](https://youtu.be/G0fDwv609Ek) to illustrate the learning process during the course of training ResNet-56 on CIFAR-100 dataset in each of the CE and CMIC-DL cases through the lens of CMI and \(\Gamma\), where concentration and separation are shown for three randomly selected classes, and the evolution curves of CMI and \(\Gamma\) are shown for all classes.
### _Robustness against adversarial attacks_
As a by-product, we would expect that DNNs trained within the CMIC-DL framework are more robust against adversarial attacks, in comparison with their counterparts trained within the standard CE-based DL framework. This is because when a DNN is trained within our CMIC-DL framework, its clusters in its output probability distribution space are more compact, and also further separated from each other, in comparison with its counterpart trained within the standard CE-based DL framework. As such, it is harder for an adversary to craft a perturbation which, when added to a clean sample, would result in an attacked sample falling into a cluster with a different label. Our purpose in this subsection is to confirm this by-product. To this end, we have performed the following experiments.
\(\bullet\)**Dataset**: We have used MNIST dataset [32] comprising of 10-class handwritten digits.
\(\bullet\)**Model**: We have selected a simple DNN with three convolutional layers and one fully connected layer.
\(\bullet\)**Attacks**: Two white-box attacks have been selected, where the adversary has access to the gradients of the underlying model. Specifically, FGSM [3] and PGD attack [5] with 5 iterations were employed with attack perturbation budgets \(\|\epsilon\|_{\infty}=\{0.05,0.10,0.15,0.20,0.25,0.30,0.35\}\) (a minimal FGSM sketch is given after this list).
\(\bullet\)**Training settings**: We have deployed an SGD optimizer with a batch size of 64. We have trained the models for 15 epochs and adopted a step learning-rate annealing schedule with a decay factor of 0.7. The hyper parameters were selected to be \(\lambda^{*}=2\) and \(\beta^{*}=9\) in our CMIC-DL framework due to the fact that the classification task over the MNIST dataset is far simpler than that over the CIFAR-100 and ImageNet datasets.
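For completeness, a minimal sketch of the single-step FGSM attack referred to above is given below; clamping to \([0,1]\) assumes unnormalised MNIST pixel inputs, and PGD iterates essentially the same step with a projection back onto the \(\epsilon\)-ball.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """Single-step FGSM: perturb the input along the sign of the gradient of
    the cross-entropy loss, with an L-infinity budget eps. Clamping to [0, 1]
    assumes unnormalised MNIST pixels."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```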
Fig. 5 illustrates the resulting trade-offs between robust accuracy and perturbation budget. From Fig. 5, it is clear that the DNN trained within the CMIC-DL framework is more robust against both FGSM and PGD attacks, in comparison with its counterpart trained within the standard CE-based DL framework, thus confirming the by-product. In addition, the clean accuracy for the models trained within the CE-based DL and CMIC-DL frameworks are 99.14% and 99.21%, respectively, showcasing that the accuracy over the benign samples is not sacrificed for a higher robust accuracy.
We conclude this subsection by pointing out that although CMIC-DL can improve the robustness of DNNs trained therein against adversarial attacks, CMIC-DL itself is not a framework for adversarial training. In our future work, we will fully address CMIC adversarial training by extending the performance metrics of CMI, \(\Gamma\) (separation), and NCMI to the new concepts of robust CMI, robust separation, and robust NCMI.
Fig. 4: The evolution curves of (a) CMI, (b) \(\Gamma\), (c) NCMI, and (d) error rate over the course of training ResNet-56 over CIFAR-100 dataset using CE and CMIC frameworks.

## VI Conclusion

Viewing a DNN as a mapping from \(x\in\mathbb{R}^{d}\) to \(P_{x}\), in this paper we have introduced conditional mutual information (CMI) and normalized conditional mutual information (NCMI) as new performance metrics of the DNN to measure the intra-class concentration and inter-class separation of the DNN. As new performance metrics, CMI and NCMI are in parallel with error rate. We then have used CMI and NCMI to evaluate and compare DNNs of different architectures and sizes. It turns out that NCMI and error rate have essentially a positive linear relationship with their correlation \(\geq 0.99\). As such, the NCMI value of a DNN can be used to gauge the prediction performance of the DNN.
Based on NCMI, we have then developed a learning framework called CMI constrained deep learning (CMIC-DL) within which the conventional cross entropy function is minimized subject to an NCMI constraint. A novel alternating learning algorithm has been further proposed to solve such a constrained optimization problem. Extensive experiment results consistently show that DNNs trained within the CMIC-DL framework outperform those trained using the other DL benchmark methods discussed in the paper. In addition, with CMI and NCMI as performance metrics for measuring the concentration and separation of a DNN, the learning process of the DNN can also be analyzed and visualized through the evolution of CMI and NCMI.
Open problems include (1) how to extend CMI and NCMI to define concepts of robust CMI, robust separation, and robust NCMI; (2) how to extend CMIC-DL to robust CMIC-DL to fully address adversarial training; (3) how to use CMI to help estimate the conditional probability distribution of \(Y\) given \(X\); and (4) the investigation of minimizing NCMI alone without using the standard cross entropy objective function by modifying a predictor. These problems will be addressed in the future.
## Appendix A Proof of Theorem 2
Since \(\lambda>0\) and \(\beta>0\), it suffices to show that
\[I(X;\hat{Y}|Y)=\min_{\{Q_{c}\}_{c\in[C]}}\mathbf{E}[D(P_{X,\theta}||Q_{Y})]. \tag{29}\]
To this end, we apply (15) to get the following:
\[I(X;\hat{Y}|Y)=\sum_{y}\sum_{x}P(x,y)\left[\sum_{i=1}^{C}P(\hat{Y}=i|x,\theta)\ln\frac{P(\hat{Y}=i|x,\theta)}{P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right]\] \[=\sum_{y}\sum_{x}P(x,y)\left[\sum_{i=1}^{C}P(\hat{Y}=i|x,\theta)\left(\ln\frac{P(\hat{Y}=i|x,\theta)}{Q_{y}(i)}+\ln\frac{Q_{y}(i)}{P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right)\right]\] \[=\mathbf{E}[D(P_{X,\theta}||Q_{Y})]+\sum_{y}\sum_{x}P(x,y)\left[\sum_{i=1}^{C}P(\hat{Y}=i|x,\theta)\ln\frac{Q_{y}(i)}{P_{\hat{Y}|y}(\hat{Y}=i|Y=y)}\right]\] \[=\mathbf{E}[D(P_{X,\theta}||Q_{Y})]-\sum_{y}P_{Y}(y)D(P_{\hat{Y}|y}||Q_{y})\] \[\leq\mathbf{E}[D(P_{X,\theta}||Q_{Y})] \tag{30}\]
for any \(Q_{y}\in\mathcal{P}([C]),y\in[C]\), where the inequality above is due to the nonnegativity of KL divergence. Thus
\[I(X;\hat{Y}|Y)\leq\min_{\{Q_{c}\}_{c\in[C]}}\mathbf{E}[D(P_{X,\theta}||Q_{Y})]. \tag{31}\]
On the other hand, (30) becomes an equality whenever
\[Q_{c}=P_{\hat{Y}|y=c},\forall c\in[C]. \tag{32}\]
This, together with (30), implies (29), and hence completes the proof of Theorem 2.
## Appendix B Other Information Quantities for Separation
In this Appendix, we explore other information quantities which can also be defined and used as a measure for the inter-class separation of the DNN: \(x\in\mathbb{R}^{d}\to P_{x}\). Specifically, two more information quantities \(\Gamma^{\prime}\) and \(\Gamma^{\prime\prime}\) are introduced and compared with \(\Gamma\) defined in (19). Although they are more or less equivalent, \(\Gamma\) is more convenient for selecting hyper parameters in our CMIC-DL framework.
### _Information Quantity \(\Gamma^{\prime}\)_
Fig. 5: The robustness of a simple DNN over MNIST dataset trained within the conventional CE-based DL and CMIC-DL frameworks against (a) FGSM attack and (b) PGD attack with 5 iterations, respectively.

A possible information quantity for measuring inter-class separation can be defined as follows

\[\Gamma^{\prime}=\mathbf{E}\left[I_{\{Y\neq V\}}D(P_{X}||P_{U})\right], \tag{33}\]

where the cross entropy function \(H(P_{X},P_{U})\) in (19) is replaced by the KL divergence \(D(P_{X}||P_{U})\). To connect \(\Gamma^{\prime}\) with CMI and \(\Gamma\), we simplify \(\Gamma^{\prime}\) as follows:
\[\Gamma^{\prime} =\mathbf{E}\left[I_{\{Y\neq V\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\ln \frac{P(\hat{Y}=i|X)}{P(\hat{Y}=i|U)}\right]\] \[=\mathbf{E}\left[I_{\{Y\neq V\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\left( \ln\frac{P(\hat{Y}=i|X)}{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}\right.\right.\] \[\left.\left.+\ln\frac{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}{P(\hat{Y}=i|U)} \right)\right]\] \[=\mathbf{E}\left[I_{\{Y\neq V\}}D(P_{X}||P_{\hat{Y}|Y})\right]\] \[+\mathbf{E}\left[I_{\{Y\neq V\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\ln \frac{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}{P(\hat{Y}=i|U)}\right] \tag{34}\] \[=\mathbf{E}\left[(1-P(Y))D(P_{X}||P_{\hat{Y}|Y})\right]\] (35) \[+\mathbf{E}\left[I_{\{Y\neq V\}}\sum_{i=1}^{C}P_{\hat{Y}|Y}(\hat{ Y}=i|Y)\ln\frac{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}{P(\hat{Y}=i|U)}\right]\] (36) \[=\mathbf{E}\left[(1-P(Y))D(P_{X}||P_{\hat{Y}|Y})\right]\] \[+\mathbf{E}\left[I_{\{Y\neq V\}}D(P_{\hat{Y}|Y}||P_{U})\right], \tag{37}\]
where (35) is due to the fact that \(V\) is independent of \((X,Y)\), and (36) follows from the independence of \((X,Y)\) and \((U,V)\) and the Markov chain \(Y\to X\rightarrow\hat{Y}\).
Note that the first expectation in (37) is related to the CMI \(I(X;\hat{Y}|Y)\). Indeed, when \(P(Y)\) is equal to a constant, i.e., \(1/C\), which is true in most empirical cases, it follows from (15) that
\[\mathbf{E}\left[(1-P(Y))D(P_{X}||P_{\hat{Y}|Y})\right]=(1-\frac{1}{C})I(X,\hat {Y}|Y),\]
which, together with (37), implies that
\[\Gamma^{\prime}=(1-\frac{1}{C})I(X,\hat{Y}|Y)+\mathbf{E}\left[I_{\{Y\neq V\}} D(P_{\hat{Y}|Y}||P_{U})\right]. \tag{38}\]
Plugging (38) into the optimization problem in (24), we get the following optimization problem
\[\min_{\theta} \mathbf{E}_{X}\left[H(P_{Y|X},P_{X,\theta})\right]+\left(\lambda -\left(\beta-\frac{\beta}{C}\right)\right)I(X;\hat{Y}|Y)\] \[\quad-\beta\mathbf{E}\left[I_{\{Y\neq V\}}D(P_{\hat{Y}|Y}||P_{U, \theta})\right]. \tag{39}\]
Thus, if \(\Gamma^{\prime}\) was used as a measure for inter-class separation, then it would cancel out part of the CMI, making the selection of hyper parameters \(\lambda\) and \(\beta\) become harder.
### _Information Quantity \(\Gamma^{\prime\prime}\)_
Equations (38) and (39) suggest that one might use the following information quantity as a measure for inter-class separation instead
\[\Gamma^{\prime\prime}=\mathbf{E}\left[I_{\{Y\neq V\}}D(P_{\hat{Y}|Y}||P_{U}) \right]. \tag{40}\]
In fact, \(\Gamma^{\prime\prime}\) has a decent physical meaning in the sense that it measures the average of distances between the output distributions of the DNN in response to input sample instances and the centroids of the clusters with different ground truth labels.
To connect \(\Gamma^{\prime\prime}\) with CMI and \(\Gamma\), we further simplify \(\Gamma^{\prime\prime}\) as follows
\[\Gamma^{\prime\prime} =\mathbf{E}\left[I_{\{Y\neq V\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\ln \frac{P_{\hat{Y}|Y}(\hat{Y}=i|Y)}{P(\hat{Y}=i|U)}\right] \tag{41}\] \[=\mathbf{E}\left[I_{\{Y\neq V\}}H(P_{X},P_{U})\right]\] \[+\mathbf{E}\left[I_{\{Y\neq V\}}\sum_{i=1}^{C}P(\hat{Y}=i|X)\ln P _{\hat{Y}|Y}(\hat{Y}=i|Y)\right]\] \[=\Gamma+\mathbf{E}\left[I_{\{Y\neq V\}}\sum_{i=1}^{C}P_{\hat{Y}|Y} (\hat{Y}=i|Y)\ln P_{\hat{Y}|Y}(\hat{Y}=i|Y)\right]\] (42) \[=\Gamma-\mathbf{E}\left[(1-P(Y))H(P_{\hat{Y}|Y},P_{\hat{Y}|Y}) \right]. \tag{43}\]
In the above, (41) follows from (34) and (37), (42) is due to the fact that \(X\) is independent of \(V\), and \(Y\to X\rightarrow\hat{Y}\) forms a Markov chain, and (43) is attributable to the independence of \(V\) and \(Y\).
Note again that the second term in (43) is related to the CMI \(I(X;\hat{Y}|Y)\). Indeed, when \(P(Y)\) is equal to a constant, i.e., \(1/C\), which is true in most empirical cases, it follows that
\[\mathbf{E}\left[(1-P(Y))H(P_{\hat{Y}|Y},P_{\hat{Y}|Y})\right]\] \[= (1-\frac{1}{C})H(\hat{Y}|Y)\] \[= (1-\frac{1}{C})\left[I(X;\hat{Y}|Y)+H(\hat{Y}|X,Y)\right]\] \[= (1-\frac{1}{C})\left[I(X;\hat{Y}|Y)+H(\hat{Y}|X)\right], \tag{44}\]
where \(H(W|Z)\) denotes the Shannon conditional entropy of the random variable \(W\) given the random variable \(Z\), and (44) is due to the Markov chain \(Y\to X\rightarrow\hat{Y}\). Combining (44) with (43) yields
\[\Gamma^{\prime\prime}=\Gamma-(1-\frac{1}{C})\left[I(X;\hat{Y}|Y)+H(\hat{Y}|X) \right]. \tag{45}\]
Plugging (45) into the optimization problem in (24), we get the following optimization problem
\[\min_{\theta} \mathbf{E}_{X}\left[H(P_{Y|X},P_{X,\theta})\right]+\left(\lambda+ \left(\beta-\frac{\beta}{C}\right)\right)I(X;\hat{Y}|Y)\] \[\quad+\beta(1-\frac{1}{C})H(\hat{Y}|X)-\beta\Gamma. \tag{46}\]
Thus, if \(\Gamma^{\prime\prime}\) was used as a measure for inter-class separation, then it would further enhance the effect of the CMI, making the selection of hyper parameters \(\lambda\) and \(\beta\) become harder as well.
## Acknowledgments
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under Grant RGPIN203035-22, and in part by the Canada Research Chairs Program. |
2302.14566 | Continuous interaction with a smart speaker via low-dimensional
embeddings of dynamic hand pose | This paper presents a new continuous interaction strategy with visual
feedback of hand pose and mid-air gesture recognition and control for a smart
music speaker, which utilizes only 2 video frames to recognize gestures.
Frame-based hand pose features from MediaPipe Hands, containing 21 landmarks,
are embedded into a 2 dimensional pose space by an autoencoder. The
corresponding space for interaction with the music content is created by
embedding high-dimensional music track profiles to a compatible two-dimensional
embedding. A PointNet-based model is then applied to classify gestures which
are used to control the device interaction or explore music spaces. By jointly
optimising the autoencoder with the classifier, we manage to learn a more
useful embedding space for discriminating gestures. We demonstrate the
functionality of the system with experienced users selecting different musical
moods by varying their hand pose. | Songpei Xu, Chaitanya Kaul, Xuri Ge, Roderick Murray-Smith | 2023-02-28T13:43:02Z | http://arxiv.org/abs/2302.14566v1 | # Continuous Interaction with a Smart Speaker via Low-Dimensional Embeddings of Dynamic Hand Pose
###### Abstract
This paper presents a new continuous interaction strategy with visual feedback of hand pose and mid-air gesture recognition and control for a smart music speaker, which utilizes only 2 video frames to recognize gestures.
Frame-based hand pose features from MediaPipe Hands, containing 21 landmarks, are embedded into a 2 dimensional pose space by an autoencoder. The corresponding space for interaction with the music content is created by embedding high-dimensional music track profiles to a compatible two-dimensional embedding. A PointNet-based model is then applied to classify gestures which are used to control the device interaction or explore music spaces. By jointly optimising the autoencoder with the classifier, we manage to learn a more useful embedding space for discriminating gestures.
We demonstrate the functionality of the system with experienced users selecting different musical moods by varying their hand pose.
Songpei Xu, Chaitanya Kaul, Xuri Ge, Roderick Murray-Smith (School of Computing Science, University of Glasgow, UK)

Keywords: Continuous mid-air hand pose control, MediaPipe Hands, Low-dimensional embeddings
Footnote: Thanks to Moodagent for partial funding of S.X. RM-S & C.K. are partially funded by EPSRC projects EP/R018634/1 and EP/MO1326X/1.
## 1 Introduction
Mid-air gesture recognition and control has recently attracted increasing research attention in multimedia applications. Early work such as Kinect in the Kitchen [1] explored mid-air gestural control and feedback using a Kinect in cooking scenarios, where common devices have limited displays and touch is less appropriate while cooking. More recently, several studies [2, 3] have proposed mid-air interaction methods for driving based on mid-air gesture recognition and control, which can prevent driver distraction caused by conventional physical handling or touch. However, conventional gesture interaction methods require a physical device as support and focus on physical controls and interactions, such as simple music control and device selection. Hence, in recent years deep learning based gesture recognition methods [4, 5] have been studied to enhance mid-air gesture control and interaction for many applications. For instance, [4] proposed a deep learning architecture combining a 3D Convolutional Neural Network (3D-CNN) and a Long Short-Term Memory (LSTM) network, which takes advantage of spatio-temporal information from 30-frame video sequences. Although these methods achieve significant improvements, they remain unsatisfactory due to their dependence on long frame sequences and high-dimensional feature inputs. In addition, most current methods do not address the interpretability of the user's gesture interaction process, which is widely regarded as a critical component in real-world gesture control.
To address these problems, we propose a straightforward solution that combines gesture recognition with low-dimensional embeddings, using the embeddings to reduce the dimensionality of multi-frame features and to visualize the gestures. Low-dimensional feature embeddings [6, 7, 8] are very popular for real-time tasks, where gesture recognition and interaction are temporally connected, due to the multiple types of sensors used for data collection. These embeddings can be linear mappings based on principal component analysis (PCA) [6] and factor analysis (FA) [7], or non-linear mappings based on t-distributed stochastic neighbor embedding (t-SNE) [7], uniform manifold approximation and projection (UMAP) [8], and autoencoders, all of which enable effective exploration of low-dimensional spaces. However, the linear nature of PCA limits its ability to project the features to a lower dimension without losing information. t-SNE and UMAP can perform non-linear dimensionality reduction, but they cannot reduce the additional costs associated with subsequent classification. In this work, we design a fully-connected autoencoder to reduce the dimension of the detected hand pose features and to learn a better low-dimensional pose space for interaction. The autoencoder takes hand pose features as inputs, reduces their dimensionality via an encoder to obtain low-dimensional latent features, and then reconstructs them using a decoder. This is an unsupervised process, and by training on a dataset containing numerous gestures, we obtain generalization ability and expressiveness. Our proposed autoencoder reduces the gestures to an interactive 2D space, which facilitates visualization of the embedded pose space mapped to the corresponding music space and provides a more intuitive hand-pose-based exploration. Temporal information from input video sequences is exploited in many gesture or action recognition studies to further improve classification accuracy [9, 4]. However, the dependence on longer frame sequences makes these models difficult to use in truly real-time interaction, and for gesture recognition and control applied to realistic scenarios, interaction response delays tend to annoy users and reduce perceived usability [10, 11]. In this paper, we propose a simple PointNet-based classification network that recognizes the predefined discrete gestures and continuous hand poses from only a few frames (2 frames) of low-dimensional inputs (2 dimensions) produced by the autoencoder. Discrete gestures provide feedback only after the full gesture has been triggered, while continuous gestures provide real-time feedback while the gesture is in progress. Compared to methods [12, 13, 14] that focus only on discrete gesture interaction, our approach handles both discrete and continuous gesture-based interaction scenarios. Finally, we define corresponding functions for the different recognized gestures, including discrete gesture control for music start/stop and continuous hand pose control for real-time exploration of the musical space. Different from other gestural interaction strategies [15] and video processing methods [16], our proposed pipeline overcomes to some extent the disadvantages of high-dimensional feature input and long sequence dependence, and implements continuous hand pose control to explore the music space. Specifically, we map the predicted continuous hand pose to a music space with different properties, where the two-dimensional hand pose space is embedded by the autoencoder's encoder. The low frame dependency of the gesture recognition stage gives our model the advantage of low latency, and the visible, autoencoder-based interaction gives the user more freedom of choice and exploration, which has not been exploited in the literature.
## 2 Methodology
Fig. 1 presents a detailed structure of our proposed pipeline for interaction with a smart music speaker. It includes generating a low-dimensional embedding (Section 2.1), gesture classification (Section 2.2), and Interaction (Section 2.3).
### Low-dimensional Embedding
In this work, the main purpose of the low-dimensional embedding is to make the spatial distribution of hand pose features visualizable, so that gesture interactions are more understandable and more controllable when interacting with a music space. Specifically, MediaPipe Hands [17, 18], a real-time hand landmark detection model, is used to extract the 3-dimensional (3D) coordinates of 21 hand landmarks, which saves significant model space compared to directly using pixel-level images (image size \(480\times 620\)). An autoencoder based on fully connected layers [19] is then designed to reduce the dimensionality of the 3D coordinates of the 21 hand landmarks. The encoder and decoder each contain four fully-connected layers, and the layers use LeakyReLU [20] for non-linear activation. The numbers of neurons in the four fully-connected layers of the encoder are 128, 96, 64 and 2, respectively, and the reverse in the decoder.
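A minimal PyTorch sketch of the described autoencoder is given below. The 63-dimensional input (21 landmarks with 3 coordinates each) and the layer widths follow the text; the class and variable names are ours, and applying LeakyReLU after every layer (including the 2-D bottleneck and the output) is a simplifying assumption.

```python
import torch.nn as nn

class HandPoseAutoencoder(nn.Module):
    """Fully-connected autoencoder embedding 21x3 MediaPipe landmarks into 2D."""

    def __init__(self, in_dim=63, widths=(128, 96, 64, 2), slope=0.01):
        super().__init__()
        enc, last = [], in_dim
        for w in widths:                               # 63 -> 128 -> 96 -> 64 -> 2
            enc += [nn.Linear(last, w), nn.LeakyReLU(slope)]
            last = w
        self.encoder = nn.Sequential(*enc)
        dec, last = [], widths[-1]
        for w in list(widths[-2::-1]) + [in_dim]:      # 2 -> 64 -> 96 -> 128 -> 63
            dec += [nn.Linear(last, w), nn.LeakyReLU(slope)]
            last = w
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)          # 2-D pose-space embedding
        return self.decoder(z), z
```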
### Gesture Classification
We employ a highly efficient and effective PointNet [21], which directly consumes point information through multiple linear layers, to classify the predefined gestures. The PointNet-based classification network reduces model size and training time by using the low-dimensional inputs from the autoencoder, and joint learning with the classifier also helps the autoencoder produce a pose space with better class separability. Furthermore, inspired by popular sequence-based methods [4, 16], we also exploit the sequence information from the gesture video. Specifically, frame-based sequence features are encoded by the autoencoder and used as inputs to the classification network, which outputs the predicted gesture categories. In this work, we explore single-, 2- and 8-frame sequence inputs for the PointNet-based classification, and we choose 2-frame sequences as our final inputs, trading off classification performance against time delay.
Our goal is to jointly learn the parameters of the proposed fully-connected layer based autoencoder and PointNet-based classification by minimizing a loss function over the training set, which employs mean square loss [22] and cross-entropy loss [23], respectively. We employ Adam [24] with 0.001 weight decay and 0.001 learning rate as the optimizer of our joint model.
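The joint optimisation can be sketched as follows, assuming the autoencoder from the previous snippet and a classifier `clf` that maps a 2-frame sequence of 2-D embeddings to the six gesture classes. The equal weighting of the two loss terms is our assumption; the paper only states which losses and optimizer settings are used.

```python
import torch
import torch.nn as nn

def train_joint(autoencoder, clf, loader, epochs=50, device="cpu"):
    """Jointly optimise the autoencoder (MSE) and the gesture classifier (CE)."""
    params = list(autoencoder.parameters()) + list(clf.parameters())
    opt = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-3)
    mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()
    for _ in range(epochs):
        for frames, gesture in loader:           # frames: (B, 2, 63) landmark sequences
            frames, gesture = frames.to(device), gesture.to(device)
            recon, z = autoencoder(frames)       # z: (B, 2, 2) per-frame embeddings
            logits = clf(z)                      # classify the 2-frame sequence
            loss = mse(recon, frames) + ce(logits, gesture)  # equal weighting assumed
            opt.zero_grad()
            loss.backward()
            opt.step()
```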
### Interaction
In this work, we anticipate that users will interact with the smart music speaker through a combination of discrete gestures and continuous hand poses. We propose a novel, user-friendly strategy for control interaction and exploration in a visible music space, where a discrete gesture (Pinch) is used for activation and the continuous hand pose (Continuous arm open/close) is used for continuous exploration. And other gestures will be used for other interactions in future works. Our music data is provided by our industrial partner, Moodagent, including about 55,000 music tracks. Each track is represented by 34 features, including predicted subjective scores for 6 emotion types, 14 genre types, and 14 style types. These features are derived from the track's audio signal using a convolutional neural network to predict human subjective classifications. The music features are embedded by UMAP down to a 2-dimensional music space for human interaction. In this work, we focus on the exploration of a music space with different emotions, including sadness, joy, fear, erotic, anger and tenderness, which the user can interact with via continuous hand pose changes. To connect the pose space to the music space, we use a physical mapping, _i.e._, first scaling the music space and the pose space to the same range and computing the cluster centres for each type of music and then computing the distance to the coordinates of the real-time pose in the two-dimensional pose space. The music category with the closest distance to the coordinates of the real-time pose will be highlighted. We mark different emotions in different colours in the music space. The colours range
from light to dark, indicating light to heavy emotional expressions of music. In this way, users have more freedom and controllability to explore the music space with an entire music database by the continuous mid-air hand pose movement, as shown in Fig. 1. In addition, we explore the possibility of using alternative representations, such as quaternion [25, 26], to make pose spaces more stable.

Figure 1: Pipeline of mid-air gesture control
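The mapping from the 2-D pose space to the 2-D music space described in Section 2.3 (rescale both spaces to a common range, compute per-emotion cluster centres, and highlight the centre nearest to the live pose) can be sketched as follows; all names are ours.

```python
import numpy as np

def nearest_emotion(pose_xy, music_xy, emotions):
    """Return the emotion whose cluster centre is closest to the live hand pose."""
    def to_unit(p):
        p = np.asarray(p, dtype=float)
        lo, hi = p.min(axis=0), p.max(axis=0)
        return (p - lo) / (hi - lo)

    music = to_unit(music_xy)                       # rescale the music embedding
    centres = {e: music[np.asarray(emotions) == e].mean(axis=0)
               for e in set(emotions)}              # per-emotion cluster centres
    # The live 2-D pose is assumed to be rescaled to the same unit range already.
    pose = np.asarray(pose_xy, dtype=float)
    return min(centres, key=lambda e: np.linalg.norm(pose - centres[e]))
```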
## 3 Experiments
### Dataset
We investigate and design interactive control gestures that conform to human habits for the smart music speaker, including 'continuous arm open' (the arm and hand move away from the body), 'continuous arm close' (the arm and hand move towards the body), index finger drawing 'circle clockwise', drawing 'circle counterclockwise', 'pinch' and 'double-pinch'. Since these gestures come in pairs with opposite directions, we collect 6 gestures in total. Specifically, using the Intel RealSense LiDAR Camera L515 depth camera (frame rate 30 fps), we collected 25 video clips per gesture from 7 volunteers. Each video lasts between 1 and 3 seconds. To avoid the influence of the background, we choose a white wall about one meter away from the camera as the background, and the recorded gestures are 40-50 cm away from the camera. We then extract over 60,000 frames containing gestures from the collected video clips. Table 1 provides detailed information about the collected gesture dataset.
### Experimental Results
In this section, we first present an empirical finding that highlights the effectiveness of a well-selected fully-connected autoencoder in the proposed pipeline, which will get low-dimensional pose spaces for interactions. Then the effectiveness of well-designed PointNet-based classifier will be proved. Finally, the visualization of mid-air hand pose interactions for a smart music speaker will be given.
**The effectiveness of the autoencoder.** As shown in Fig. 2, the 2D outputs of the different gestures from the encoder in the autoencoder are plotted in a 2D space on the display. Compared with the widely used UMAP (indicated by (a), (a') and (a")), which has significant effects on dimensionality reduction [8] and clustering [27], the proposed fully-connected layer based autoencoder (indicated by (b), (b') and (b")) can better distinguish the distribution of different gestures with lower model complexity. This allows users to more clearly and intuitively see the positions of different gestures in the pose space and the relationship between different gestures. When hand pose points in the pose space are sufficiently dispersed, subtle hand pose changes will be clearly tracked. Furthermore, we explore the distributional effects of using different frame sequences (2-frame and 8-frame) on the encoding of the proposed fully-connected autoencoder. By comparing the visualization results with the first column of Fig. 2 with single-frame inputs, using time-sequence information can significantly improve inter-class gesture clustering and intra-class dispersion. As low latency is very important for user interaction, we focus on 2 frames in our subsequent study, which can avoid long time latency caused by longer sequence dependencies in other methods [4].
**Effects of classification.** Different from previous approaches, _e.g._ [19], where only clustering is used for gesture reduction and visualization, in this paper we utilize a PointNet-based classification network to recognize the mid-air gestures for interaction and to further guide the visualization of the pose space distribution. As shown in the third row of Fig. 2, by joint learning with the classification network, the proposed autoencoder can better distinguish the spatial distribution of inter- and intra-class mid-air gestures. Although this increases the training time, there is no additional cost in the inference process of gesture feature dimensionality reduction and clustering. In addition, Table 2 provides the detailed classification results for different frame-based gestural sequence features from the autoencoder, as well as the inference time for the entire mid-air hand pose interaction process of each sequence. Compared to the 12.6 ms required by the original PointNet without the autoencoder, recognition and interaction with the autoencoder take 2.3 ms, 2.4 ms and 3 ms on 1-, 2- and 8-frame sequences, respectively. This demonstrates that joint learning of the fully-connected autoencoder and the PointNet-based classifier can improve the clustering effect
\begin{table}
\begin{tabular}{c|c|c c|c} \hline No. & Gesture & Train & Test & Total \\ \hline
1 & Continuous arm open & 8793 & 2209 & 11002 \\
2 & Continuous arm close & 9096 & 1711 & 10807 \\
3 & Circle clockwise & 8647 & 1873 & 10520 \\
4 & Circle counterclockwise & 8517 & 1943 & 10460 \\
5 & Pinch & 7588 & 1593 & 9181 \\
6 & Double pinch & 6742 & 1513 & 8255 \\ \hline & Total & 49383 & 10842 & 60225 \\ \hline \end{tabular}
\end{table}
Table 1: Overview information of our collected Gesture frames.
of the autoencoder while also maintaining classification accuracy. Finally, balancing classification effectiveness against the low-latency requirement, we choose 2-frame sequences as inputs.
**Visualization of user interaction and exploration.** In order to provide a better experience and understanding when using the smart music player, we display the selected music position and use the dynamic hand pose continuous control process to map to locations in the music space. As shown in the top row of Fig. 3, when we specify different target music positions in music spaces, they can all be reached by continuous movement and exploration of mid-air hand pose. For the Fig. 3 (d), (e), we measure the time for an experienced user to reach the specified target point when exploring the pose space for two consecutive times, 4.4 s and 2.1 s, respectively. This demonstrates that users can learn to explore the music space by continuous dynamic mid-air hand pose control to enhance their understanding of interaction with the music space, and thus reach the goals faster. The three different tracks of hand pose in the Fig. 3 (f) proves that users can reach the same target music position with continuous control by different dynamic hand pose, and that the starting position and pose of the hand does not affect the exploration of the target music. Notably, the delay of the interaction process of a frame-based gesture sequence (2 frames) is about 2.4 ms, as shown in Table 2, including the inference of autoencoder and classifier and drawing.
In addition, we find that different experienced users at different distances from the camera usually produce different control results in the interaction with the music space for the same hand pose. To achieve a more stable interaction with the music space, we further explore the use of quaternions [26] to avoid the effect of hand size, hand position in the camera field, and distance from the camera on the gesture embedding. Fig. 4 compares two experienced users (denoted U1 and U2) with different palm sizes (palm width and palm length), without and with quaternion conversion ((a) and (b), respectively), controlling the music space with the same gesture at different locations (start position and distance) from the camera. Specifically, the palm width and length are 7 cm and 15 cm for U1 and 10 cm and 17 cm for U2, where palm width is the distance across the widest part of the palm and palm length is the distance from the root of the palm to the tip of the middle finger. P1 and P2 indicate different starting positions, with a difference of 20 cm. D1 and D2 indicate different distances from the camera, 45 cm and 100 cm, respectively. Comparing (a) and (b) in Fig. 4 suggests that the use of quaternions effectively reduces the effects of hand size, hand position and distance from the camera on the low-dimensional embedding of the gestures, thus allowing the interactive system to work more stably.
## 4 Conclusion
In this work, we study the problem of mid-air hand pose control and visible interaction for a smart music speaker and propose a novel pose space encoding and visualization model by a fully-connected autoencoder joined with a PointNet-based gesture classification network. Specifically, a new mid-air gesture dataset is collected to train and evaluate the proposed mid-air gesture recognition and control method. The proposed autoencoder embeds gestures into low-dimensional spaces suitable for visualisation and interaction, which helps to unify the pose space with the music space for interactions. In addition, the auxiliary classification network further improves the clustering of gestures in pose space and maintains outstanding classification performance. Moreover, the proposed interaction strategy requires only a few gesture frames (2-frame sequence) of input to get a continuous control that the user can explore, which contributes to the reduction of interaction latency.
The paper provides an exploratory demonstration of the ability to control and select different areas of the user space via continuous hand pose changes by experienced users.
|
2309.10472 | Fully automated landmarking and facial segmentation on 3D photographs | Three-dimensional facial stereophotogrammetry provides a detailed
representation of craniofacial soft tissue without the use of ionizing
radiation. While manual annotation of landmarks serves as the current gold
standard for cephalometric analysis, it is a time-consuming process and is
prone to human error. The aim in this study was to develop and evaluate an
automated cephalometric annotation method using a deep learning-based approach.
Ten landmarks were manually annotated on 2897 3D facial photographs by a single
observer. The automated landmarking workflow involved two successive
DiffusionNet models and additional algorithms for facial segmentation. The
dataset was randomly divided into a training and test dataset. The training
dataset was used to train the deep learning networks, whereas the test dataset
was used to evaluate the performance of the automated workflow. The precision
of the workflow was evaluated by calculating the Euclidean distances between
the automated and manual landmarks and compared to the intra-observer and
inter-observer variability of manual annotation and the semi-automated
landmarking method. The workflow was successful in 98.6% of all test cases. The
deep learning-based landmarking method achieved precise and consistent landmark
annotation. The mean precision of 1.69 (+/-1.15) mm was comparable to the
inter-observer variability (1.31 +/-0.91 mm) of manual annotation. The
Euclidean distance between the automated and manual landmarks was within 2 mm
in 69%. Automated landmark annotation on 3D photographs was achieved with the
DiffusionNet-based approach. The proposed method allows quantitative analysis
of large datasets and may be used in diagnosis, follow-up, and virtual surgical
planning. | Bo Berends, Freek Bielevelt, Ruud Schreurs, Shankeeth Vinayahalingam, Thomas Maal, Guido de Jong | 2023-09-19T09:39:55Z | http://arxiv.org/abs/2309.10472v1 | # Fully automated landmarking and facial segmentation on 3D photographs
###### Abstract
Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim in this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs by a single observer. The annotation process was repeated by the first observer, a second observer, and a third observer on 50 randomly selected 3D photos to assess intra-observer and inter-observer variability. The automated landmarking workflow involved two successive DiffusionNet models and additional algorithms for facial segmentation. The dataset was randomly divided into a training (85%) and test (15%) dataset. The training dataset was used to train the deep learning networks, whereas the test dataset was used to evaluate the performance of the automated workflow. The landmarks were also annotated using a semi-automatic method on all 3D photographs. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks and compared to the intra-observer and inter-observer variability of manual annotation and the semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 \(\pm\) 1.15 mm was comparable to the inter-observer variability (1.31 \(\pm\) 0.91 mm) of manual annotation. The Euclidean distance between the automated and manual landmarks was within 2 mm in 69%. Automated landmark annotation on 3D photographs was achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning.
Keywords: Deep Learning, DiffusionNet, Cephalometry, Landmarks, 3D Photogrammetry, 3D meshes

The fields of genetics, orthodontics, craniomaxillofacial surgery, and plastic surgery have greatly benefitted from advances in imaging technology, particularly in three-dimensional (3D) imaging. Three-dimensional stereophotogrammetry has gained popularity in these fields since it can capture a detailed and accurate representation of craniofacial soft tissue without the use of ionizing radiation [1, 2, 3, 4].
Cephalometric analysis can be performed on 3D stereophotographs to extract information about the position of individual landmarks or distances and angles between several landmarks, with the purpose of objectifying clinical observations [1]. Despite being a commonly used diagnostic tool in the craniofacial
region, landmarking often remains a manual task that is time-consuming, prone to observer variability, and affected by observer fatigue and skill level. Park et al. (2019); Stewart et al. (2008) Therefore, there has been a growing interest in using artificial intelligence (AI), such as deep learning and machine learning algorithms to automate the landmark identification process.
Several studies have described the use of deep learning algorithms for the automation of hard-tissue landmark extraction for cephalometric analysis Guo et al. (2020); Serafin et al. (2023). Studies that include soft-tissue landmarks utilize (projective) 2D imaging, are pose dependent, or require manual input Manal et al. (2019); White et al. (2019). Since only a limited number of studies were performed on the automated extraction of facial soft-tissue landmarks from 3D photographs, this study aimed to develop and validate an automated approach for the extraction of soft-tissue facial landmarks from 3D photographs using deep learning Baksi et al. (2021); Guo et al. (2013).
**Material and methods**
Data acquisition
In total, 3188 3D facial photographs were collected from two databases: the Headspace database (n=1519) Dai et al. (2020); Pears et al. (2018) and the Radboudumc's longitudinal database (n=1669). The Radboudumc's data consisted of healthy volunteers (n=1153) and Oral and Maxillofacial Surgery patients (n=516). The Radboudumc dataset was collected in accordance with the World Medical Association Declaration of Helsinki on medical research ethics. The following ethical approvals and waivers were used: CMO 2007/163; ARB NL 17934.091.07; RUMC CMO 2019-5793. All data were captured using 3dMD's 5-pod 3dMDhead systems (3dMDcranial, 3dMD, Atlanta, Georgia USA). Exclusion criteria were large gaps within the mesh, stitching errors, excessive facial hair interfering with the facial landmarks, meshes that lacked texture (color information), and mesh-texture mismatches. An overview of the data is presented in Table 1.
Data annotation
The 3D photographs were manually annotated by a single observer using the 3DMedX(r) software (v1.2.29.0, 3D Lab Radboudumc, Nijmegen, The Netherlands; details can be found at [https://3dmedx.nl](https://3dmedx.nl)). The following ten cephalometric facial landmarks were annotated: exocanthions, endocanthions, nasion, nose tip, alares, and cheilions. The texture of the 3D photographs was used in the annotation process as a visual cue. Manual annotation was repeated on 50 randomly selected 3D photos by the first observer, a second observer, and a third observer to assess the intra-observer and inter-observer variability.
The automated landmarking workflow consisted of four steps: 1) rough prediction of the landmarks using a DiffusionNet (Sharp et al., 2022) on the original meshes; 2) realignment of the meshes based on the roughly predicted landmarks; 3) segmentation of the facial region through fitting of a template facial mesh using a morphable model; 4) refined landmark prediction on the segmented meshes using a final DiffusionNet. The DiffusionNet models used spatial features only and did not use texture information for the automated landmarking task. An overview of the workflow can be seen in Figure 1.
### Training
The data were randomly divided into two sets, 85% for training and 15% for testing of the DiffusionNet models. As a data augmentation step, the 3D meshes from the training dataset were mirrored over the YZ plane to double the number of scans available for training. No validation set was used during training.
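The mirroring augmentation amounts to negating the x-coordinates of the mesh vertices and the annotated landmarks; a minimal sketch is shown below. The landmark ordering and the left/right relabelling of the bilateral pairs are our assumptions, since the paper does not specify them.

```python
import numpy as np

# Assumed landmark order: exocanthion R/L, endocanthion R/L, nasion, nose tip,
# alare R/L, cheilion R/L. Bilateral pairs are swapped after mirroring.
SWAP = {0: 1, 1: 0, 2: 3, 3: 2, 4: 4, 5: 5, 6: 7, 7: 6, 8: 9, 9: 8}

def mirror_over_yz(vertices, landmarks):
    """Mirror a mesh and its landmarks over the YZ plane (negate x)."""
    v = vertices.copy()
    v[:, 0] *= -1.0
    lm = landmarks.copy()
    lm[:, 0] *= -1.0
    lm = lm[[SWAP[i] for i in range(len(lm))]]   # relabel left/right pairs
    return v, lm
```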
### Step 1: Rough prediction of landmarks
A DiffusionNet, a state-of-the-art and robust deep learning network for 3D surfaces, was utilized for initial prediction of the exocanthions, endocanthions, nasion, nose tip, alares, and cheilions as visualized in Figure 2. (Sharp et al., 2022)
### Preprocessing
To speed up the training process, each mesh was downsampled to a maximum of 25,000 vertices (Garland and Heckbert, 1997). Subsequently, a mask was applied, assigning a value of 1 to all vertices located within 5 mm Euclidean distance of the manually annotated landmarks and a value of 0 to the remaining vertices.
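The per-vertex training mask (1 within 5 mm of a manually annotated landmark, 0 elsewhere) can be built as in the sketch below; variable names are ours, and for per-channel masks the `landmarks` argument would be restricted to the landmarks assigned to that channel.

```python
import numpy as np

def landmark_mask(vertices, landmarks, radius_mm=5.0):
    """Binary per-vertex mask: 1 if a vertex lies within radius_mm of any landmark."""
    # vertices: (V, 3), landmarks: (L, 3), both in millimetres
    d = np.linalg.norm(vertices[:, None, :] - landmarks[None, :, :], axis=-1)
    return (d.min(axis=1) <= radius_mm).astype(np.float32)
```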
Figure 1: Automated landmarking workflow. Step 1: First instance segmentation task for rough landmark prediction. Step 2: Realignment of the meshes using the roughly predicted landmarks. Step 3: Facial region segmentation (white) using MeshMonk (blue wireframe). Step 4: Second instance segmentation task for refined landmark prediction.
### Configuration for the first instance segmentation task (DiffusionNet)
Six output channels were configured for the first instance segmentation task. The two midsagittal landmarks (nasion and nose tip) were assigned an individual channel. The four bilateral landmark pairs were assigned to the four remaining channels. The DiffusionNet model was configured with a C-width (internal dimension) of 256, an MLP (multilayer perceptron layer size) of 256 by 256, and an N-block (number of repeating DiffusionNet blocks) of 12. The network used an Adam optimizer with a Cosine Annealing learning rate of 2 x 10\({}^{\text{-5}}\) and a T\({}_{\text{max}}\) of 50 epochs. Furthermore, a binary cross-entropy loss and a dropout rate of 0.10 were applied. Since the orientation and position of the included 3D meshes was not fixed, the network was trained with Heat Kernel Signature (HKS) Features of the 3D meshes. The final output layer was linear. The model was implemented in PyTorch on a 24 GB NVIDIA RTX A5000 GPU and trained for 200 epochs.
### Post-processing
After the instance segmentation, the model was used to predict which vertices belonged to each of the configured channels. For the symmetrical landmarks, a 3D clustering algorithm was utilized to distinguish the predicted vertex clusters from each other. Subsequently, a weighted combination of the output values (activations), as well as the locations of each of the vertices that received a non-zero activation value, were used to determine the landmark positions using Equation 1.
\[\textit{Location}=\frac{\sum_{i}10^{\,\textit{activation}_{i}}\,\textit{coordinate}_{i}}{\sum_{i}10^{\,\textit{activation}_{i}}} \tag{1}\]
A plane was formed by connecting the predicted nasion, nose tip, and cheilion midpoint, and the plane equation was used to establish whether the bilateral landmarks were on the left or right side of the face.
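A sketch of this post-processing: the landmark position is the activation-weighted average of the positively predicted vertices (Equation 1), and bilateral clusters are assigned to a side via the plane through the nasion, nose tip, and cheilion midpoint. Names are ours, and which sign corresponds to which side depends on the chosen coordinate conventions.

```python
import numpy as np

def weighted_landmark(coords, activations):
    """Equation (1): activation-weighted average of predicted vertex positions."""
    w = np.power(10.0, activations)            # 10**activation per vertex
    return (w[:, None] * coords).sum(axis=0) / w.sum()

def side_of_plane(point, nasion, nose_tip, cheilion_mid):
    """Sign of the plane through the three midsagittal points (left vs. right)."""
    normal = np.cross(nose_tip - nasion, cheilion_mid - nasion)
    return np.sign(np.dot(point - nasion, normal))
```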
Figure 3: Second instance segmentation task. The manually annotated landmarks and corresponding masks that were used for training are visualized in green. The green areas represent the vertices within 3.5 mm of the manually annotated landmark. The positively predicted vertices and the calculated refined predicted landmarks are visualized in red.
Figure 2: First instance segmentation task. The manually annotated landmarks (spheres) and corresponding masks are visualized in green. The green areas represent the vertices within 5 mm of the manually annotated landmark. The roughly predicted landmarks are visualized in yellow. The yellow area represents the positively predicted vertices out of which the rough landmarks will be calculated.
### Step 2: Realignment
Based on the rough prediction of the exocanthions, nasion, and cheilions, the 3D meshes were positioned in a reference frame. The nasion was defined as the origin, with the x-axis running parallel to the line connecting both exocanthions and the z-axis parallel to the nasion-cheilion midpoint line (Figure 1).
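The realignment can be sketched as constructing an orthonormal frame from the rough landmarks: origin at the nasion, x-axis along the exocanthion line, and z-axis towards the cheilion midpoint, orthogonalised against x. The exact orthogonalisation and handedness are our assumptions.

```python
import numpy as np

def reference_frame(nasion, exo_r, exo_l, cheil_r, cheil_l):
    """Rigid transform placing the mesh in the landmark-based reference frame."""
    x = exo_l - exo_r
    x /= np.linalg.norm(x)
    z = 0.5 * (cheil_r + cheil_l) - nasion     # nasion-to-cheilion-midpoint direction
    z -= np.dot(z, x) * x                      # make z orthogonal to x
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                         # completes a right-handed frame
    R = np.stack([x, y, z])                    # rows are the new axes
    return R, nasion

def realign(vertices, R, origin):
    """Express mesh vertices in the reference frame."""
    return (vertices - origin) @ R.T
```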
### Step 3: Facial region segmentation
The MeshMonk algorithm (White et al., 2019), which utilizes a combination of rigid and non-rigid template matching, was used to segment the facial region on the realigned meshes; the default face template of the algorithm was used. The exocanthions, nose tip, and cheilions were used for the initial registration of the facial template mesh to the 3D meshes. The configuration of the MeshMonk fitting algorithm is given in the Appendix. After fitting, the vertices of the aligned 3D meshes that were located further than 15 mm from the fitted template were removed (Figure 1).
The MeshMonk algorithm can also be utilized for landmark annotation. The ten landmarks were collected using this semi-automatic approach to serve as a reference for the precision of the automated annotation approach. In contrast to the automated approach, the manually annotated landmarks were used for template fitting in the semi-automated approach to comply with the MeshMonk workflow. White et al. (2019)
### Step 4: Refined landmark prediction
A second instance segmentation task, using DiffusionNet, was used to predict the landmarks on the realigned and segmented 3D meshes (Figure 3).
### Preprocessing
In contrast to step 1, the meshes were not downsampled. A mask was created in which the vertices within 3.5 mm of the manually annotated landmarks were assigned value 1 and the other vertices were assigned value 0. Default mesh normalization and scaling were applied as provided by the DiffusionNet package (Sharp et al., 2022).
### Configuration for the second instance segmentation task (DiffusionNet)
For the second instance segmentation task, ten output channels were configured: each individual landmark was assigned to an individual channel. The DiffusionNet was configured with a C-width of 384, an MLP of 768, and an N-block of 12. The same optimizer, loss, and dropout were used as for the first network. However, this second network was trained with XYZ features instead of HKS features, since rotation invariance was no longer required after the realignment in step 3 (Figure 1). The model was implemented in PyTorch on a 24 GB NVIDIA RTX A5000 GPU and trained for 200 epochs. The final output layer was linear.
### Post-processing
A weighted combination of the activations, supplemented by the locations of each of the vertices, was again used to determine the final landmark positions.
### Statistical analysis
Statistical analyses were performed on available patient characteristics to assess differences between the source databases and between the training and test data. To assess the intra-observer and inter-observer variability of the manual annotation method, the Euclidean distances between the landmarks annotated by the different observers were calculated. Descriptive statistics were used to summarize the results. The Euclidean distances between the predicted and the manually annotated landmarks were calculated for every test set to evaluate the performance of automated landmarking; descriptive statistics were used for summarizing the results. This was done for both the rough (initial DiffusionNet) and the refined (final DiffusionNet) predictions. The
performance of the automated landmarking workflow was compared to the intra-observer and inter-observer variability of the manual annotation method.
The Euclidean distances between the manually annotated landmarks and the predictions by the semi-automated MeshMonk method were calculated and compared to the precision of the refined predictions using a one-way repeated measures ANOVA test. A p-value \(<\)0.05 was used as a cut-off value for statistical significance.
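The precision metric used throughout the evaluation is the per-landmark Euclidean distance between automated and manual annotations; the minimal sketch below, with our own array layout, also reports the fractions within the thresholds used in Table 6.

```python
import numpy as np

def landmark_errors(pred, manual, thresholds=(2, 3, 4, 5)):
    """Per-landmark Euclidean distances (mm) between predicted and manual landmarks.

    pred, manual: (N, 10, 3) arrays for N test photographs.
    Returns the mean/std per landmark and the fraction within each threshold.
    """
    d = np.linalg.norm(pred - manual, axis=-1)           # (N, 10)
    within = {t: (d < t).mean(axis=0) for t in thresholds}
    return d.mean(axis=0), d.std(axis=0), within
```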
### Results
Based on the stated exclusion criteria, 291 3D photographs were excluded, yielding a total of 2897 3D photographs that were used for training and testing of the developed workflow (Table 1). Most of the exclusions were due to the lack of texture information (n=271). The age and gender characteristics are given in Table 2. A statistically significant difference was found for age and gender between the source databases (p\(<\)0.001 and p\(<\)0.001, respectively). However, there were no statistically significant differences between ages and genders of the training and test splits (p=0.323 and p=0.479, respectively). There were no unknown genders or ages in the test dataset. The training dataset held one transgender case and had five unknown ages.
The intra-observer and interobserver differences of the manual annotation method are summarized in Table 3 and Table 4, respectively. The overall mean intra-observer variability for manual annotation of the ten landmarks was \(0.94\pm 0.71\) mm; the overall mean interobserver variability was \(1.31\pm 0.91\) mm.
The initial DiffusionNet showed an average precision of \(2.66\pm 2.37\) mm, and the complete workflow achieved a precision of \(1.69\pm 1.15\) mm. The performance of both models is summarized in Table 5. The workflow could be completed for 98.6% of the test data; for six 3D photos (1.4%), one of the rough landmarks required for the consecutive steps could not be predicted by the first DiffusionNet. Upon visual inspection, the six excluded 3D photos contained large gaps and/or substantial amounts of information outside the region of interest, such as clothing or hair. Since the workflow could not be completed, these data sets were excluded from the results.
The precision was within 2 mm for 69% of the refined predicted landmarks, within 3 mm for 89% of the landmarks, and within 4 mm for 96% of the landmarks. Table 6 details the precision within these boundaries for the individual landmarks. The exocanthions and alares were found to perform the worst. The precision of the semi-automated MeshMonk method was on average \(1.97\pm 1.34\) mm for the ten landmarks (Figure 4). Compared to this semi-automatic method, the DiffusionNet-based method was found to have significantly better precision for the left exocanthion, endocanthions, nose tip, and cheilions and worse precision for the alares; no significant differences were found for nasion and right exocanthion.
\begin{table}
\begin{tabular}{l|c c c c|c c c} \hline & \multicolumn{4}{c|}{**Age (years)**} & \multicolumn{4}{c}{**Gender**} \\
**Dataset** & Mean & Std & Min & Max & Male & Female & Transgender \\ \hline Headspace & 35.9 & 17.6 & 2 & 90 & 631 (50.7\%) & 613 (49.2\%) & 1 (0.1\%) \\ Controls & 42.1 & 19.4 & 0 & 90 & 492 (43.2\%) & 647 (56.8\%) &. \\ Patients & 27.8 & 10.9 & 13 & 69 & 190 (37.0\%) & 323 (63.0\%) &. \\ \hline \end{tabular}
\end{table}
Table 2: Population characteristics per dataset.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**Exocanthin**} & \multicolumn{2}{c|}{**Endocanthin**} & \multicolumn{2}{c|}{**Nasion**} & \multicolumn{2}{c|}{**Nose tip**} & \multicolumn{2}{c|}{**Alare**} & \multicolumn{2}{c|}{**Cheilion**} \\ \cline{2-13} \multicolumn{1}{c|}{} & _Right_ & _Left_ & _Right_ & _Left_ & & & & _Right_ & _Left_ & _Right_ & _Left_ \\ \hline
**Observer** & 1.16 & 1.14 & 1.08 & 0.87 & 1.64 & 1.16 & 1.34 & 1.27 & 0.93 & 0.97 \\
**1 vs 2** & \(\pm\)0.65 & \(\pm\)0.67 & \(\pm\)0.69 & \(\pm\)0.58 & \(\pm\)0.91 & \(\pm\)0.59 & \(\pm\)0.93 & \(\pm\)0.79 & \(\pm\)0.55 & \(\pm\)0.66 \\ \hline
**Observer** & 1.02 & 0.95 & 1.03 & 1.08 & 1.80 & 1.77 & 1.68 & 1.35 & 1.65 & 1.43 \\
**1 vs 3** & \(\pm\)0.85 & \(\pm\)0.63 & \(\pm\)0.80 & \(\pm\)0.73 & \(\pm\)1.20 & \(\pm\)0.86 & \(\pm\)1.19 & \(\pm\)0.80 & \(\pm\)1.04 & \(\pm\)1.05 \\ \hline
**Observer** & 1.31 & 1.03 & 0.97 & 1.05 & 2.21 & 2.20 & 1.33 & 1.35 & 1.27 & 1.25 \\
**2 vs 3** & \(\pm\)0.88 & \(\pm\)0.57 & \(\pm\)0.73 & \(\pm\)0.64 & \(\pm\)1.30 & \(\pm\)1.07 & \(\pm\)0.98 & \(\pm\)0.82 & \(\pm\)0.83 & \(\pm\)0.84 \\ \hline
**Average** & 1.16 & 1.04 & 1.03 & 1.00 & 1.88 & 1.71 & 1.45 & 1.32 & 1.28 & 1.22 \\ & \(\pm\)0.80 & \(\pm\)0.63 & \(\pm\)0.74 & \(\pm\)0.66 & \(\pm\)1.17 & \(\pm\)0.96 & \(\pm\)1.05 & \(\pm\)0.80 & \(\pm\)0.88 & \(\pm\)0.88 \\ \hline \end{tabular}
\end{table}
Table 4: The interobserver variability is computed by comparing the Euclidean distance between annotations made by three different observers. The Euclidean distances are stated in millimeters \(\pm\) standard deviation.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{**Exocanthin**} & \multicolumn{2}{c|}{**Endocanthin**} & \multicolumn{2}{c|}{**Nasion**} & \multicolumn{2}{c|}{**Nose tip**} & \multicolumn{2}{c|}{**Alare**} & \multicolumn{2}{c|}{**Cheilion**} \\ \cline{2-13} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{_Right_} & _Left_ & _Right_ & _Left_ & & & _Right_ & _Left_ & _Right_ & _Left_ \\ \hline
**Rough** & 2.94 & 2.86 & 2.76 & 2.83 & 1.69 & 1.58 & 2.41 & 2.52 & 3.48 & 3.51 \\
**predictions** & \(\pm\)2.38 & \(\pm\)1.81 & \(\pm\)2.40 & \(\pm\)2.56 & \(\pm\)1.05 & \(\pm\)0.89 & \(\pm\)1.98 & \(\pm\)1.92 & \(\pm\)3.67 & \(\pm\)2.89 \\ \hline
**Refined** & 2.25 & 2.03 & 1.37 & 1.48 & 1.48 & 1.14 & 1.79 & 1.75 & 1.71 & 1.88 \\
**predictions** & \(\pm\)1.23 & \(\pm\)1.27 & \(\pm\)0.86 & \(\pm\)1.00 & \(\pm\)1.02 & \(\pm\)0.73 & \(\pm\)1.07 & \(\pm\)1.11 & \(\pm\)1.26 & \(\pm\)1.34 \\ \hline \end{tabular}
\end{table}
Table 5: The precision of the rough (first DiffusionNet) and refined (second DiffusionNet) is determined by computing the Euclidean distance between the DiffusionNet-predicted and manually annotated landmarks and is stated in millimeters \(\pm\) standard deviation.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{3}{c|}{**Percentage of landmarks predicted with a precision within range**} \\ \cline{2-5} & \(<\) 2 mm & \(<\) 3 mm & \(<\) 4 mm & \(<\) 5 mm \\ \hline
**Exocanthion right** & 47\% & 77\% & 90\% & 97\% \\ \hline
**Exocanthion left** & 56\% & 80\% & 82\% & 97\% \\ \hline
**Endocanthion right** & 80\% & 96\% & 99\% & 100\% \\ \hline
**Endocanthion left** & 76\% & 92\% & 98\% & 99\% \\ \hline
**Nasion** & 77\% & 94\% & 97\% & 99\% \\ \hline
**Nose tip** & 88\% & 98\% & 99\% & 100\% \\ \hline
**Alare right** & 62\% & 88\% & 96\% & 99\% \\ \hline
**Alare left** & 67\% & 86\% & 96\% & 98\% \\ \hline
**Cheilion right** & 71\% & 89\% & 95\% & 97\% \\ \hline
**Cheilion left** & 65\% & 87\% & 94\% & 97\% \\ \hline
**All Landmarks** & 69\% & 89\% & 96\% & 98\% \\ \hline \end{tabular}
\end{table}
Table 6: Overview of the accuracy distribution of each landmark as predicted by the complete workflow.
Figure 4: The precision of the prediction of the rough landmarks (first DiffusionNet), the refined landmarks (second DiffusionNet), and the semi-automated MeshMonk method are visualized for the right exocanthion (Exo R), left exocanthion (Exo L), right endocanthion (Endo R), left endocanthion (Endo L), nasion, nose tip, right alare (Alare R), left alare (Alare L), right cheilion (Cheilion R), and left cheilion (Cheilion L).
### Discussion
Soft-tissue cephalometric analysis can be used to objectify the clinical observations on 3D photographs, but manual annotation, the current gold standard, is time-consuming and tedious. Therefore, this study developed a deep learning-based approach for automated landmark extraction from randomly oriented 3D photographs. The performance was assessed for ten cephalometric landmarks: the results showed that the deep-learning-based landmarking method was precise and consistent, with a precision that approximated the inter-observer variability of the manual annotation method. A precision \(<\)2 mm, which may be considered a cut-off value for clinical relevance, was seen for 69% of the predicted landmarks [13, 14].
In the field of craniofacial surgery, different studies have applied deep-learning models for automated cephalometric landmarking, mainly focusing on 2D and 3D radiographs. Dot et al. used a SpatialConfiguration-Net for the automated annotation of 33 different 3D hard-tissue landmarks from CT images and achieved a precision of 1.0 \(\pm\) 1.3 mm [15]. An automated landmarking method, based on multi-stage deep reinforcement learning and volume-rendered imaging, was proposed by Kang et al. and yielded a precision of 1.96 \(\pm\) 0.78 mm [16]. A systematic review by Serafin et al. found a mean precision of 2.44 mm for the prediction of 3D hard-tissue landmarks from CT and CBCT images [1].
Some studies did describe automated algorithms for 3D soft-tissue landmarking on 3D photographs, but these algorithms did not include deep learning models. Baksi et al. described an automated method, involving morphing of a template mesh, for the landmarking of 22 soft-tissue landmarks from 3D photographs that achieved a precision of 3.2 \(\pm\) 1.6 mm [1]. An automated principal component analysis-based method, described by Guo et al., achieved an average root mean square error of 1.7 mm for the landmarking of 17 soft-tissue landmarks from 3D photographs [12]. Even though a direct comparison is infeasible to make due to the difference in landmarks, datasets, and/or imaging modalities, the precision of the proposed workflow is within the same range as these studies.
The effect of landmark choice on the established precision is underlined by the MeshMonk results found in this study. In the original publication by White et al., an average error of 1.26 mm was reported for 19 soft-tissue landmarks. The same methodology was used to establish the precision for the ten landmarks used in this study, and an overall precision of 1.97 \(\pm\) 1.34 mm was found. This finding highlights the difficulty of comparing landmarking precision across the literature [10]. Compared to the semi-automatic method, the fully-automated workflow yielded significantly improved precision for six landmarks, emphasizing the feasibility of fully automatically annotating soft-tissue landmarks from 3D photos using deep learning.
The proposed workflow uses two successive networks and additional algorithms for alignment and facial segmentation. Advantages of this approach include that the DiffusionNet assures robustness against sampling densities and the HKS settings inherently account for rotational, positional, and scale invariance that may arise between different 3D photography systems. A limitation of the current study is that the workflow was only applied to 3D photographs captured using one 3D photography system. Despite the robust nature of DiffusionNet/HKS, the performance of the workflow might be affected when applied to 3D photographs captured with different hardware. Furthermore, the DiffusionNet models were only trained on spatial features, whereas in the manual annotation process texture information was used. Even though this has
the advantage of making the DiffusionNet models insensitive to variations in skin tone or color, landmarks such as the exocanthions, endocanthions, and cheilions could presumably be located more precisely using manual annotation. This would not apply to the landmarks lacking color transitions, such as the nasion and nose tip. Based on these presumptions, the DiffusionNet-based approach might achieve a better precision if texture data of the 3D photographs would be available to the networks.
Another limitation of the proposed workflow arises from the utilization of HKS settings in the initial DiffusionNet, leading to occasional issues with random left-right flipping in the predictions of symmetrical landmarks (e.g., exocanthions). To overcome this challenge, a solution was devised that involved detecting symmetrical landmarks within a single channel. Subsequently, both landmarks were distinguished from each other using a clustering algorithm, followed by a left-right classification based on the midsagittal plane. Although a success rate of 98.6% was achieved using this solution, the workflow failed when the initial DiffusionNet was unable to predict one of the landmarks in the midsagittal plane (nasion, nose tip, or cheilion midpoint). Since this was mainly due to suboptimal quality of the 3D photo, it might be prevented by optimizing image acquisition. For optimal performance of the workflow, it is important to minimize gaps and restrict the depicted area in 3D photos to the face.
Due to its high precision and consistency, the developed automated landmarking method has the potential to be applied in various fields. Possible applications include objective follow-up and analysis of soft-tissue facial deformities, growth evaluation, facial asymmetry assessment, and integration in virtual planning software for 3D backward planning (Memon et al., 2021; Tel et al., 2023). Considering that the proposed DiffusionNet-based approach only uses spatial features, it could be applied on 3D meshes of facial soft tissue that are derived from imaging modalities lacking texture, such as CT, CBCT, or MRI. Nevertheless, further research is necessary to ascertain the applicability of this workflow to these imaging modalities. The fully-automated nature of the workflow also enables cephalometric analysis on large-scale datasets, presenting significant value for research purposes. The position-independency of the workflow might make it suitable for automated landmarking in 4D stereophotogrammetry and give rise to real-time cephalometric movement analysis for diagnostic purposes (Harkel et al., 2020; Shujaat et al., 2014).
## Conclusion
In conclusion, the effectiveness of a deep learning-based approach for automated landmark extraction from 3D facial photographs was developed and its precision was evaluated. The results showed high precision and consistency in landmark annotation, comparable to manual and semi-automatic annotation methods. Automated landmarking methods offer potential for analyzing large datasets, with applications in orthodontics, genetics, and craniofacial surgery and in emerging new imaging techniques like 4D stereophotogrammetry.
**Author contributions**
**Bo Berends:** Conceptualization, Methodology, Software, Validation, Writing - Original Draft **Freek Bielevelt:** Conceptualization, Methodology, Software, Validation, Writing - Original Draft **Ruud Schreurs:** Conceptualization, Writing - Review & Editing **Shankeeth Vinayahalingam:** Writing - Review & Editing **Thomas Maal:** Supervision, Lab Management **Guido de Jong:** Conceptualization, Software, Formal Analysis, Supervision.
**Declaration of Competing Interest**
There are no conflicts of interest to declare. **Ethical Approval and waiver list:** CMO 2007/163; ARB NL 17934.091.07; RUMC CMO 2019-5793
**Funding:** This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
**Data availability:** Coded scripts are available within the following GitHub repository:
[https://github.com/rumc3dlab/3dlandmarkedetection/](https://github.com/rumc3dlab/3dlandmarkedetection/)
**Appendix**
_Appendix Table 1: The parameters set used for the configuration of the MeshMonk algorithm._
\begin{tabular}{|l|c|} \hline
**Parameters** & **Values** \\ \hline \multicolumn{2}{|c|}{**Rigid Registration**} \\ \hline _Number of iterations_ & 30 \\ \hline _Correspondence neighbor_ & 3 \\ _number_ & \\ \hline _Correspondence flag threshold_ & 0.90 \\ \hline _Correspondence symmetric_ & Yes \\ \hline _Correspondence equalize_ & No \\ \hline _Use scaling_ & Yes \\ \hline _Inlier kappa_ & 4.00 \\ \hline _Inlier use orientation_ & Yes \\ \hline _Floating boundary_ & Yes \\ \hline _Target boundary_ & Yes \\ \hline _Target badly shaped triangles_ & Yes \\ \hline _Triangle size Z factor_ & 6.00 \\ \hline _Target up sample_ & No \\ \hline _Non-rigid registration_ & \\ \hline _Number of iterations_ & 80 \\ \hline _Correspondence neighbor_ & 3 \\ _number_ & \\ \hline _Correspondence flag threshold_ & 0.90 \\ \hline _Correspondence symmetric_ & Yes \\ \hline _Correspondence equalize_ & No \\ \hline _Inlier kappa_ & 12.00 \\ \hline _Inlier use orientation_ & Yes \\ \hline _Floating boundary_ & Yes \\ \hline _Target boundary_ & Yes \\ \hline _Target badly shaped triangles_ & Yes \\ \hline _Triangle size Z factor_ & 6.00 \\ \hline _Target upsample_ & Yes \\ \hline _Inlier use weights_ & Yes \\ \hline _Transform sigma_ & 3.00 \\ \hline _Viscous iteration start_ & 200 \\ \hline _Viscous iteration end_ & 1 \\ \hline _Elastic iteration start_ & 200 \\ \hline _Elastic iteration end_ & 1 \\ \hline _Transform neighbors_ & 80 \\ \hline \end{tabular} |
2306.17598 | Navigation of micro-robot swarms for targeted delivery using
reinforcement learning | Micro robotics is quickly emerging to be a promising technological solution
to many medical treatments with focus on targeted drug delivery. They are
effective when working in swarms whose individual control is mostly infeasible
owing to their minute size. Controlling a number of robots with a single
controller is thus important and artificial intelligence can help us perform
this task successfully. In this work, we use the Reinforcement Learning (RL)
algorithms Proximal Policy Optimization (PPO) and Robust Policy Optimization
(RPO) to navigate a swarm of 4, 9 and 16 microswimmers under hydrodynamic
effects, controlled by their orientation, towards a circular absorbing target.
We look at both PPO and RPO performances with limited state information
scenarios and also test their robustness for random target location and size.
We use curriculum learning to improve upon the performance and demonstrate the
same in learning to navigate a swarm of 25 swimmers and steering the swarm to
exemplify the manoeuvring capabilities of the RL model. | Akshatha Jagadish, Manoj Varma | 2023-06-30T12:17:39Z | http://arxiv.org/abs/2306.17598v1 | # Navigation of micro-robot swarms for targeted delivery using reinforcement learning
###### Abstract
Micro-robotics is quickly emerging to be a promising technological solution to many medical treatments with focus on targeted drug delivery. They are effective when working in swarms whose individual control is mostly infeasible owing to their minute size. Controlling a number of robots with a single controller is thus important and artificial intelligence can help us perform this task successfully. In this work, we use the Reinforcement Learning (RL) algorithms Proximal Policy Optimization (PPO) and Robust Policy Optimization (RPO) to navigate a swarm of 4,9 and 16 micro-swimmers under hydro-dynamic effects, controlled by their orientation, towards a circular absorbing target. We look at both PPO and RPO's performances with limited state information scenarios and also test their robustness for random target location and size. We use curriculum learning to improve upon the performance and demonstrate the same on learning to navigate a swarm of 25 swimmers and steering the swarm to exemplify the manoeuvring capabilities of the RL model.
**Keywords**: micro-swimmers, RL, PPO, RPO, curriculum learning, swarm-control
## 1 Introduction
Micro-scale is a fertile area for research and provides the promise of great applications in a variety of fields such as micro-surgery [1], micro-manufacturing [2], cargo delivery [3], pollution rectification [4, 5] and many more. While there has been substantial research going on to understand the physics at this scale for many decades, the research in the design and development of robots that can operate at this scale has exponentially increased in recent years, and we see different methods of realizing them [6, 1, 7]. In addition to the design and propulsion methods, researchers have also been looking at different navigation
strategies for these micro-robots [8]. These methods, however, require complete information of the environment that the micro-robots operate in, which is generally difficult to obtain.
The physical system of micro-robots can be controlled computationally, making it a cyber-physical system, which makes it scalable and reliable. Here, we explore reinforcement learning (RL) as the computational part of the system owing to its incredible performance in recent years in different fields of engineering, such as games [9; 10], robotics [11; 12; 13; 14], operations [15], finance [16] and healthcare [17].
Reinforcement learning is a type of machine learning (ML) technique where an agent learns through interaction with the environment. The RL agent starts choosing actions by trial and error and gradually learns from the rewards of its actions in the environment. Some environments are hard to model because of its complex dynamics and multiple parameters affecting its state. RL can perform well even in these model-free environments and is thus suitable for our application. There are numerous algorithms developed to implement RL, from which we choose Proximal Policy Optimization (PPO) [18] and Robust Policy Optimization (RPO) [19] to be the most appropriate for the task of guiding micro-robotic swarm because of their performance in model-free, continuous action space environments. RL has been proved useful for such navigation strategies of micro-robots recently [20; 21; 22; 23]. We later show that curriculum learning helps in improving the performance of the already better-performing RPO.
In our work, we try to navigate a swarm of magnetically controlled helical micro-swimmers to reach a target, aided by an RL algorithm. We use a simulated model of the micro-swimmers, which acts as the environment for the RL agent to operate in. The RL agent acts as the global controller of the entire swarm of micro-swimmers. It receives the current state of the entire swarm and environment and outputs the action to be taken by the magnetic controller in order to navigate the swarm. In our experiments, we use PPO and RPO as the RL algorithms due to their suitability for the required application, as explained in section 2.2.2.
The paper is organised as follows. In section 2.1, we explain the micro-swimmer model. In section 2.2, we describe the details of the RL algorithm, including the state, action and reward definitions in section 2.2.1 and the choice of algorithm in section 2.2.2. In section 3, we look at the experimental results for different scenarios of the environment with increasing complexity. Finally, we conclude in section 4.
## 2 Methods
### 2.1 Simulation model
Here, we describe the two-dimensional simulation model of the environment of the swarm of micro-swimmers. The micro-swimmers are modelled after rigid helical swimmers. These artificial micro-swimmers are controlled using a rotating magnetic field generated by triaxial Helmholtz coils [24]. This magnetic field aligns the helical swimmers and rotates them around their helical axis, causing linear motion along their axial direction. We model these dynamics using equation 1 for each swimmer \(i\).
\[\begin{split}\Delta x_{i}&=v\,\Delta t\,\cos(\theta_{i})\\ \Delta y_{i}&=v\,\Delta t\,\sin(\theta_{i})\end{split} \tag{1}\]
where \(\Delta x\) and \(\Delta y\) are the positional increments of the micro-swimmer along the \(x\) and \(y\) directions respectively, \(\Delta t\) is the unit simulation time and \(v\) is the linear velocity of the swimmer. Note that \(v\) is set by the Helmholtz coil and is proportional to the frequency of the rotating magnetic field: as the frequency increases, the number of rotations of the swimmer increases, and as a result the swimmer moves faster, i.e., the velocity increases. \(\theta_{i}\) is the angle of orientation along which the velocity is applied. It incorporates two components, as shown in equation 2.
\[\theta_{i}=\rho_{i}\theta_{hyd}+(1-\rho_{i})\theta_{m} \tag{2}\]
where \(\theta_{m}\) is the actual orientation of the swimmers set by the Helmholtz coil. \(\theta_{hyd}\) is the orientation due to the hydrodynamic effect of the surroundings, and it is taken as \(\theta_{hyd}=\theta_{m}-90^{\circ}\) because of the transverse drift due to the fluid flow of the nearby micro-swimmers [25]. \(\rho_{i}\) denotes the weight of the hydrodynamic effect and is described by equation 3. It is also capped at the maximum value of 1.
\[\rho_{i}=\sum_{\forall j,j\neq i}(2/r_{ij}^{2}) \tag{3}\]
The target region where the micro-swimmer swarm is to be navigated is assumed to be circular and specified by \((x_{t},y_{t},r_{t})\) where \(x_{t},y_{t}\) denotes the centre in the 2d space and \(r_{t}\) is the radius of the target region. The target is absorbing in nature, which means that the micro-swimmers reaching the target get stuck there and do not move again.
Also, note that the noise terms \(\xi_{x}\) and \(\xi_{y}\) that are usually considered in the Langevin model of micro-robots, which capture the Brownian motion [26], are not considered here because the effect of the magnetic field of the Helmholtz coil is strong enough to suppress thermal effects at low Reynolds number [27].
In our simulations, we keep the frequency and hence the velocity of the micro-swimmers constant, and we only learn the orientation \(\theta_{m}\) through RL to navigate the swarm of robots. The challenge here lies in controlling multiple swimmers with a single control parameter in the presence of the hydrodynamic effect.
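As a concrete illustration of Eqs. 1-3, here is a minimal NumPy sketch of one update step of the swarm dynamics; the function name and vectorized layout are our own choices, and the numerical constants (\(v=10\) um/s, \(\Delta t=0.1\) s) follow the values quoted later in section 3.

```python
import numpy as np

def swarm_step(pos, theta_m, v=10.0, dt=0.1):
    """One simulation step of the micro-swimmer swarm (Eqs. 1-3).

    pos     : (N, 2) array of swimmer positions (um)
    theta_m : global orientation (rad) commanded via the rotating magnetic field
    v       : linear speed set by the Helmholtz coil (um/s)
    dt      : simulation time step (s)
    """
    # Hydrodynamic weight (Eq. 3): rho_i = sum_{j != i} 2 / r_ij^2, capped at 1.
    diff = pos[:, None, :] - pos[None, :, :]
    r2 = np.sum(diff ** 2, axis=-1)
    np.fill_diagonal(r2, np.inf)                      # exclude self-interaction
    rho = np.minimum(np.sum(2.0 / r2, axis=1), 1.0)
    # Effective orientation (Eq. 2): transverse drift shifts theta by -90 degrees.
    theta_hyd = theta_m - np.pi / 2.0
    theta = rho * theta_hyd + (1.0 - rho) * theta_m
    # Positional increments (Eq. 1).
    return pos + v * dt * np.stack([np.cos(theta), np.sin(theta)], axis=-1)
```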
### 2.2 Formulating RL
Here, we describe the details of the RL algorithm in connection with the simulation model described in the previous sub-section.
#### 2.2.1 Defining state, action and reward
Reinforcement learning consists of two main components: the agent and the environment. These two components interact with each other using three quantities, namely states, actions and rewards.
The state is the description of the environment given to the RL agent. For our problem of navigating the microswimmers to a target region, the essential state information is the state information of the swarm of micro-swimmers and the state of the target as shown in equation 4.
\[S=(S_{swimmers},S_{target}) \tag{4}\]
\(S_{target}\) describes the target state, and as mentioned in the environment description, \(S_{target}=(x_{t},y_{t},r_{t})\), which completely defines the target. \(S_{swimmers}\) can be specified in multiple ways. For instance, it can be the 2d positional coordinates of all the swimmers along with their orientation information, just the positional information, or just the mean positional information. We run the RL simulations for all these state choices to understand their performance under complete and limited state information, as described later in the results section.
Action is the control parameter value the agent calculates and sends to the environment. In our problem, \(\theta_{m}\), as specified in section 2.1 denotes the action which orients the swimmers in the required direction.
The reward is the feedback the RL agent receives from the environment upon execution of the action that the RL agent determined. The agent modifies its action determination policy by analyzing this reward. We use the number of swimmers reaching the target as the reward for our navigation problem, which is the goal for our RL simulations and the quantity that needs to be maximized.
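To make the different observation choices used in the later experiments concrete, the following sketch shows how the state vector could be assembled; the mode names and the exact treatment of the mean orientation are our own assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def build_state(pos, theta, target, mode="full", absorbed=None):
    """Assemble the observation vector S = (S_swimmers, S_target).

    pos      : (N, 2) swimmer positions
    theta    : (N,) swimmer orientations
    target   : (x_t, y_t, r_t) of the absorbing target
    mode     : "full" (positions + orientations), "positions" (no orientation),
               "mean" (mean position/orientation), "mean_active" (mean over
               swimmers that have not yet reached the target)
    absorbed : boolean mask of swimmers already absorbed by the target
    """
    if mode == "full":
        swimmer_state = np.concatenate([pos.ravel(), theta])
    elif mode == "positions":
        swimmer_state = pos.ravel()
    elif mode == "mean":
        swimmer_state = np.array([*pos.mean(axis=0), theta.mean()])
    elif mode == "mean_active":
        active = ~absorbed if absorbed is not None else np.ones(len(theta), bool)
        swimmer_state = np.array([*pos[active].mean(axis=0), theta[active].mean()])
    else:
        raise ValueError(mode)
    return np.concatenate([swimmer_state, np.asarray(target, dtype=float)])
```

With 4, 9 and 16 swimmers, the "full" mode yields state vectors of size 15, 30 and 51, the "positions" mode 11, 21 and 35, and the two mean-based modes a fixed size of 6, matching the experiments reported below.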
#### 2.2.2 Algorithm selection
There are many algorithms available for implementing RL. We wanted to choose one suitable for a model-free environment that works with continuous state and action spaces. We found that algorithms like actor-critic and trust region policy optimization are ideal for our application as they focus on policy optimization directly. Proximal policy optimization (PPO), being state-of-the-art in this class of algorithms, combines suitable characteristics of both of the algorithms mentioned above. We thus choose PPO for our current problem statement.
PPO combines the advantages of actor-critic algorithms with trust region optimization and minibatch updates. Actor-critic algorithms consist of two parts, the actor and the critic, where the former chooses the action to be taken and the latter evaluates the performance of that action. The two are trained with separate loss functions and are implemented as neural networks in PPO. The second important characteristic of PPO is trust region optimization, where the policy being learnt is not allowed to change significantly in a single update step. This is done by limiting the KL divergence between the old and new policies. The third characteristic, minibatch updates, is an implementation strategy that improves the sample efficiency of the algorithm. This enables the RL setup to learn from multiple samples simultaneously rather than just a single sample, as in the predecessor RL algorithm, TRPO (Trust Region Policy Optimization). This last characteristic helps learn the policy quicker than previous policy gradient methods in RL [28].
We came across another algorithm called Robust Policy Optimization (RPO), which achieves increased exploration capabilities in the PPO algorithm with just a minor modification in the implementation [19]. We use the cleanRL implementations [29], with a few modifications for the RL simulations to suit our micro-swimmer navigation environment. CleanRL provides single-file code for each RL algorithm and incorporates features like state normalization and clipping, action clipping, and generalized advantage estimation that help in better convergence of policy estimation through RL.
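The difference between PPO and RPO is confined to how the Gaussian action distribution is constructed; a minimal sketch in the style of the cleanRL implementations is shown below, where `rpo_alpha` is the perturbation-range hyperparameter assumed from that code base.

```python
import torch
from torch.distributions import Normal

def policy_distribution(action_mean, action_logstd, rpo_alpha=None):
    """Gaussian policy head shared by PPO and RPO.

    PPO uses the actor's predicted mean directly; RPO additionally perturbs
    the mean with uniform noise in [-rpo_alpha, rpo_alpha], which keeps the
    policy from collapsing prematurely and encourages exploration.
    """
    if rpo_alpha is not None:  # RPO branch
        noise = torch.empty_like(action_mean).uniform_(-rpo_alpha, rpo_alpha)
        action_mean = action_mean + noise
    return Normal(action_mean, action_logstd.exp())
```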
## 3 RL Simulations and Results
### Environmental set-up
Here, we describe the implementation details of the simulation environment of the swarm of micro-swimmers. The micro-swimmer simulation environment is set-up with the following parameters. The swimmers are initially located in a square shape with the initial mean position located at the centre of the
2d-coordinate system as shown in figure 1. The swimmers are represented by dark blue squiggly arrows depicting the spiral micro-swimmers. The target is represented by the light blue circle. The spacing between the swimmers is around 6um. Their orientation is chosen randomly at the start of every episode. The target centre is randomly selected within a 100um radius from the initial mean position of the micro-swimmers. The target radius is randomly selected within 5-20um. We vary the number of swimmers among 4, 9 and 16 across the simulations.
An episode is defined as the span from the beginning of a simulation run to its end. An episode terminates when all the swimmers reach the target, when the episode runs beyond 500 time-steps, or when the mean position of the micro-swimmers is more than 200um from the target centre, i.e., when the swimmers have wandered too far from the target. The velocity imparted by the Helmholtz coil to the micro-swimmers is taken to be 10um/sec, and the orientation is given by the RL agent at each time step. Every time step has a duration of 0.1s.
The reward at each time step is calculated as the number of swimmers reaching the target at that time step. The return is the weighted cumulative reward given by \(\mathbb{E}\left[\sum_{n=1}^{\infty}\gamma^{n}R(s_{n})\right]\), where \(\gamma\) is the discount factor taken
Figure 1: Sample environment setup at the beginning of an episode
to be 0.99, directing the agent to focus on distant rewards, i.e., the ultimate task at hand. The neural networks of the actor and critic each contain a hidden layer of 64 neurons along with an initial layer and a final layer. The final layer of the actor provides the mean and standard deviation that describe the normal distribution from which the action is sampled. The final layer of the critic provides the estimate of the value function, which is the expected value of the episodic return. The other parameters, like the number of mini-batches, learning rate and total time steps, are kept at the default values from the cleanRL code.
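For reference, a minimal PyTorch sketch of the actor and critic described above (a single 64-unit hidden layer each, with the actor head producing the mean and a learned standard deviation of the orientation action) could look as follows; any detail beyond what is stated in the text is an assumption.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Actor and critic networks with one 64-neuron hidden layer each."""

    def __init__(self, obs_dim, act_dim=1):
        super().__init__()
        self.critic = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.actor_mean = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
        # State-independent log standard deviation, as in the cleanRL-style setup.
        self.actor_logstd = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        value = self.critic(obs)                        # estimate of the episodic return
        mean = self.actor_mean(obs)                     # mean of the orientation action
        std = self.actor_logstd.exp().expand_as(mean)   # standard deviation of the action
        return mean, std, value
```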
The following subsections describe the different experiments performed and their corresponding performances, a summary of which is presented in Table 1.
### Primary experiment
Here, we trained the agent to navigate the micro-swimmers towards a constant target (constant location and size). This experiment was performed
Figure 2: Performance of primary experiment for PPO and RPO with respect to time-steps for 4, 9 and 16 swimmers: (a) episodic lengths of PPO (b) episodic returns of PPO (c) episodic lengths of RPO (d) episodic returns of RPO
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Environment & Description & no. of swimmers & PPO & RPO \\ \hline env-0 & Constant target location and size with full state information & 4 & 4 & 4 \\ \cline{3-5} & & 9 & 9 & 8.9 \\ \cline{3-5} & & 16 & 16 & 13.9 \\ \hline env-1a & Swimmer positional state information without orientation information & 4 & 3.9 & 3.7 \\ \cline{3-5} & & 9 & 8.6 & 8.5 \\ \cline{3-5} & & 16 & 15.6 & 15.1 \\ \hline env-1b & Swimmer mean position and orientation information & 4 & 4 & 3.9 \\ \cline{3-5} & & 9 & 7.8 & 8.9 \\ \hline env-1c & Mean position and orientation information of swimmers outside target & 4 & 4 & 4 \\ \cline{3-5} & & 9 & 9 & 8.9 \\ \cline{3-5} & & 16 & 11.9 & 11.9 \\ \hline env-2 & Random target location and size with full state information & 4 & 0.2 & 3.9 \\ \cline{3-5} & & 9 & 0.9 & 8.5 \\ \cline{3-5} & & 16 & 2.7 & 14.1 \\ \hline env-2-om & Multiple environments with target orientation included in state information on env-2 & 16 & - & 14.6 \\ \hline env-2-omc & Curriculum learning on env-2-om & 16 & - & 15.8 \\ \cline{3-5} & & 25 & - & 24.5 \\ \hline \end{tabular}
\end{table}
Table 1: Performance: smoothened return values after learning
with full state information, i.e., the positions and orientations of all swimmers in addition to the target location and size, in each case. Thus the size of the state vector was 15, 30 and 51 respectively.
Figure 2 shows the episodic return and length across the time steps for 4, 9 and 16 swimmers for both the PPO and RPO algorithms. We observe that both algorithms learn well and that the experiments with a higher number of swimmers take more time to converge than those with fewer swimmers. RPO for 16 swimmers is stuck at a local minimum but is not yet stable at 1 million time steps; it converges to the full expected return if run for more time steps. PPO learns very well for this experiment, as seen in Table 1 and figure 2b, for all of 4, 9 and 16 swimmers.
### State information modification experiments
Here we performed 3 sets of experiments, again for 4, 9 and 16 swimmers, the results of which are shown in figure 3. The same seed is used for all experiments, with target specification \((10.56,41.63,9.36)\,\mu m\), the same as for the primary experiment.
Figure 3: Performance of state modification experiments for PPO and RPO in terms of episodic returns with respect to time-steps for 4, 9 and 16 swimmers: (a) PPO with env-1a (b) PPO with env-1b (c) PPO with env-1c (d) RPO with env-1a (e) RPO with env-1b (f) RPO with env-1c
In the first set, we provided only the position information of the swimmers in \(s_{swimmers}\) and no orientation information. Thus, the size of the state vector was 11, 21 and 35 respectively for 4, 9 and 16 swimmers. Figures 3(a) and 3(d) show the performance of the learning activity as episodic return over the time steps for PPO and RPO respectively. We observe that they are relatively slower to converge compared to the primary case because of the reduced information, but they do converge.
In the second set, we provided the mean position and orientation information of the swimmers in \(s_{swimmers}\). Thus, the size of the state vector was 6 for any number of swimmers. In this case, figures 3(b) and 3(e) show the performance evolution of the learning for PPO and RPO respectively. Here, we observe that both PPO and RPO converge for 4 swimmers but are stuck at a local minimum for 9 and 16 swimmers, probably because the mean information of all the swimmers becomes uninformative once most of the swimmers have reached the target. RPO shows relatively faster learning compared to PPO.
In the final set of the state information modification experiments, we provided the mean position and orientation information of only the swimmers that have not yet reached the target in \(s_{swimmers}\). The size of the state vector remains the same as in the previous case, with performance curves for PPO and RPO as shown in figures 3(c) and 3(f) respectively. Here, we observe that convergence is faster and the variance of the return is lower compared to the second set, but both PPO and RPO are stuck at local maxima. This is probably because steering is hard with just mean information. RPO shows relatively faster learning compared to PPO here as well.
Overall, we observe that performance reduces as the number of swimmers increases and state information reduces. This is likely because the agent needs more manoeuvres to get all the swimmers into the target which is of limited size.
### Robustness experiments
Here, we performed experiments in which the target position and size are random at the beginning of every episode. We constrained the target position to be within a 100um distance from the mean initial position of the swimmers along both the x and y coordinate axes. The target radius is between 5 and 20um. Both the location and size of the target are chosen from a uniform distribution. The state information is taken to be as in the primary experiment, where the positional and orientational information of all the swimmers is provided. Again, the experiment was performed for 4, 9 and 16 swimmers. Note that these experiments also cover the case of a random mean initial position of the swimmers, where the positional state information can be centred on the mean initial
position of the swimmers without modifying the orientation information. The observed performance is shown in figure 4. Here again, RPO performs extremely well compared to the PPO algorithm. PPO fails to converge even for the simplest case of 4 swimmers, whereas RPO can learn the navigation for 16 swimmers as well, as observed in subfigures 4(a) and 4(b) respectively. In this setting, the algorithm is burdened with the two tasks of finding the path to the changing target as well as steering the swarm to get all the swimmers into the target.
We observed a slight improvement in performance when the orientation of the target with respect to the mean position of the swimmers was provided in the state information. This experiment was performed for RPO only, because of its consistently better performance compared to PPO, and for 16 swimmers only, as the result can be extrapolated to the simpler cases of 4 and 9 swimmers from the trend observed in the state information modification experiments. We also used a synchronized vector environment with 4 parallel environments to obtain more data for the RL agent to learn effectively. The performance for this case is shown in figure 5 on the '\(rpo\_16\_om\)' line of the graph.
We perform the final set of experiments incorporating the curriculum learning [30] technique for the choice of the target position. The distance between the mean initial position of the swimmer swarm and the target was varied as shown in equation 5.
\[d\leq d_{f}-(d_{f}-d_{s})\,e^{-e_{n}/t_{d}} \tag{5}\]
where, \(d_{f}\) is the final distance value, \(d_{s}\) is the starting distance value, \(e_{n}\) is the current episode number and \(t_{d}\) is the threshold decay. We used 2 parallel
Figure 4: Performance of robustness experiments (random target location) with env-2 for (a) PPO and (b) RPO in terms of episodic returns with respect to time-steps for 4, 9 and 16 swimmers
environments to obtain more data in this case. This experiment was run for 16 swimmers with \(t_{d}=1000\) and 25 swimmers with \(t_{d}=2000\) and their performance is depicted in figure 5 on the '\(rpo\_16\_omc\)' and '\(rpo\_25\_omc\)' lines of the graph respectively. We observe that there is a tremendous improvement in the learning curve by using curriculum learning, which eases the first task of finding the target position and the agent gradually learns to manoeuvre the swarm around a target of limited size, especially observed in the 25 swimmers case.
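A small sketch of the curriculum schedule of Eq. 5, together with one way to sample a target under it, is given below; the starting distance `d_start` and the uniform sampling of the angle are illustrative assumptions, while the 100um range, the 5-20um target radius and the threshold decays \(t_{d}=1000/2000\) come from the text.

```python
import numpy as np

def max_target_distance(episode, d_start=20.0, d_final=100.0, t_d=1000.0):
    """Upper bound on the swarm-to-target distance allowed at a given episode (Eq. 5)."""
    return d_final - (d_final - d_start) * np.exp(-episode / t_d)

def sample_target(episode, rng=None):
    """Sample a target centre within the current curriculum radius and a random size."""
    rng = rng or np.random.default_rng()
    d = max_target_distance(episode)
    angle = rng.uniform(0.0, 2.0 * np.pi)
    radius = rng.uniform(0.0, d)
    return radius * np.cos(angle), radius * np.sin(angle), rng.uniform(5.0, 20.0)
```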
### RL model inference
The trained RL agent can be saved with the actor and critic model weights along with the running mean and variance of the observations. A new environment, once created, feeds its state information to the same model as during training, now loaded with the learned weights. The trained agent then gives, at each time step, the orientation that the magnetic coil uses to align the micro-swimmers.
The set of images in figure 6 show the steering performed by the RL agent for a swarm of 25 swimmers to reach a relatively smaller target. The arrow inside the target represents the swimmers absorbed by the target.
Figure 5: Performance improvement in episodic return with orientation information, multiple environments and curriculum learning on the random target position and size environment
## 4 Conclusion
Reinforcement learning proves effective in learning the navigation of a swarm of micro-swimmers with a single, global controller, in a complex environment that is hard to model analytically.
The RL agent finds the task harder when the number of swimmers increases, the target size is smaller, or the target position is farther. It tends to converge to a local minimum when the state information is reduced. The task is also harder when there is randomness in the environment, i.e., when the target location is not constant. To improve the navigation performance, more useful state information, such as the orientation of the target with respect to the mean swimmer position, can be provided, and multiple environments can be run in parallel to gather more data, as shown in the env-2-om experiment. Further, curriculum learning improves upon env-2-om by feeding the agent easier (closer) targets at the beginning and gradually moving towards tougher (farther) targets. Also, RPO has consistently shown better performance than PPO, except for the scenario with 4 swimmers, in which case PPO achieves stable performance sooner than RPO.
Future work could focus on reward modification to include distance-based metrics, which would produce non-sparse rewards.
Figure 6: Navigation of 25 swimmers towards a given target
Furthermore, the environment can be made more complex with obstacles and drifts, where the agent would need to be more robust to navigate towards the target.
|
2309.03340 | Parameter Efficient Audio Captioning With Faithful Guidance Using
Audio-text Shared Latent Representation | There has been significant research on developing pretrained transformer
architectures for multimodal-to-text generation tasks. Albeit performance
improvements, such models are frequently overparameterized, hence suffer from
hallucination and large memory footprint making them challenging to deploy on
edge devices. In this paper, we address both these issues for the application
of automated audio captioning. First, we propose a data augmentation technique
for generating hallucinated audio captions and show that similarity based on an
audio-text shared latent space is suitable for detecting hallucination. Then,
we propose a parameter efficient inference time faithful decoding algorithm
that enables smaller audio captioning models with performance equivalent to
larger models trained with more data. During the beam decoding step, the
smaller model utilizes an audio-text shared latent representation to
semantically align the generated text with corresponding input audio. Faithful
guidance is introduced into the beam probability by incorporating the cosine
similarity between latent representation projections of greedy rolled out
intermediate beams and audio clip. We show the efficacy of our algorithm on
benchmark datasets and evaluate the proposed scheme against baselines using
conventional audio captioning and semantic similarity metrics while
illustrating tradeoffs between performance and complexity. | Arvind Krishna Sridhar, Yinyi Guo, Erik Visser, Rehana Mahfuz | 2023-09-06T19:42:52Z | http://arxiv.org/abs/2309.03340v1 | Parameter Efficient Audio Captioning with Faithful Guidance Using Audio-Text Shared Latent Representation
###### Abstract
There has been significant research on developing pretrained transformer architectures for multimodal-to-text generation tasks. Albeit performance improvements, such models are frequently overparameterized, hence suffer from hallucination and large memory footprint making them challenging to deploy on edge devices. In this paper, we address both these issues for the application of automated audio captioning. First, we propose a data augmentation technique for generating hallucinated audio captions and show that similarity based on an audio-text shared latent space is suitable for detecting hallucination. Then, we propose a parameter efficient inference time faithful decoding algorithm that enables smaller audio captioning models with performance equivalent to larger models trained with more data. During the beam decoding step, the smaller model utilizes an audio-text shared latent representation to semantically align the generated text with corresponding input audio. Faithful guidance is introduced into the beam probability by incorporating the cosine similarity between latent representation projections of greedy rolled out intermediate beams and audio clip. We show the efficacy of our algorithm on benchmark datasets and evaluate the proposed scheme against baselines using conventional audio captioning and semantic similarity metrics while illustrating tradeoffs between performance and complexity.
Arvind Krishna Sridhar, Yinyi Guo, Erik Visser, Rehana Mahfuz

Qualcomm Technologies
Audio captioning, Hallucination, CLAP
## 1 Introduction
In recent years, there has been extensive research on pushing the boundaries of multimodal-to-text generation tasks like image captioning [1], audio captioning ([2], [3]) etc. Although there has been significant research in improving model performance, two major bottlenecks, hallucination [4] and large memory footprint[5], remain that inhibit the wide scale adoption of such tasks on constrained computing devices. In [4], hallucination is defined as "the generated content that is nonsensical or unfaithful to the provided source content". Their survey documents research on hallucination in multimodal to text generation tasks such as abstractive summarization and vision-language generation. Second, improved performance of large pretrained transformer models on evaluation benchmarks comes at the cost of larger memory footprint[5]. These models are often over-parameterized with respect to the task they are solving, thus necessitating architectural innovations to tackle computational complexity during deployment.
In this paper, we address both of these bottlenecks for automated audio captioning - the task of generating a relevant caption for a given audio clip - by retrofitting captioning models with hallucination detection and mitigation at the decoding stage. To the best of our knowledge, we are the first to make the following contributions in this domain. First, we propose a data augmentation technique to generate hallucinated audio captions using existing audio captioning datasets by leveraging large language models. Second, we provide an intuitive reasoning on why existing audio captioning metrics are not suitable for detecting hallucination. Instead, we argue that acoustic similarity of captions needs to be taken into account and introduce a hallucination metric based on an audio-text shared latent representation. Third, we propose an inference-time hallucination mitigation algorithm in which the similarity of the intermediate beams to the input audio is used to faithfully guide the beams during beam decoding. We show that our retrofitting method enables smaller audio captioning models with performance equivalent to much larger models.
## 2 Related Works
### Audio Captioning
Conventional audio captioning systems are based on encoder-decoder architectures where the encoder captures the temporal and acoustic information of the audio and the decoder generates the caption auto-regressively([3], [6]). In addition to the encoded audio representations, keywords are extracted from the audio and provided to the decoder to achieve grounded guidance([2], [3]). The study on lexical diversity and similarity of captions[7] shows that different annotators interpret the same audio using different vocabulary. In this work, we analyze the properties of SOTA audio captioning evaluation metrics when used for semantic and acoustic similarity detection and find that similarity metrics based on audio-text shared latent representation are better suited for such tasks.
### Hallucination in natural language generation and its detection metrics
Mitigating hallucination via training[8] and during inference time([9, 10]) is an ongoing focus of natural language research. FactEdit[8] performs rewriting of the generated summary to avoid hallucination while [9, 10] propose decoding techniques to reduce hallucination on the fly during decoding time. In [1], hallucination is investigated in image captioning and a CHAIR metric is proposed that computes the ratio of generated objects to objects found in image and ground-truth captions. Further, it is observed in [11] that models that perform well on standard captioning metrics like CIDER[6] still produce unfaithful texts. In [12], the problem of hallucination in video captioning is studied. To our knowledge, we are the first to study hallucination in the audio captioning domain.
## 3 Methodology
We divide our proposed methodology into three sections. First, we explain our novel data augmentation technique to generate hallucinated audio captions. Second, we investigate and propose a hallucination metric to measure hallucinations in audio captioning. Third, we explain in detail our proposed faithful decoding algorithm.
### 3.1 Generating hallucinated data
In order to investigate and study hallucination in audio captioning, we introduce a data augmentation technique that removes zero or more audio events from the original caption and gradually augments it with similar/dissimilar audio events. Table 1 illustrates hallucinated captions generated using this method. First, we randomly select 50 examples from the Clotho[13] dataset. We randomly select one of the five ground-truth captions as the original caption and paraphrase it using the Vicuna checkpoint[14] of LLaMA 2[15] to generate the non-hallucinated data points. Second, we retrieve audio tags for the corresponding audio clip using an audio spectrogram transformer[16]. We assume that audio tags with low classification scores are acoustically dissimilar from the input audio, so we randomly select three tags ranked 30-40 (dissimilar) in the list of audio tags sorted in descending order of classification score. We then generate a modified audio caption using LLaMA 2[15] with in-context learning, by providing the original audio caption, the audio tags to inject and a few examples.
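A high-level sketch of this augmentation pipeline is shown below. The helpers `paraphrase_fn` and `inject_fn` stand in for the Vicuna/LLaMA-2 prompting steps and `ranked_tags` for the audio-spectrogram-transformer tags, so these interfaces are hypothetical placeholders rather than the actual implementation.

```python
import random

def make_caption_pair(captions, ranked_tags, paraphrase_fn, inject_fn):
    """Produce one non-hallucinated and one hallucinated caption for a clip.

    captions      : the five ground-truth captions of one Clotho clip
    ranked_tags   : audio tags sorted by descending classification score
    paraphrase_fn : callable that paraphrases a caption with an LLM
    inject_fn     : callable that rewrites a caption to mention extra tags,
                    via in-context prompting of an LLM
    """
    original = random.choice(captions)
    non_hallucinated = paraphrase_fn(original)           # faithful paraphrase
    # Tags ranked 30-40 are assumed acoustically dissimilar to the input audio.
    dissimilar = random.sample(ranked_tags[30:40], k=3)
    hallucinated = inject_fn(original, dissimilar)       # inject spurious events
    return non_hallucinated, hallucinated
```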
### 3.2 CLAPScore as hallucination metric
For a metric to be considered suitable for hallucination detection, it should have two properties: 1) it should detect false-positive audio events in the caption, and 2) it should account for acoustic similarity between audio events. Contrastive Language-Audio Pretraining (CLAP) [17] uses an audio and a text encoder to learn a shared embedding space via contrastive learning, which shows SOTA results for downstream tasks like audio retrieval and audio captioning [6]. We introduce CLAPScore to compute the cosine similarity over audio-text and text-text pairs, as shown in Equation 1.
\[\text{CLAPScore}=\frac{x_{A}\cdot x_{B}}{|x_{A}|\cdot|x_{B}|} \tag{1}\]
where \(x_{A}\) and \(x_{B}\) represent CLAP-projected representations of audio or text. Hereon, we denote the CLAPScore between audio and text as CLAPScore\({}_{at}\) and that between two texts as CLAPScore\({}_{tt}\).
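In code, CLAPScore reduces to a cosine similarity over CLAP projections; a minimal sketch, assuming the embeddings come from a CLAP audio or text encoder:

```python
import numpy as np

def clap_score(emb_a, emb_b):
    """Cosine similarity between two CLAP projections (Eq. 1).

    emb_a, emb_b : an audio and a text embedding (CLAPScore_at) or two text
                   embeddings (CLAPScore_tt).
    """
    emb_a = np.asarray(emb_a, dtype=float)
    emb_b = np.asarray(emb_b, dtype=float)
    return float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
```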
We adopt the audio captioning metrics used by [6], including the CIDER, SPICE, BLEU, METEOR and ROUGE-L scores, and the text semantic similarity based metric SentBert[18]. For the first condition, we show the performance of the standard metrics on the generated hallucinated and non-hallucinated captions, as described in section 3.1, serving as a benchmark. For the second condition, acoustic similarity detection needs to be an inherent property of the metric. Since it is difficult to systematically categorize audio clips into acoustically similar pairs, we perform a qualitative study as shown in Table 3. From Tables 2 and 3, we observe that none of the metrics except CLAPScore\({}_{tt}\) satisfy both these properties. Although the standard metrics perform well on detecting text hallucinations
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Data\textbackslash Metrics & BLEU-1 & METEOR & ROUGE-L & CIDER & SPICE & SentBert & CLAPScore\({}_{tt}\) \\ \hline Hallucinated & 0.4338 & 0.223 & 0.3773 & 0.2077 & 0.1491 & 0.5414 & 0.3609 \\ \hline Non-hallucinated & 0.5798 & 0.3408 & 0.5663 & 0.8031 & 0.217 & 0.8627 & 0.701 \\ \hline \end{tabular}
\end{table}
Table 2: Performance of evaluation metrics on audio captioning hallucination benchmark.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Original Caption & Injecting Audio Tags & Hallucinated Caption \\ \hline A campfire in the night time with crickets and other bugs making noise in the background. & Bird, Speech, Outside, urban or manmade & A nighttime campfire with crickets and other bugs chirping in the background, accompanied by the sound of human speech. \\ \hline A crowd of people and a child begin talking as cars beep in the background and then the crowd cheers. & Outside, urban or manmade, Singing, Insect & A child is playing outside in an urban area while singing and insects are heard in the background. \\ \hline \end{tabular}
\end{table}
Table 1: Hallucinated audio captions generated using the proposed data augmentation technique.
in Table 2, they are unable to distinguish between acoustically similar and dissimilar audio captions, as shown in Table 3. This is because they consider only the text embedding space during computation. The audio captions "Horse is trotting." and "Someone is walking on the wood." might seem very different in the text domain, whereas in the acoustic domain the two sounds are similar to hear. Hence, such audio events should be penalized less by a hallucination metric than acoustically contrasting caption pairs such as ("Crowd is applauding the performer.", "Crowd is silent after the performer's show"). Since CLAPScore\({}_{tt}\) satisfies both properties, in this paper we adopt CLAP as the hallucination metric.
### 3.3 Faithful decoding algorithm
In this section, we propose a faithful decoding algorithm that utilizes the ability of CLAP to detect hallucinations (shown in Section 3.2) during inference time.
#### 3.3.1 Greedy rollout
Beam search performs a breadth-first search at each decoding step with a limited number of branches from the begin-of-sentence (BOS) token to the end-of-sentence (EOS) token [19]. Each path from BOS to EOS is called a hypothesis. During the beam decoding process, only partial hypotheses or intermediate beams (paths that start at BOS and end before EOS) are available for re-ranking. To compare the intermediate beams against the input audio, we complete them using greedy search[19]. Greedy search samples the token with the highest probability at every decoding step, as shown in Equation 2. This serves as a look-ahead for how the beam would pan out if decoding continued in that direction.
\[P_{\Theta}(y|x)=\prod_{t=1}^{|y|}P_{\Theta}(y_{t}|x,y_{<t}), \tag{2}\]
where \(x\) is the input and \(y_{t}\) is the word generated at \(t^{th}\) decoding step.
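A sketch of the greedy look-ahead is shown below; the decoder interface (`model(audio_feats, token_ids)` returning next-token logits) is a hypothetical signature used purely for illustration.

```python
import torch

@torch.no_grad()
def greedy_rollout(model, audio_feats, prefix_ids, eos_id, max_len=50):
    """Complete a partial beam by greedy decoding (Eq. 2); prefix_ids starts with BOS."""
    ids = list(prefix_ids)
    while ids[-1] != eos_id and len(ids) < max_len:
        logits = model(audio_feats, torch.tensor([ids]))  # (1, len(ids), vocab_size)
        ids.append(int(logits[0, -1].argmax()))           # highest-probability next token
    return ids
```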
#### 3.3.2 Faithfulness scorer
Next, we compute the relevance of the greedy rolled-out beam to the input audio by taking the CLAP projections of the beam text and the audio. We normalize the projections and take the cosine similarity to compute CLAPScore\({}_{at}\), which measures how close the greedy rolled-out beam and the audio are in the shared embedding space (Equation 1).
#### 3.3.3 Beam re-ranker
To incorporate CLAPScore\({}_{at}\) into beam decoding, we combine it with the model probability P\({}_{i}\) to compute P\({}_{weighted}\) (Equation 3). The modified probability P\({}_{weighted}\) guides the beam to explore regions faithful to the input audio, thereby reducing hallucination.
\[\text{P}_{weighted}=(1-\alpha)\text{P}_{i}+\alpha\text{CLAPScore}_{at} \tag{3}\]
where \(P_{i}\) denotes the model probability for the \(i^{th}\) token.
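Putting the pieces together, the faithfulness-guided re-ranking can be sketched as follows, building on the `clap_score` helper above. Here `greedy_complete` and `text_embed` are hypothetical callables wrapping the greedy rollout and the CLAP text encoder, and applying the weighting to whole candidate beams is a simplification of the per-token Equation 3.

```python
def rescore_beams(beams, audio_emb, greedy_complete, text_embed, alpha=0.6):
    """Re-rank intermediate beams with faithfulness guidance.

    beams           : list of (token_ids, model_score) pairs from beam search
    audio_emb       : CLAP projection of the input audio clip
    greedy_complete : callable completing a partial beam by greedy decoding
    text_embed      : callable returning the CLAP projection of a decoded text
    alpha           : weight of the faithfulness term (0.8 / 0.6 in the experiments)
    """
    rescored = []
    for tokens, model_score in beams:
        rollout = greedy_complete(tokens)                      # look-ahead completion
        faith = clap_score(audio_emb, text_embed(rollout))     # CLAPScore_at
        weighted = (1.0 - alpha) * model_score + alpha * faith
        rescored.append((tokens, weighted))
    return sorted(rescored, key=lambda x: x[1], reverse=True)  # best beams first
```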
## 4 Experiments
### Datasets
We trained the models on the Clotho[13] and AudioCaps[20] audio captioning datasets. The Clotho audio captioning dataset comprises 4981 audio samples, with each sample accompanied
Figure 1: Proposed faithful decoding for audio captioning. The proposed components are colored light sky blue.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l||l|} \hline Example pair\textbackslash{Metrics} & \begin{tabular}{l} BLEU- \\ 1 \\ \end{tabular} & \begin{tabular}{l} METEOR \\ \end{tabular} & \begin{tabular}{l} ROUGE- \\ L \\ \end{tabular} & \begin{tabular}{l} CIDER \\ \end{tabular} & \begin{tabular}{l} SPICE \\ \end{tabular} & \begin{tabular}{l} SentBert \\ \end{tabular} &
\begin{tabular}{l} CLAP \\ Score\({}_{tt}\) \\ \end{tabular} \\ \hline Horse is trotting., Someone is tapping on a surface. & 0.3033 & 0.0702 & 0.193 & 0 & 0 & -0.0572 & 0.6476 \\ \hline Horse is trotting., Someone is running on wood. & 0.1947 & 0.0379 & 0.2179 & 0 & 0 & 0.2262 & 0.7223 \\ \hline \hline Crowd is applauding the performer., Crowd is silent after the performer's show. & 0.439 & 0.2374 & 0.5908 & 0.0 & 0.4444 & 0.3596 & 0.3836 \\ \hline \end{tabular}
\end{table}
Table 3: Qualitative analysis of evaluation metrics on acoustic similar and dissimilar audio captions. The first two pairs are similar acoustic captions while the third pair is an example of dissimilar acoustic caption.
by 5 human-written captions. The duration of the audio clips is in the range of 15 to 30 sec. AudioCaps[20] consists of 46k human-written captions obtained via crowdsourcing, with a 10 sec duration for each audio clip. For both datasets, we use the test set for our evaluation. For fixed-length transformer encoders like HTSAT BERT[6], we truncate the audio sample to 10 sec and resample at a rate of 32000 Hz.
### Experiment Setup
We demonstrate the performance of our model against [21], chosen especially due to its small size. The model is trained on the Clotho and AudioCaps datasets from scratch and evaluated correspondingly. It consists of a pretrained CNN10 PANN encoder and a stack of two transformer decoder layers as the decoder. To compare against large models, we show our audio captioning performance on HTSAT-BART[6], which consists of HTSAT as the audio encoder and BART[6] as the decoder. For the shared embedding space used to obtain projections, we use CLAP[17] and HTSAT BERT[6]. We use the LAION checkpoint[17] for CLAP and the WavCaps checkpoint for HTSAT BERT[6]. We use 0.8 and 0.6 as the \(\alpha\) values for the experiments in Table 4 and Table 5. Clap Beam and Htsatbert Beam refer to the proposed faithful decoding algorithm with CLAP and Htsatbert, respectively, as the shared embedding space to project audio and text.
## 5 Results
From Table 4, the improvements on CLAPScore\({}_{tt}\) by 0.04 and 0.06 for AudioCaps and Clotho datasets indicate reduced hallucinations. We also observe that the proposed faithful decoding improves the performance of baseline model across all metrics except ROUGE L. This demonstrates that the proposed faithful decoding not only reduces hallucination but improves overall caption quality for smaller models. As a sanity check and to compare the performance against larger models, we perform the same experiments on HTSAT-BART[6]. In Table 5, the proposed faithful decoding slightly outperforms on SPICE and CLAPScore\({}_{tt}\) for AudioCaps while not causing a significant overall change in evaluation metrics. This is expected since the HTSAT-BART, being a large model and trained on Wavcaps[6] a much larger dataset (630k samples) than Clotho and AudioCaps, does not get extra useful information from the shared embedding space.
## 6 Conclusion
We investigated the hallucination problem in audio captioning and proposed a new hallucination augmentation technique which will aid in future research of hallucination mitigation algorithms and metrics. Then, we showed that cosine similarity on audio-text shared embedding is a good hallucination metric. With no further finetuning, we proposed an inference time faithful decoding algorithm that utilizes shared embedding space to guide the beams during decoding time. In the future, we plan to develop a hallucination loss for finetuning stage.
|
2309.05731 | Circuit complexity and functionality: a thermodynamic perspective | Circuit complexity, defined as the minimum circuit size required for
implementing a particular Boolean computation, is a foundational concept in
computer science. Determining circuit complexity is believed to be a hard
computational problem [1]. Recently, in the context of black holes, circuit
complexity has been promoted to a physical property, wherein the growth of
complexity is reflected in the time evolution of the Einstein-Rosen bridge
(``wormhole'') connecting the two sides of an AdS ``eternal'' black hole [2].
Here we explore another link between complexity and thermodynamics for circuits
of given functionality, making the physics-inspired approach relevant to real
computational problems, for which functionality is the key element of interest.
In particular, our thermodynamic framework provides a new perspective on the
obfuscation of programs of arbitrary length -- an important problem in
cryptography -- as thermalization through recursive mixing of neighboring
sections of a circuit, which can be viewed as the mixing of two containers with
``gases of gates''. This recursive process equilibrates the average complexity
and leads to the saturation of the circuit entropy, while preserving
functionality of the overall circuit. The thermodynamic arguments hinge on
ergodicity in the space of circuits which we conjecture is limited to
disconnected ergodic sectors due to fragmentation. The notion of fragmentation
has important implications for the problem of circuit obfuscation as it implies
that there are circuits with same size and functionality that cannot be
connected via local moves. Furthermore, we argue that fragmentation is
unavoidable unless the complexity classes NP and coNP coincide, a statement
that implies the collapse of the polynomial hierarchy of computational
complexity theory to its first level. | Claudio Chamon, Andrei E. Ruckenstein, Eduardo R. Mucciolo, Ran Canetti | 2023-09-11T18:02:21Z | http://arxiv.org/abs/2309.05731v2 | # Circuit complexity and functionality: a thermodynamic perspective
###### Abstract
Circuit complexity, defined as the minimum circuit size required for implementing a particular Boolean computation, is a foundational concept in computer science. Determining circuit complexity is believed to be itself a hard problem [1]. Furthermore, placing general lower bounds on circuit complexity would allow distinguishing computational classes, such as P and NP, an unsolved problem [2]. Recently, in the context of black holes, circuit complexity has been promoted to a physical property, wherein the growth of complexity is reflected in the time evolution of the Einstein-Rosen bridge ("wormhole") connecting the two sides of an AdS "eternal" black hole [3]. Here we explore another link between complexity and physics for circuits of _given_ functionality. Taking advantage of the connection between circuit counting problems and the derivation of ensembles in statistical mechanics, we tie the entropy of circuits of a given functionality and fixed number of gates to circuit complexity. We use thermodynamic relations to connect the quantity analogous to the equilibrium temperature to the exponent describing the exponential growth of the number of distinct functionalities as a function of complexity. This connection is intimately related to the finite compressibility of typical circuits. Finally, we use the thermodynamic approach to formulate a framework for the obfuscation of programs of arbitrary length - an important problem in cryptography - as thermalization through recursive mixing of neighboring sections of a circuit, which can be viewed as the mixing of two containers with "gases of gates". This recursive process equilibrates the average complexity and leads to the saturation of the circuit entropy, while preserving functionality of the overall circuit. The thermodynamic arguments hinge on ergodicity in the space of circuits which we conjecture is limited to disconnected ergodic sectors due to _fragmentation_. The notion of fragmentation has important implications for the problem of circuit obfuscation as it implies that there are circuits with the same size and functionality that cannot be connected via local moves. Furthermore, we argue that fragmentation is unavoidable unless the complexity classes NP and coNP coincide, a statement that implies the collapse of the polynomial hierarchy of complexity theory to its first level.
## Introduction
During the past decade a novel connection between physics and computer science has emerged in the course of explorations of one of the most fundamental open problems in physics, namely, the development of a theory of quantum gravity that reconciles and unifies general relativity with quantum mechanics. In this context, a bold proposal was made and developed by Susskind and collaborators [3, 4, 5] whereby the formal computer science notion of complexity has acquired specific physical reality, as it is conjectured that the growth of computational complexity represents the growth of the Einstein-Rosen bridge (the "wormhole") of an AdS "eternal" black hole. It is argued that this growth is linear in time and persists for a time scale that is exponential in the physical black hole entropy. One way to formalize this connection is to use heuristic models of black hole dynamics based on quantum circuits of 2-qubit gates [6], in which case one (i) defines "computational complexity" as the minimum number of elementary gates needed to implement a particular unitary operator; and (ii) introduces an intuitive notion of "circuit entropy", determined from the logarithm of the number of possible circuits of a given complexity. Unlike the physical entropy, which saturates to a value linear in the number of qubits, \(n\), the circuit entropy, just as the circuit complexity, grows to its saturation value of \(O(4^{n})\). More precisely, Refs. [3, 4, 5] argue that computational complexity is connected with the entropy of an auxiliary ensemble of \(2^{n}\) classical particles moving on a negatively curved two-dimensional surface of large genus. In analogy with the second law of thermodynamics in physical systems, this connection leads to the natural description of the growth tendency of computational complexity as "the second law of complexity" [5]. It is important to note that in deriving the linear growth of complexity with number of gates, Refs. [4, 5]
ignore the role of "circuit collisions", defined as different sequences of gates that lead to the same unitary, i.e., circuits of the same "functionality."
By contrast, here we focus on the development of a thermodynamic approach that takes into account both complexity _and_ functionality. Ultimately it is the specific computation implemented by a given circuit that is the central object of interest in most computational problems. Our framework is based on reversible computing, which can be implemented either as permutations \(P\) acting on the space of \(2^{n}\) strings of \(n\) bits, or as unitary transformations \(U\) acting on the \(d^{n}\) dimensional Hilbert space of \(n\) qudits with local Hilbert space dimension \(d\). Below, we focus on the permutations because the counting is discrete (and thus simpler), but the results carry over to unitaries with minor modifications. We will establish that there are exponentially many ways to express a given functionality - a permutation \(P\) - in terms of reversible gates. Our framework of circuit thermodynamics allows us to connect the scaling of two seemingly unrelated counting problems: (a) how many \(\mathcal{N}\)-gate circuits can one write for a given functionality, and (b) how many distinct functionalities are there for circuits with given complexity \(\mathcal{K}\). The connection between these quantities is tied to the finite compressibility of typical circuits, e.g., those with gates drawn randomly from a given gate set. This finite compressibility highlights the importance of circuit collisions and also implies a linear growth of complexity with number of gates (up to its maximum value exponential in \(n\)) with a slope that is less than unity. As an application of the framework, keeping track of functionality allows us to formulate a thermodynamic approach to the problem of circuit obfuscation, i.e., a form of program encryption that hides one among many circuit implementations of a given functionality.
Below we also introduce the notion of ergodicity in the space of circuits of equal size and functionality, which is implied by the uniform covering of the space of circuits assumed in formulating our thermodynamic approach. The discussion of ergodicity requires defining a dynamics in the space of circuits that enables transforming two circuits into one another while preserving size and functionality. We define a set of dynamical rules that we refer to as "\(k\)-local" dynamics according to which one replaces \(k\)-gate subcircuits by equivalent subcircuits of equal size. This dynamical rule conserves both the functionality and size of the original circuit. We argue that, generically, such models lead to fragmentation of the space of circuits into disconnected sectors. Thus, ergodicity holds and the thermodynamic framework only applies within each sector. This conclusion raises interesting questions about circuit obfuscation that are connected with fundamental assumptions of complexity theory in computer science.
## Results
**Counting circuits, entropy inequalities, and the thermodynamics of circuit complexity:** As noted above, there are multiple ways of writing the same permutation \(P\) using reversible gates; the number of ways depends on the gate set \(G\) used. We define the circuit entropy
\[\mathcal{S}(P,\mathcal{N})=\log_{2}\Omega(P,\mathcal{N})\;, \tag{1}\]
where \(\Omega(P,\mathcal{N})\) is the number of circuits realizing permutation \(P\) with exactly \(\mathcal{N}\) gates. This definition immediately implies the sum-rule \(\sum_{P}\;\Omega(P,\mathcal{N})=|G|^{\mathcal{N}}\), where \(|G|\) denotes the cardinality of the gate set used in the implementation of \(P\).
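To make these counting definitions concrete, the toy Python sketch below enumerates all \(\mathcal{N}\)-gate circuits over a deliberately tiny gate set (CNOT gates on \(n=3\) bits, which only generate linear permutations), tabulates \(\Omega(P,\mathcal{N})\) for each realized permutation \(P\), and checks the sum rule; it is purely illustrative and not meant to scale.

```python
from collections import Counter
from itertools import product
from math import log2

n = 3                                   # number of bits

def cnot(control, target):
    """Permutation of the 2^n bit strings implemented by a CNOT gate."""
    perm = []
    for x in range(2 ** n):
        bits = [(x >> i) & 1 for i in range(n)]
        bits[target] ^= bits[control]
        perm.append(sum(b << i for i, b in enumerate(bits)))
    return tuple(perm)

gate_set = [cnot(c, t) for c in range(n) for t in range(n) if c != t]   # |G| = 6
identity = tuple(range(2 ** n))

def compose(p, q):                      # apply q first, then p
    return tuple(p[q[x]] for x in range(2 ** n))

N = 4                                   # circuit size (number of gates)
omega = Counter()                       # Omega(P, N): N-gate circuits realizing P
for gates in product(gate_set, repeat=N):
    P = identity
    for g in gates:
        P = compose(g, P)
    omega[P] += 1

assert sum(omega.values()) == len(gate_set) ** N   # sum rule: sum_P Omega = |G|^N
print("distinct functionalities:", len(omega))
print("S(identity, N) =", log2(omega[identity]), "bits")   # circuit entropy of the identity
```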
The above counting parallels that used in the formulation of the microcanonical ensemble in statistical mechanics. In this setting, both \(\mathcal{N}\) and the circuit functionality, i.e., the permutation \(P\) implemented by the circuit, are "conserved quantities". Furthermore, we assume that all circuits implementing \(P\) with \(\mathcal{N}\) gates appear with equal weight in the counting, a condition equivalent to the equal probability of microstates in the microcanonical ensemble.
A number of inequalities follow from the definition of the circuit entropy in Eq. (1). The simplest one,
\[\mathcal{S}(P_{1},\mathcal{N}_{1})+\mathcal{S}(P_{2},\mathcal{N}_{2})\leq \mathcal{S}(P_{1}P_{2},\mathcal{N}_{1}+\mathcal{N}_{2})\;, \tag{2}\]
expresses the fact that there may be more ways of implementing the product \(P_{1}P_{2}\) than simply sequentially implementing \(P_{1}\) and then \(P_{2}\). Using this inequality, we can immediately derive a lower bound on the entropy \(\mathcal{S}(P,\mathcal{N})\) in terms of the circuit complexity \(\mathcal{K}(P)\) of the permutation \(P\):
\[\mathcal{S}(P,\mathcal{K}(P))+\mathcal{S}(\mbox{1I},\mathcal{N}-\mathcal{K}(P ))\leq\mathcal{S}(P,\mathcal{N})\;, \tag{3}\]
where \(\rm 1\hskip-2.845276ptl\) denotes the identity permutation, which has zero complexity, i.e., it can be expressed without using any gate. The bound in Eq. (3) connects the microcanonical ensemble entropy to the circuit complexity, and provides the foundation for what we refer to as the _thermodynamics of circuit complexity_. (In analogy to statistical physics, by introducing weights for different gates one could have also constructed a canonical ensemble for circuits.) The two terms on the left hand side represent two types of contributions to the circuit entropy: the first accounts for the number of different ways of writing \(P\) within the minimum possible size \({\cal K}(P)\), and is independent of \({\cal N}\); the second term depends on the "free volume" \({\cal N}-{\cal K}(P)\), with the complexity \({\cal K}(P)\) acting as an "excluded volume". The \({\cal S}(\rm 1\hskip-2.845276ptl,{\cal N}-{\cal K}(P))\) contribution depends on \(P\) only through its complexity. We posit that (i) up to subextensive corrections, \({\cal S}(P,{\cal K}(P))\) depends on \(P\) only through its complexity \({\cal K}(P)\); and that (ii) the entropy \({\cal S}(P,{\cal N})\) also only depends on \({\cal N}\) and the complexity \({\cal K}(P)\), namely,
\[{\cal S}(P,{\cal N})\approx\bar{\cal S}({\cal K}(P),{\cal N})\;. \tag{4}\]
Note that, in parallel with the entropy inequality in Eq. (2), the circuit complexity satisfies the opposite inequality,
\[{\cal K}(P_{1})+{\cal K}(P_{2})\geq{\cal K}(P_{1}P_{2})\;, \tag{5}\]
which reflects the obvious fact that there may be shorter circuits implementing \(P_{1}P_{2}\) than the concatenation of \(P_{1}\) and \(P_{2}\).
To exploit the implications of the above counting argument we introduce another entropy function \(\sigma({\cal K},{\cal N})=\log_{2}\omega({\cal K},{\cal N})\). By contrast to \(\Omega({\cal K}(P),{\cal N})\), which counts the number of \({\cal N}\)-gate circuits of _fixed functionality_ with complexity \({\cal K}\), \(\omega({\cal K},{\cal N})\) counts all \({\cal N}\)-gate circuits of _fixed complexity_, \({\cal K}\):
\[\omega({\cal K},{\cal N})=\sum_{P}\delta_{{\cal K},{\cal K}(P)}\;2^{{\cal S}( P,{\cal N})}\approx\sum_{P}\delta_{{\cal K},{\cal K}(P)}\;2^{\bar{\cal S}({ \cal K}(P),{\cal N})}=\nu({\cal K})\;2^{\bar{\cal S}({\cal K},{\cal N})}\;, \tag{6}\]
leading to
\[\sigma({\cal K},{\cal N}) \approx \log_{2}\nu({\cal K})+\bar{\cal S}({\cal K},{\cal N}) \tag{7a}\] \[\approx \alpha{\cal K}+\bar{\cal S}({\cal K},{\cal N})\;\;, \tag{7b}\]
where \(\nu({\cal K})=\sum_{P}\delta_{{\cal K},{\cal K}(P)}\) is a "density of states" that counts the number of possible functionalities (i.e., permutations) implemented by circuits of a fixed complexity, \({\cal K}\). In writing Eq. (7b) we assumed the extensivity of both sides of Eq. (7a), which implies that \(\log_{2}\nu({\cal K})=\alpha{\cal K}\). The function \(\omega({\cal K},{\cal N})=2^{\sigma({\cal K},{\cal N})}\) defines, for circuits of \({\cal N}\) gates, the circuit complexity weight distribution, which peaks at the extremum of the function \(\sigma({\cal K},{\cal N})\):
\[\left.\frac{\partial\sigma({\cal K},{\cal N})}{\partial{\cal K}}\right|_{{ \cal N}}=0=\alpha+\left.\frac{\partial\bar{\cal S}({\cal K},{\cal N})}{\partial{ \cal K}}\right|_{{\cal N}}\;. \tag{8}\]
Since the entropy decreases with complexity, we write \(\partial\bar{\cal S}({\cal K},{\cal N})/\partial{\cal K}|_{{\cal N}}=-\beta(\kappa)\) where \(\beta(\kappa)\) is a positive function of \(\kappa={\cal K}/{\cal N}\). At the extremum \(\kappa=\kappa^{*}\), \(\alpha=\beta(\kappa^{*})=\beta\), and Eq. (7b) assumes the form,
\[\sigma({\cal K}^{*},{\cal N})=\beta\;{\cal K}^{*}+\bar{\cal S}({\cal K}^{*},{ \cal N})\;\;. \tag{9}\]
It is gratifying that Eq. (9) can be recast in a form familiar from the thermodynamics of physical systems: if we interpret \({\cal K}\) as the negative of the energy, \({\cal E}=-{\cal K}\) (or equivalently, \(\epsilon={\cal E}/{\cal N}=-\kappa\)) and \(\beta=1/T\) as the inverse of the temperature \(T\), Eq. (9) can be rewritten in terms of the equilibrium free energy \(F(T,{\cal N})\),
\[-T\;\sigma_{e}({\cal E},{\cal N})\equiv F(T,{\cal N})={\cal E}-T\;\bar{\cal S}_{e}({\cal E},{\cal N})\;, \tag{10}\]
where subscripts \(e\) represent the respective \(\sigma\) and \(\bar{\cal S}\) functions evaluated at the corresponding negative energies. Within this correspondence, the smallest (large negative) energy state - corresponding to high complexity - would represent a low entropy crystal while the largest (zero) energy state - corresponding to low complexity - would represent a high entropy gas.
The direct analogy with statistical mechanics can be exploited further in the calculation of the probability distribution of complexities for \({\cal N}\)-gate circuits:
\[P_{\cal N}({\cal K})=\frac{\omega({\cal K},{\cal N})}{\sum_{{\cal K}=0}^{{\cal N }}\omega({\cal K},{\cal N})}\;, \tag{11}\]
where the form of \(\omega({\cal K},{\cal N})\) can be obtained by expanding \(\sigma({\cal K},{\cal N})=\log_{2}\omega({\cal K},{\cal N})=\alpha{\cal K}+ \bar{\cal S}({\cal K},{\cal N})\) in Eq. (7b) to second order in \(\Delta{\cal K}=({\cal K}-{\cal K}^{*})\), the departure from the solution of the extremum condition, \(\beta(\kappa^{*})=\alpha\):
\[\log_{2}\omega({\cal K},{\cal N})=\bar{\cal S}({\cal K}^{*},{\cal N})+\alpha{ \cal K}^{*}+\frac{1}{2}\;\Delta{\cal K}^{2}\;\frac{\partial^{2}\bar{\cal S}({ \cal K},{\cal N})}{\partial{\cal K}^{2}}\bigg{|}_{{\cal N},{\cal K}={\cal K}^{* }}+\cdots\;\;. \tag{12}\]
Using the extensivity of the entropy in \({\cal K}\) and \({\cal N}\) we can write \(\bar{\cal S}({\cal K}^{*},{\cal N})=-\beta{\cal K}^{*}-\beta\mu{\cal N}+\cdots\), where we have borrowed the statistical mechanics notation involving the equilibrium "chemical potential" \(\mu\). At the extremum, this leads to the simplification of the first two terms in Eq. (12) to \(\bar{\cal S}({\cal K}^{*},{\cal N})+\alpha{\cal K}^{*}=-\beta\mu{\cal N}\) from which it then follows that:
\[\omega({\cal K},{\cal N})=2^{-\frac{1}{2\lambda{\cal N}T^{2}c_{{\cal N}}}\Delta{\cal K}^{2}}\;2^{-\beta\mu{\cal N}}\;. \tag{13}\]
Thus, the probability distribution in Eq. (11) is a Gaussian peaked at \(\bar{\cal K}={\cal K}^{*}=\kappa^{*}{\cal N}\), _linear_ in \({\cal N}\), with a width (root mean square deviation) \(\Delta_{\rm rms}\propto\sqrt{{\cal N}T^{2}c_{{\cal N}}}\), where \(c_{{\cal N}}=-\partial{\cal K}/\partial T|_{{\cal K}^{*},{\cal N}}\) is a positive intensive quantity analogous to the specific heat in thermodynamics, which, in physical systems, measures the increase in energy induced by an increase in temperature and controls energy fluctuations around the thermal equilibrium state at temperature \(T\). In our case, note the negative sign in the definition of \(c_{{\cal N}}\): this accounts for the fact that increasing temperature (i.e., increasing entropy) implies a decrease in complexity. Moreover, Eq. (13) and the sum rule \(\sum_{P}\;\Omega(P,{\cal N})=\sum_{{\cal K}=0}^{{\cal N}}\omega({\cal K},{\cal N})=|G|^{{\cal N}}\) determine the leading behavior of the chemical potential, \(\mu=-T\log_{2}|G|\).
It is important to stress that we expect that generic solutions of the extremum condition \(\alpha=\beta(\kappa^{*})\) for \(\kappa^{*}\), which depend on the gate set through the value of \(\alpha\), are neither \(0\) nor \(1\) but lie in between, \(0<\kappa^{*}<1\). This expectation underscores two important conclusions of our paper, which we conjecture will survive more rigorous treatments: (a) generic circuits display a finite circuit compressibility with a compression factor \(\eta=(1-{\cal K}^{*}/{\cal N})\), with \(0<\eta<1\); and (b) the average complexity grows linearly with the depth of the circuit. We stress that these results hinge on an important and non-trivial feature that emerges from the thermodynamic arguments at the root of Eq. (8) and the resulting condition \(\alpha=\beta(\kappa)\), namely the balance between two competing effects: the exponential increase in the density of states \(\nu({\cal K})\approx 2^{\alpha{\cal K}}\) and the decrease in the entropy \(\bar{\cal S}({\cal K},{\cal N})\) with increasing \({\cal K}\). We also note that, while our extensivity assumption for \(\log_{2}\nu({\cal K})\) and the linear increase of the average complexity with circuit depth should hold up to a maximum complexity exponential in \(n\), \({\cal K}_{\rm max}\sim n\;2^{n}\), our focus is on polynomial (in \(n\)) size circuits with \({\cal N}\gg n\).
To further motivate the notion that generic circuits display a finite compressibility with a compression factor \(0<\eta<1\), we consider the following scenario leading to a lower bound on \(\eta\). Consider a random circuit of 3-bit Toffoli gates and imagine "pushing" a gate through the circuit until the gate encounters either (i) a gate with which it does not commute, in which case we stop; or (ii) its inverse, in which case the pair (the gate and its inverse) annihilate, decreasing the size of the circuit by two gates (see Fig. 1). For a Toffoli gate, the probability that it does commute with a gate on its path is of the order \(1-{\cal O}(1/n)\), or equivalently, a gate can be "pushed" through \({\cal O}(n)\) gates before either stopping as in case (i) or annihilating as in case (ii). The overall probability of annihilation is \({\cal O}(1/n^{2})\), accounting for the probability \({\cal O}(1/n^{3})\) that the inverse is met in any of the \({\cal O}(n)\) attempts before stopping. Hence, this process leads to a compression of the circuit by a factor \((1-2\xi/n^{2})\) of its original size, where \(\xi\) is a constant of \({\cal O}(1)\). This implies that circuits with greater than \({\cal O}(n^{2})\) universal (Toffoli) gates are compressible with \(\eta=2\xi/n^{2}\), setting a lower bound for compressibility of random circuits of universal gates. Indeed, since the probability of annihilation of linear gates - NOTs and CNOTs - scales more favorably as \(1/n\) and \(1/n^{2}\) respectively, circuits comprised of gates from the universal set of Toffolis, CNOTs, and NOTs are more compressible with a compression factor \(\eta\) above the Toffoli bound. [It is worth mentioning that since any linear circuit can be implemented with \({\cal O}(n^{2})\) gates, purely linear circuits are highly compressible, with \(\eta\sim(1-n^{2}/{\cal N})\).]
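The scaling argument above is easy to sanity-check numerically. The sketch below is our own toy Monte Carlo estimate (not part of the original analysis): it pushes a random Toffoli gate through a stream of independent random Toffoli gates on \(n\) bitlines, using the sufficient commutation condition that neither gate's target is a control line of the other, and records how often the gate meets an identical gate (its own inverse) before being blocked. Under the argument above, \(n^{2}\,P(\text{annihilate})\) should be roughly constant in \(n\).

```python
import random

def random_toffoli(n):
    """A Toffoli gate encoded as (frozenset of two control lines, target line)."""
    while True:
        c1, c2, t = random.randrange(n), random.randrange(n), random.randrange(n)
        if len({c1, c2, t}) == 3:
            return (frozenset((c1, c2)), t)

def commute(g, h):
    """Sufficient condition for two Toffoli gates to commute:
    neither gate's target is a control line of the other."""
    (cg, tg), (ch, th) = g, h
    return tg not in ch and th not in cg

def annihilation_probability(n, trials=200_000):
    """Push a random Toffoli through random gates until it is blocked
    (non-commuting gate) or meets an identical gate (Toffoli is self-inverse)."""
    hits = 0
    for _ in range(trials):
        g = random_toffoli(n)
        while True:
            h = random_toffoli(n)
            if h == g:              # annihilation: the pair cancels
                hits += 1
                break
            if not commute(g, h):   # blocked before annihilation
                break
    return hits / trials

if __name__ == "__main__":
    for n in (6, 12, 24):
        p = annihilation_probability(n)
        print(f"n={n:2d}  P(annihilate)={p:.2e}  n^2 * P={n * n * p:.2f}")
```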
Finally, we note that a compression factor \(0<\eta<1\) also implies that circuit collisions - i.e., multiple circuits implementing the same permutation - are non-negligible. The importance of collisions can also be
argued from a lower bound for \({\cal S}(P,{\cal N})\),
\[[{\cal N}-{\cal K}(P)]\;\log_{2}|G|^{1/2}\leq{\cal S}(P,{\cal N})\;, \tag{14}\]
which highlights the fact that there are exponentially many circuits that realize any \(P\). This result can be derived from a bound on \({\cal S}(\mathbb{1},{\cal N}-{\cal K}(P))\), which enters Eq. (3). In particular, we proceed by placing bounds on the entropy of identities, \({\cal S}(\mathbb{1},{\cal N})\). By expressing \({\cal N}=\sum_{\ell}a_{\ell}\;2^{\ell}\), where \(a_{\ell}=0,1\) are the binary coefficients in the expansion of \({\cal N}\) in base 2 (we shall assume that \({\cal N}\) is even), and then using Eq. (2) multiple times, it follows that \({\cal S}(\mathbb{1},{\cal N})\geq\sum_{\ell}a_{\ell}\;{\cal S}(\mathbb{1},2^{\ell})\geq\sum_{\ell}a_{\ell}\;2^{\ell-1}{\cal S}(\mathbb{1},2)=\frac{{\cal N}}{2}{\cal S}(\mathbb{1},2)=\frac{{\cal N}}{2}\log_{2}|G|\), where we used that a two-gate identity can be written as the product of any gate \(g\) and its inverse \(g^{-1}\). This bound on the entropy of identities, together with Eq. (3), leads to Eq. (14).
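For completeness, the chain of inequalities behind Eq. (14) can be spelled out explicitly, using the form of Eq. (3) discussed earlier and applying the identity-entropy bound above with \({\cal N}-{\cal K}(P)\) in place of \({\cal N}\):

\[{\cal S}(P,{\cal N})\;\geq\;{\cal S}(P,{\cal K}(P))+{\cal S}(\mathbb{1},{\cal N}-{\cal K}(P))\;\geq\;{\cal S}(\mathbb{1},{\cal N}-{\cal K}(P))\;\geq\;\frac{{\cal N}-{\cal K}(P)}{2}\,\log_{2}|G|\;=\;[{\cal N}-{\cal K}(P)]\,\log_{2}|G|^{1/2}\;.\]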
**Thermodynamic mixing of circuits:** The thermodynamic approach presented above relies on the microcanonical assumption, namely that all \({\cal N}\)-gate circuits of a given functionality \(P\) appear with equal weight in the count \(\Omega(P,{\cal N})\). This assumption hinges on "ergodicity" in the space of circuits of a given functionality, a concept which implies a "microscopic" dynamical process and "equilibration" of collections of gates in a circuit, analogous to the thermalization induced by microscopic collisions of atoms or molecules in a gas. We shall return to the complex and interesting question of microscopic dynamics below. Here we concentrate on a coarse-grained model for thermodynamic mixing on a "macroscopic" scale by dividing the circuit into smaller pieces, and assuming that thermalization occurs on a "mesoscopic" scale. In particular, the equilibration of an arbitrary (polynomial size) circuit is established by connecting a string of short \(m\)-gate segments/subcircuits. These "mesoscopic" subcircuits are assumed to be large enough to obey the laws of circuit thermodynamics introduced above but small enough so that an appropriate set of "microscopic" dynamical rules leads to rapid equilibration in a time \(\tau_{eq}\).
More precisely, consider the situation depicted in Fig. 2, in which two \(m\)-gate subcircuits of functionality \(P_{1}\) and \(P_{2}\), respectively, are allowed to exchange gates and functionality via some dynamical rules. Thermalization at the mesoscopic scale implies that, after a time scale \(\tau_{eq}\), (a) the counting of individual subcircuits (Fig. 2a) satisfies the microcanonical assumption, and (b) the concatenated circuit with \(2m\) gates and functionality \(P_{1}P_{2}\) satisfies the thermodynamic inequalities Eqs. (2) and (5).
Given the mesoscopic thermalization assumption, we are now in a position to define a coarse-grained model for thermodynamic mixing that takes as input a circuit \(C\) that is split into \(M\) subcircuits, each comprising \(m={\cal N}/M\) gates: \(C=c_{1}c_{2}...c_{M}\). A subcircuit \(c_{i}\) (\(i=1,\ldots,M\)) can be thought of as a degree of freedom in a \(d\)-dimensional space with \(d=|G|^{m}\) states, i.e., a dit; and thus a circuit can be viewed as a string of \(M\) such dits. The coarse-grained mixing of the full circuit \(C\) is implemented as a circuit acting on dit-strings, i.e., a "circuit acting on circuits" - hereafter referred to as a C-circuit - built out of gates acting on dits, i.e., "gates acting on circuits" - referred to as C-gates. Fig. 3 depicts a brickwall C-circuit of C-gates acting on a pair of neighboring dits \(c_{i-1}(\tau)\) and \(c_{i}(\tau)\) in layer \(\tau\) and evolving them into \(c_{i-1}(\tau+1)\) and \(c_{i}(\tau+1)\) after one "time"
Figure 1: An example of βpushingβ a Toffoli gate \(g\) past gates \(h_{1},h_{2},\ldots,h_{p}\) so as to annihilate \(g\) with its inverse \(\tilde{g}\) (identical to \(g\) for a Toffoli gate), thereby reducing the circuit size by two gates. Typically, in a random circuit over \(n\) bitlines, a Toffoli gate can travel past \({\cal O}(n)\) gates with which it commutes before encountering either a gate with which it does not commute or, with probability \({\cal O}(1/n^{3})\), its inverse. The process depicted leads to a compression of the circuit size by a factor \((1-2\xi/n^{2})\), where \(\xi\) is a constant of \({\cal O}(1)\).
step, i.e., into layer \(\tau+1\) of the C-circuit. The brickwall C-circuit is a scrambler of circuits, much as usual brickwall circuits of dit- or qudit-gates are scramblers of states of dits or qudits (see for example Ref. [7] and references therein).
The action of an individual C-gate, which takes place on a time scale longer than \(\tau_{eq}\), is based on the mesoscopic equilibration assumption, and is implemented in three steps that parallel those depicted in Fig. 2, as follows: (a) take \(c_{i-1}(\tau)\) and \(c_{i}(\tau)\), with functionalities \(P(c_{i-1}(\tau))\) and \(P(c_{i}(\tau))\); (b) draw a circuit \(c_{\rm aux}\) uniformly out of the \(\Omega(P,2m)\)\(2m\)-gate circuits with functionality \(P=P(c_{i-1}(\tau))\,P(c_{i}(\tau))\); and (c) split the \(2m\)-gate circuit \(c_{\rm aux}\) into two \(m\)-gate circuits \(c_{i-1}(\tau+1)\) and \(c_{i}(\tau+1)\). We note that the action of a C-gate on the two dits \(c_{i-1}\) and \(c_{i}\) preserves functionality of the product of the two associated sub-circuits, a "conservation law" that maintains the functionality of the overall circuit. (We also note that the stochastic process defined above could be replaced by the action of a C-circuit built from deterministic C-gates with given substitution truth tables chosen randomly.)
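As a concrete illustration, the stochastic C-gate admits a brute-force toy implementation for very small instances. The sketch below is our own (the choice of \(n=3\) bitlines, the NOT/CNOT/Toffoli gate set, the \(m=2\)-gate subcircuits and the left-to-right composition convention are all assumptions made for the example); it resamples a pair of subcircuits uniformly among all \(|G|^{2m}\) circuits with the same combined functionality:

```python
import itertools, random

N_BITS = 3
STATES = list(range(1 << N_BITS))

def apply_gate(gate, x):
    """Gates act on 3-bit integers: ('NOT', t), ('CNOT', c, t) or ('TOF', c1, c2, t)."""
    kind = gate[0]
    if kind == 'NOT':
        return x ^ (1 << gate[1])
    if kind == 'CNOT':
        _, c, t = gate
        return x ^ (1 << t) if (x >> c) & 1 else x
    _, c1, c2, t = gate
    return x ^ (1 << t) if (x >> c1) & 1 and (x >> c2) & 1 else x

def gate_set():
    gates = [('NOT', t) for t in range(N_BITS)]
    gates += [('CNOT', c, t) for c in range(N_BITS) for t in range(N_BITS) if c != t]
    gates += [('TOF', c1, c2, t)
              for c1, c2, t in itertools.permutations(range(N_BITS), 3) if c1 < c2]
    return gates

G = gate_set()                      # |G| = 12 for n = 3

def functionality(circuit):
    """Permutation of the 2^n basis states implemented by the circuit (gates applied left to right)."""
    perm = []
    for x in STATES:
        for g in circuit:
            x = apply_gate(g, x)
        perm.append(x)
    return tuple(perm)

def c_gate(c1, c2):
    """Stochastic C-gate: resample (c1, c2) uniformly among all 2m-gate circuits
    whose concatenation implements the same permutation P1*P2 (brute force, tiny m only)."""
    m = len(c1)
    target = functionality(tuple(c1) + tuple(c2))
    pool = [circ for circ in itertools.product(G, repeat=2 * m)
            if functionality(circ) == target]
    new = random.choice(pool)
    return new[:m], new[m:]

if __name__ == "__main__":
    c1 = tuple(random.choice(G) for _ in range(2))
    c2 = tuple(random.choice(G) for _ in range(2))
    before = functionality(c1 + c2)
    c1_new, c2_new = c_gate(c1, c2)
    print("functionality preserved:", functionality(c1_new + c2_new) == before)
```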
As alluded to above, the action of the brickwall C-circuit progressively expands the "local" equilibrium within each of the \(m\)-gate subcircuits into a thermodynamically mixed equilibrium state for a full circuit \(C\) of any size \(\mathcal{N}\). The equilibration process induced via the C-circuit is analogous to the equilibration of connected thermodynamic systems (e.g., containers of gas molecules) that were initially isolated from one another. More specifically, this thermodynamic mixing of circuits reflects three properties of the explicit C-circuit implementation: (i) the circuit entropy after the application of each C-gate never decreases, but increases or remains the same; (ii) through subsequent layers of the scrambling process, functionalities of individual subcircuits change but the functionality of the overall concatenated circuit is preserved; and, most importantly, (iii) the thermalization in the space of dits defining the action of a stochastic C-gate and the layer-by-layer evolution of the circuit lead to branching into a multitude of paths, which implies that memory of the initial circuit is lost, i.e., the scrambling process is irreversible.
Given the one-dimensional brickwall arrangement of gates acting on \(M\) dits, such as the C-circuit in Fig. 3, the number of layers required for scrambling the initial dit-string (in our case the initial circuit \(C\)) should scale as \(M^{\gamma}\). In random circuits acting on dits without conservation laws \(\gamma=1\)[6, 8, 9, 7] while in the presence of locally conserved quantities we expect \(\gamma\geq 2\)[10, 11, 12]. For the case of scrambling by C-circuits, a C-gate acting on two subcircuits of functionalities \(P_{1}\) and \(P_{2}\), respectively, may change \(P_{1}\) and \(P_{2}\) but preserves \(P_{1}P_{2}\). This more complicated "non-linear" conservation law has not yet been analyzed in detail but
Figure 2: The elementary C-gate represents the equilibration between two circuits of equal sizes \(\mathcal{N}_{1}=\mathcal{N}_{2}=m\). The two individual \(m\)-gate circuits, depicted in (a), have functionalities \(P_{1}\) and \(P_{2}\), respectively. In (b) they are brought into contact and exchange gates and functionality, realizing a combined functionality \(P_{1}P_{2}\), while preserving the total number of gates \(\mathcal{N}_{1}+\mathcal{N}_{2}=2m\). In (c), following equilibration (symbolized by the red double-arrow in (b)) the \(2m\)-gate circuit is split in the middle such that each of the partitions contains \(m\) gates and represents, respectively, functionalities \(P_{1}^{\prime}\) and \(P_{2}^{\prime}\), with \(P_{1}^{\prime}\,P_{2}^{\prime}=P_{1}\,P_{2}\).
we expect that both the saturation of the entropy to its maximum attainable value and the state of uniform average complexity, \(\bar{\mathcal{K}}_{i}=\mathcal{K}(P)/M\), for each of the \(M\) subcircuits of a C-circuit \(C\) are reached within a time polynomial in \(M\).
_An application to Circuit Obfuscation_: The C-circuit-based thermodynamic mixing of circuits presented above provides a conceptual framework for circuit obfuscation. Circuit obfuscation can be viewed as a gedanken experiment: take two \(\mathcal{N}\)-gate circuits, \(C_{1}\) and \(C_{2}\), with the same functionality \(P\), and apply to each of them a functionality preserving scrambling procedure that runs in polynomial time. As defined in Ref. [13], Indistinguishability Obfuscation (IO) holds if at the end of the process an adversary with polynomial resources cannot distinguish whether a given obfuscated circuit originated from \(C_{1}\) or \(C_{2}\). Indeed, as we have seen from the discussions above, a sufficiently large C-circuit is a good circuit obfuscator since the scrambling process removes all information about initial circuits and thus an adversary with polynomial resources would not be able to distinguish which scrambled circuit originated from which initial circuit.
We note that the general line of reasoning presented so far makes certain assumptions, some of which will be challenged below. In particular, the notion of fragmentation of the space of circuits of a given size and functionality into disconnected sectors, which we introduce shortly, will restrict the thermodynamic framework to individual sectors. As discussed below, fragmentation of circuit space has conceptual implications for the problem of circuit obfuscation.
**Microscopic dynamics of circuits:** All thermodynamics-based arguments presented thus far rely on the assumption of ergodicity, namely that some microscopic dynamical rules that connect circuits of same size and functionality lead to a uniform covering of the space of all such circuits. Moreover, we assumed that equilibration across the space of circuits is achieved in polynomial time, i.e., that, given the dynamical rules, connecting any two circuits can be achieved with a number of steps that scales polynomially in the number of gates, \(\mathcal{N}\). This assumption raises a number of interesting and up to now unexplored questions.
To begin with, by contrast to motion of molecules in a gas in the course of collisions, which is governed by physical laws, there is no unique or natural dynamics for moving/colliding gates in a circuit in ways that preserve functionality. A naive notion of gate collisions, analogous to collisions of gas particles, must take into account the non-commutative algebra of gates in a universal set. If one defines a collision as an interchange of gates \(g_{1}\) and \(g_{2}\) acting on shared bitlines, then preserving functionality before and after the collision
Figure 3: A brickwall C-circuit that progressively expands the βmesoscopicβ (local) equilibrium established between pairs of \(m\)-gate subcircuits (depicted by the gray boxes) into a thermodynamically mixed equilibrium state for the full circuit, while preserving the functionality of the original concatenation of gates. Neighboring \(m\)-gate subcircuits are brought into local thermal equilibrium via the exchange (depicted by the two-headed red arrows) of gates and functionality (while preserving both the number of gates and the combined functionality of the subcircuits). A subcircuit is paired with either its neighbor to the left or to the right, alternating in each time step (following the pattern of blue arrows).
implies, generically, a substitution \(g_{1}\,g_{2}\leftrightarrow g_{2}\,D\,g_{1}\), where \(D\) is a "debris" gate needed so that algebraically \(g_{1}\,g_{2}=g_{2}\,D\,g_{1}\). An example of such a collision is illustrated in Fig. 4a for two Toffoli gates. A macroscopic number of such debris-generating collisions would inevitably lead to an irreversible increase in the size of the circuit, violating the constraint of a fixed number of gates.
A more fruitful direction is to define a dynamics in the space of circuits based on gate-substitution rules that exchange a string of gates with an alternate string with same size and functionality. One can view an \(\mathcal{N}\)-gate circuit as a quasi-1D system, or a chain of \(\mathcal{N}\) sites, in which a gate \(g_{i}\) (a non-Abelian group element) is placed at each site \(i\). Global functionality is determined by \(P=g_{1}\,\,g_{2}\cdots g_{N}\), and a _local_ microscopic dynamics must preserve this overall functionality. The functionality-preserving local dynamical model we have in mind involves the following substitution of a string of \(k\) consecutive gates:
\[\big{(}g_{i},g_{i+1},\ldots,g_{i+k}\big{)} \longleftrightarrow\big{(}g^{\prime}_{i},g^{\prime}_{i+1}, \ldots,g^{\prime}_{i+k}\big{)} \tag{15a}\] \[g_{i}\,g_{i+1}\,\ldots\,g_{i+k} =\,\,\,g^{\prime}_{i}\,g^{\prime}_{i+1}\,\ldots\,g^{\prime}_{i+k}\;. \tag{15b}\]
An example of a \(k=3\) circuit identity involving Toffoli and CNOT gates is shown in Fig. 4b. Substitution rules for fixed (and small) \(k\) can be built from a catalog of strings of \(k\) gates that multiply to the same permutation. Transition probabilities among the \(k\)-length strings, in the case the catalog is exhaustive, can be chosen to be
\[T_{\big{(}g_{i},\ldots,g_{i+k}\big{)},\big{(}g^{\prime}_{i}, \ldots,g^{\prime}_{i+k}\big{)}}=\frac{1}{\Omega(g_{i}\,\ldots\,g_{i+k},k)}\; \delta_{g_{i}\,\ldots\,g_{i+k},g^{\prime}_{i}\,\ldots\,g^{\prime}_{i+k}}\;. \tag{16}\]
We note that the stochastic C-gate used above can be implemented via such a transition matrix element with \(k=2m\). Alternatively, one can dilute the connectivity associated with the \(T\)-matrix so that not all pairs of \(k\)-strings satisfying Eqs. (15) are connected via a matrix element. We note that since the number of circuits with \(k\) gates is \(|G|^{k}\), enumerating the equivalence rules for large \(k\) becomes prohibitive.
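To make the substitution dynamics concrete, the sketch below (our own illustration, meant to be appended to the C-gate sketch above, since it reuses its gate set `G` and `functionality` helper) builds the catalog of \(k\)-gate strings grouped by functionality and applies local moves drawn according to Eq. (16); every move manifestly preserves the global functionality \(P\):

```python
import itertools, random
from collections import defaultdict

def build_catalog(k):
    """Group all |G|^k strings of k gates by the permutation they implement (Eq. 15b)."""
    catalog = defaultdict(list)
    for string in itertools.product(G, repeat=k):
        catalog[functionality(string)].append(string)
    return catalog

def substitution_move(circuit, catalog, k):
    """One local, functionality-preserving move: pick a window of k consecutive gates
    and resample it uniformly within its equivalence class (Eq. 16)."""
    i = random.randrange(len(circuit) - k + 1)
    window = tuple(circuit[i:i + k])
    replacement = random.choice(catalog[functionality(window)])
    return circuit[:i] + list(replacement) + circuit[i + k:]

if __name__ == "__main__":
    k = 2
    catalog = build_catalog(k)                      # feasible only for small k: |G|^k entries
    circuit = [random.choice(G) for _ in range(10)]
    P = functionality(circuit)
    for _ in range(1000):
        circuit = substitution_move(circuit, catalog, k)
    print("functionality preserved after 1000 moves:", functionality(circuit) == P)
```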
While to our knowledge this type of dynamical model has not been discussed in the literature and a detailed study of the model is outside the scope of this paper, we can already point to a set of fundamental issues that have important implications for the discussion of circuit thermodynamics. In particular, our intuition suggests that the space of circuits with functionality \(P=g_{1}\,\,g_{2}\cdots g_{\mathcal{N}}\) evolving via \(k\)-range rules will generically fragment into a number of disconnected sectors. A simple example that supports the notion of fragmentation is to consider a functionality-preserving dynamics that only connects 2-strings if and only if two neighboring gates \(g_{i}\) and \(g_{i+1}\) commute, in which case we exchange \((g_{i},g_{i+1})\leftrightarrow(g_{i+1},g_{i})\) with probability \(1/2\). This dynamics allows a gate to move left and right through the list of gates in the circuit by passing other gates with which it commutes, but not past those gates with which it does not commute. This dynamics preserves the number of gates of each type in the circuit and thus does not allow one to connect the two equivalent
Figure 4: (a) An example of a collision (the interchange) \(g_{1}\,g_{2}\leftrightarrow g_{2}\,D\,g_{1}\), where the debris gate \(D\) is needed to preserve the functionality of the initial two-gate segment of the circuit. (b) An example of a substitution of a segment with \(k=3\) gates, \(g_{1}\,g_{2}\,g_{3}\leftrightarrow g^{\prime}_{1}\,g^{\prime}_{2}\,g^{\prime} _{3}\). The same arrangements in (b) can be used as examples of two functionally equivalent circuits with \(\mathcal{N}=3\) that cannot be connected via a \(k=2\) substitution rule.
circuits in Fig. 4b. A less restricted dynamics with \(k=2\) in this same example would still not allow the two sequences of three gates in Fig. 4b to be connected.
Fragmentation implies that a particular dynamics is ergodic only within individual sectors, and thus all the thermodynamic results would apply, but only within each disconnected sector. Implicit in this statement is that, within a given sector, ergodicity is reached within a number of steps defining the particular dynamics that is polynomial in \(\mathcal{N}\). In this case, the finite compressibility of generic circuits is an example of a property that survives in the fragmented system, where the compression factor should be determined by some weighted average over fragments.
The above intuition concerning fragmentation and polynomial thermalization is further supported by arguments from the theory of computational complexity. According to these arguments, connecting _any_ two circuits of the same functionality and size via a polynomial number of _local_ functionality-preserving dynamical moves would imply the equality NP = coNP of two complexity classes, namely NP - decision problems for which polynomial time solutions are not known but for which a YES solution can be checked in polynomial time; and coNP - decision problems for which polynomial time solutions are not known but for which a NO solution can be checked in polynomial time. Even though it is widely believed that NP \(\neq\) coNP, rigorously justifying this belief (or its negation) is an open problem in computer science. In order to understand the above claim, consider the Circuit Equivalence problem, the problem of deciding whether two (polynomial-sized) circuits are equivalent. Circuit Equivalence is known to be a problem in coNP, since a NO solution can be easily checked by comparing the outputs of the two circuits on a given input. At the same time, if any two circuits could be connected by a polynomial number of dynamical moves, even if difficult to find, this set of moves can be used to verify a YES solution of the Circuit Equivalence problem in polynomial time, placing Circuit Equivalence also in NP. Similarly, a parallel set of arguments can be used to argue that Circuit Inequivalence, i.e., the problem of deciding that two (polynomial-sized) circuits are inequivalent, which is a problem in NP, is also in class coNP since the connectivity of any two circuits via a polynomial number of moves could be used to verify the NO solution of Circuit Inequivalence (i.e., equivalence) in polynomial time. Moreover, it is also well known that Circuit Equivalence and Circuit Inequivalence are among the hardest problems in their respective coNP and NP classes, i.e., they are in classes coNP-complete and NP-complete, respectively. "Completeness" indicates that any problem in coNP or NP can be reduced, respectively, to Circuit Equivalence or Circuit Inequivalence in polynomial time. Given that a polynomial number of dynamical moves placed Circuit Equivalence in NP and Circuit Inequivalence in coNP, the conclusions of the above line of argumentation are that: (i) all problems in coNP are in NP (coNP \(\subset\) NP); (ii) all problems in NP are in coNP (NP \(\subset\) coNP); and thus that (iii) NP = coNP. As already alluded to above, this conclusion contradicts widely accepted beliefs in computational complexity and implies that, in our context, fragmentation is unavoidable, regardless of whether the polynomial sequence of moves connecting pairs of circuits is easy or hard to find.
Clearly, fragmentation and the accompanying broken ergodicity significantly alters the discussion of circuit obfuscation. For a system with multiple sectors, the relevant question becomes: given two circuits \(C_{1}\) and \(C_{2}\), can one decide in polynomial time whether they belong to the same ergodic (thermalized) sector or not? Physical intuition based on the scrambling of information, irreversibility, and chaos in closed systems with large number of degrees of freedom leads to a natural conjecture that, for non-trivial dynamical rules, this is a hard (NP) decision problem. If this is the case, then the thermodynamic framework does in fact provide a path to Indistinguishability Obfuscation of _any_ two circuits, \(C_{1}\) and \(C_{2}\). Otherwise the thermodynamic framework could only establish IO for circuits in the same sector.
### Discussion and future directions
This paper presents a thermodynamic framework for describing coarse-grained properties of large \(\mathcal{N}\)-gate reversible classical circuits with \(\mathcal{N}\gg n\) (with \(\mathcal{N}\) polynomial in \(n\), the number of bitlines of the circuit) and a given functionality, defined by the permutation \(P\) implemented by the circuit. Our construction of circuit thermodynamics is based on three assumptions that underpin the logical consistency of the approach: (i) the functionality \(P\) only appears through the circuit complexity \(\mathcal{K}(P)\), i.e., the minimum number of gates required for the implementation of the permutation \(P\); (ii) the entropy defined by counting the number of possible \(\mathcal{N}\)-gate circuits implementing \(P\) is extensive in \(\mathcal{N}\) and \(\mathcal{K}(P)\); and (iii) ergodicity in the space of circuits, which as a result of fragmentation can only occur in disconnected sectors, requires a "time" (i.e., number of dynamical moves) that is polynomial in \(\mathcal{N}\), the size of the circuit.
The fragmentation of the space of circuits suggests a number of questions we expect to address through
more detailed analytical and computational studies: (i) Is there a critical value \(k_{c}\) such that if \(k_{c}\leq k\leq\mathcal{N}\) the space of circuits of size \(\mathcal{N}\) and functionality \(P\) becomes fully connected, and how does this value scale with the number of bitlines \(n\)? (ii) If the space is fragmented, how does the number of fragments scale with \(k\) and \(\mathcal{N}\) and (possibly) the complexity of \(P\)? (iii) Can one make more precise statements about the hardness of deciding whether any two circuits belong to the same or different sectors? We note that even though we raise these questions in the context of the permutation group \(S_{2^{n}}\), the thermodynamic framework and the issues it raises can be generalized to other groups.
In summary, the thermodynamic perspective to complexity and functionality of circuits provides a framework that may stimulate new ways of thinking and new problems at the interface between physics and computer science. In particular, the issue of fragmentation, which is a topic of much current interest to the physics communities working on classical and quantum dynamics [14, 15, 16, 17, 18], may raise new questions for the computer science community, which to our knowledge have not been explored. Conversely, the question of the scaling of information scrambling rates with system size for systems with "multiplicative" rather than additive conservation laws (as is the case with the functionality of circuits) may intrigue and inspire physicists interested in classical and quantum dynamics.
### Acknowledgments
We are grateful to Alexsey Khudorozhkov and Guilherme Delfino for insightful discussions. This work was supported in part by DOE Grant DE-FG02-06ER46316 (C.C.) and a Grant from the Mass Tech Collaborative Innovation Institute (A.E.R.). R.C., C.C., and A.E.R. also acknowledge the Quantum Convergence Focused Research Program, funded by the Rafik B. Hariri Institute at Boston University.
|
2309.17335 | Asynchronous Graph Generator | We introduce the asynchronous graph generator (AGG), a novel graph attention
network for imputation and prediction of multi-channel time series. Free from
recurrent components or assumptions about temporal/spatial regularity, AGG
encodes measurements, timestamps and channel-specific features directly in the
nodes via learnable embeddings. Through an attention mechanism, these
embeddings allow for discovering expressive relationships among the variables
of interest in the form of a homogeneous graph. Once trained, AGG performs
imputation by \emph{conditional attention generation}, i.e., by creating a new
node conditioned on given timestamps and channel specification. The proposed
AGG is compared to related methods in the literature and its performance is
analysed from a data augmentation perspective. Our experiments reveal that AGG
achieved state-of-the-art results in time series imputation, classification and
prediction for the benchmark datasets \emph{Beijing Air Quality},
\emph{PhysioNet ICU 2012} and \emph{UCI localisation}, outperforming other
recent attention-based networks. | Christopher P. Ley, Felipe Tobar | 2023-09-29T15:46:41Z | http://arxiv.org/abs/2309.17335v3 | # Asynchronous Graph Generators
###### Abstract
We introduce the asynchronous graph generator (AGG), a novel graph neural network architecture for multi-channel time series which models observations as nodes on a dynamic graph and can thus perform data imputation by transductive node generation. Completely free from recurrent components or assumptions about temporal regularity, AGG represents measurements, timestamps and metadata directly in the nodes via learnable embeddings, to then leverage attention to learn expressive relationships across the variables of interest. This way, the proposed architecture implicitly learns a causal graph representation of sensor measurements which can be conditioned on unseen timestamps and metadata to predict new measurements by an expansion of the learnt graph. The proposed AGG is compared both conceptually and empirically to previous work, and the impact of data augmentation on the performance of AGG is also briefly discussed. Our experiments reveal that AGG achieved state-of-the-art results in time series data imputation, classification and prediction for the benchmark datasets _Beijing Air Quality_, _PhysioNet Challenge 2012_ and _UCI localisation_.
## 1 Introduction
Incomplete time series data are ubiquitous in a number of applications (Miao et al., 2019), including medical logs, meteorology records, traffic monitoring, financial transactions and IoT sensing. Missing records may be due to various reasons which include failures either in the acquisition or transmission systems, privacy protocols, or simply because the data are collected asynchronously in time. Missing data is an issue in itself but also hinders downstream applications: for example, the public dataset PhysioNet (Silva et al., 2012) has a 78% average missing rate, which makes it challenging to extract useful information from the dataset for, e.g., predicting mortality. In this setting, imputation refers to filling in the missing values using the available sparse observations (Little and Rubin, 2019), and can be achieved by methods that exploit both temporal and spatial dependencies (Yoon et al., 2017; Yi et al., 2016).
Existing approaches (Cao et al., 2018) to imputation in multi-sensor time series often assume temporal regularity of the data, which is a consequence of representing the values of the series through a matrix with missing entries as shown in Fig. 1a. This representation implicitly produces two critical assumptions: i) the notion of order (causality), e.g., \(x_{1}\) precedes \(x_{2}\), and ii) a fixed sampling rate implying synchronous data acquisition. We assert that this representation is detrimental to successfully learn latent dynamics generating the (sparse) observations, therefore, we propose to relax these stringent assumptions and represent observations as nodes in an asynchronous directed graph, such as that depicted in Fig. 1b. This approach is robust to the occurrence of missing data and exploits the permutation invariance of multiple sensors to perform imputation as a transductive node generation operation over graph embeddings as depicted in Fig. 1c. We refer to the proposed representation as asynchronous graph generator (AGG).
Deep-learning-based approaches to imputation of missing data have become increasingly popular in the last five years (Yoon et al., 2018; Liu et al., 2019; Cao et al., 2018). However, in general these methods rely on slight modifications of standard neural architectures tailored for discrete-time complete data and are thus unable to fully incorporate available relational information related to, e.g., temporal, spatial or operating conditions (Bai et al., 2018; Chung et al., 2014). We argue that
continuous-time graphs are a promising resource for incorporating stronger inductive biases in the analysis of multivariate signals, in particular with applications to data imputation. We assume no data regularity beyond what is explicitly observed through each sensor, all with the aim of learning the latent dynamics as agnostically as possible. Using an asynchronous graph is pivotal to fulfil this aim as it allows us to identify expressive relationships among measurements in large and incomplete sensor networks, such as those found in real-world applications.
## 2 Related Work
The literature addressing missing value imputation in time series is vast. Enormous work has been dedicated to attempting imputation using classical (non-deep learning) approaches (Beretta & Santaniello, 2016; Troyanskaya et al., 2001; Ghahramani & Jordan, 1993; Nelwamondo et al., 2007; Durbin & Koopman, 2012; Kihoro et al., 2013; Cichocki & Phan, 2009; Cai et al., 2011; Rao et al., 2015; Mei et al., 2017; Yu et al., 2016; Yi et al., 2016).
More recently, deep learning models have been successfully developed for multi-sensor time series imputation, in particular, using recurrent neural networks (RNNs) (Cao et al., 2018; Yoon et al., 2018; Lipton et al., 2016; Che et al., 2018; Luo et al., 2018). Notably, GRU-D (Che et al., 2018) analyses sequences with missing data by controlling the decay of the hidden states of a gated RNN, while BRITS (Cao et al., 2018) implements a bidirectional GRU-D that incorporates cross-channel correlation to perform spatial imputation. These RNN-based methods assume temporal regularity of data, i.e., a fixed sampling rate.
Adversarial strategies have also been applied to imputation. GAIN (Yoon et al., 2018) uses GANs (Goodfellow et al., 2020) to perform imputation in the i.i.d. setting where dependencies among sensors are neglected, while Luo et al. (2018, 2019) trains models to generate realistic synthetic sequences. Miao et al. (2021) used an approach similar to GAIN but conditioned the generator on the predicted label to reconstruct missing values. Lastly, Liu et al. (2019) addressed the imputation problem for multi-scale highly-sparse series using hierarchical models.
Concurrently, graph neural networks (GNN) have found applications in spatio-temporal forecasting, where the idea underpinning most methods is the extension of RNN architectures to the graph domain. For instance, Seo et al. (2018) implemented GRU cells as nodes combined with spectral GNN operations (Defferrard et al., 2016), while Li et al. (2018) replaced spectral GNNs with a diffusion-convolutional network (Atwood & Towsley, 2016). Scarselli et al. (2008); Li et al. (2016); Yu et al. (2017); Wu et al. (2019, 2020) propose, instead, spatio-temporal graph convolutional networks that alternate convolutions on temporal and spatial dimensions. Similar approaches have focused on spatio-temporal data by combining Transformer-like architectures with RNNs (Cai et al., 2020; Zhang et al., 2018). Temporal graph networks (Rossi et al., 2020; Cini et al., 2022) learn node embeddings in dynamical graphs but again heavily relying on RNNs to extract temporal encodings. Lastly, recent works used GNNs for imputation of missing features in the i.i.d. case: Spinelli et al. (2020) trained GNNs for the data reconstruction task, while You et al. (2020) proposed a bipartite graph representation for feature imputation.
To the best of our knowledge, no previous GNN-based method approaches the imputation problem from the perspective of an asynchronous graph. Existing methods rely on RNNs in some form and, as a consequence, implicitly adopt strong assumptions about sampling regularity.
Figure 1: (a) Matrix time-series representation (Cao et al., 2018). (b) Asynchronous directed graph representing observations and causal relationships through directed edges; colours represent different metadata encodings. (c) Imputation performed by generating new nodes, in this case node \(\bar{h}_{6}\).
## 3 The AGG Architecture
Asynchronous graphs are a subclass of continuous-time dynamic graphs (CTDG) and are generally represented as a timed list of events, i.e., operations over edges and nodes including addition, deletion or feature transformations (Rossi et al., 2020). The proposed AGG considers each new sensor measurement as an expansion of the graph--or node additions--with the directed edges representing the temporal (causal) relationship among new and past measurements. Being a sequence of time-stamped events, we denote the graph by \(\mathcal{G}=\{x_{1},x_{2},\ldots\}\).
The main objective of AGG is to perform transductive node generation, that is, given a set of observations composed of values, timestamps and additional measurements referred to as _metadata_, AGG generates the value for a set of new nodes conditional on any timestamp and metadata. We emphasise the timestamps need not be uniformly sampled or even ordered.
Transductive node generation, as seen in Fig. 1c, is a node addition to the existing asynchronous graph. When a node is added to a graph--which is permutation invariant (Bronstein et al., 2021)--it has no notion of position but only relationships to other nodes via edges. It is through the temporal encoding that we condition the node to have a notion of order within the graph. If the encoding places the new node within the temporal "neighbourhood" of the other nodes in the graph, we refer to data imputation, whereas if the new node comes after the known temporal encodings we refer to prediction. Furthermore, we can condition the graph to generate nodes with continuous values (regression) or discrete values (classification). We can see that the class of node generation is arbitrary and, given a flexible notion of encoding, allows the AGG to be used for a wide variety of tasks, from imputation to anomaly detection.
Data imputation can also be seen as a type of self-supervised pre-training through masked data augmentation (Balestriero et al., 2022). After performing imputation, the graph embeddings can leverage their expressive representation for regression, classification and even anomaly detection in the same way that masked pre-training is leveraged in architectures like BERT (Devlin et al., 2019). Our self-supervised approach splits observations into inputs and targets--see Fig. 2--to then organise them into batches for training a graph attention-based architecture. We next present the data treatment and the proposed architecture.
### Problem formulation and data preparation
For clarity of presentation, we assume the existence of continuous-time latent signals which are only measured through a finite set of observations \(\mathcal{D}=\{x_{n}\}_{n=1}^{N}\). The \(n\)-th measurement is given by
\[x_{n}=[y_{n},t_{n},m_{n}]\in\mathbb{R}^{d_{y}+1+d_{m}}, \tag{1}\]
where \(y_{n}\in\mathbb{R}^{d_{y}}\) is the **value**, \(t_{n}\in\mathbb{R}\) is the **timestamp** and \(m_{n}\in\mathbb{R}^{d_{m}}\) is all the available **metadata** including--but not limited to--type, location and operating conditions of the measurement. Our aim is to extract knowledge from \(\mathcal{D}\) to predict **values** corresponding to a set of **timestamps** and **metadata** previously unseen. To exemplify the role of this notation, consider the Beijing dataset, where **metadata** captures the measurements' type (e.g., PM2.5, pressure, temperature) as well as their location.
The process of leveraging the data to train AGG is described next, refer to Fig. 2 for an illustration of a particular case. First, the dataset \(\mathcal{D}\) in equation 1 is obtained via an acquisition system (Fig. 2a) and each measurement is considered as a node in a graph. Then, we order the nodes wrt their timestamps and randomly split the dataset into input and target samples (blue and red in Fig. 2b). Lastly, the dataset is divided into samples of \(L\) inputs and 1 output by sequentially passing through the observations with a stride of \(\Delta\) (Fig. 2c).
### Learnable embeddings for value, time-stamps and metadata
**Temporal embedding.** Graphs are naturally permutation invariant so in order to learn flexible representations of temporal differences, such as periodicities and long-range dynamics, we must encode the temporal position along with nodes features. Following Kazemi et al. (2019), we use the learnable temporal encoding **t2v** and then use these learnt representation in a similar vein as positional encoding in Vaswani et al. (2017). For a \(x_{n}\) as defined in equation 1, this embedding is parametrised as
\[\textbf{t2v}(\tau_{n})=\left[\omega_{0}\tau+\varphi_{0},\mathcal{F}\left( \omega_{1}\tau_{n}+\varphi_{1}\right),\mathcal{F}\left(\omega_{2}\tau_{n}+ \varphi_{2}\right),\dots,\mathcal{F}\left(\omega_{D_{t}-1}\tau_{n}+\varphi_{D_{ t}-1}\right)\right]^{\top}\in\mathbb{R}^{D_{t}}, \tag{2}\]
where \(\tau_{n}\) is the temporal difference between \(x_{n}\) and last-observed node \(x_{N}\), i.e., \(\tau_{n}=t_{N}-t_{n}\geq 0\); \(\{\omega_{k}\}_{k}\) and \(\{\varphi_{k}\}_{k}\) are learnable parameters; and \(\mathcal{F}\) is a periodic function. Inspired by Kazemi et al. (2019), we choose \(\mathcal{F}(\cdot)=\sin\left(\cdot\right)\) in all implementations of AGG.
**Metadata embedding.** In order to utilise measurements of different nature (defined by the metadata) one could be tempted to represent all interactions via a heterogeneous graph and build specific models for each interaction of nodes and edges. However, this would require us to cater for all possible relationships among nodes with minimal weight sharing throughout the model. To circumvent this challenge, AGG is modelled as a homogeneous graph instead, where a single learnable form of interaction operates over values \(y_{n}\), time stamps \(t_{n}\) and metadata \(m_{n}\) provided by the sensor measurement. In the same vein as the temporal embedding, the metadata is represented by a set of learnable embeddings, a practice that has become prevalent in the field of natural language programming for _learnable word embeddings_ beginning with Bengio et al. (2000). This way, we aim to include all available information as a form of inductive bias (Bronstein et al., 2021) into the model, and leave the graph structure to exploit rich relationships among features and values via an attention mechanism.
AGG builds metadata embeddings based on whether they are discrete or continuous: discrete metadata (e.g. categorical data) are embedded via hashing, that is, a matrix of learnable weights is sliced at the index of the relevant category. Similarly, continuous metadata is embedded into higher dimensions through a learnable projection matrix. The complete embedding of the metadata (considering both discrete and continuous parts) is denoted \(\text{embed}(m_{n})\in\mathbb{R}^{D_{m}}\)
To enhance the representation power of the overall architecture, we follow Velickovic et al. (2018) and also include a learnable projection for the value denoted \(\text{embed}(y_{n})\in\mathbb{R}^{D_{y}}\). Thus the AGG is a heterogeneous graph \(\mathcal{G}\) with \(n\)-th node containing
\[h_{0}=\text{Concat}\left[\text{embed}(y_{n}),\textbf{t2v}(\tau_{n}),\text{ embed}(m_{n})\right]\in\mathbb{R}^{D_{y}+D_{t}+D_{m}}, \tag{3}\]
where the explicit dependence on the index \(n\) is dropped unless necessary.
Observe that we denoted the original dimensions in lowercase (\(d_{y}\) and \(d_{m}\)) and the embedded ones in uppercase (\(D_{y}\), \(D_{t}\) and \(D_{m}\)). Also, following equation 3 we define \(d_{\text{encode}}=\dim(h_{0})=D_{y}+D_{t}+D_{m}\), where the notation \(h_{0}\) will be clarified in the next section.
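Putting the pieces of equation 3 together, a node embedding module can be sketched as follows (a hypothetical composition of the `Time2Vec` sketch above with a hashing embedding for categorical metadata and a linear projection for continuous metadata; all argument names are ours):

```python
import torch
import torch.nn as nn

class NodeEmbedding(nn.Module):
    """Sketch of equation 3: concatenate a value projection, the t2v temporal
    embedding and learnable metadata embeddings into h_0 of width d_encode."""
    def __init__(self, d_y_in, d_y, d_t, n_categories, d_m_cat, d_m_cont_in, d_m_cont):
        super().__init__()
        self.value_proj = nn.Linear(d_y_in, d_y)                 # embed(y_n)
        self.t2v = Time2Vec(d_t)                                 # t2v(tau_n), from the sketch above
        self.cat_embed = nn.Embedding(n_categories, d_m_cat)     # hashing for discrete metadata
        self.cont_proj = nn.Linear(d_m_cont_in, d_m_cont)        # projection for continuous metadata

    def forward(self, y, tau, m_cat, m_cont):
        return torch.cat(
            [self.value_proj(y), self.t2v(tau),
             self.cat_embed(m_cat), self.cont_proj(m_cont)], dim=-1)
```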
Figure 2: Illustration of the data preparation process to train AGG for a 3-channel signal (colour-coded) with \(n=17\) observations, \(\approx 35\%\) of samples removed (red), block length \(L=3\) and stride \(\Delta=2\). There are 6 samples in this batch, where targets 5 and 6 constitute 2 separate samples.
Fig. 3 illustrates the embedding procedure under the title **learnable embeddings**. The embeddings then enter a sequence of encoder and decoder blocks comprising attention and fully connected layers with layer-norms and skip connections through addition. The next two sections present the encoder and the generator stages.
### Asynchronous Graph Encoding
Towards improved performance and expressibility (Brody et al., 2022; Velickovic et al., 2018; Vaswani et al., 2017), the encoder features a multi-head self-attention layer representing the interactions among values, timestamps, and metadata.
Following equation 3, for a single node we denote by \(h_{i-1}\) and \(h_{i}\) the input and output of the \(i\)-th encoder block respectively (\(i\geq 1\)). However, recall from Sec. 3.1 that AGG takes \(L\) nodes simultaneously, thus, we denote \(\mathbf{h}_{i}\) as the concatenation of the \(h_{i}\)'s coming from these \(L\) nodes. Therefore, each \(\mathbf{h}_{i}\in\mathbb{R}^{L\times d_{\text{encode}}}\) is a tensor comprising \(L\) node embeddings.
The \(j\)-th head of the \(i\)-th attention layer is thus given by:
\[\text{Attention}(\mathbf{Q}_{ij},\mathbf{K}_{ij},\mathbf{V}_{ij})=\mathrm{softmax}(\mathbf{ M}\circ\mathbf{E}_{ij})\mathbf{V}_{ij}\in\mathbb{R}^{L\times d_{w}}, \tag{4}\]
where \(\circ\) is the Hadamard (or element-wise) product and
* \(\mathbf{Q}_{ij}=\mathbf{h}_{i-1}\mathbf{W}_{j}^{Q}\in\mathbb{R}^{L\times d_{q}}\), \(\mathbf{K}_{ij}=\mathbf{h}_{i-1}\mathbf{W}_{j}^{K}\in\mathbb{R}^{L\times d_{k}}\), \(\mathbf{V}_{ij}=\mathbf{h}_{i-1}\mathbf{W}_{j}^{V}\in\mathbb{R}^{L\times d_{w}}\) are the query, key and value embeddings respectively.
* \(\mathbf{W}_{j}^{Q}\in\mathbb{R}^{d_{\text{encode}}\times d_{q}}\), \(\mathbf{W}_{j}^{K}\in\mathbb{R}^{d_{\text{encode}}\times d_{k}}\), \(\mathbf{W}_{j}^{V}\in\mathbb{R}^{d_{\text{encode}}\times d_{w}}\) are the projection matrices.
* \([M]_{jk}=\mathbf{1}_{t_{k}\leq t_{j}}\) is a temporal mask ensuring the operation of AGG is over _causal_ graphs, i.e., node \(j\) only attends to nodes \(k\) that are not in its future. Dropout (Hinton et al., 2012) is applied to the mask during training to promote sparsity and redundancy in the graph's representation by randomly severing connections.
* \(\mathbf{E}_{ij}=d_{k}^{-1/2}\mathbf{Q}_{ij}\mathbf{K}_{ij}^{\top}\in\mathbb{R}^{L\times L}\) is the scaled dot product attention matrix (Vaswani et al., 2017), which is equivalent to a fully connected weighted graph (Velickovic, 2023) pruned via \(\mathbf{M}\). Under the graph interpretation, \(\mathbf{E}\) is the weighted adjacency matrix for the \(L\) nodes in the asynchronous graph, where the weight represents the relevance of neighbouring nodes in determining the features of any other node.
Then, the \(i\)-th multihead attention layer is simply the weighted concatenation of its attention heads:
\[\text{MultiHead}_{i}=\text{Concat}\left[\text{Attention}(\mathbf{Q}_{i1},\mathbf{K}_{i1},\mathbf{V}_{i1}),\dots,\text{Attention}(\mathbf{Q}_{il},\mathbf{K}_{il},\mathbf{V}_{il})\right]\mathbf{W}^{O}\in\mathbb{R}^{L\times d_{\text{encode}}}. \tag{5}\]
Lastly, the output of the \(i\)-th multi-head attention is normalised via a layer normalisation (Ba et al., 2016) followed by a multi-layer perceptron (MLP). The MLP consists of a 2-layer feed forward
Figure 3: AGG architecture: The sections of the network are indicated at the top of the figure. Inputs and target are represented as blue and red circles respectively, fixed operations are denoted by white blocks and learnable transformations in green blocks.
network with a LeakyReLU (Maas et al., 2013) activation and Dropout (Hinton et al., 2012) in the hidden layer, followed by a linear activation layer. The MLP has layer sizes of \([d_{\text{encode}},l\times d_{\text{encode}},d_{\text{encode}}]\), where \(l\) is the number of heads. Throughout each block there is extensive use of skip connections, following inspiration from the Transformer (Vaswani et al., 2017) and the original introduction of residual connections, ResNet (He et al., 2016).
The output of the \(i\)-th block is then calculated by:
\[\mathbf{u}_{i} =\mathbf{h}_{i-1}+\text{MultiHead}_{i} \tag{6}\] \[\mathbf{h}_{i} =\text{LayerNorm}\left[\mathbf{u}_{i}+\text{MLP}\left(\text{LayerNorm }\left[\mathbf{u}_{i}\right]\right)\right]. \tag{7}\]
Therefore, equations 4 - 7 completely define how the initial node embeddings \(\mathbf{h}_{0}\) are transformed through the sequence of asynchronous graph encoder blocks into \(\mathbf{h}_{1},\dots,\mathbf{h}_{l}\).
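For concreteness, one encoder block can be sketched with PyTorch's built-in attention as below (our own sketch; the paper additionally applies dropout to the causal mask itself, which is omitted here for brevity):

```python
import torch
import torch.nn as nn

class AGGEncoderBlock(nn.Module):
    """One block of Eqs. (4)-(7): time-masked multi-head self-attention over the
    L node embeddings, a 2-layer MLP, skip connections and LayerNorms."""
    def __init__(self, d_encode, n_heads, dropout=0.2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_encode, n_heads, dropout=dropout, batch_first=True)
        self.norm1 = nn.LayerNorm(d_encode)
        self.norm2 = nn.LayerNorm(d_encode)
        self.mlp = nn.Sequential(
            nn.Linear(d_encode, n_heads * d_encode), nn.LeakyReLU(),
            nn.Dropout(dropout), nn.Linear(n_heads * d_encode, d_encode))

    def forward(self, h, t):
        # h: (B, L, d_encode) node embeddings, t: (B, L) timestamps
        future = t.unsqueeze(-1) < t.unsqueeze(1)       # True where the key lies in the query's future
        mask = future.repeat_interleave(self.attn.num_heads, dim=0)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        u = h + attn_out                                # Eq. (6)
        return self.norm2(u + self.mlp(self.norm1(u)))  # Eq. (7)
```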
### Asynchronous Graph Generation
AGG leverages cross attention--see Fig. 3--between the output of the last asynchronous encoder block \(\mathbf{h}_{l}\) and the concatenation of (target) temporal/metadata embeddings for conditional generation, the latter denoted by
\[\mathbf{g}_{0}=\text{Concat}[\mathbf{t2v}(\tau_{t}),\text{embed}(m_{t}))]\in\mathbb{R }^{d_{g}}, \tag{8}\]
where \(d_{g}=D_{m}+D_{t}\). Transductive node generation, conditioned on the timestamps and metadata, defines where in the graph the new node should be located.
Conditional generation also leverages multiple attention heads, which, akin to equations 4 & 5, is given by
\[\text{CrossMultiHead}=\text{Concat}\left[\text{Attention}(\mathbf{Q}_{1},\mathbf{K}_{1},\mathbf{V}_{1}),\dots,\text{Attention}(\mathbf{Q}_{l},\mathbf{K}_{l},\mathbf{V}_{l})\right]\mathbf{W}^{O}\in\mathbb{R}^{d_{g}}, \tag{9}\]
where
* \(\mathbf{Q}_{j}=\bar{\mathbf{g}}_{0}\overline{\mathbf{W}}_{j}^{Q}\in\mathbb{R}^{d_{q}}\), \(\mathbf{K}_{j}=\bar{\mathbf{h}}_{l}\overline{\mathbf{W}}_{j}^{K}\in\mathbb{R}^{L\times d_{k}}\), \(\mathbf{V}_{j}=\bar{\mathbf{h}}_{l}\overline{\mathbf{W}}_{j}^{V}\in\mathbb{R}^{L\times d_{g}}\) are the query, key and value respectively, and \(\bar{\mathbf{g}}_{0}=\text{LayerNorm}[\mathbf{g}_{0}]\) and \(\bar{\mathbf{h}}_{l}=\text{LayerNorm}[\mathbf{h}_{l}]\).
* \(\overline{\mathbf{W}}_{j}^{Q}\in\mathbb{R}^{d_{g}\times d_{q}},\overline{\mathbf{W}}_{j}^{K}\in\mathbb{R}^{d_{\text{encode}}\times d_{k}},\overline{\mathbf{W}}_{j}^{V}\in\mathbb{R}^{d_{\text{encode}}\times d_{g}},\mathbf{W}^{O}\in\mathbb{R}^{l\,d_{g}\times d_{g}}\) are the projection matrices.
* \(\mathbf{E}_{j}=d_{k}^{-1/2}\mathbf{Q}_{j}\mathbf{K}_{j}^{\top}\in\mathbb{R}^{L}\).
**Remark**.: _The cross attention block does not include a causal mask, it implements a fully connected attention graph over all embeddings; \(\mathbf{M}=\mathbf{1}\); Dropout is applied during training._
Additionally, similar to the asynchronous encoder block, the generator follows the cross attention layer with a set of LayerNorms, skip connections and an MLP, such that:
\[\bar{\mathbf{u}} =\mathbf{g}_{0}+\text{CrossMultiHead} \tag{10}\] \[\mathbf{g}_{1} =\text{LayerNorm}\left[\bar{\mathbf{u}}+\text{MLP}\left(\text{LayerNorm }\left[\bar{\mathbf{u}}\right]\right)\right]. \tag{11}\]
Lastly, depending on the task, we take the generated decoding \(\mathbf{g}_{1}\) and fit a task-specific trainable head, e.g., a classification head or a regression head, which consists of an MLP that projects \(\mathbf{g}_{1}\) to the desired value \(\hat{y}_{n}\), such that:
\[\hat{y}_{n}=\text{MLP}(\mathbf{g}_{1}). \tag{12}\]
**Remark**.: _Preliminary experimental evaluation of AGG using a single generator block as presented here provided satisfactory results. The choice to maintain this architecture follows Occam's razor._
Fig. 3 shows a diagram of the entire AGG architecture identifying the connections, inputs, targets, as well as fixed and trainable blocks.
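A matching sketch of the generator stage with a regression head is given below (again our own illustration; the exact head used in the released code may differ):

```python
import torch
import torch.nn as nn

class AGGGenerator(nn.Module):
    """Sketch of Eqs. (8)-(12): a single target query built from t2v/metadata
    embeddings cross-attends (unmasked) to the L encoder outputs, followed by a
    small MLP and a regression head that produces the imputed value."""
    def __init__(self, d_g, d_encode, n_heads, d_out, dropout=0.2):
        super().__init__()
        self.norm_q = nn.LayerNorm(d_g)
        self.norm_kv = nn.LayerNorm(d_encode)
        self.cross = nn.MultiheadAttention(d_g, n_heads, kdim=d_encode, vdim=d_encode,
                                           dropout=dropout, batch_first=True)
        self.norm1 = nn.LayerNorm(d_g)
        self.norm2 = nn.LayerNorm(d_g)
        self.mlp = nn.Sequential(
            nn.Linear(d_g, n_heads * d_g), nn.LeakyReLU(),
            nn.Dropout(dropout), nn.Linear(n_heads * d_g, d_g))
        self.head = nn.Sequential(nn.Linear(d_g, d_g), nn.LeakyReLU(), nn.Linear(d_g, d_out))

    def forward(self, g0, h_enc):
        # g0: (B, 1, d_g) target time/metadata embedding, h_enc: (B, L, d_encode)
        kv = self.norm_kv(h_enc)
        attn_out, _ = self.cross(self.norm_q(g0), kv, kv)
        u = g0 + attn_out                               # Eq. (10)
        g1 = self.norm2(u + self.mlp(self.norm1(u)))    # Eq. (11)
        return self.head(g1).squeeze(1)                 # Eq. (12)
```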
## 4 Relationship to previous methods
Our work is conceptually closer to those of Cini et al. (2022); Rossi et al. (2020) albeit with some key differences. They propose bidirectional RNNs encapsulated in GNNs, where a series of RNNs
are interconnected through gates controlled by message passing NNs. These works consider the time series as a sequence of weighted directed graphs, thus assuming each node to be identified and labelled with a unique id and consistently available at all evenly-sampled timestamps. Therefore, their graphs have a fixed topology over time and thus the methods operate mainly by exploiting of network homophily. Furthermore, the temporal dynamics are firmly delegated to the RNN, as a consequence, the known drawbacks of RNNs hinder the applicability of the methods for imputation, namely long-term memory retention and temporal dependencies, vanishing gradients, memory staleness, hidden-state bottleneck, to name a few (Rossi et al., 2020).
The proposed AGG does not use recurrent architectures and learns long-term dependencies directly via a graph over the nodes features (measurements). The node features are embedded into a high-dimensional space to represent their position in space and time, then their relationships are captured by a learnable graph whose connections are defined via conditional dot product attention. Additionally, the causal relationship of the nodes is enforced through the masked attention mechanism. The AGG has no recurrence so memory staleness (Rossi et al., 2020) is inherently avoided and, as a consequence, the range of temporal dependencies that can be learnt are only limited by the context window of the AGG input sequence and not the model. A critical feature of AGG that should not be overlooked is its ability to leverage past measurements of adjacent sensors, which we believe to be a significant shortcoming of recurrent message-passing neural networks proposed by Rossi et al. (2020), then expanded by Cini et al. (2022). The AGG, on the other hand, is able to look at past measurements of adjacent sensors in order to compute each node embedding, this is a key component to encoding both the _coherence_ and _phase_ relationship (Granger, 1969), which quantify the similarity and delay between a pair of time series. We argue that models that consider time series as a set of sequential graphs ignore the coherence and phase components of a dynamic system, while by leveraging attention over past measurements of adjacent nodes the AGG is able to effectively capture the phase and coherence dynamics of the system as a whole.
## 5 Experimental Evaluation
**Benchmark models and datasets.** AGG was compared against state-of-the-art models SSGAN (Miao et al., 2021), BRITS (Cao et al., 2018), NAOMI (Liu et al., 2019), GP-VAE (Fortuin et al., 2020) on three datasets for imputation: the _Beijing Air Quality_(Yi et al., 2016), _PhysioNet Challenge 2012_(Silva et al., 2012) and _UCI Localization Data for Person Activity_(Kaluza et al., 2014). The first two datasets were also used for classification and regression of mortality and PM2.5 respectively. All data were standardised per channel. See Appendix A.1 for additional details.
**Implementation details.** A common AGG architecture was implemented without hyper-parameter tuning for all datasets. We considered two encoder layers (Sec. 3.3) and one generator layer (Sec. 3.4), followed by a regression or classification head depending on the task. All embeddings were 16 dimensions per feature with 8 attention heads. The MLPs in equations 7 and 11 featured 2 layers: an input layer of dimension \(5\times 16=80\) and a hidden layer of dimension equal to number of heads \(\times\) embedding dimension \(=8\times 80=640\), which was then reduced back to the embedding dimension (80). During training, we used a dropout rate of 0.2 for both the MLP layers and the attention masking. As a result, the model has 378k trainable parameters with a standard context length of \(L=100\) nodes; samples with fewer observations than the context length, such as some samples in the _PhysioNet_ dataset, are padded. Refer to Fig. 3 for more details of the AGG architecture.
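As a small illustration of the fixed context length mentioned above, the sketch below pads a sample with fewer than \(L=100\) observed nodes and builds the corresponding key-padding mask; the function name and shapes are our own and are not taken from the released code.

```python
import torch

def pad_nodes(features, context_len=100):
    """Pad a variable-length set of embedded observations (n_nodes, d) to a fixed
    context length and return a key-padding mask (True marks padded positions,
    the convention expected by nn.MultiheadAttention)."""
    n_nodes, d = features.shape
    padded = torch.zeros(context_len, d, dtype=features.dtype)
    padded[:n_nodes] = features
    key_padding_mask = torch.ones(context_len, dtype=torch.bool)
    key_padding_mask[:n_nodes] = False
    return padded, key_padding_mask

# Example: a short PhysioNet-style sample with only 37 observed nodes.
x, mask = pad_nodes(torch.randn(37, 80))
print(x.shape, int(mask.sum()))  # torch.Size([100, 80]) 63
```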
**Infrastructure.** AGG was implemented on PyTorch (Paszke et al., 2019) using an Nvidia RTX Titan GPU with 24GB of VRAM and 4608 CUDA Cores, and an Intel Core i9-9900K with 16 cores and 32GB of RAM running Ubuntu 22.04 64bit. Code is available1.
Footnote 1: [https://github.com/ChristopherLey/AsyncGraphGenerator](https://github.com/ChristopherLey/AsyncGraphGenerator)
### Data imputation
Following Miao et al. (2021), we addressed the unsupervised imputation task by randomly splitting the data into \(r\%\) for targets and \((100-r)\%\) for inputs (see Figs. 2 and 3), with the targets split again \(80\%-20\%\) into training and validation respectively. We chose \(r\in\{10,30,50,70,90\}\) and evaluated the imputation performance using the Root Mean Square Error (RMSE). This setting replicates an
extremely-sparse imputation scheme, to be addressed via transductive node generation (Fig. 1c). See Appendices A.1.1 for details about the Beijing dataset and A.2 for data removal and batching.
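The split can be sketched as follows, assuming the observations of one dataset are indexed by a flat array; the exact batching and per-sensor handling follow Appendix A.2, so the names and shapes here are illustrative.

```python
import numpy as np

def split_for_imputation(values, r, seed=0):
    """Randomly hold out r% of the observations as imputation targets.

    Returns index arrays: the inputs (the remaining (100 - r)% used as context)
    and the target indices, further split 80/20 into training and validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(values))
    n_targets = int(len(values) * r / 100)
    targets, inputs = idx[:n_targets], idx[n_targets:]
    n_train = int(0.8 * n_targets)
    return inputs, targets[:n_train], targets[n_train:]

inputs, train_targets, val_targets = split_for_imputation(np.random.randn(10_000), r=30)
print(len(inputs), len(train_targets), len(val_targets))  # 7000 2400 600
```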
Table 1 shows the performance of the methods considered, alongside the baseline Mean imputation method and AGG's performance improvement over the current state-of-the-art SSGAN. Across all values of removed data (\(r\)), AGG outperformed all benchmarks and exhibited an average improvement of 21.3% on PhysioNet, 59.6% on the Beijing PM2.5 dataset, and 69.5% on UCI (wrt SSGAN). A keen observer would note that, unlike past methodologies, the AGG's performance does not degrade monotonically with \(r\); in fact, under some circumstances it improves with \(r\) (note the improvement at \(r=30\%\) vs \(r=10\%\) on the Beijing dataset). We attribute this behaviour to two key characteristics of the AGG. The first is the invariance of the architecture to sparsity of the data, such that the model sees little change in the underlying signal for \(r\leq 50\%\). The second is the sensitivity of the AGG to data augmentation (see Sec. 6): it seems that \(r=30\%\) is an inflection point whereby sufficient data has been removed to properly train AGG, but not enough that the information (in an _information-theoretic_ (Shannon, 1949) sense) of the underlying dynamics has been diminished.
### Classification and Regression
Following the methodologies of Cao et al. (2018); Miao et al. (2021), the model pretrained on the imputation task was used to predict in-hospital mortality on Physionet. Specifically, we fine-tuned the model pretrained with \(10\%\) of data removed as explained above and, similarly to BRITS, we performed \(k\)-fold (\(k=5\)) cross validation with the entire dataset. AGG achieved an average \(\mathbf{AUC}=\mathbf{0.862}\), thus improving over BRITS which reported \(\text{AUC}=0.850\)(Silva et al., 2012). Though SSGAN did not report an exact performance index for this experiment, from Fig.4a in Miao et al. (2021) SSGAN appeared to perform on par with BRITS with \(\text{AUC}\simeq 0.85\).
AGG was then used to predict PM2.5 (Beijing dataset) and compared against the two best-scoring methodologies encountered in the literature, following the setting in Yi et al. (2016) regarding the test/train split and the use of MAE. AGG scored a PM2.5 prediction \(\mathbf{MAE}=\mathbf{3.64}\), thus outperforming both BRITS (Cao et al., 2018) and GRIN (Cini et al., 2022) as shown in Table 2. We conjecture that the considerable improvement of AGG (\(64.4\%\)) wrt GRIN can be explained by its strong
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Dataset & Removed (\(r\)) & Mean & GP-VAE & NAOMI & BRITS & SSGAN & AGG & Improvement \\ \hline \multirow{4}{*}{UCI} & 10\% & 0.813 & 0.670 & 0.641 & 0.621 & 0.600 & **0.195** & 67.5\% \\ & 30\% & 0.873 & 0.726 & 0.724 & 0.686 & 0.666 & **0.221** & 66.8\% \\ & 50\% & 0.933 & 0.796 & 0.794 & 0.786 & 0.759 & **0.222** & 70.8\% \\ & 70\% & 0.943 & 0.846 & 0.854 & 0.836 & 0.803 & **0.234** & 70.9\% \\ & 90\% & 0.963 & 0.882 & 0.897 & 0.867 & 0.841 & **0.241** & 71.3\% \\ \hline \multirow{4}{*}{PhysioNet} & 10\% & 0.799 & 0.677 & 0.632 & 0.611 & 0.598 & **0.494** & 17.4\% \\ & 30\% & 0.863 & 0.707 & 0.703 & 0.672 & 0.670 & **0.535** & 20.1\% \\ & 50\% & 0.916 & 0.787 & 0.783 & 0.779 & 0.762 & **0.532** & 30.2\% \\ & 70\% & 0.936 & 0.837 & 0.835 & 0.809 & 0.782 & **0.589** & 24.7\% \\ & 90\% & 0.952 & 0.879 & 0.865 & 0.850 & 0.818 & **0.702** & 14.2\% \\ \hline \multirow{4}{*}{Beijing} & 10\% & 0.763 & 0.522 & 0.522 & 0.531 & 0.435 & **0.176** & 59.5\% \\ & 30\% & 0.806 & 0.562 & 0.558 & 0.561 & 0.461 & **0.157** & 65.9\% \\ \cline{1-1} & 50\% & 0.866 & 0.602 & 0.602 & 0.581 & 0.490 & **0.197** & 59.8\% \\ \cline{1-1} & 70\% & 0.898 & 0.709 & 0.701 & 0.641 & 0.603 & **0.225** & 62.7\% \\ \cline{1-1} & 90\% & 0.919 & 0.771 & 0.762 & 0.720 & 0.660 & **0.329** & 50.2\% \\ \hline \end{tabular}
\end{table}
Table 1: Time series imputation performance (RMSE) for all models considered under different percentage of removed data (\(r\)). Improvement denotes (as a percentage): AGG vs SSGAN.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Method & PhysioNet ICU mortality (AUC) & Beijing PM2.5 regression (MAE) \\ \hline GRIN & N/A & \(10.23\) \\ BRITS & \(0.850\pm 0.002\) & \(11.56\) \\
**AGG** & \(\mathbf{0.862\pm 0.0075}\) & \(\mathbf{3.64}\) \\ \hline \end{tabular}
\end{table}
Table 2: Performance of pre-trained models on classification (left) & regression (right)
inductive bias resulting from the spatial encoding, which captures the inner dynamics of spatially and temporally correlated data, thus effectively learning the phase shift among locations.
## 6 Discussion: On the effectiveness of data augmentation
Conceptually, the distinguishing features of the AGG are its invariance to sparsity (missing data) and its ability to exploit translation equivariance of the signal. It is widely accepted that data augmentation regularises a model towards the transformations that are applied (Balestriero et al., 2022; Neyshabur, 2017; Neyshabur et al., 2014). If these transforms align with the geometric priors (Bronstein et al., 2021), they can be exploited to create a much more expressive representation of features in the signal space. This would allow the model to capture relevant interacting dynamics between channels, while ignoring superfluous information. It is expected that this inductive bias introduces some form of capacity control (Neyshabur, 2017) which in turn allows for successful generalisation.
Data augmentation should then emphasise geometric priors in our model to fully learn a generalisable representation of the signal of interest. Our choice of augmentation is inspired by self supervised learning (SSL) (Misra and Maaten, 2020; Zbontar et al., 2021) in computer vision, where augmentations exploit the translation equivariance in images through shift operations. In the same vein, we randomly remove samples from the training set to promote sparsity in our dataset and shift the inputs (relative to targets) in order to leverage the translation equivariance.
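A minimal sketch of these two augmentations, random removal of observations and window shifting with a configurable stride, is given below. It assumes a flat, time-ordered array of observations and is only meant to illustrate how a finer stride multiplies the number of training samples (cf. Appendices A.2 and A.3).

```python
import numpy as np

def augment_blocks(observations, window=100, stride=10, drop_rate=0.1, seed=0):
    """Yield overlapping context windows; within each window, randomly mark
    observations as targets (dropped from the input) to promote sparsity."""
    rng = np.random.default_rng(seed)
    for start in range(0, len(observations) - window + 1, stride):
        block = observations[start:start + window]
        target_mask = rng.random(window) < drop_rate
        # inputs are block[~target_mask], targets are block[target_mask]
        yield block, target_mask

# A finer stride produces more (partially overlapping) samples from the same data.
obs = np.random.randn(1_000)
print(sum(1 for _ in augment_blocks(obs, stride=10)))   # 91 samples
print(sum(1 for _ in augment_blocks(obs, stride=100)))  # 10 samples
```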
We studied the effect of this approach to data augmentation on the imputation task with \(10\%\) of the data removed (as defined in Sec. 5). To this end, we varied the stride length of each sample: the finer the stride, the more data samples are generated from the same training data--more details in Appendices A.2 and A.3. Fig. 4 shows the effect of the number of augmented samples of each block on the imputation performance via RMSE over the validation set, as defined by Yi et al. (2016).
The validation RMSE of AGG decreased sharply up to approximately 60x augmented samples, thus confirming the existence of a threshold for data augmentation in AGG after which complexity cost increases without gain in performance. This is consistent with Balestriero et al. (2022) who found empirically that 50x augmented samples were required to estimate their closed form of the loss. In general cases this threshold should be determined based on the sampling theorem (Shannon, 1949), which relates the observation rate with the dynamic content of the signals (for the stationary case).
## 7 Conclusions
We have presented asynchronous graph generators (AGGs), a family of attention-based models for multichannel time series that represents observations as nodes of a dynamic graph without assuming temporal regularity or recurrence. Using data-augmentation techniques inspired by computer vision and learnable embeddings from language models, we have shown that AGG can be successfully trained under missing-data regimes to discover rich relationships among variables of interest. Once trained, AGG can be used for data imputation--and, as a consequence, classification and prediction--by means of a conditional transductive node generation operation, that is, by generating a new node in the graph at a given timestamp (and metadata). We have experimentally validated the superiority of AGG against the state of the art on three relevant datasets and different rates of missing values. Our simulations confirm the robustness of AGG to sparsity and sample asynchronicity, thus making it well suited for real-world applications involving incomplete multi-channel time-series data.
Figure 4: AGG performance (RMSE) vs number of training samples produced from the _same_ dataset through augmentation.
## Acknowledgements
This work was partially funded by Google and the following ANID-Chile grants: Center for Mathematical Modeling FB210005, Advanced Center for Electrical and Electronic Engineering FB0008, and Fondecyt-Regular 1210606.
|
2301.13771 | The Touché23-ValueEval Dataset for Identifying Human Values behind
Arguments | We present the Touch\'e23-ValueEval Dataset for Identifying Human Values
behind Arguments. To investigate approaches for the automated detection of
human values behind arguments, we collected 9324 arguments from 6 diverse
sources, covering religious texts, political discussions, free-text arguments,
newspaper editorials, and online democracy platforms. Each argument was
annotated by 3 crowdworkers for 54 values. The Touch\'e23-ValueEval dataset
extends the Webis-ArgValues-22. In comparison to the previous dataset, the
effectiveness of a 1-Baseline decreases, but that of an out-of-the-box BERT
model increases. Therefore, though the classification difficulty increased as
per the label distribution, the larger dataset allows for training better
models. | Nailia Mirzakhmedova, Johannes Kiesel, Milad Alshomary, Maximilian Heinrich, Nicolas Handke, Xiaoni Cai, Barriere Valentin, Doratossadat Dastgheib, Omid Ghahroodi, Mohammad Ali Sadraei, Ehsaneddin Asgari, Lea Kawaletz, Henning Wachsmuth, Benno Stein | 2023-01-31T17:15:33Z | http://arxiv.org/abs/2301.13771v1 | # The Touche23-ValueEval Dataset for Identifying Human Values behind Arguments
###### Abstract
We present the Touche23-ValueEval Dataset for Identifying Human Values behind Arguments. To investigate approaches for the automated detection of human values behind arguments, we collected 9324 arguments from 6 diverse sources, covering religious texts, political discussions, free-text arguments, newspaper editorials, and online democracy platforms. Each argument was annotated by 3 crowdworkers for 54 values. The Touche23-ValueEval dataset extends the Webis-ArgValues-22. In comparison to the previous dataset, the effectiveness of a 1-Baseline decreases, but that of an out-of-the-box BERT model increases. Therefore, though the classification difficulty increased as per the label distribution, the larger dataset allows for training better models.
## 1 Introduction
Why might one person find an argument more persuasive than someone else? One answer to this question is rooted in the values they hold. Although people might share a set of values, the priority they give to these values can be different (e.g. should _having privacy_ be considered more important than _having a safe country?_). Such differences in priority can prevent people from finding common ground on a debatable topic or cause even more dispute. Moreover, differences in value priorities exist not only between individuals but also between cultures, which can cause disagreements.
Within computational linguistics, human values can provide context to categorize, compare, and evaluate argumentative statements, allowing for several applications: to inform social science research on values through large-scale datasets; to assess argumentation; to generate or select arguments for a target audience; and to identify opposing and shared values on both sides of a controversial topic. Probably the most widespread value categorization used in NLP is that of Schwartz (1994), shown (adapted) in Figure 1, and used in the paper at hand.
Figure 1: The employed value taxonomy of 20 value categories and their associated 54 values (shown as black dots), the levels 2 and 1 from Kiesel et al. (2022). Categories that tend to conflict are placed on opposite sites. Illustration adapted from Schwartz (1994)
In order to tackle the challenges of human value identification--such as the wide variety of values, their often implicit use, and their ambiguous definition--we previously developed the practical foundations for AI-based identification systems (Kiesel et al., 2022): a consolidated multi-level taxonomy based on extensive taxonomization by social scientists and an annotated dataset of 5 270 arguments, the Webis-ArgValues-22. However, the existing dataset has two main shortcomings: (i) it is comparatively small for training or tuning a machine learning model that needs to capture the (yet unknown) linguistic features that identify each human value; (ii) 95% of its arguments stem from a single background (the USA), thus hindering the development of cross-cultural value detection models.
In this work, we aim to fill these gaps for the automatic human value identification task by proposing an extension to the existing dataset: the Touche23-ValueEval. It contains 9 324 arguments on a variety of statements written in different styles, including religious texts (Nahj al-Balagha), political discussions (Group Discussion Ideas), free-text arguments (IBM-ArgQ-Rank-30KArgs), newspaper articles (The New York Times), community discussions (Zhihu), and democratic discourse (Conference on the Future of Europe). Moreover, we broaden the variety of arguments in terms of represented cultures and territories, as well as in terms of historical perspective. The proposed dataset was collected and annotated for the SemEval 2023 Task 4. ValueEval: Identification of Human Values behind Arguments1 and is publicly available online.2
Footnote 1: [https://touche.webis.de/semeval23/touche23-web](https://touche.webis.de/semeval23/touche23-web)
Footnote 2: Dataset: [https://doi.org/10.5281/zenodo.6814563](https://doi.org/10.5281/zenodo.6814563)
## 2 Collecting Arguments
To investigate approaches for the automated detection of human values behind arguments, we collected a dataset of 9324 arguments. As in our previous publication on human value detection (Kiesel et al., 2022), each argument consists of one premise, one conclusion, and a stance attribute indicating whether the premise is in favor of (pro) or against (con) the conclusion. About half of the arguments (4 569; 49%) are taken from the existing Webis-ArgValues-22 dataset (Kiesel et al., 2022). The other half comprises new arguments, partially taken from the same sources as the Webis-ArgValues-22 (3 298; 69%), with the remaining arguments being from entirely new sources (1 457; 31%).
Table 1 provides key figures for the data, both for the main dataset used for the main ValueEval'23 leaderboard and for the supplementary dataset used for checking the robustness of approaches.
For the main leaderboard, we provide the main dataset as three separate sets, as is customary in machine-learning tasks, namely one set each for training, validation, and testing. The main dataset is compiled from arguments from three sources (see below), with approximately the same distribution in training, validation, and testing. To avoid train-test leakage from argument similarity, we ensured that all arguments with the same conclusions (but different premises) were in the same set. The ground truth for the test dataset has been kept secret from participants for the duration of the ValueEval'23 competition.
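A minimal sketch of such a leakage-free split is shown below: whole conclusion groups are assigned to a split so that no conclusion appears in more than one of train, validation, and test. The field names and split ratios are illustrative assumptions, not the exact procedure used for the dataset.

```python
import random

def split_by_conclusion(arguments, ratios=(0.6, 0.2, 0.2), seed=0):
    """Assign whole conclusion groups to train/validation/test so that no
    conclusion appears in more than one split (avoiding train-test leakage)."""
    conclusions = sorted({a["conclusion"] for a in arguments})
    random.Random(seed).shuffle(conclusions)
    cut1 = int(ratios[0] * len(conclusions))
    cut2 = int((ratios[0] + ratios[1]) * len(conclusions))
    split_of = {c: "train" for c in conclusions[:cut1]}
    split_of.update({c: "validation" for c in conclusions[cut1:cut2]})
    split_of.update({c: "test" for c in conclusions[cut2:]})
    splits = {"train": [], "validation": [], "test": []}
    for a in arguments:
        splits[split_of[a["conclusion"]]].append(a)
    return splits

toy = [{"conclusion": f"Conclusion {i % 5}", "premise": f"Premise {i}", "stance": "pro"}
       for i in range(20)]
print({name: len(part) for name, part in split_by_conclusion(toy).items()})
```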
In addition to the main dataset, we collected a supplementary dataset of arguments that are quite
\begin{table}
\begin{tabular}{l r r r r r r r r} \hline \hline \multirow{2}{*}{**Argument source**} & \multirow{2}{*}{**Year**} & \multicolumn{4}{c}{**Arguments**} & \multicolumn{4}{c}{**Unique conclusions**} \\ \cline{3-10} & & **Train** & **Validation** & **Test** & \(\sum\) & **Train** & **Validation** & **Test** & \(\sum\) \\ \hline _Main dataset_ & & & & & & & & \\ IBM-ArgQ-Rank-30KArgs & 2019β20 & 4576 & 1526 & 1266 & 7368 & 46 & 15 & 10 & 71 \\ Conf. on the Future of Europe & 2021β22 & 591 & 280 & 227 & 1098 & 232 & 119 & 80 & 431 \\ Group Discussion Ideas & 2021β22 & 226 & 90 & 83 & 399 & 54 & 23 & 16 & 93 \\ \(\sum\) (main) & & 5393 & 1896 & 1576 & 8865 & 332 & 157 & 106 & 595 \\ \hline _Supplementary dataset_ & & & & & & & & \\ Zhihu & 2021 & - & 100 & - & 100 & - & 12 & - & 12 \\ Nahj al-Balagha & 900β1000 & - & - & 279 & 279 & - & - & 81 & 81 \\ The New York Times & 2020β21 & - & - & 80 & 80 & - & - & 80 & 80 \\ \(\sum\) (supplementary) & & - & 100 & 359 & 459 & - & 12 & 161 & 173 \\ \hline \(\sum\) (complete) & & 5393 & 1996 & 1935 & 9324 & 332 & 169 & 267 & 768 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Key statistics of the main and supplementary dataset by argument source. Additional 1047 arguments have been collected from religious sources, but are excluded here as they have not been annotated yet (cf. Section 2.5).
different from the ones in the main dataset in terms of both written form and ethical reasoning. We kept this dataset separate from the main dataset to evaluate model performance both in the same setting as it was trained on and, as a challenge of generalizability, in a different setting.
The following sections describe for each source the source itself, our collection process, and our preprocessing of the arguments. For illustration, Table 2 provides one example argument per source.
### IBM-ArgQ-Rank-30kArgs
The original Webis-ArgValues-22 dataset contains 5 020 arguments from the IBM-ArgQ-Rank-30kArgs dataset (Gretz et al., 2020). We expand the dataset by including 2 999 more arguments from this source. However, to avoid train-test leakage as mentioned above, we also had to exclude 651 arguments of the Webis-ArgValues-22 for which the conclusion is contained in the new test set.
**Source.** For the IBM dataset, crowdworkers were tasked to write one supporting and one contesting argument for one of 71 common controversial topics. The dataset totals 30 497 arguments, each of which is annotated by crowdworkers for quality. The employed notion of high quality is: "if a person preparing a speech on the topic will be likely to use the argument as is in [their] speech." (Gretz et al., 2020)
**Collection process.** We adopted the process that we used for the Webis-ArgValues-22: We sampled from the IBM dataset only arguments where at least half of crowdworkers agreed that they are of high quality. We used the topics as conclusions and the "arguments" as respective premises.
**Preprocessing.** We also adopted the same preprocessing approach: We manually corrected encoding errors in the text body of each argument, ensured a uniform character set for punctuation, and formatted arguments to be HTML compatible.
### Conference on the Future of Europe
The CoFE subpart consists of 1 098 arguments for 431 unique conclusions, collected from the Conference on the Future of Europe portal.3
Footnote 3: [https://futureeu.europa.eu](https://futureeu.europa.eu)
**Source.** Conference on the Future of Europe was an online participatory democracy platform intended to involve citizens, experts and EU institutions in a dialogue focused on the future direction and legitimacy of Europe. CoFE was designed as a user-led series of debates, where anyone could give a proposal in any of the EU24 languages. For each of the proposals, any other user could endorse or criticize the proposals (similar to a like button), comment on them or reply to other comments.
**Collection Process.** In our work, we used the CoFE dataset (Barriere et al., 2022), which
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Argument** & **Value categories** & **Source** \\ \hline \(\circ\) & Con βWe should end the use of economic sanctionsβ: & Security: societal, & IBM-ArgQ- \\ Economic sanctions provide security and ensure that citizens are treated fairly. & Universalism: concern & Rank-30kArgs \\ \(\circ\) & Pro βWe need a better migration policy.β: & Universalism: concern & Conf. on the \\ Discussing what happened in the past between Africa and Europe is useless. All slaves and their owners died a long time ago. You cannot blame the grandchildren. & Future of \\ \(\circ\) & Con βRapists should be torturedβ: & Security: societal, & Group \\ Throughout India, many false rape cases are being registered these days. Torturing all of the accused persons causes torture to innocent persons too. & Universalism: concern & Discussion \\ \(\circ\) & Con βWe should secretly give our help to the poorβ: & Benevolence: caring, & Nahj \\ By showing others how to help the poor, we spread this work in the society. & Universalism: concern & al-Balagha \\ \(\circ\) & Con βWe should crack down on unreasonably high incomes.β: & Security: personal, & Zhihu \\ If the key to an individualβs standard of living does not lie in income, then it is useless to simply regulate income. & Universalism: concern & The New York \\ \(\circ\) & Pro βAll of this is a sharp departure from a long history of judicial solicitude toward state powers during epidemics.β: & Power: dominance, & Universalism: concern \\ In the past, when epidemics have threatened white Americans and those with political clout, courts found ways to uphold broad state powers. & Universalism: concern & Times \\ \hline \hline \end{tabular}
\end{table}
Table 2: Six example arguments (stance, conclusion, and premise) and their annotated value categories. We selected these to showcase different ways for resorting to _be just_, which is a value of the category _Universalism: concern_.
contains more than 20 thousand comments on around 4.2 thousand proposals in 26 languages. English, German, and French are the main languages of the platform. All the texts are automatically translated into any of the EU24 languages. A subset of the comments in the dataset (\(\approx\)35%) was labelled by the users themselves, expressing their stance towards the proposition; around 6% was annotated by experts, while the rest of the comments remain unlabeled.
**Preprocessing.** Due to the limited time available, we focused on the proposals originally written in English. Out of 6 985 available comment/proposal pairs containing user-annotations in the CoFE dataset, we preprocessed 1 098 comments coming from 431 debates. We manually identified a conclusion in each of the proposals and one or more premises in the corresponding comments. We manually ensured that the resulting arguments had a similar length and structure to those in the Webis-ArgValues-22 dataset.
### Group Discussion Ideas
We extended the 100 arguments of the "India" part of the Webis-ArgValues-22, collected from the Group Discussion Ideas web page4 by including 299 new arguments from the same source.
Footnote 4: [https://www.groupdiscussionideas.com](https://www.groupdiscussionideas.com)
**Source.** This web page collects pros and cons on various topics covered in Indian news to help users support discussions in English. As the web page says, its goal is "to provide all the valid points for the trending topics, so that the readers will be equipped with the required knowledge" for a group discussion or debate. The web page currently lists a team of 16 authors. We received permission to distribute the arguments.
**Collection process.** We crawled the web page and semi-automatically extracted arguments. For the original 100 arguments, we used a section of the web page called "controversial debate topics 2021." For the additional 299 arguments, we extended our scope to include all topics from 2022.
**Preprocessing.** We manually ensured that the arguments had a similar structure to those in the Webis-ArgValues-22 dataset by rewording and shortening them slightly if necessary.
### Zhihu
We used the 100 arguments that were already part of the Webis-ArgValues-22 as-is. These had been manually paraphrased from the recommendation and hotlist section of this Chinese question-answering website5 and then manually translated into English.
Footnote 5: [https://www.zhihu.com/explore](https://www.zhihu.com/explore)
### Nahj al-Balagha
We collected and annotated 279 arguments from the Nahj al-Balagha, a collection of Islamic religious texts. These arguments are part of a larger dataset of 1 326 arguments we collected from two Islamic sources, featuring advice and arguments on moral behavior. The remaining 1 047 arguments have not been annotated yet due to time constraints.
**Source.** The books Nahj al-Balagha and Ghurar al-Hikam wa Durar al-Kalim contain moral aphorisms and eloquent content attributed to Ali ibn Abi Talib (600 CE, though published centuries later), who is known as one of the main Islamic elders. The Nahj al-Balagha includes more than 200 sermons, 80 letters, and 500 sayings. The Ghurar al-Hikam wa Durar al-Kalim contains 11 000 pietistic and ethical short sayings. The two books were originally written in Arabic and have been subsequently translated into different languages. We employ standard translations of the books into Farsi.
**Collection process.** We first manually extracted 302 premises from the Nahj al-Balagha: 181 were extracted verbatim and 121 were distilled from the text. The conclusions were deduced manually, with similar conclusions being unified. To balance the stance distribution, a few of the distilled premises were rephrased so that they are against the conclusion. The 279 annotated arguments are all taken from this set of 302 arguments; 23 unclear arguments were omitted from the annotation.
To enlarge the dataset for future uses, we implemented a semi-automated extraction pipeline, which we used to extract an additional 1 047 arguments from the texts. 878 of these were collected from Ghurar al-Hikam wa Durar al-Kalim, while the rest come from Nahj al-Balagha. We finetuned a pre-trained Persian BERT (Farahani et al., 2021) language model on the extracted arguments and used it to identify potential further arguments, which were then checked and extracted like the ones mentioned above.
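For illustration, the sketch below shows the identification step of such a pipeline: a fine-tuned binary classifier scores candidate passages, and high-scoring ones are queued for manual checking. The checkpoint name is a placeholder and the sentence-level framing is our assumption; this is not the authors' pipeline code.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# "persian-bert-checkpoint" is a placeholder for the pre-trained Persian BERT
# (Farahani et al., 2021) after fine-tuning on the manually extracted arguments;
# substitute the actual fine-tuned checkpoint.
CHECKPOINT = "persian-bert-checkpoint"
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)

def flag_candidates(passages, threshold=0.5):
    """Return passages that the classifier scores as likely arguments,
    to be checked manually before inclusion in the dataset."""
    batch = tokenizer(passages, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        prob_argument = model(**batch).logits.softmax(dim=-1)[:, 1]
    return [p for p, s in zip(passages, prob_argument.tolist()) if s >= threshold]
```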
**Preprocessing.** We manually translated the arguments into English and had another annotator check the whole dataset to remove ambiguous arguments.
### The New York Times
We collected 80 arguments from news articles published in The New York Times.6 At the time of writing, we are in the process of obtaining permission to publish the arguments. Until then, we provide Python software that extracts the arguments from the Internet Archive.7
Footnote 6: [https://www.nytimes.com](https://www.nytimes.com)
Footnote 7: [https://github.com/touche-webis-de/touche-code/tree/main/seneval23/human-value-detection/nyt-download](https://github.com/touche-webis-de/touche-code/tree/main/seneval23/human-value-detection/nyt-download)
**Source.** The New York Times is a renowned US-American daily newspaper that is available in print and via an online subscription.
**Collection process.** We selected 12 editorials, published between July 2020 and May 2021, with at least one of the New York Times keywords _coronavirus (2019-ncov)_, _vaccination and immunization_, and _epidemics._ We manually selected texts with an overall high quality of argumentation, as assessed by three linguistically trained annotators.
**Preprocessing.** The premises, conclusions, and stances were manually annotated by four annotators (three per text), and these annotations were curated by two linguist experts. The test set does not comprise all arguments identified in the twelve texts, but rather a selection of especially clear ones, as established by the curators.
## 3 Crowdsourcing the Annotation of Human Values behind Arguments
We re-used the crowdsourcing setup of 3 human annotators per argument of Kiesel et al. (2022) (Webis-ArgValues-22). For illustration, we reprint the screenshots of the annotation interface in Appendix A. As the screenshots show, the interface contains annotation instructions (cf. Figure 6) and uses yes/no questions for labeling each argument for each of the 54 level 1 values (cf. Figure 7). Though the ValueEval'23 task uses only level 2 value categories, we kept the tried and tested annotation process both for consistency and to allow for approaches that work on level 1. We restricted annotation to the 27 annotators who passed the selection process for Webis-ArgValues-22, of which 13 returned to work under the same payment. In total, the annotators made 774 360 yes/no annotations for 4 780 new arguments. Like for Webis-ArgValues-22, we employed MACE (Hovy et al., 2013) to fuse the annotations into a single ground truth. For quality assurance, we inspected all annotations for arguments from the Nahj al-Balagha and the New York Times, as well as those for which MACE's confidence was about 50:50. For this check, we analyzed 727 arguments, for which we changed the annotation if necessary. This check focused on the two supplementary test sets, as in these datasets the conclusion also often references values, which confused some crowdworkers.
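The ground truth itself was produced with MACE, which weighs annotators by their estimated competence. As a simplified stand-in for that fusion step, the sketch below takes a plain majority vote over the yes/no judgements for each (argument, value) pair and flags ties for the kind of manual inspection described above; it is not a reimplementation of MACE.

```python
from collections import Counter

def fuse_annotations(votes_per_item):
    """votes_per_item: dict mapping (argument_id, value) -> list of 'yes'/'no'
    crowd judgements. Returns the majority label per item plus the items with
    an exact tie, which would be routed to manual adjudication."""
    fused, ties = {}, []
    for key, votes in votes_per_item.items():
        counts = Counter(votes)
        if counts["yes"] == counts["no"]:
            ties.append(key)
            fused[key] = "no"  # conservative default pending manual review
        else:
            fused[key] = "yes" if counts["yes"] > counts["no"] else "no"
    return fused, ties

fused, ties = fuse_annotations({
    ("arg-001", "Be just"): ["yes", "yes", "no"],
    ("arg-001", "Have privacy"): ["no", "no", "no"],
})
print(fused, ties)
```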
## 4 Analyzing the Dataset
This section first presents an overview of the main statistics of our dataset, then highlights the similarities and differences among value distributions of the used sources. Finally, we report on the results of baseline experiments that investigate the influence of dataset extension on the task at hand.
**Overview statistics.** The dataset consists of 9 324 unique premise-conclusion pairs. Each of the arguments is annotated for multiple values on two levels of granularity. As Figure 2 shows, 94% of the arguments have at least 2 values, and 89% have more than 2 value categories assigned to them. A total of 18 arguments (~0.19%) have no assigned value to them (i.e., they resort to no ethical judgement). The most frequent values in the dataset are _Be just_, _Have a stable society_, and _Have a safe country_. More fine-grained distribution statistics for each of the values are shown in Table 3. The average length of a premise is 23.53 words, and that of a conclusion is 6.48 words. The stance distribution is generally balanced, with an approximate 10% skew, however, towards the _pro_ label (cf. Table 4).
**Value distributions.** Figures 3 and 4 depict the distribution of value categories (Level 2 in Figure 1)
Figure 2: Fraction of arguments in the complete dataset having a specific number of assigned values (out of 54) or value categories (out of 10) or more.
across the train/validation/test splits, as well as within each of the data sources. As for the sources used in the _main_ dataset, Figure 3 demonstrates that all three sources share a similar value-category distribution with slight fluctuations. For instance, discussion boards (Group Discussion Ideas, Conference on the Future of Europe) seem to value _Universalism: Objectivity_ considerably more than respondents for IBM-ArgQ-Rank-30kArgs. Besides that, the most common category for all three sources is _Universalism: Concern_, with the least frequent being _Hedonism_ and _Humility_. In Figure 4(a), we can observe that the categories are similarly distributed across the main dataset splits, with some minor exceptions that can be attributed to the fact that IBM-ArgQ-Rank-30kArgs is the main source of arguments in our dataset and that we ensured that no conclusion occurs in more than one split. When it comes to the individual data sources from the _supplementary_ evaluation splits, all of the supplementary datasets are unique in terms of genre and moral reasoning, which is also reflected in the distribution of value categories within the arguments (cf. Figure 4b-d). Thus, the _Achievement_ and _Security: Societal_ categories manifest themselves in the question-answering forum dataset, Zhihu. The NYT part also reflects value categories specific to the topics covered in it, with _Security: Personal_ appearing in more than 30% of the arguments. In contrast, Nahj al-Balagha appears to be the most balanced data subset in terms of value categories. Despite the described similarities and differences, we do not claim any of the parts as representative of the respective culture; we can only state that these distributions are descriptive of our dataset.
**Baseline experiments.** To assess the impact of dataset extension, we used the classification
\begin{table}
\begin{tabular}{l r r r r} \hline \hline & \multicolumn{2}{c}{**Mean length**} & \multicolumn{2}{c}{**Arguments**} \\ \cline{2-5}
**Argument source** & **Concl.** & **Premise** & **Pro** & **Con** \\ \hline IBM-ArgQ-Rank-30kArgs & 5.55 & 19.84 & 3824 & 3544 \\ Conf. on the Future of Europe & 11.35 & 39.59 & 750 & 348 \\ Group Discussion Ideas & 7.87 & 45.27 & 250 & 149 \\ Zhihu & 8.19 & 27.51 & 59 & 41 \\ Nahj al-Balagha & 5.58 & 22.40 & 224 & 55 \\ The New York Times & 20.20 & 22.87 & 69 & 11 \\ \hline \(\sum\) (complete) & 6.48 & 23.53 & 5176 & 4148 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Mean length (number of space-separated tokens) in conclusions and premises and the stance distribution per source of the TouchE23-ValueEval dataset.
Figure 3: Distribution of value categories across the sources in the _main_ dataset.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{8}{c}{**Values (Level 1)**} & \multicolumn{8}{c}{**Value categories (Level 2)**} \\ \cline{2-13} & \multicolumn{3}{c}{**Webis-ArgValues-22**} & \multicolumn{3}{c}{**Touche\(\hat{\epsilon}\)23-ValueEval**} & \multicolumn{3}{c}{**Webis-ArgValues-22**} & \multicolumn{3}{c}{**Touche\(\hat{\epsilon}\)23-ValueEval**} \\ \cline{2-13} & **P** & **R** & \(\mathbf{F}_{1}\)** & **Acc** & **P** & **R** & \(\mathbf{F}_{1}\)** & **Acc** & **P** & **R** & \(\mathbf{F}_{1}\)** & **Acc** & **P** & **R** & \(\mathbf{F}_{1}\)** & **Acc** \\ \hline BERT & 0.40 & **0.19** & 0.25 & 0.92 & **0.43** & **0.19** & **0.26** & **0.94** & 0.39 & 0.30 & 0.34 & 0.84 & **0.59** & **0.35** & **0.44** & **0.88** \\
1-Baseline & **0.08** & **1.00** & **0.16** & **0.08** & 0.07 & **1.00** & 0.13 & 0.07 & **0.18** & **1.00** & **0.28** & **0.18** & 0.15 & **1.00** & 0.26 & 0.15 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison of macro precision (P), recall (R), \(\mathbf{F}_{1}\)-score (\(\mathbf{F}_{1}\)), and accuracy (Acc) on respective test sets of Webis-ArgValues-22 and Touche\(\hat{\epsilon}\)23-ValueEval by level.
Figure 4: Distribution of value categories across the training, validation and testing splits, as well as within the sources of the _supplementary_ dataset.
approaches listed in Kiesel et al. (2022). We trained and tested the models on the respective splits of the _main_ dataset. In comparison to the Webis-ArgValues-22, the effectiveness of the 1-Baseline (which assigns every value to each argument) decreases, but that of an out-of-the-box BERT model increases across all evaluation metrics. A comparison of different evaluation metrics on the two datasets is shown in Table 5. Therefore, although the classification difficulty increased as per the label distribution, the larger dataset allows for training better models.
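To make the baseline concrete, below is a minimal sketch of an out-of-the-box BERT multi-label classifier over the 20 value categories, trained with a binary cross-entropy objective and evaluated with macro precision/recall/F1. The checkpoint, threshold, and input formatting are assumptions and not the exact configuration of Kiesel et al. (2022).

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from sklearn.metrics import precision_recall_fscore_support

NUM_CATEGORIES = 20  # level 2 value categories

class ValueCategoryClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, NUM_CATEGORIES)

    def forward(self, **batch):
        cls = self.encoder(**batch).last_hidden_state[:, 0]  # [CLS] embedding
        return self.classifier(cls)  # one logit per value category

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = ValueCategoryClassifier()
loss_fn = nn.BCEWithLogitsLoss()  # multi-label objective; training loop omitted

def macro_scores(logits, gold, threshold=0.5):
    pred = (logits.detach().sigmoid() >= threshold).int().numpy()
    return precision_recall_fscore_support(gold.numpy(), pred,
                                           average="macro", zero_division=0)[:3]

# Premise and conclusion are encoded as a sentence pair (cf. Table 2 for examples).
batch = tokenizer("Economic sanctions provide security and ensure that citizens are treated fairly.",
                  "We should end the use of economic sanctions",
                  return_tensors="pt", truncation=True)
with torch.no_grad():
    print(model(**batch).shape)  # torch.Size([1, 20])
```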
## 5 Conclusion
We presented the Touche23-ValueEval Dataset for Identifying Human Values behind Arguments, comprising 9 324 arguments manually labelled for 54 values and 20 value categories. We detailed its construction and its complementary nature to the Webis-ArgValues-22 dataset. We expanded the previous dataset in terms of argument count, cultural variety, and writing style. Finally, we reported baseline classification results that suggest that the expansion of the dataset allows for better learning of concepts by a vanilla BERT model. We hope that this dataset allows for more elaborate approaches for successful value detection, even beyond the ValueEval'23 task.
## 6 Ethics Statement
Since this work is a direct continuation of our earlier work Kiesel et al. (2022), the same statement applies and we repeat it here for completeness.
Identifying values in argumentative texts could be used in various applications like argument faceted search, value-based argument generation, and value-based personality profiling. In all these applications, an analysis of values has the opportunity to broaden the discussion (e.g., by presenting a diverse set of arguments covering a wide spectrum of personal values in search or inviting people with underrepresented value-systems to discussions). At the same time, a value-based analysis could risk excluding people or arguments based on their values. However, in other cases, for example hate speech, such an exclusion might be desirable.
While we tried to include texts from different cultures in our dataset, it is important to note that these samples are not representative of their respective culture, but intended as a benchmark for measuring classification robustness across sources. A more significant community effort is needed to collect more solid datasets from a wider variety of sources. To facilitate the inclusivity of different cultures, we adopted a personal value taxonomy that has been developed targeting universalism and tested across cultures. However, in our study, the annotations have all been carried out by annotators from a Western background. Even though the value taxonomy strives for universalism, a potential risk is that an annotator from a specific culture might fail to correctly interpret the implied values in a text written by people from a different culture.
Finally, we did not gather any personal information in our annotation studies, and we ensured that all our annotators get paid more than the minimum wage in the U.S.
|
2309.17453 | Efficient Streaming Language Models with Attention Sinks | Deploying Large Language Models (LLMs) in streaming applications such as
multi-round dialogue, where long interactions are expected, is urgently needed
but poses two major challenges. Firstly, during the decoding stage, caching
previous tokens' Key and Value states (KV) consumes extensive memory. Secondly,
popular LLMs cannot generalize to longer texts than the training sequence
length. Window attention, where only the most recent KVs are cached, is a
natural approach -- but we show that it fails when the text length surpasses
the cache size. We observe an interesting phenomenon, namely attention sink,
that keeping the KV of initial tokens will largely recover the performance of
window attention. In this paper, we first demonstrate that the emergence of
attention sink is due to the strong attention scores towards initial tokens as
a "sink" even if they are not semantically important. Based on the above
analysis, we introduce StreamingLLM, an efficient framework that enables LLMs
trained with a finite length attention window to generalize to infinite
sequence lengths without any fine-tuning. We show that StreamingLLM can enable
Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language
modeling with up to 4 million tokens and more. In addition, we discover that
adding a placeholder token as a dedicated attention sink during pre-training
can further improve streaming deployment. In streaming settings, StreamingLLM
outperforms the sliding window recomputation baseline by up to 22.2x speedup.
Code and datasets are provided at https://github.com/mit-han-lab/streaming-llm. | Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis | 2023-09-29T17:59:56Z | http://arxiv.org/abs/2309.17453v4 | # Efficient Streaming Language Models
###### Abstract
Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length. Window attention, where only the most recent KVs are cached, is a natural approach -- but we show that it fails when the text length surpasses the cache size. We observe an interesting phenomenon, namely _attention sink_, that keeping the KV of initial tokens will largely recover the performance of window attention. In this paper, we first demonstrate that the emergence of _attention sink_ is due to the strong attention scores towards initial tokens as a "sink" even if they are not semantically important. Based on the above analysis, we introduce StreamingLLM, an efficient framework that enables LLMs trained with a _finite length_ attention window to generalize to _infinite sequence length_ without any fine-tuning. We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more. In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment. In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2\(\times\) speedup. Code and datasets are provided in the link.
## 1 Introduction
Large Language Models (LLMs) (Radford et al., 2018; Brown et al., 2020; Zhang et al., 2022; OpenAI, 2023; Touvron et al., 2023;b) are becoming ubiquitous, powering many natural language processing applications such as dialog systems (Schulman et al., 2022; Taori et al., 2023; Chiang et al., 2023), document summarization (Goyal and Durrett, 2020; Zhang et al., 2023), code completion (Chen et al., 2021; Roziere et al., 2023) and question answering (Kamalloo et al., 2023). To unleash the full potential of pretrained LLMs, they should be able to efficiently and accurately perform long sequence generation. For example, an ideal ChatBot assistant can stably work over the content of recent day-long conversations. However, it is very challenging for LLM to generalize to longer sequence lengths than they have been pretrained on, e.g., 4K for Llama-2 Touvron et al. (2023).
The reason is that LLMs are constrained by the attention window during pre-training. Despite substantial efforts to expand this window size (Chen et al., 2023; kaiokendev, 2023; Peng et al., 2023) and improve training (Dao et al., 2022; Dao, 2023) and inference (Pope et al., 2022; Xiao et al., 2023; Anagnostidis et al., 2023; Wang et al., 2021; Zhang et al., 2023) efficiency for lengthy inputs, the acceptable sequence length remains intrinsically _finite_, which doesn't allow persistent deployments.
In this paper, we first introduce the concept of LLM streaming applications and ask the question:
_Can we deploy an LLM for infinite-length inputs without sacrificing efficiency and performance?_
When applying LLMs for infinite input streams, two primary challenges arise:

1. During the decoding stage, Transformer-based LLMs cache the Key and Value states (KV) of all previous tokens, as illustrated in Figure 1 (a), which can lead to excessive memory usage and increasing decoding latency (Pope et al., 2022).
2. Existing models have limited length extrapolation abilities, i.e., their performance degrades (Press et al., 2022; Chen et al., 2023) when the sequence length goes beyond the attention window size set during pre-training.
An intuitive approach, known as window attention (Beltagy et al., 2020) (Figure 1 b), maintains only a fixed-size sliding window on the KV states of most recent tokens. Although it ensures constant memory usage and decoding speed after the cache is initially filled, the model collapses once the sequence length exceeds the cache size, i.e., _even just evicting the KV of the first token_, as illustrated in Figure 3. Another strategy is the sliding window with re-computation (shown in Figure 1 c), which rebuilds the KV states of recent tokens for each generated token. While it offers strong performance, this approach is significantly slower due to the computation of quadratic attention within its window, making this method impractical for real-world streaming applications.
To understand the failure of window attention, we find an interesting phenomenon of autoregressive LLMs: a surprisingly large amount of attention score is allocated to the initial tokens, irrespective of their relevance to the language modeling task, as visualized in Figure 2. We term these tokens "**attention sinks**". Despite their lack of semantic significance, they collect significant attention scores. We attribute the reason to the Softmax operation, which requires attention scores to sum up to one for all contextual tokens. Thus, even when the current query does not have a strong match in many previous tokens, the model still needs to allocate these unneeded attention values somewhere so it sums up to one. The reason behind _initial_ tokens as sink tokens is intuitive: initial tokens are visible to almost all subsequent tokens because of the autoregressive language modeling nature, making them more readily trained to serve as attention sinks.
Based on the above insights, we propose StreamingLLM, a simple and efficient framework that enables LLMs trained with a finite attention window to work on text of infinite length without fine-tuning. StreamingLLM exploits the fact that attention sinks have high attention values, and preserving them can maintain the attention score distribution close to normal. Therefore, StreamingLLM simply keeps the attention sink tokens' KV (with just 4 initial tokens sufficing) together with the sliding window's KV to anchor the attention computation and stabilize the model's performance. With StreamingLLM, models including Llama-2-[7, 13, 70]B, MPT-[7, 30]B, Falcon-[7, 40]B, and Pythia-[2.8, 6.9, 12]B
Figure 1: **Illustration of StreamingLLM _vs._ existing methods. The language model, pre-trained on texts of length \(L\), predicts the \(T\)th token (\(T\gg L\)). (a) Dense Attention has \(O(T^{2})\) time complexity and an increasing cache size. Its performance decreases when the text length exceeds the pre-training text length. (b) Window Attention caches the most recent \(L\) tokens' KV. While efficient in inference, performance declines sharply once the starting tokens' keys and values are evicted. (c) Sliding Window with Re-computation rebuilds the KV states from the \(L\) recent tokens for each new token. While it performs well on long texts, its \(O(TL^{2})\) complexity, stemming from quadratic attention in context re-computation, makes it considerably slow. (d) StreamingLLM keeps the _attention sink_ (several initial tokens) for stable attention computation, combined with the recent tokens. It's efficient and offers stable performance on extended texts. Perplexities are measured using the Llama-2-13B model on the first book (65K tokens) in the PG-19 test set.**
can reliably model 4 million tokens, and potentially even more. Compared with the only viable baseline, sliding window with recomputation, StreamingLLM achieves up to 22.2\(\times\) speedup, realizing the streaming use of LLMs.
Finally, we confirm our attention sink hypothesis and demonstrate that language models can be pre-trained to require only a single attention sink token for streaming deployment. Specifically, we suggest that an extra learnable token at the beginning of all training samples can serve as a designated attention sink. By pre-training 160-million parameter language models from scratch, we demonstrate that adding this single sink token preserves the model's performance in streaming cases. This stands in contrast to vanilla models, which necessitate the reintroduction of multiple initial tokens as attention sinks to achieve the same performance level.
## 2 Related Work
Extensive research has been done on applying LLMs to lengthy texts, with three main areas of focus: **Length Extrapolation**, **Context Window Extension**, and **Improving LLMs' Utilization of Long Text**. While seemingly related, it's worth noting that progress in one direction doesn't necessarily lead to progress in the other. For example, extending the context size of LLMs doesn't improve the model's performance beyond the context size, and neither approach ensures effective use of the long context. Our StreamingLLM framework primarily lies in the first category, where LLMs are applied to text significantly exceeding the pre-training window size, potentially even of infinite length. We do not expand the attention window size of LLMs or enhance the model's memory and usage on long texts. The last two categories are orthogonal to our focus and could be integrated with our techniques.
**Length extrapolation** aims to enable language models trained on shorter texts to handle longer ones during testing. A predominant avenue of research targets the development of relative position encoding methods for Transformer models, enabling them to function beyond their training window. One such initiative is Rotary Position Embeddings (RoPE) (Su et al., 2021), which transforms the queries and keys in every attention layer for relative position integration. Despite its promise, subsequent research (Press et al., 2022; Chen et al., 2023) indicated its underperformance on text that exceeds the training window. Another approach, ALiBi (Press et al., 2022), biases the query-key attention scores based on their distance, thereby introducing relative positional information. While this exhibited improved extrapolation, our tests on MPT models highlighted a breakdown when the text length was vastly greater than the training length. Current methodologies, however, have yet to achieve infinite length extrapolation, causing no existing LLMs to fit for streaming applications.
**Context Window Extension** centers on expanding the LLMs' context window, enabling the processing of more tokens in one forward pass. A primary line of work addresses the training efficiency problem. Given the attention to computation's quadratic complexity during training, developing a long-context LLM is both a computational and memory challenge. Solutions have ranged from system-focused optimizations like FlashAttention (Dao et al., 2022; Dao, 2023), which accelerates attention computation and reduces memory footprint, to approximate attention methods (Zaheer et al., 2020; Beltagy et al., 2020; Wang et al., 2020; Kitaev et al., 2020) that trade model quality for efficiency. Recently, there has been a surge of work on extending pre-trained LLMs with RoPE (Chen et al., 2023; kaiokendev, 2023; bloc97, 2023; Peng et al., 2023), involving position interpolation and fine-tuning. However, all the aforementioned techniques only extend LLMs' context window to a limited extent, which falls short of our paper's primary concern of handling limitless inputs.
**Improving LLMs' Utilization of Long Text** optimizes LLMs to better capture and employ the content within the context rather than merely taking them as inputs. As highlighted by Liu et al.
Figure 2: Visualization of the _average_ attention logits in Llama-2-7B over 256 sentences, each with a length of 16. Observations include: (1) The attention maps in the first two layers (layers 0 and 1) exhibit the "local" pattern, with recent tokens receiving more attention. (2) Beyond the bottom two layers, the model heavily attends to the initial token across all layers and heads.
and Li et al., success in the previously mentioned two directions does not necessarily translate to competent utilization of lengthy contexts. Addressing this effective usage of prolonged contexts within LLMs is still a challenge. Our work concentrates on stably harnessing the most recent tokens, enabling the seamless streaming application of LLMs.
## 3 StreamingLLM
### The Failure of Window Attention and Attention Sinks
While the window attention technique offers efficiency during inference, it results in an exceedingly high language modeling perplexity. Consequently, the model's performance is unsuitable for deployment in streaming applications. In this section, we use the concept of _attention sink_ to explain the failure of window attention, serving as the inspiration behind StreamingLLM.
**Identifying the Point of Perplexity Surge.** Figure 3 shows the perplexity of language modeling on a 20K token text. It is evident that perplexity spikes when the text length surpasses the cache size, driven by the exclusion of initial tokens. This suggests that the initial tokens, regardless of their distance from the tokens being predicted, are crucial for maintaining the stability of LLMs.
**Why do LLMs break when removing _initial tokens' KV_?** We visualize attention maps from all layers and heads of the Llama-2-7B model in Figure 2. We find that, beyond the bottom two layers, the model consistently focuses on the initial tokens across all layers and heads. The implication is clear: removing these initial tokens' KV will remove a considerable portion of the denominator in the SoftMax function (Equation 1) in attention computation. This alteration leads to a significant shift in the distribution of attention scores away from what would be expected in normal inference settings.
\[\text{SoftMax}(x)_{i}=\frac{e^{x_{i}}}{e^{x_{1}}+\sum_{j=2}^{N}e^{x_{j}}}, \quad x_{1}\gg x_{j},j\in 2,\dots,N \tag{1}\]
There are two possible explanations for the importance of the initial tokens in language modeling: (1) Either their semantics are crucial, or (2) the model learns a bias towards their absolute position. To distinguish between these possibilities, we conduct experiments (Table 1), wherein the first four tokens are substituted with the linebreak token "\n". The observations indicate that the model still significantly emphasizes these initial linebreak tokens. Furthermore, reintroducing them restores the language modeling perplexity to levels comparable to having the original initial tokens. This suggests that the absolute position of the starting tokens, rather than their semantic value, holds greater significance.
**LLMs attend to Initial Tokens as Attention Sinks.** To explain why the model disproportionately focuses on initial tokens--regardless of their semantic relevance to language modeling--we introduce the concept of "_attention sink_". The nature of the SoftMax function (Equation 1) prevents all attended tokens from having zero values. This requires aggregating some information from other tokens across all heads in all layers, even if the current embedding has sufficient self-contained information for its prediction. Consequently, the model tends to dump unnecessary attention values to specific tokens. A similar observation has been made in the realm of quantization outliers (Xiao et al., 2023; Bondarenko et al., 2023), leading to the proposal of SoftMax-Off-by-One (Miller, 2023) as a potential remedy.
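A small numeric illustration of this effect, using made-up attention logits, shows how evicting the large logit of the initial (sink) token redistributes the probability mass produced by Equation 1 over the remaining tokens:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical logits for one query: the first (sink) token receives a large
# score, the remaining tokens receive small, roughly uniform scores.
logits = np.array([4.0, 0.1, 0.2, 0.0, 0.1, 0.3])

with_sink = softmax(logits)         # the sink absorbs most of the attention mass
without_sink = softmax(logits[1:])  # window attention after the sink is evicted

print(with_sink.round(3))     # remaining tokens each receive only a few percent
print(without_sink.round(3))  # the same small logits now share all of the mass
```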
Figure 3: Language modeling perplexity on texts with 20K tokens across various LLM. Observations reveal consistent trends: (1) Dense attention fails once the input length surpasses the pre-training attention window size. (2) Window attention collapses once the input length exceeds the cache size, i.e., the initial tokens are evicted. (3) StreamingLLM demonstrates stable performance, with its perplexity nearly matching that of the sliding window with re-computation baseline.
\begin{table}
\begin{tabular}{l c} \hline \hline Llama-2-13B & PPL (\(\downarrow\)) \\ \hline
0 + 1024 (Window) & 5158.07 \\
4 + 1020 & 5.40 \\
4"\n"+1020 & 5.60 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Window attention has poor performance on long text. The perplexity is restored when we reintroduce the initial four tokens alongside the recent 1020 tokens (4+1020). Substituting the original four initial tokens with linebreak tokens "\n" (4"\n"+1020) achieves comparable perplexity restoration. Cache config x+y denotes adding x initial tokens with y recent tokens. Perplexities are measured on the first book (65K tokens) in the PG19 test set.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Cache Config & 0+2048 & 1+2047 & 2+2046 & 4+2044 & 8+2040 \\ \hline Falcon-7B & 17.90 & 12.12 & 12.12 & 12.12 & 12.12 \\ MPT-7B & 460.29 & 14.99 & 15.00 & 14.99 & 14.98 \\ Pythia-12B & 21.62 & 11.95 & 12.09 & 12.09 & 12.02 \\ \hline \hline Cache Config & 0+4096 & 1+4095 & 2+4094 & 4+4092 & 8+4088 \\ \hline Llama-2-7B & 3359.95 & 11.88 & 10.51 & 9.59 & 9.54 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Effects of reintroduced initial token numbers on StreamingLLM. (1) Window attention (0+\(y\)) has a drastic increase in perplexity. (2) Introducing one or two initial tokens usually doesnβt suffice to fully restore model perplexity, indicating that the model doesnβt solely use the first token as the attention sink. (3) Introducing four initial tokens generally suffices; further additions have diminishing returns. Cache config x+y denotes adding x initial tokens to y recent tokens. Perplexities are evaluated on 400K tokens in the concatenated PG19 test set.
Figure 4: The KV cache of StreamingLLM.
### Pre-Training LLMs with Attention Sinks
As elaborated in Section 3.1, a significant reason for the model's excessive attention to multiple initial tokens is the absence of a designated sink token to offload excessive attention scores. Due to this, the model inadvertently designates globally visible tokens, primarily the initial ones, as attention sinks. A potential remedy can be the intentional inclusion of a global trainable attention sink token, denoted as a "Sink Token", which would serve as a repository for unnecessary attention scores. Alternatively, replacing the conventional SoftMax function with a variant like SoftMax-off-by-One (Miller, 2023),
\[\text{SoftMax}_{1}(x)_{i}=\frac{e^{x_{i}}}{1+\sum_{j=1}^{N}e^{x_{j}}}, \tag{2}\]
which does not require the attention scores on all contextual tokens to sum up to one, might also be effective. Note that this SoftMax alternative is equivalent to using a token with an all-zero Key and Value features in the attention computation. We denote this method as "Zero Sink" to fit it consistently in our framework.
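The following numpy sketch, with illustrative logits, checks the equivalence stated above: SoftMax-off-by-one matches ordinary SoftMax applied after prepending a token whose attention logit is zero (as an all-zero Key produces), with the sink's own score discarded since its all-zero Value contributes nothing to the output.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def softmax_off_by_one(x):
    # SoftMax_1(x)_i = exp(x_i) / (1 + sum_j exp(x_j)); scores need not sum to one.
    e = np.exp(x)
    return e / (1.0 + e.sum())

x = np.array([1.2, -0.3, 0.7, 0.1])                       # illustrative logits
with_zero_sink = softmax(np.concatenate(([0.0], x)))[1:]  # prepend a zero-logit sink, drop its score

print(np.allclose(softmax_off_by_one(x), with_zero_sink))  # True
```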
For validation, we pre-train three language models with 160 million parameters from scratch under identical settings. The first model utilizes the standard SoftMax attention (Vanilla), the second replaces the regular attention mechanism with SoftMax\({}_{1}\) (Zero Sink), and the third prepends a learnable placeholder token (Sink Token) to all training samples. As shown in Table 3, while the zero sink alleviates the attention sink problem to some extent, the model still relies on other initial tokens as attention sinks. Introducing a sink token is highly effective in stabilizing the attention mechanism. Simply pairing this sink token with recent tokens sufficiently anchors the model's performance, and the resulting evaluation perplexity is even marginally improved. Given these findings, we recommend training future LLMs with a sink token in all samples to optimize streaming deployment.
## 4 Experiments
We evaluate StreamingLLM using four prominent recent model families: Llama-2 (Touvron et al., 2023b), MPT (Team, 2023), PyThia (Biderman et al., 2023), and Falcon (Almazrouei et al., 2023). Notably, Llama-2, Falcon, and Pythia incorporate RoPE (Su et al., 2021), whereas MPT employs ALiBi (Press et al., 2022) -- two of the most influential position encoding techniques in recent research. Our diverse model selection ensures the validity and robustness of our findings. We benchmark StreamingLLM against established baselines such as dense attention, window attention, and the sliding window approach with re-computation. In all subsequent experiments with StreamingLLM, we default to using four initial tokens as attention sinks unless stated otherwise.
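As a reference for the cache configurations used throughout (x initial tokens plus y recent tokens), the following is a schematic Python sketch of the eviction policy only: attention sinks are never evicted while the recent window rolls. The class and method names are invented for exposition and this is not the released implementation.

```python
from collections import deque

class StreamingKVCache:
    """Schematic cache keeping `n_sink` initial tokens plus a rolling window
    of the most recent tokens (cache config: n_sink + window)."""

    def __init__(self, n_sink=4, window=1020):
        self.n_sink = n_sink
        self.sink_kv = []                      # KV of the first n_sink tokens, never evicted
        self.recent_kv = deque(maxlen=window)  # recent tokens; the oldest is evicted automatically

    def append(self, kv):
        if len(self.sink_kv) < self.n_sink:
            self.sink_kv.append(kv)
        else:
            self.recent_kv.append(kv)

    def attended_kv(self):
        # Tokens visible to the current query: attention sinks + recent window.
        return self.sink_kv + list(self.recent_kv)

cache = StreamingKVCache(n_sink=4, window=1020)
for t in range(5000):
    cache.append((f"k{t}", f"v{t}"))
print(len(cache.attended_kv()))  # 1024 = 4 sinks + 1020 recent tokens
```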
### Language Modeling on Long Texts Across LLM Families and Scales
We first evaluate StreamingLLM's language modeling perplexity using the concatenated PG19 (Rae et al., 2020) test set, which contains 100 long books. For Llama-2 models, the cache size is set at 2048, while for Falcon, Pythia, and MPT models, it is set at 1024. This is half the pre-training window size, chosen to enhance visualization clarity.
Figure 3 illustrates that StreamingLLM can match the oracle baseline (sliding window with re-computation) in terms of perplexity on texts spanning 20K tokens. Meanwhile, the dense attention technique fails when the input length exceeds its pre-training window, and the window attention technique struggles when the input length surpasses the cache size, leading to the eviction of the initial tokens. In Figure 5, we further substantiate that StreamingLLM can reliably handle exceptionally extended texts, encompassing more than 4 million tokens, across a spectrum of model families and scales. This includes Llama-2-[7,13,70]B, Falcon-[7,40]B, Pythia-[2.8,6.9,12]B, and MPT-[7,30]B.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Cache Config & 0+1024 & 1+1023 & 2+1022 & 4+1020 \\ \hline Vanilla & 27.87 & 18.49 & 18.05 & 18.05 \\ Zero Sink & 29214 & 19.90 & 18.27 & 18.01 \\ Learnable Sink & 1235 & **18.01** & 18.01 & 18.02 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of vanilla attention with prepending a zero token and a learnable sink token during pre-training. To ensure stable streaming perplexity, the vanilla model required several initial tokens. While Zero Sink demonstrated a slight improvement, it still needed other initial tokens. Conversely, the model trained with a learnable Sink Token showed stable streaming perplexity with only the sink token added. Cache config \(x\)+\(y\) denotes adding \(x\) initial tokens with \(y\) recent tokens. Perplexity is evaluated on the first sample in the PG19 test set.
### Results of Pre-Training with a Sink Token
To validate our suggestion that introducing a sink token to all pre-training samples improves streaming LLMs, we trained two language models, each with 160 million parameters, under identical conditions. While one model adhered to the original training settings, the other incorporated a sink token at the start of every training sample. Our experiments employed the Pythia-160M (Biderman et al., 2023) codebase and followed its training recipe. We train the models on an 8xA6000 NVIDIA GPU server using the deduplicated Pile (Gao et al., 2020) dataset. Apart from reducing the training batch size to 256, we retained all Pythia training configurations, including learning rate schedules, model initialization, and dataset permutations. Both models were trained for 143,000 steps.
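For readers interested in the data-side change only, the sketch below shows one plausible way to register a dedicated sink token with a Hugging Face tokenizer and prepend it to every training sample. This is an assumption about the wiring, not the recipe used in the paper: the models above are trained from scratch with the Pythia codebase, whereas the snippet loads the released Pythia-160M checkpoint purely for illustration.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative only: the paper trains from scratch; we load a released checkpoint here.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
tokenizer.add_special_tokens({"additional_special_tokens": ["<sink>"]})
sink_id = tokenizer.convert_tokens_to_ids("<sink>")

model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
model.resize_token_embeddings(len(tokenizer))  # adds an embedding row for <sink>

def prepend_sink(example):
    # Prepend the learnable sink token to every training sample.
    ids = tokenizer(example["text"], truncation=True, max_length=2047)["input_ids"]
    return {"input_ids": [sink_id] + ids}

# dataset = dataset.map(prepend_sink)  # applied to all pre-training samples
```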
Convergence and Normal Model Performance.Including a sink token during pre-training has no negative impact on model convergence and subsequent performance on a range of NLP benchmarks. As depicted in Figure 6, models trained with a sink token exhibit similar convergence dynamics compared to their vanilla counterparts. We evaluate the two models on seven diverse NLP benchmarks, including ARC-[Challenge, Easy] (Clark et al., 2018), HellaSwag (Zellers et al., 2019), LAMBADA (Paperno et al., 2016), OpenbookQA (Mihaylov et al., 2018), PIQA (Bisk et al., 2020), and Winogrande (Sakaguchi et al., 2019). As shown in Table 4, the model pre-trained with a sink token performs similarly to that trained using the vanilla approach.
Streaming Performance.As illustrated in Table 3, the streaming perplexities differ between models trained using traditional methods and those augmented with a sink token. Remarkably, the vanilla model requires the addition of multiple tokens as attention sinks to maintain stable streaming perplexity. In contrast, the model trained with a sink token achieves satisfactory streaming performance using just the sink token.
Attention Visualization.Figure 7 contrasts attention maps for models pre-trained with and without a sink token. The model without the sink token, similar to Llama-2-7B (Figure 2), shows early-layer local attention and deeper-layer focus on initial tokens. In contrast, models trained with a sink token consistently concentrate on the sink across layers and heads, indicating an effective attention offloading mechanism. This strong focus on the sink, with reduced attention to other initial tokens, explains the sink token's efficacy in enhancing model's streaming performance.
### Results on Streaming Question Answering with Instruction-tuned Models
To show StreamingLLM's real-world applicability, we emulate multi-round question-answering using instruction-tuned LLMs, commonly used in real-world scenarios.
We first concatenate all question-answer pairs from the ARC-[Challenge, Easy] datasets, feed the continuous stream to Llama-2-[7,13,70]B-Chat models, and assess model completions at each answer position using an exact match criterion. As Table 5 indicates, dense attention results in Out-of-Memory (OOM) errors, showing that it is unsuitable for this setting. While the window attention method works efficiently, it exhibits low accuracy due to random outputs when the input length exceeds the cache size. Conversely, StreamingLLM excels by efficiently handling the streaming format, aligning with the one-shot, sample-by-sample baseline accuracy.
Highlighting a more fitting scenario for StreamingLLM, we introduce a dataset, StreamEval, inspired by the LongEval (Li et al., 2023) benchmark. As depicted in Figure 8, diverging from LongEval's single query over a long-span setup, we query the model every 10 lines of new information. Each query's answer is consistently 20 lines prior, reflecting real-world instances where questions typically pertain to recent information. As illustrated in Figure 9, LLMs employing StreamingLLM maintain reasonable accuracy even as input lengths approach 120K tokens. In contrast, both dense and window attention fail at the pre-training text length and the KV cache size, respectively. Additionally, we utilize two context-extended models, LongChat-7b-v1.5-32k (Li et al., 2023) and Llama-2-7B-32K-Instruct (Together, 2023), to show that StreamingLLM can complement context extension techniques. Within StreamingLLM, context extension means broadening the maximum cache size of streaming LLMs, enabling the capture of broader local information.
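Because the exact prompt format of StreamEval is not reproduced here, the sketch below generates a StreamEval-like stream under an assumed line and query format: a new fact arrives on every line, and every 10 lines the model is queried about the line that appeared 20 lines earlier, mirroring the setup described above.

```python
import random

def make_streameval_stream(n_lines=100, query_every=10, lookback=20):
    """Build a StreamEval-style stream (line/query wording is hypothetical)."""
    stream, queries, values = [], [], {}
    for i in range(1, n_lines + 1):
        values[i] = random.randint(0, 99999)
        stream.append(f"line {i}: the value is {values[i]}")
        if i % query_every == 0 and i > lookback:
            target = i - lookback  # the answer always sits 20 lines back
            queries.append((len(stream),
                            f"What is the value in line {target}?",
                            str(values[target])))
    return stream, queries

stream, queries = make_streameval_stream()
print(len(stream), len(queries))
```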
### Ablation Studies
Numbers of Initial Tokens.In Table 2, we ablate the effect of adding varying numbers of initial tokens with recent tokens on the streaming perplexity. The results show the insufficiency of introducing merely one or two initial tokens, whereas a threshold of four initial tokens appears enough, with subsequent additions contributing marginal effects. This result justifies our choice of introducing 4 initial tokens as attention sinks in StreamingLLM.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & \multicolumn{3}{c}{Llama-2-7B-Chat} & \multicolumn{3}{c}{Llama-2-13B-Chat} & \multicolumn{3}{c}{Llama-2-70B-Chat} \\ \cline{2-7} Dataset & Arc-E & Arc-C & Arc-E & Arc-C & Arc-E & Arc-C \\ \hline One-shot & 71.25 & 53.16 & 78.16 & 63.31 & 91.29 & 78.50 \\ \hline Dense & \multicolumn{3}{c}{OOM} \\ Window & 3.58 & 1.39 & 0.25 & 0.34 & 0.12 & 0.32 \\ StreamingLLM & 71.34 & 55.03 & 80.89 & 65.61 & 91.37 & 80.20 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Accuracy (in %) on the ARC-[Easy, Challenge] datasets. Questions were concatenated and answered in a streaming manner to mimic a real-world chat setting. The dense baseline fails due to Out-of-Memory (OOM) errors. Window attention has poor accuracy. StreamingLLM has comparable results with the one-shot sample-by-sample baseline. Window attention and StreamingLLM use cache sizes of 1024.
Figure 7: Visualization of average attention logits over 256 sentences, each 16 tokens long, comparing models pre-trained without (left) and with (right) a sink token. Both maps show the same layers and heads. Key observations: (1) Without a sink token, models show local attention in lower layers and increased attention to initial tokens in deeper layers. (2) With a sink token, there is clear attention directed at it across all layers, effectively collecting redundant attention. (3) With the presence of the sink token, less attention is given to other initial tokens, supporting the benefit of designating the sink token to enhance the streaming performance.
**Cache Sizes.** In Table 6, we evaluate cache size's impact on StreamingLLM's perplexity. Contrary to intuition, increasing the cache size doesn't consistently lower the language modeling perplexity. This inconsistency shows a potential limitation where these models might not maximize the utility of the entire context they receive. Future research efforts should target enhancing these models' capabilities to utilize extensive contexts better.
### Efficiency Results
We benchmark StreamingLLM's decoding latency and memory usage against the sliding window with re-computation, which is the only baseline with acceptable performance. Both methods are implemented using the Huggingface Transformers library (Wolf et al., 2020) and tested on a single NVIDIA A6000 GPU using the Llama-2-7B and Llama-2-13B models. As depicted in Figure 10, as the cache size increases, StreamingLLM's decoding latency grows linearly, whereas the sliding window with re-computation baseline exhibits a quadratic rise in decoding latency. Thus, StreamingLLM achieves an impressive speedup, reaching up to \(22.2\times\) per token. Despite its reduced latency, StreamingLLM sustains a memory footprint consistent with the re-computation baseline.
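A rough per-token cost model explains the trends in Figure 10: before emitting each token, the re-computation baseline rebuilds the KV cache of the latest L tokens (quadratic in the cache size L), whereas StreamingLLM reuses its cache so each new token attends once over L entries (linear in L). The accounting below counts token interactions only and ignores constant factors and memory traffic.

```python
def streaming_cost(L):
    # One new query attends over L cached key/value pairs.
    return L

def recompute_cost(L):
    # Re-building the KV cache of the latest L tokens costs ~L*(L+1)/2 interactions.
    return L * (L + 1) // 2

for L in (256, 1024, 4096):
    print(L, recompute_cost(L) / streaming_cost(L))  # the ratio grows linearly with L
```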
## 5 Conclusion
Deploying LLMs in streaming applications is urgently needed but comes with challenges due to efficiency limitations and reduced performance with longer texts. Window attention provides a partial solution, but its performance plummets when initial tokens are excluded. Recognizing the role of these tokens as "attention sinks", we introduced StreamingLLM, a simple and efficient framework that enables LLMs to handle unlimited texts without fine-tuning. By adding attention sinks with recent tokens, StreamingLLM can efficiently model texts of up to 4 million tokens. We further show that pre-training models with a dedicated sink token can improve streaming performance. StreamingLLM is the first to decouple the LLM's pre-training window size from its actual text generation length, paving the way for the streaming deployment of LLMs.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Cache & 4+252 & 4+508 & 4+1020 & 4+2044 \\ \hline Falcon-7B & 13.61 & 12.84 & **12.34** & 12.84 \\ MPT-7B & **14.12** & 14.25 & 14.33 & 14.99 \\ Pythia-12B & 13.17 & 12.52 & **12.08** & 12.09 \\ \hline \hline Cache & 4+508 & 4+1020 & 4+2044 & 4+4092 \\ \hline Llama-2-7B & 9.73 & 9.32 & **9.08** & 9.59 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Effects of cache size on StreamingLLMβs performance. Increasing the cache size in StreamingLLM doesnβt consistently yield a decrease in perplexity across all models, suggesting these models may not be fully exploiting the provided context. Cache config \(x\)+\(y\) denotes adding \(x\) initial tokens with \(y\) recent tokens. Perplexity is evaluated on 400K tokens in the concatenated PG19 test set.
Figure 10: Comparison of per-token decoding latency and memory usage between the sliding window approach with re-computation baseline and StreamingLLM, plotted against the cache size (attention window size) on the X-axis. StreamingLLM delivers a remarkable speedup of up to \(22.2\times\) per token and retains a memory footprint similar to the re-computation baseline.
Figure 9: Performance on the StreamEval benchmark. Accuracies are averaged over 100 samples. |
2308.16464 | MaintainoMATE: A GitHub App for Intelligent Automation of Maintenance
Activities | Software development projects rely on issue tracking systems at the core of
tracking maintenance tasks such as bug reports, and enhancement requests.
Incoming issue-reports on these issue tracking systems must be managed in an
effective manner. First, they must be labelled and then assigned to a
particular developer with relevant expertise. This handling of issue-reports is
critical and requires thorough scanning of the text entered in an issue-report
making it a labor-intensive task. In this paper, we present a unified framework
called MaintainoMATE, which is capable of automatically categorizing the
issue-reports in their respective category and further assigning the
issue-reports to a developer with relevant expertise. We use the Bidirectional
Encoder Representations from Transformers (BERT), as an underlying model for
MaintainoMATE to learn the contextual information for automatic issue-report
labeling and assignment tasks. We deploy the framework used in this work as a
GitHub application. We empirically evaluate our approach on GitHub
issue-reports to show its capability of assigning labels to the issue-reports.
We were able to achieve an F1-score close to 80\%, which is comparable to
existing state-of-the-art results. Similarly, our initial evaluations show that
we can assign relevant developers to the issue-reports with an F1 score of
54\%, which is a significant improvement over existing approaches. Our initial
findings suggest that MaintainoMATE has the potential of improving software
quality and reducing maintenance costs by accurately automating activities
involved in the maintenance processes. Our future work would be directed
towards improving the issue-assignment module. | Anas Nadeem, Muhammad Usman Sarwar, Muhammad Zubair Malik | 2023-08-31T05:15:42Z | http://arxiv.org/abs/2308.16464v1 | # MaintainoMATE: A GitHub App for Intelligent Automation of Maintenance Activities
###### Abstract.
**Background:** Software development projects rely on issue tracking systems at the core of tracking maintenance tasks such as bug reports, and enhancement requests. Incoming issue-reports on these issue tracking systems must be managed in an effective manner. First, they must be labelled and then assigned to a particular developer with relevant expertise. This handling of issue-reports is critical and requires thorough scanning of the text entered in an issue-report making it a labor-intensive task. **Objective:** In this paper, we present a unified framework called MaintainoMATE, which is capable of automatically categorizing the issue-reports in their respective category and further assigning the issue-reports to a developer with relevant expertise. **Method:** We use the Bidirectional Encoder Representations from Transformers (BERT), as an underlying model for MaintainoMATE to learn the contextual information for automatic issue-report labeling and assignment tasks. We deploy the framework used in this work as a GitHub application. **Results:** We empirically evaluate our approach on GitHub issue-reports to show its capability of assigning labels to the issue-reports. We were able to achieve an F1-score close to 80%, which is comparable to existing state-of-the-art results. Similarly, our initial evaluations show that we can assign relevant developers to the issue-reports with an F1 score of 54%, which is a significant improvement over existing approaches. **Conclusion:** Our initial findings suggest that MaintainoMATE has the potential of improving software quality and reducing maintenance costs by accurately automating activities involved in the maintenance processes. Our future work would be directed towards improving the issue-assignment module. More specifically, we plan to study adding external features i.e. activity-based features, developer profile-based features to evaluate if incorporating these features improve the overall results of the issue assignment module.
To address the aforementioned issues, we use a transformer-based neural network called RoBERTa (Rosenberg et al., 2017). Since RoBERTa has the ability to understand context, we fine-tune this language representation model to use as the underlying model for our framework MaintainoMATE. MaintainoMATE has the following two modules: (1) Issue-Report Labelling, which categorizes an issue-report into its respective category, i.e., 'Bug-Report', 'Enhancement', or 'Question'; (2) Issue-Report Assignment, which assigns the issue-report to the relevant developer. To evaluate the Issue-Report Labelling module, we randomly sampled 55,000 issue-reports associated with the top 200 repositories across 55 popular languages. We were able to achieve promising results, with an F1-score nearing 80%, which is comparable to the existing state-of-the-art results. In a similar manner, to evaluate our Issue-Report Assignment module, we trained the model on issue-reports, along with the assigned developers, from the 'TensorFlow' repository on GitHub. Our model shows promising results, i.e., a 54% F1-score, which is a significant improvement over the existing state-of-the-art results.
Previous research works (Kallis et al., 2016; Kallis et al., 2016; Kallis et al., 2016; Kallis et al., 2016; Kallis et al., 2016; Kallis et al., 2016; Kallis et al., 2016; Kallis et al., 2016) aimed at automating various software maintenance tasks lack industry adoption, which results from the scarcity of practical tools available to the software industry. Our research is aimed at bridging this gap between industry expectations and scientific methods. To the best of our knowledge, our tool undertakes a novel approach towards being a full suite of modules curated for intelligently automating various steps involved in software maintenance. We release MaintainoMATE 4 as a GitHub application, which allows its easy integration with any software repository on GitHub.
Footnote 4: [https://github.com/apps/maintanomate](https://github.com/apps/maintanomate) - Made private for blind review as GitHub apps show name of the developer
In this paper we claim the following contributions:
* We present an automated software maintenance framework, 'MaintainoMATE'. It comprises two components: 'Issue-Report Labelling' and 'Issue-Report Assignment'.
* We present our complete framework for the Issue-Report Labelling component, which classifies issue-reports into their respective categories, i.e., 'Bug-report', 'Question', and 'Enhancement'. Our approach achieves results comparable to the state-of-the-art studies, with an F1-score of 80%.
* We discuss the initial findings of utilizing a novel approach for the Issue-Report Assignment component, which assigns issue-reports to the respective developers based on their expertise. Our model shows promising results, i.e., a 54% F1-score, which is a significant improvement over the existing state-of-the-art results.
**Organization.** The paper is structured as follows: Section 2 discusses the relevant previous research works. Section 3 discusses the different components of our proposed MaintainoMATE framework. Section 4 discusses the evaluation of our approach. Section 5 discusses threats to the validity of our work. Section 6 discusses the future avenues of our work. Finally, Section 7 concludes our paper.
## 2. Related Work
This section discusses previous studies focused on solving such problems. Primarily, it presents previous studies on (1) Issue-Report Labeling, (2) Issue-Report Assignment, (3) Transformers, and (4) industry tools in software maintenance.
### Related Work: Issue-Report Labeling
Several research works have presented methods ranging from using keyword-based approaches to machine learning-based models. We discuss such research works along with their limitations in order to lay the grounds for our research.
Kallis et al. (Kallis et al., 2016) presented a tool, TicketTagger, to classify GitHub issue-reports. They utilized a fastText (Kallis et al., 2016) based model to classify GitHub issues into their respective categories, i.e., bug, enhancement, and question. Over these three categories, they achieved an F1-score of 82%. However, they used a multi-class setting to solve the problem. In a multi-class setting, one issue-report can belong to only a single label at a time. Our work uses labels similar to those used by Kallis et al. (Kallis et al., 2016). However, we solve this problem as a multi-label classification problem, where an issue-report can be assigned more than one label at the same time.
Fan et al. (Fan et al., 2016) proposed a machine learning based approach for classifying issue-reports. They evaluated their approach on over 252,000 issue reports from 80 popular GitHub projects. Further, they evaluated four traditional machine learning methods, i.e., Naive Bayes, Logistic Regression, Random Forest, and Support Vector Machine. Our work, in contrast, leverages an off-the-shelf transformer-based model, which achieves state-of-the-art results without requiring a large training set.
Fazayeli et al. (Fazayeli et al., 2016) used the traditional machine learning-based classification methods to categorize the GitHub issue-reports. However, they only evaluated their approach using one repository i.e. 'git-for-windows'. Moreover, they solved the problem as a binary classification problem with bug and non-bug labels. Our framework assigns multiple labels to a GitHub issue-report at a time. Also, we evaluated our approach on a dataset consisting of multiple projects from various programming languages.
The previously discussed studies suffer from a high rate of false positives, as they utilize keyword-based features to categorize issue-reports. The primary reason for such a high error rate is the lack of consideration of the contextual information of the text during classification. Also, these studies solved the issue classification problem in a multi-class classification setting. In real-world scenarios, an issue can be associated with more than one label at a time, so the problem should be solved in a multi-label classification setting.
### Related Work: Issue-Report Assignment
Assigning issues to developers based on their expertise is commonly referred to as bug triaging. Bug triaging has received interest from numerous researchers in the past. Previous research in this area has mainly relied on keyword-based approaches. The majority of these research works solely rely on the textual data of the issue report (Kallis et al., 2016; Kallis et al., 2016; Kallis et al., 2016; Kallis et al., 2016) while some of them attempt to complement the data with other sources such as developer source code commits (Kavli et al., 2016), and contributions on Q&A platforms, i.e., StackOverflow (Kavli et al., 2016).
Kevic et al. (Kevic et al., 2016) utilized a multi-phased approach for their analysis. In addition to the textual similarity of issue reports, they analyzed the changeset associated with the issue report. For any incoming issue-report, they first compute the tf-idf feature vectors
and then compute the closeness between those vectors based on cosine similarity to recommend the developers who fixed similar reports. They presented a prototype tool that for a given report recommends a list of developers and is aimed to be used in scrum-based meetings. In comparison, our approach is targeted towards a fully automated solution for the assignment task.
Matter et al. (Matter et al., 2017) presented a keyword-based recommender system that models the expertise of the developers using the vocabulary used in their source code commits. Further, they compared the commits' vocabulary to the vocabulary used in the issue-reports in order to assign them to developers. Their evaluations on the Eclipse project achieved 33.6% precision. In comparison, we rely on textual data included in past issue reports only.
Alenczi et al. (Alenczi et al., 2017) explored the use of term selection methods to identify discriminating terms used in a report for assigning issues. Their results found the Ο2 term selection method to outperform their baselines by over 20% on some projects.
### Related Work: Transformers
Attention-based networks derived from human intuition have resulted in significant improvements in various natural language processing tasks (Han et al., 2017). This led to the wide adoption of self-attention based Transformers.
We treated our issue-report labelling problem as a multi-label classification problem, where an issue-report can have more than one label at a time. We used simpletransformers 7 to create our model.
Footnote 7: [https://github.com/ThilinaBajapakse/simpletransformers/](https://github.com/ThilinaBajapakse/simpletransformers/)
_Model Training_: To train our model, we randomly sampled approximately 55,000 issue-reports. Further, we divided the dataset into an 80% training and 20% testing split. We trained our model for 5 epochs with a learning rate of 4e-05, a maximum sequence length of 128, and a batch size of 8. We trained our model on Google Colab 8, a free Jupyter-notebook-based environment that allows users to run scripts. It took an hour to train the model on a machine with 12 GB RAM, a Tesla T4 GPU with 14 GB memory, and 2 Intel Xeon CPUs @ 2.2GHz.
Footnote 8: [https://colab.research.google.com/](https://colab.research.google.com/)
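A minimal sketch of how the multi-label classifier described above can be set up with the simpletransformers library follows. The example texts and the label ordering [bug, enhancement, question] are illustrative assumptions; only the choice of 'roberta-base' and the hyperparameters (5 epochs, learning rate 4e-05, maximum sequence length 128, batch size 8) come from the text, and a GPU is assumed as in the original setup.

```python
import pandas as pd
from simpletransformers.classification import MultiLabelClassificationModel

# Illustrative training rows; `labels` is a multi-hot vector over [bug, enhancement, question].
train_df = pd.DataFrame([
    ["App crashes when opening the settings page", [1, 0, 0]],
    ["Please add dark mode support",                [0, 1, 0]],
    ["How do I configure the proxy?",               [0, 0, 1]],
], columns=["text", "labels"])

model = MultiLabelClassificationModel(
    "roberta", "roberta-base", num_labels=3,
    args={"num_train_epochs": 5, "learning_rate": 4e-5,
          "max_seq_length": 128, "train_batch_size": 8},
)
model.train_model(train_df)
predictions, raw_outputs = model.predict(["Null pointer exception on startup"])
```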
### Issue-Report Assignment
This section presents the methodology used in the Issue-Report Assignment module of MaintainoMATE. The issue-report assignment module assigns issue-reports to the relevant developers based on previously assigned issues.
_Data Collection and Transformation:_ For the issue-report assignment module, we sampled approximately 4,500 issue reports, along with the assignee developers, from the TensorFlow project on GitHub, which is a top ranked (Bordes et al., 2017) industrial project. To account for software team changes, we carefully sample reports from developers that have had at least 50 issue reports assigned in the year 2021. This reduces the probability of sampling developers who are no longer working on the project. Our analysis shows that 14 developers actively maintain the TensorFlow project. This dataset9 is also made available to the public as a means of enabling future analysis.
Footnote 8: [https://colab.research.google.com/](https://colab.research.google.com/)
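The sampling rule described above (keeping only developers with at least 50 issue-reports assigned in 2021) can be expressed as a short pandas sketch; the file name and column names are hypothetical.

```python
import pandas as pd

issues = pd.read_json("tensorflow_issues.json")   # assumed local dump with hypothetical columns
issues["created_at"] = pd.to_datetime(issues["created_at"])

recent = issues[issues["created_at"].dt.year == 2021]
counts = recent["assignee"].value_counts()
active_devs = counts[counts >= 50].index          # developers still active on the project

dataset = issues[issues["assignee"].isin(active_devs)]
print(len(active_devs), len(dataset))
```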
_Model:_ We treated issue-report assignment as a multi-class classification problem, where an issue-report can only belong to a single label at a time. Here, the issue-report's textual features, i.e., title and description, are fed as input features, while the developers assigned to these issue-reports are the labels of the model. We used the 'roberta-base' model, similar to the one used in Section 3.1.
_Model Training:_ To train our model, we divide our dataset into 80% training and 20% testing sets. Further, we feed these issue-reports into the multi-class RoBERTa classifier and train it for 5 epochs with hyper-parameters similar to those discussed in Section 3.1. We also used Google Colab 9 to train our model; training took about an hour.
Footnote 9: [https://zenodo.org/record/5110986](https://zenodo.org/record/5110986)
Footnote 9: [https://colab.research.google.com/](https://colab.research.google.com/)
### Deployment
MaintainoMATE is deployed as a GitHub application that allows easy integration of the tool in any software project hosted on GitHub. Figure 1 demonstrates an overview of our approach and the deployment. This GitHub application consists of the following two components: (1) Issue-Report Labelling Component (2) Issue-Report Assignment Component.
_Issue-Report Labelling Component_ The Issue-Report Labelling Component of MaintainoMATE automatically assigns labels to issue reports on a repository having the tool installed. When integrated, any new issue is automatically assigned a label out of Bug, Enhancement or Question. Figure 2 shows a demonstration of our app assigning a label to a newly created report which was replicated from a real issue report11.
Footnote 11: [https://github.com/dotnet/core/issues/3407](https://github.com/dotnet/core/issues/3407)
Figure 1. Upon submission of a new report by a user, MaintainoMATE automatically uses the trained models to predict labels and an expert developer to resolve the issue, and finally assigns it back to the report on the issue tracker
Figure 2. A demonstration of MaintainoMATE, automatically assigning label to a report replicated from a real issue
_Issue-Report Assignment Component_ The Issue-Report Assignment Component, when completed, would be able to automatically assign experienced developers to issue reports. Although the work on this component is in progress, we have empirically studied and presented a baseline approach with our initial results.
## 4. Results and Discussion
This section discusses the evaluation results of our framework. We used the following metrics to evaluate our framework:
* **Precision** is the ratio of true positives to the sum of true positives and false positives. A precision value closer to 1 is the most desirable.
* **Recall** is the ratio of true positives to the sum of true positives and false negatives. A recall value of 1 indicates the best performance.
* **F1-Score** is defined as the harmonic mean of precision and recall. An F1-Score closer to 1 is the most preferable (a short computation sketch follows this list).
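The short sketch below works through the three metrics on hypothetical confusion counts for a single label, to make the relationship between them explicit.

```python
# Hypothetical counts for one label, e.g. "bug".
tp, fp, fn = 8, 2, 4

precision = tp / (tp + fp)                          # 0.8
recall = tp / (tp + fn)                             # ~0.667
f1 = 2 * precision * recall / (precision + recall)  # ~0.727

print(round(precision, 3), round(recall, 3), round(f1, 3))
```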
_Issue-Report Labeling:_ The evaluation of the issue labeling module over a large dataset of issue-reports from GitHub gives us F1-Scores of 81%, 74% and 80% for the bug, enhancement, and question labels respectively, which is comparable to previous studies. For instance, Kallis et al. (Kallis et al., 2017) reported F1-Scores of 83.1%, 82.3%, and 82.5% for the bug, enhancement, and question categories respectively. The slight degradation is expected, as we proposed a multi-label classification approach as compared to the multi-class approach presented by Kallis et al. (Kallis et al., 2017). Table 1 gives the complete evaluation of our results.
_Issue-Report Assignment:_ The evaluation of our initial approach to the issue assignment task over the issue-reports from the TensorFlow project shows promising results. Our studies show significant improvement over existing keyword-based approaches. For instance, the approach used by Anvik et al. (Anvik et al., 2017) assigns developers to issues with a precision score of 64%; however, it suffers from a low recall score of 10%. Our approach achieves a comparable precision of 59% with a significantly improved recall of 52%, yielding an F1-Score of 54%. Table 2 shows the evaluation of our results.
## 5. Threats to Validity
This section highlights some threats in validating our work.
* **Bias:** We evaluate MaintainoMATE over a dataset containing popular repositories of well-maintained projects. This might induce a popularity bias in our results, since some of the non-popular repositories might not be well maintained.
* **Cold Start Problem:** The current methodology for issue assignment component of MaintainoMATE requires a repository to have a number of issue-reports in order to learn what patterns are followed in assigning issues-reports to the developers.
* **Language Imbalance:** Some languages are more popular on GitHub than others, resulting in our dataset being dominated by issue reports from the issue trackers of repositories in these popular languages.
## 6. Future Work
The future work of our research would be directed towards improving MaintainoMATE and adding more use-cases for our GitHub application. Specifically, we would work on: (1) Issue-Report Labelling (2) Issue-Report Assignment (3) Incorporating Other Use-cases
### Future Work: Issue-Report Labelling
_Adding More Labels:_ Although our labelling module is capable of categorizing bugs, questions, and enhancements with high accuracy, we further plan to explore other default GitHub labels such as wontfix, help-wanted, and duplicate, and analyze whether they can be incorporated into MaintainoMATE.
### Future Work: Issue-Report Assignment
_Hyperparameter Tuning:_ We plan to further tune the hyperparameters to find the best configuration of our transformer-based model, which would help us improve the results.
_Developer Profiling_ Currently, we use issue-report textual features to assign developers to their respective issue-reports. In the future, we plan to add other features, such as activity-based features and developer profile-based features. The following is the feature set we are planning to incorporate into our model:
* _Activity-based Features:_ This set of features would include features related to the developers' previous interactions with source-code files and with resolving issue-reports.
* _Profile-based Features:_ This category of features would contain all the features related to developer's programming activities and their correlation with the issue-reports. For instance, we can include the developers' commit messages. Commit messages can be a rich source for profiling developer expertise as the language used in commit messages is closer to that used in issue-reports (Bias et al., 2018).
### Future Work: Incorporating Other Use-cases
In addition to automating the labelling and assignment of issue-reports, we plan to explore more applications of automation in software maintenance processes. We plan to incorporate a real-time dashboard, which would provide real-time analytics to the project manager, such as the number of bug-report issues and the number of issue-reports assigned to a particular developer. We plan to generate various reports aimed at providing maintenance analytics for various team roles.
\begin{table}
\begin{tabular}{l|l l l} \hline & Precision & Recall & F1-Score \\ \hline Bug & 81\% & 81\% & 81\% \\ Enhancement & 78\% & 72\% & 74\% \\ Question & 79\% & 81\% & 80\% \\ Macro-Average & 79\% & 78\% & 78\% \\ \hline \end{tabular}
\end{table}
Table 1. Evaluation results of our label assignment study
\begin{table}
\begin{tabular}{l l} \hline Metric & Score \\ \hline Precision & 59\% \\ Recall & 52\% \\ F1 Score & 54\% \\ \hline \end{tabular}
\end{table}
Table 2. Evaluation results of our developer assignment study
* _Developer Report:_ The developer report can show the number of issue-reports along with the categories assignment to the developer. The primary goal is to analyze the maintenance activities across the development team, e.g., who has been assigned to fix the bug, and who has been assigned to implement a new feature.
* _Project Report:_ The project report can show the categorized issue-reports along with source code files affected by them across the project. The goal is to provide holistic real-time software maintenance surveillance of the project to managers.
## 7. Conclusion
We have proposed the MaintainoMATE framework, which aims to automate various tasks in the maintenance of a software project. We have released this framework as a GitHub application that can be easily integrated with any GitHub repository. MaintainoMATE is capable of: (1) categorizing an issue-report into its respective category, and (2) assigning an issue-report to the relevant developer. MaintainoMATE is fully capable of automatically assigning labels to issue-reports to identify the nature of these reports as bugs, enhancements, or questions, with state-of-the-art results near an F1-score of 80%. We have also presented initial results from our transformer-based model for the automatic bug assignment task, i.e., a 54% F1-score, and have discussed our vision towards improving these results. The goal of this work is to promote the real-life adoption of automated software maintenance models in software development projects. MaintainoMATE would help to improve overall software quality by allowing efficient resource allocation while also lowering maintenance costs by automating various tasks where manual effort is required. Future research would be directed towards the improvement of the current modules as well as incorporating other automation techniques to facilitate software maintenance, such as report generation and real-time analytics of the project using a dashboard.
|
2305.19973 | A Suggested Final Configuration for the Very Large Array | If the construction of the ngVLA begins in 2026, its sensitivity is expected
to match that of the VLA by late 2029. At that juncture it is anticipated that
open-skies observing will cease on the VLA and commence on the ngVLA. We
suggest that during 2026-2029 the VLA be held in a customized final
configuration encompassing portions of its standard A, B, C and D
configurations. Such a final VLA configuration would (1) help minimize the cost
of VLA operations and maximize the pace of ngVLA construction and
commissioning; (2) help VLA users pivot to the high-resolution, high-frequency
research topics that are projected to headline the ngVLA science program; and
(3) help mitigate the effects of source confusion during responses to
transients in the era of the Rubin Observatory and LIGO A+. | J. M. Wrobel, R. C. Walker | 2023-05-31T16:01:32Z | http://arxiv.org/abs/2305.19973v1 | # A Suggested Final Configuration for the Very Large Array
###### Abstract
If the construction of the ngVLA begins in 2026, its sensitivity is expected to match that of the VLA by late 2029. At that juncture it is anticipated that open-skies observing will cease on the VLA and commence on the ngVLA. We suggest that during 2026-2029 the VLA be held in a customized final configuration encompassing portions of its standard A, B, C and D configurations. Such a final VLA configuration would (1) help minimize the cost of VLA operations and maximize the pace of ngVLA construction and commissioning; (2) help VLA users pivot to the high-resolution, high-frequency research topics that are projected to headline the ngVLA science program; and (3) help mitigate the effects of source confusion during responses to transients in the era of the Rubin Observatory and LIGO A+.
Interferometry (808)
J. M. Wrobel, R. C. Walker
## 1 Context and Motivation
The next-generation VLA (ngVLA) is envisaged to be an interferometric array operating at frequencies between 1.2 and 116 GHz, with ten times the sensitivity and angular resolution of the VLA and ALMA (Murphy et al., 2018; Selina et al., 2018; McKinnon et al., 2019). If the construction of the ngVLA begins in 2026, its sensitivity is expected to approximately match that of the VLA by late 2029.1 At that juncture it is expected that PI-driven, open-skies observing will cease on the VLA and commence on the ngVLA as Early Science (Ford et al., 2019). The NRAO has begun working with the community to identify and evaluate possible options for such a transition.2 In the interim we are guided by some draft concepts mentioned by Chandler et al. (2019), notably the possibility that the VLA continue to operate at a reduced level during 2026-2029.
Footnote 1: [https://ngvla.nrao.edu/](https://ngvla.nrao.edu/)
Footnote 2: [https://science.nrao.edu/enews/15.5/](https://science.nrao.edu/enews/15.5/)
Here, we explore a hypothetical reduction in one capability of the VLA, namely its reconfigurability. Specifically, we suggest that the VLA be held in a customized final configuration encompassing portions of its standard A, B, C and D configurations. Such a final VLA configuration would (1) help minimize the cost of VLA operations and maximize the pace of ngVLA construction and commissioning, by freeing staff from VLA reconfiguration activities; (2) help VLA users pivot to the high-resolution, high-frequency research topics that are projected to headline the ngVLA science program (Murphy et al., 2018; Wrobel and Murphy, 2020; Wrobel et al., 2020); and (3) help mitigate the effects of host-galaxy and cosmological confusion during responses to transients in the era of the Rubin Observatory and LIGO A+ (Ivezic et al., 2019; Reitze et al., 2019).
## 2 Approach and Results
For each of its standard A, B, C and D configurations, the VLA offers a power-law spacing of the nine antennas placed along each of its three equiangular arms (Thompson et al., 1980; Napier et al., 1983). The distance \(d_{n}\) from the center of the array of the \(n^{th}\) antenna per arm, counting outward from the center, is proportional to \(n^{1.716}\). The different standard configurations have different proportionality constants. The values chosen for those constants offer two advantages. First, it means that some antenna pads can be shared among the configurations, so only 24 pads per arm are needed to accommodate all standard configurations. Those pad locations are shown in Figure 1. Below, we will make use of each arm's 24 standard pad identifiers \(p\) that span 1 to 72 with
gaps (see Table 1 in Thompson et al., 1980). Second, the scaling between the standard configurations resembles that between three of the VLA's original observing bands, facilitating imaging at matched angular resolutions among those bands. With the advent of complete frequency coverage between 1 and 50 GHz on the VLA (Perley et al., 2011) and robust data-weighting schemes (Briggs, 1995), this second advantage has become less significant.
We suggest that during 2026-2029 the VLA be held in a customized final configuration encompassing portions of all its standard configurations. We opt to avoid populating the two innermost pads per arm, \(p=1\) and \(p=2\), as such short spacing information can be obtained with single dish facilities. We also opt to avoid populating the outermost pad per arm, \(p=72\), as this will help reduce the operational burden. This leaves us with a set of 21 pad locations per arm that we wish to populate with nine antennas per arm. To do so, we take two steps.
First, we seek a power-law spacing of the nine antennas per arm, spread between the innermost antenna's \(d_{1}=89.9\) m on pad \(p=3\) and the outermost antenna's \(d_{9}=17160.8\) m on pad \(p=64\). These extremes define a power-law exponent \(\log_{10}(d_{9}/d_{1})/\log_{10}(9)=2.390\) and lead to the set of nine desired distances \(d_{n}^{desired}\) given in Table 1.
Second, for simplicity we invoke the VLA's power-law model for its pad locations per arm. We then search among pads \(p=4,...,56\) per arm to find the seven pads that come closest to achieving the desired distances \(d_{n}^{desired}\) for seven additional antennas. The seven closest-pad identifiers plus the two end-defining pads are given in Table 1, along with their closest-pad distances \(d_{n}\).
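As a quick numerical check of the first step, the short sketch below (illustrative only, not part of the original analysis) recomputes the power-law exponent and the nine desired distances \(d_{n}^{desired}\) from the two anchor distances quoted above; comparing these values against the catalogued pad distances is what yields the closest-pad selection of Table 1.

```python
import numpy as np

# Anchor distances quoted in the text: innermost antenna on pad p=3,
# outermost antenna on pad p=64 (both in metres).
d1, d9 = 89.9, 17160.8
n = np.arange(1, 10)  # nine antennas per arm

# Power-law exponent defined by the two extremes: log10(d9/d1)/log10(9) ~ 2.390
alpha = np.log10(d9 / d1) / np.log10(9)

# Desired distances d_n = d1 * n**alpha, spread between d1 and d9
d_desired = d1 * n**alpha

print(f"exponent = {alpha:.3f}")
for i, d in zip(n, d_desired):
    print(f"n = {i}:  d_desired = {d:9.1f} m")
```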
The closest-pad identifiers in Table 1 define our suggestion for the VLA's final configuration. Armed with those identifiers, we use SCHED to access their catalogued locations and generate \((u,v)\) coverage plots for short (0.2 hour) and long (8.0 hour) tracks, subject to an antenna elevation limit of 15 degrees. The \((u,v)\) coverage plots are shown on A-configuration scales in Figures 2 and 3, on B-configuration scales in Figures 4 and 5, on C-configuration scales in Figures 6 and 7, and on D-configuration scales in Figures 8 and 9. These figures indicate reasonable \((u,v)\) coverage on the standard scales long familiar to VLA users.
## 3 Summary and Next Steps
We explored a hypothetical reduction in one capability of the VLA, namely its reconfigurability, during the years of a VLA-to-ngVLA transition. We identified a power-law configuration for the VLA that involves portions of its standard A, B, C and D configurations. We suggested that the VLA be held in this final, customized configuration during the transition years, and mentioned some operational and scientific advantages of doing so.
A specific next step is to use simulations to study the performance parameters and image fidelity of our suggested final configuration for the VLA. We look forward to learning the community's reaction to our suggestion. We also look forward to learning about the alternate ideas that will emerge as the NRAO engages with community stakeholders to identify and evaluate possible options for the VLA/VLBA-to-ngVLA transition.
## Acknowledgments
We thank Joe Carilli for generating Figure 1. The NRAO is a facility of the National Science Foundation (NSF), operated under cooperative agreement by AUI. The ngVLA is a design and development project of the NSF operated under cooperative agreement by AUI.
|
2309.04349 | Suppression of Chemotactic Blowup by Strong Buoyancy in
Stokes-Boussinesq Flow with Cold Boundary | In this paper, we show that the Keller-Segel equation equipped with zero
Dirichlet Boundary condition and actively coupled to a Stokes-Boussinesq flow
is globally well-posed provided that the coupling is sufficiently large. We
will in fact show that the dynamics is quenched after certain time. In
particular, such active coupling is blowup-suppressing in the sense that it
enforces global regularity for some initial data leading to a finite-time
singularity when the flow is absent. | Zhongtian Hu, Alexander Kiselev | 2023-09-08T14:21:14Z | http://arxiv.org/abs/2309.04349v2 | # Suppression of Chemotactic Blowup by Strong Buoyancy in Stokes-Boussinesq Flow with Cold Boundary
###### Abstract
In this paper, we show that the Keller-Segel equation equipped with zero Dirichlet Boundary condition and actively coupled to a Stokes-Boussinesq flow is globally well-posed provided that the coupling is sufficiently large. We will in fact show that the dynamics is quenched after certain time. In particular, such active coupling is blowup-suppressing in the sense that it enforces global regularity for some initial data leading to a finite-time singularity when the flow is absent.
## 1 Introduction
The Keller-Segel equation is a well-known model of chemotaxis [18, 24]. It describes a population of bacteria or slime mold that move in response to an attractive external chemical that they themselves secrete. The equation has interesting analytical properties: its solutions can form mass concentration singularities in dimension greater than one (see e.g. [23], where further references can be found). Often, chemotactic processes take place in ambient fluid. One natural question is then how the presence of fluid flow can affect singularity formation. In the case where the ambient flow is passive - prescribed and independent of the bacteria density - it has been shown that the presence of the flow can suppress singularity formation. The flows that have been analyzed include some flows with strong mixing properties [19], shear flows [3], hyperbolic splitting flow [13], and some cellular flows [17]. In a similar vein, [8] explored advection-induced regularity for the Kuramoto-Sivashinsky equation.
The case where the fluid flow is active - satisfying some fluid equation driven by a force exerted by the bacteria - is very interesting but harder to analyze. There have been many impressive works that analyzed such coupled systems, usually via buoyancy force; see for example [9, 10, 21, 20, 22, 25, 5, 11, 27, 26], where further references can be found. In some cases, results involving global existence of regular solutions (the precise notion of regularity differs between papers) have been proved. These results, however, apply either in settings where the initial data satisfy some smallness assumptions (e.g. [10, 22, 5]) or in systems where neither the fluid nor the chemotaxis equation can form a singularity when uncoupled (e.g. [25, 27, 26]). Recently, in [14] and [28], the authors analyzed the Patlak-Keller-Segel equation coupled to the Navier-Stokes equation near Couette flow. Based on ideas of blowup suppression in shear flows and stability of the Couette flow, the authors proved that global regularity can be enforced if the amplitude of the Couette flow is sufficiently large and if the initial flow is very close to it. The density/fluid coupling in these works
is not by buoyancy force but instead involves a model of the swimmer's effect on fluid that leads to special algebraic properties of the system.
In the recent joint work of the authors with Yao [15], the two-dimensional Keller-Segel equation coupled with the incompressible porous media equation via buoyancy force has been analyzed. It has been proved that in this case, an arbitrarily weak coupling constant (i.e., gravity) completely regularizes the system, and the solutions become globally regular for any reasonable initial data. At the heart of the proof is the analysis of the potential energy, whose time derivative includes a coercive "main term" \(\|\partial_{x_{1}}\rho\|^{2}_{H_{0}^{-1}}\) (where \(\rho\) is the bacteria density). Essentially, this \(H_{0}^{-1}\) norm has to become small, and intuitively this implies mixing in the \(x_{1}\) direction. Hence the solution becomes in some sense quasi-one-dimensional, and this arrests singularity formation.
Our goal in this paper is to analyze the Keller-Segel equation in an arbitrary smooth domain in dimensions two and three coupled to the Stokes flow via buoyancy force:
\[\begin{cases}\partial_{t}\rho+u\cdot\nabla\rho-\Delta\rho+\operatorname{div}( \rho\nabla(-\Delta)^{-1}\rho)=0,&x\in\Omega,\\ \partial_{t}u-\Delta u+\nabla p=g\rho e_{z},\;\operatorname{div}u=0,&x\in \Omega,\\ u(0,x)=u_{0}(x),\;\rho(0,x)=\rho_{0}(x),\;\rho_{0}(x)\geq 0,&\\ u|_{\partial\Omega}=0,\;\rho|_{\partial\Omega}=0.&\end{cases} \tag{1.1}\]
Here \(\Omega\) is a smooth, compact domain in \(\mathbb{R}^{d}\), \(d=2\) or \(3\). \(e_{z}\) denotes the unit vector \((0,1)\) when \(d=2\) or \((0,0,1)\) when \(d=3\). \(g\in\mathbb{R}^{+}\) is the Rayleigh number representing the buoyancy strength. Moreover, the operator \((-\Delta)^{-1}\) denotes the inverse homogeneous Dirichlet Laplacian corresponding to the domain \(\Omega\). In the case of the Stokes flow, the fluid velocity is more regular, but the equation includes a time derivative, which complicates matters, partly due to the loss of a "Biot-Savart law" relating \(\rho\) and \(u\) directly. We are unable to prove global regularity for all \(g\), and we are not sure if it is true. Our main result is global regularity for strong buoyancy. The proof is completely different from [15]: it relies on softer arguments and the analysis of the large buoyancy limit.
The first part of this paper addresses the local well-posedness of regular solutions to (1.1). Before we make precise the notion of a _regular solution_, we shall first introduce the following useful function spaces: to study the regularity properties of \(\rho\), we consider
\[H_{0}^{1} :=\text{completion of }C_{c}^{\infty}(\Omega)\text{ with respect to }H^{1}\text{ norm},\] \[H_{0}^{-1} :=\text{dual space of }H_{0}^{1}.\]
Moreover, we use the traditional notation \(W^{k,p}(\Omega)\) to denote Sobolev spaces equipped with norm \(\|\cdot\|_{k,p}\) in domain \(\Omega\). If \(p=2\), we in particular write \(H^{s}(\Omega)=W^{s,2}(\Omega)\) equipped with norm \(\|\cdot\|_{s}\). We will write \(W^{k,p}\) (or \(H^{s}\)) instead of \(W^{k,p}(\Omega)\) (or \(H^{s}(\Omega)\)) for simplicity if there is no confusion over the domain involved. We also say an \(n\)-vector field \(v=(v_{i})_{i}\in H^{s}\) if \(v_{i}\in H^{s}\) for \(i=1,\ldots,n\).
As we also need to work with Stokes equation, it is standard to introduce the following spaces:
\[C_{c,\sigma}^{\infty}:=\{u\in C_{c}^{\infty}(\Omega)\;|\;\operatorname{div}u= 0\},\]
\[H:=\text{completion of }C_{c,\sigma}^{\infty}\text{ with respect to }L^{2}\text{ norm},\]
\[V:=H\cap H_{0}^{1}(\Omega),\;V^{*}:=\text{dual space of }V,\]
where \(V\) is equipped with \(H_{0}^{1}\) norm, and \(V^{*}\) is equipped with the standard dual norm. We also recall the following useful operators: the Leray projector \(\mathbb{P}:L^{2}\to H\) and the Stokes operator \(\mathcal{A}:=-\mathbb{P}\Delta:D(\mathcal{A})=H^{2}\cap V\to H\). We refer the readers to [6] for a more thorough treatment
of such operators. As a common practice in the study of Stokes equation, one may equivalently rewrite the fluid equation as:
\[\partial_{t}u+\mathcal{A}u=g\mathbb{P}(\rho e_{z}), \tag{1.2}\]
We will often use this formulation in regularity estimates for the rest of this work.
Now we give a rigorous definition of a _regular solution_ to (1.1).
**Definition 1.1**.: _Given initial data \(\rho_{0}\in H^{1}_{0}\), \(u_{0}\in V\), and some \(T>0\), we say the pair \((\rho(t,x),u(t,x))\) is a regular solution to (1.1) on \([0,T]\) if_
\[\rho\in C^{0}([0,T];H^{1}_{0})\cap L^{2}((0,T);H^{2}\cap H^{1}_{0}),\,u\in C^{ 0}([0,T];V)\cap L^{2}((0,T);H^{2}\cap V),\]
\[\partial_{t}\rho\in C^{0}([0,T];H^{-1}_{0}),\,\partial_{t}u\in C^{0}([0,T];V^{ *}),\]
\[\rho\in C^{\infty}((0,T]\times\Omega),\,u\in C^{\infty}((0,T]\times\Omega).\]
With this definition, we are able to obtain the following well-posedness result:
**Theorem 1.1**.: _Given initial data \(\rho_{0}\in H^{1}_{0}\), \(u_{0}\in V\), there exists a \(T_{*}=T_{*}(\rho_{0})>0\) such that there exists a unique regular solution \((\rho,u)\) to problem (1.1) on \([0,T_{*}]\)._
We will then prove a regularity criterion which allows us to continue the regular solution of (1.1) as long as the \(L^{\frac{4}{4-d}}_{t}L^{2}_{x}\) norm of \(\rho\) is controlled. More precisely, we have
**Theorem 1.2**.: _Let \(\Omega\subset\mathbb{R}^{d}\), \(d=2,3\), be a smooth, bounded domain. If the maximal lifespan \(T_{0}\) of the regular solution \((\rho,u)\) to problem (1.1) is finite, then necessarily_
\[\lim_{t\nearrow T_{0}}\int_{0}^{t}\|\rho\|^{\frac{4}{4-d}}_{L^{2}}ds=\infty.\]
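In other words, for \(d=2\) the criterion controls the \(L^{2}_{t}L^{2}_{x}\) norm of \(\rho\), while for \(d=3\) it controls the \(L^{4}_{t}L^{2}_{x}\) norm.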
A similar result was proved in [19] in the periodic setting for the uncoupled Keller-Segel equation.
In the second part of this work, we will quantify the quenching effect of the Stokes-Boussinesq flow with strong buoyancy on the Keller-Segel equation equipped with homogeneous Dirichlet boundary condition. To be more precise, we show that the flow can suppress the norm \(\|\rho\|_{L^{2}}\) to be sufficiently small within the time scale of local existence. In particular, we will show the following main result of this work:
**Theorem 1.3**.: _For any smooth, bounded domain \(\Omega\subset\mathbb{R}^{d}\), \(d=2,3\), and arbitrary initial data \(\rho_{0}\in H^{1}_{0}\), \(u_{0}\in V\), there exists \(g_{*}=g_{*}(\rho_{0},u_{0})\) so that for any \(g\geq g_{*}\), (1.1) admits a regular, global-in-time solution. In particular, \(\rho\) is quenched exponentially fast in the sense that_
\[\lim_{t\to\infty}e^{ct}\|\rho(t)\|_{L^{2}}\leq C, \tag{1.3}\]
_where \(c\), \(C\) are positive constants that only depend on the domain \(\Omega\)._
We observe that if we fix any smooth passive divergence-free \(u\) satisfying the no-flux \(u\cdot n=0\) boundary condition, then one can find smooth initial data \(\rho_{0}\) such that the solution of the first equation in (1.1) will lead to finite time blow up. The argument proving this is very similar to that of Theorem 8.1 in [19] for the case of \(\mathbb{T}^{2}\); however the localization used in the proof makes it insensitive to the boundary condition.
We will use the expression \(f\lesssim g\) to denote the following: there exists some constant \(C\) only depending on domain \(\Omega\) such that \(f\leq Cg.\) In particular, we will denote a generic constant depending only on \(\Omega\) by \(C\), and it could change from line to line. Finally, we will use the Einstein summation convention. That is, by default we sum over the repeated indices; e.g. we write \(a_{i}x_{i}:=\sum_{i}a_{i}x_{i}\).
**Acknowledgment.** The authors acknowledge partial support of the NSF-DMS grants 2006372 and 2306726.
## 2 Local Well-Posedness of Regular Solution
In this section, we will establish the local well-posedness of problem (1.1), namely Theorem 1.1. It is well-known that the classical parabolic-elliptic Keller-Segel equation is locally well-posed in domains such as \(\mathbb{R}^{d}\) or \(\mathbb{T}^{d}\), \(d=2,3\), or in a smooth, bounded domain with Neumann boundary condition on \(\rho\) in suitable function spaces (see e.g. [4, 19, 25]). However, we were unable to locate a convenient reference for a well-posedness theorem in the scale of Sobolev spaces \(H^{s}\) in the scenario of (1.1). Thus, for the sake of completeness, we will give explicit _a priori_ estimates which lead to local well-posedness.
We first set up an appropriate Galerkin scheme that uses two sets of bases in Subsection 2.1. In Subsection 2.2, we start with a set of lower order _a priori_ energy estimates which guarantee spatial regularity of a solution up to \(H^{2}\). In Subsection 2.3, we will prove the existence of regular solutions by devising an inductive argument that boosts both temporal and spatial regularity up to \(H^{s}\) for arbitrary \(s\) using parabolic smoothing. In Subsection 2.4, we will complete the proof of Theorem 1.1 by showing the uniqueness of regular solutions. Finally, we will demonstrate an \(L^{2}\) regularity criterion (i.e. Theorem 1.2) in Subsection 2.5. It will be instrumental in establishing the global well-posedness of (1.1).
**Remark 2.1**.: _We will only discuss the case when \(d=3\). The \(d=2\) case follows from similar (and easier) arguments._
### Galerkin Approximation
Since (1.1) is a system of semilinear parabolic equations in a compact domain, it is convenient to construct a solution to (1.1) by Galerkin approximation. Let \(\{v_{k}\}_{k}\), \(\{\lambda_{k}\}_{k}\) be the eigenfunctions and eigenvalues of \(-\Delta\). Let \(\{w_{j}\}_{j}\), \(\{\eta_{j}\}_{j}\) be the eigenfunctions and eigenvalues of the Stokes operator \(\mathcal{A}\). Consider the following approximate system:
\[\begin{cases}\partial_{t}\rho^{(n)}+\mathbb{Q}_{n}(u^{(n)}\cdot\nabla\rho^{(n) })-\Delta\rho^{(n)}+\mathbb{Q}_{n}(\operatorname{div}(\rho^{(n)}\nabla(- \Delta)^{-1}\rho^{(n)}))=0,\\ \partial_{t}u^{(n)}+\mathcal{A}u^{(n)}=g\mathbb{P}_{n}(\rho^{(n)}e_{z}),\\ \rho^{(n)}(0)=\mathbb{Q}_{n}\rho_{0},\;u^{(n)}(0)=\mathbb{P}_{n}u_{0},\end{cases} \tag{2.1}\]
where \(\mathbb{Q}_{n}f:=(f,v_{k})_{L^{2}}v_{k}\), \(\mathbb{P}_{n}f:=(f,w_{j})_{L^{2}}w_{j}\). Here \((\cdot,\cdot)_{L^{2}}\) denotes the standard \(L^{2}\)-inner product. Note that the projection operators \(\mathbb{P}_{n},\mathbb{Q}_{n}\) are symmetric with respect to \(L^{2}\) inner product. Writing the approximated solutions \(\rho^{(n)}(t,x)=\rho^{(n)}_{k}(t)v_{k}(x)\), \(u^{(n)}(t,x)=u^{(n)}_{j}(t)w_{j}(x)\) (recall that we are summing over repeated indices), we obtain the following constant-coefficient ODEs in \(t\): for \(l=1,\ldots,n\),
\[\begin{cases}\frac{d}{dt}\rho^{(n)}_{l}+C^{(n)}_{ljk}u^{(n)}_{j}\rho^{(n)}_{k }+\lambda_{l}\rho^{(n)}_{l}-D^{(n)}_{ljk}\rho^{(n)}_{k}\rho^{(n)}_{j}=0,\\ \frac{d}{dt}u^{(n)}_{l}+\eta_{l}u^{(n)}_{l}=gC_{kl}\rho^{(n)}_{k}e_{z},\\ \rho^{(n)}_{l}(0)=(\rho_{0},v_{l})_{L^{2}},\;u^{(n)}_{l}(0)=(u_{0},w_{l})_{L^{ 2}},\end{cases} \tag{2.2}\]
where
\[C^{(n)}_{ljk}:=(\mathbb{Q}_{n}(w_{j}\cdot\nabla v_{k}),v_{l})_{L^{2}},\;D^{(n)}_{ljk}:=(\mathbb{Q}_{n}(\operatorname{div}(v_{k}\nabla(-\Delta)^{-1}v_{j})),v_{l})_{L^{2}},\]
\[C_{kl}:=(\mathbb{P}v_{k},w_{l})_{L^{2}}.\]
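Although the construction below is purely analytical, (2.2) is just a finite-dimensional quadratic ODE system, and seeing its structure spelled out may be helpful. The following sketch is illustrative only: it assumes the coefficient tensors have been precomputed from the eigenfunctions (a nontrivial task on a general domain), and the buoyancy direction \(e_{z}\) is taken to be absorbed into the projection coefficients \(C_{kl}\).

```python
import numpy as np

def galerkin_rhs(rho, u, lam, eta, C, D, Cb, g):
    """Right-hand side of the truncated system (2.2).

    rho, u : (n,) arrays of Galerkin coefficients rho_l^{(n)}, u_l^{(n)}
    lam, eta : (n,) eigenvalues of -Delta and of the Stokes operator
    C, D : (n, n, n) coefficient tensors C_{ljk}^{(n)}, D_{ljk}^{(n)}
    Cb : (n, n) projection coefficients C_{kl}
    g : Rayleigh number (buoyancy strength)
    """
    # d/dt rho_l = -C_{ljk} u_j rho_k - lambda_l rho_l + D_{ljk} rho_k rho_j
    drho = (-np.einsum('ljk,j,k->l', C, u, rho)
            - lam * rho
            + np.einsum('ljk,k,j->l', D, rho, rho))
    # d/dt u_l = -eta_l u_l + g C_{kl} rho_k
    du = -eta * u + g * np.einsum('kl,k->l', Cb, rho)
    return drho, du
```

Feeding this right-hand side to any standard ODE solver then evolves the truncated system; the point of the estimates below is that the resulting bounds are uniform in the truncation parameter \(n\).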
To close the Galerkin approximation argument, we shall prove suitable uniform-in-\(n\) energy estimates for \((\rho^{(n)},u^{(n)})\) and pass to the limit using compactness theorems. For the sake of simplicity, we shall prove such energy estimates in an _a priori_ fashion, for sufficiently regular solutions of the original system (1.1). One could verify that all estimates below can be carried over to the approximated solutions \((\rho^{(n)},u^{(n)})\) in a straightforward manner.
### Lower Order _a priori_ Estimates
Given initial data \(\rho_{0}\in H^{1}_{0},u_{0}\in V\), we first show the following \(L^{\infty}_{t}L^{2}_{x}\) and \(L^{2}_{t}H^{1}_{x}\) estimates for a regular solution \((\rho,u)\):
**Proposition 2.1**.: _Given initial data \(\rho_{0}\in H^{1}_{0},u_{0}\in V\), we assume \((\rho,u)\) is a regular solution to (1.1) on \([0,T]\) for some \(T>0\). Then for \(t\in[0,T]\), we have_
\[\frac{d}{dt}\|\rho\|^{2}_{L^{2}}+\|\nabla\rho\|^{2}_{L^{2}}\lesssim\|\rho\|^{6 }_{L^{2}},\;\frac{1}{2}\frac{d}{dt}\|u\|^{2}_{L^{2}}+\|\nabla u\|^{2}_{L^{2}} \leq g\|u\|_{L^{2}}\|\rho\|_{L^{2}}. \tag{2.3}\]
_Moreover, there exists \(T_{*}\in(0,1]\) only depending on \(\rho_{0}\), and a constant \(C(u_{0},\rho_{0},g)>0\) such that_
\[\sup_{t\in[0,T_{*}]}\|\rho(t)\|^{2}_{L^{2}}+\int_{0}^{T_{*}}\|\nabla\rho(t)\|^ {2}_{L^{2}}ds\leq 4\|\rho_{0}\|^{2}_{L^{2}}. \tag{2.4}\]
\[\sup_{t\in[0,T_{*}]}\|u(t)\|^{2}_{L^{2}}+\int_{0}^{T_{*}}\|\nabla u(t)\|^{2}_{ L^{2}}ds\leq C(\|\rho_{0}\|_{L^{2}},\|u_{0}\|_{L^{2}})(g^{2}+1). \tag{2.5}\]
Proof.: First by testing the \(\rho\)-equation of (1.1) by \(\rho\) and integrating by parts, we have
\[\frac{1}{2}\frac{d}{dt}\|\rho\|^{2}_{L^{2}}+\|\nabla\rho\|^{2}_{L^{2}}=\frac{ 1}{2}\int_{\Omega}\rho^{3}dx\leq C\|\rho\|^{3/2}_{L^{2}}\|\nabla\rho\|^{3/2}_{ L^{2}}\leq\frac{1}{2}\|\nabla\rho\|^{2}_{L^{2}}+C\|\rho\|^{6}_{L^{2}},\]
where we used the following standard Gagliardo-Nirenberg inequality in 3D for trace-free \(f\):
\[\|f\|^{3}_{L^{3}}\leq C\|f\|^{3/2}_{L^{2}}\|\nabla f\|^{3/2}_{L^{2}}.\]
After rearranging, we obtain the first inequality of (2.3). Similarly, we test the \(u\)-equation of (1.1) by \(u\). After integration by parts, we have
\[\frac{1}{2}\frac{d}{dt}\|u\|^{2}_{L^{2}}+\|\nabla u\|^{2}_{L^{2}}=g\int_{ \Omega}u\cdot(\rho e_{z})dx\leq g\|u\|_{L^{2}}\|\rho\|_{L^{2}}, \tag{2.6}\]
which proves the second inequality in (2.3). Then, the estimate (2.4) follows immediately from applying Gronwall inequality to (2.3) and choosing \(T_{*}=T_{*}(\rho_{0})\leq 1\) sufficiently small. Now integrating (2.6) from \(0\) to \(t\in(0,T_{*})\), using (2.4), and taking supremum over \(t\), we have
\[\sup_{t\in[0,T_{*}]}\|u(t)\|_{L^{2}}\leq 8g\|\rho_{0}\|_{L^{2}}T_{*}+\|u_{0}\|_ {L^{2}}. \tag{2.7}\]
Using (2.4) and (2.7) in the integrated in time version of (2.6), we obtain that
\[\int_{0}^{T_{*}}\|\nabla u(s)\|^{2}_{L^{2}}ds\leq\|u_{0}\|^{2}_{L^{2}}+4gT_{*} \|\rho_{0}\|_{L^{2}}(8g\|\rho_{0}\|_{L^{2}}T_{*}+\|u_{0}\|_{L^{2}}) \tag{2.8}\]
The proof of (2.5) is finished after we combine (2.7) and (2.8).
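For concreteness, the Gronwall step used for (2.4) can be made explicit as follows (this is one admissible, non-optimized choice of constants, with \(C\) the implicit constant in the first inequality of (2.3)): writing \(X(t)=\|\rho(t)\|_{L^{2}}^{2}\) and dropping the nonnegative dissipation term,

\[X^{\prime}(t)\leq CX(t)^{3}\implies\frac{d}{dt}X(t)^{-2}\geq-2C\implies X(t)\leq\|\rho_{0}\|_{L^{2}}^{2}\left(1-2C\|\rho_{0}\|_{L^{2}}^{4}t\right)^{-1/2},\]

so choosing, for instance, \(T_{*}=\min\{1,(8C\|\rho_{0}\|_{L^{2}}^{4})^{-1}\}\) keeps \(X(t)\leq\frac{2}{\sqrt{3}}\|\rho_{0}\|_{L^{2}}^{2}\) on \([0,T_{*}]\); integrating the first inequality of (2.3) in time then yields (2.4).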
**Remark 2.2**.: _From now on, any appearance of \(T_{*}\) refers to the time \(T_{*}\) chosen in Proposition 2.1._
With Proposition 2.1, we will derive the following upgraded temporal and spatial regularity estimates for solution \((\rho,u)\) within the time interval \([0,T_{*}]\).
**Proposition 2.2**.: _Assuming \((\rho,u)\) to be a regular solution to (1.1) with initial data \(\rho_{0}\in H^{1}_{0},u_{0}\in V\), there exists \(C(\rho_{0},u_{0},g)>0\) such that_
\[\int_{0}^{T_{*}}\left(\|\rho(t)\|_{2}^{2}+\|u(t)\|_{2}^{2}+\|\partial _{t}\rho(t)\|_{L^{2}}^{2}+\|\partial_{t}u(t)\|_{L^{2}}^{2}\right)dt\] \[+\sup_{t\in[0,T_{*}]}(\|\rho(t)\|_{1}^{2}+\|u(t)\|_{1}^{2})\leq C( \rho_{0},u_{0},g).\]
Proof.: Testing the \(\rho\)-equation in (1.1) by \(-\Delta\rho\) and integrating by parts, we obtain:
\[\frac{1}{2}\frac{d}{dt}\|\nabla\rho\|_{L^{2}}^{2}+\|\Delta\rho\|_{L^{2}}^{2}= \int_{\Omega}\Delta\rho(u\cdot\nabla\rho)+\int_{\Omega}\Delta\rho\operatorname {div}(\rho\nabla(-\Delta)^{-1}\rho)=I+J.\]
Let us fix \(\epsilon>0\). Using Sobolev embedding, Poincare inequality, and Young's inequality with \(\epsilon\), we can estimate \(I\) by:
\[I\leq\|\Delta\rho\|_{L^{2}}\|\nabla\rho\|_{L^{2}}\|u\|_{L^{\infty}}\leq \epsilon\|\Delta\rho\|_{L^{2}}^{2}+C(\epsilon)\|u\|_{2}^{2}\|\nabla\rho\|_{L^{ 2}}^{2}.\]
Moreover, we can write \(J\) as:
\[J=\int_{\Omega}\Delta\rho\left(\nabla\rho\cdot\nabla(-\Delta)^{-1}\rho-\rho^{ 2}\right)dx=J_{1}+J_{2}.\]
Using the standard elliptic estimate and Gagliardo-Nirenberg-Sobolev inequality, we can estimate \(J_{1}\) by:
\[J_{1} \leq\|\Delta\rho\|_{L^{2}}\|\nabla\rho\|_{L^{3}}\|\nabla(-\Delta )^{-1}\rho\|_{L^{6}}\lesssim\|\Delta\rho\|_{L^{2}}\|\nabla\rho\|_{L^{3}}\| \nabla(-\Delta)^{-1}\rho\|_{1}\] \[\lesssim\|\Delta\rho\|_{L^{2}}\|\nabla\rho\|_{L^{2}}^{1/2}\| \nabla\rho\|_{1}^{1/2}\|\rho\|_{L^{2}}\lesssim\|\rho\|_{2}^{3/2}\|\nabla\rho \|_{L^{2}}^{1/2}\|\rho\|_{L^{2}}\] \[\leq\epsilon\|\Delta\rho\|_{L^{2}}^{2}+C(\epsilon)\|\nabla\rho\|_ {L^{2}}^{2}\|\rho\|_{L^{2}}^{4},\]
where we also used Young's inequality in the final step.
We are going to use the following Gagliardo-Nirenberg inequalities: in dimension three,
\[\|\rho\|_{L^{4}}\lesssim\|\Delta\rho\|_{L^{2}}^{3/8}\|\rho\|_{L^{2}}^{5/8};\ \| \rho\|_{L^{4}}\lesssim\|\rho\|_{1}^{3/4}\|\rho\|_{L^{2}}^{1/4}.\]
Then we can estimate \(J_{2}\) as follows:
\[J_{2} \leq\|\Delta\rho\|_{L^{2}}\|\rho\|_{L^{4}}^{2}\leq C\|\Delta \rho\|_{L^{2}}\|\Delta\rho\|_{L^{2}}^{1/2}\|\rho\|_{L^{2}}^{5/6}\|\rho\|_{1}^{ 1/2}\|\rho\|_{L^{2}}^{1/6}\] \[=C\|\Delta\rho\|_{L^{2}}^{3/2}\|\rho\|_{1}^{1/2}\|\rho\|_{L^{2}} \leq\epsilon\|\Delta\rho\|_{L^{2}}^{2}+C(\epsilon)\|\nabla\rho\|_{L^{2}}^{2} \|\rho\|_{L^{2}}^{4},\]
Collecting the estimates above and choosing \(\epsilon\) to be sufficiently small, we obtain the following:
\[\frac{d}{dt}\|\nabla\rho\|_{L^{2}}^{2}+\|\Delta\rho\|_{L^{2}}^{2}\lesssim\left( \|\rho\|_{L^{2}}^{4}+\|u\|_{2}^{2}\right)\|\nabla\rho\|_{L^{2}}^{2} \tag{2.9}\]
On the other hand, we test (1.2) by \(\mathcal{A}u\). Integrating by parts, we have
\[\frac{1}{2}\frac{d}{dt}\|\nabla u\|_{L^{2}}^{2}+\|\mathcal{A}u\|_{L^{2}}^{2}=g \int_{\Omega}\mathcal{A}u\cdot\rho e_{z}\leq\frac{1}{2}\|\mathcal{A}u\|_{L^{2} }^{2}+\frac{g^{2}}{2}\|\rho\|_{L^{2}}^{2}\]
Rearranging the above and using Theorem A.1, we conclude that,
\[\frac{d}{dt}\|\nabla u\|_{L^{2}}^{2}+\|u\|_{2}^{2}\leq g^{2}\|\rho\|_{L^{2}}^{ 2}\leq 4g^{2}\|\rho_{0}\|_{L^{2}}^{2},\ t\in[0,T_{*}], \tag{2.10}\]
where the last inequality is due to Proposition 2.1. Integrating (2.10) from \(0\) to \(t\), \(t\leq T_{*}\) and then taking supremum of \(t\) on \([0,T_{*}]\), we obtain
\[\sup_{t\in[0,T_{*}]}\|\nabla u(t)\|_{L^{2}}^{2}\leq 4g^{2}\|\rho_{0}\|_{L^{2}}^{2 }T_{*}+\|\nabla u_{0}\|_{L^{2}}^{2};\]
in addition,
\[\int_{0}^{T_{*}}\|u(t)\|_{2}^{2}\leq 4g^{2}\|\rho_{0}\|_{L^{2}}^{2}T_{*}+\| \nabla u_{0}\|_{L^{2}}^{2}. \tag{2.11}\]
It follows that
\[\sup_{t\in[0,T_{*}]}\|u(t)\|_{1}^{2}+\int_{0}^{T_{*}}\|u(t)\|_{2}^{2}dt\leq C( u_{0},\rho_{0},g).\]
Integrating (2.9) and using (2.11), we have that for all \(t\in[0,T_{*}]\),
\[\|\nabla\rho(t)\|_{L^{2}}^{2}\lesssim\|\rho_{0}\|_{1}^{2}\exp\left(\int_{0}^{T_{*}}(\|\rho\|_{L^{2}}^{4}+\|u\|_{2}^{2})ds\right)\leq\|\rho_{0}\|_{1}^{2}\exp\left(C(\rho_{0},g)T_{*}+\|u_{0}\|_{1}^{2}\right)<\infty.\]
Similarly to the case of \(u\), we can also use (2.9) to control \(\int_{0}^{T_{*}}\|\rho(t)\|_{2}^{2}dt\) as well, arriving at
\[\sup_{t\in[0,T_{*}]}\|\rho(t)\|_{1}^{2}+\int_{0}^{T_{*}}\|\rho(t)\|_{2}^{2} \leq C(u_{0},\rho_{0},g).\]
We have thus shown the spatial regularity of \(\rho\) and \(u\).
Finally, we shall obtain regularity estimates for the time derivatives. Using the equation (1.1), we see that
\[\partial_{t}\rho=-u\cdot\nabla\rho+\Delta\rho-\operatorname{div}(\rho\nabla( -\Delta)^{-1}\rho)\;\;\text{and}\;\;\partial_{t}u=-\mathcal{A}u+g\mathbb{P}( \rho e_{z}).\]
Using standard Sobolev embeddings and elliptic estimate, we have the following bounds:
\[\int_{0}^{T_{*}}\|u\cdot\nabla\rho(t)\|_{L^{2}}^{2}dt \leq\int_{0}^{T_{*}}\|u\|_{L^{6}}^{2}\|\nabla\rho\|_{L^{3}}^{2} dt\lesssim\sup_{t\in[0,T_{*}]}\|u(t)\|_{1}^{2}\int_{0}^{T_{*}}\|\rho(t)\|_{2}^{2}dt,\] \[\int_{0}^{T_{*}}\|\Delta\rho\|_{L^{2}}^{2}dt \leq\int_{0}^{T_{*}}\|\rho(t)\|_{2}^{2}dt,\] \[\int_{0}^{T_{*}}\|\operatorname{div}(\rho\nabla(-\Delta)^{-1}\rho )\|_{L^{2}}^{2}dt \lesssim\int_{0}^{T_{*}}\|\rho\|_{L^{4}}^{4}+\|\nabla\rho\cdot \nabla(-\Delta)^{-1}\rho\|_{L^{2}}^{2}dt\] \[\lesssim\sup_{t\in[0,T_{*}]}\|\rho(t)\|_{1}^{4}T_{*}+\sup_{t\in[ 0,T_{*}]}\|\rho(t)\|_{1}^{2}\int_{0}^{T_{*}}\|\rho(t)\|_{2}^{2}dt,\] \[\int_{0}^{T_{*}}\|\mathcal{A}u\|_{L^{2}}^{2}+g\|\mathbb{P}\rho\| _{L^{2}}^{2}dt \leq\int_{0}^{T_{*}}\|u\|_{2}^{2}+g\|\rho\|_{L^{2}}^{2}dt.\]
The above estimates and bounds we proved earlier imply that
\[\int_{0}^{T_{*}}\|\partial_{t}\rho\|_{L^{2}}^{2}dt+\int_{0}^{T_{*}}\|\partial _{t}u\|_{L^{2}}^{2}dt\leq C(u_{0},\rho_{0},g),\]
and the proof is thus complete.
With the regularity estimates above, we may construct solutions \((\rho,u)\) from \((\rho^{(n)},u^{(n)})\). The following standard compactness theorem is useful. We refer interested readers to Theorem IV.5.11 in [2] and Theorem 4 of Chapter 5 in [12] for related proofs.
**Theorem 2.1**.: _Let_
\[E_{1}:=\{\rho\in L^{2}((0,T);H^{2}),\;\partial_{t}\rho\in L^{2}( (0,T);L^{2})\},\] \[E_{2}:=\{u\in L^{2}((0,T);H^{2}\cap V),\;\partial_{t}u\in L^{2}( (0,T);H)\}\]
_for some \(T>0\). Then \(E_{1}\) is continuously embedded in \(C([0,T],H^{1})\), and \(E_{2}\) is continuously embedded in \(C([0,T],V)\)._
**Corollary 2.1**.: _Given initial data \(\rho_{0}\in H^{1}_{0}\), \(u_{0}\in V\), there exists a weak solution \((\rho,u)\) of the system (1.1) satisfying_
\[\rho\in C([0,T_{*}];H^{1}_{0})\cap L^{2}((0,T_{*});H^{2}\cap H^{1}_{0}),\,u\in C([0,T_{*}];V)\cap L^{2}((0,T_{*});H^{2}\cap V), \tag{2.12}\] \[\partial_{t}\rho\in C([0,T_{*}];H^{-1}_{0}),\,\partial_{t}u\in C([0,T_{*}];V^{*}). \tag{2.13}\]
Proof.: The uniform bounds in Proposition 2.2 inform us that there exist a subsequence of \(\{\rho^{(n)}\}_{n},\{u^{(n)}\}_{n}\) (which we still denote by \(\rho^{(n)},u^{(n)}\)) and limit functions \(\rho,u\) such that
1. \(\rho^{(n)}\rightharpoonup\rho\) weak-\(*\) in \(L^{\infty}((0,T_{*});H^{1}_{0})\), weakly in \(L^{2}((0,T_{*});H^{2}\cap H^{1}_{0})\); \(\partial_{t}\rho^{(n)}\rightharpoonup\partial_{t}\rho\) weakly in \(L^{2}((0,T_{*});L^{2})\),
2. \(u^{(n)}\rightharpoonup u\) weak-\(*\) in \(L^{\infty}((0,T_{*});V)\), weakly in \(L^{2}((0,T_{*});H^{2}\cap V)\); \(\partial_{t}u^{(n)}\rightharpoonup\partial_{t}u\) weakly in \(L^{2}((0,T_{*});H)\).
It is straightforward to check that the limits \(\rho\) and \(u\) satisfy (1.1) in the sense of distributions. Invoking Theorem 2.1, we have proved (2.12).
Now, we show \(\partial_{t}u\in C([0,T_{*}];V^{*})\). In view of (1.2), it suffices to show that \(-\mathcal{A}u+g\rho e_{z}\in C([0,T_{*}];V^{*})\). For simplicity, we show that the most singular term \(\mathcal{A}u\in C([0,T_{*}];V^{*})\), and the argument for \(g\rho e_{z}\) follows similarly. Choose \(t,s\in[0,T_{*}]\) and pick an arbitrary vector field \(\phi\in V\). Integrating by parts, we observe that
\[\int_{\Omega}(\mathcal{A}u(t,x)-\mathcal{A}u(s,x))\cdot\phi(x)dx =\int_{\Omega}\mathcal{A}^{1/2}(u(t,x)-u(s,x))\cdot\mathcal{A}^{ 1/2}\phi dx\] \[\leq\|u(t,\cdot)-u(s,\cdot)\|_{1}\|\phi\|_{1}.\]
By duality, we observe that
\[\|\mathcal{A}u(t,\cdot)-\mathcal{A}u(s,\cdot)\|_{V^{*}}\leq\|u(t,\cdot)-u(s, \cdot)\|_{1}\to 0\]
as \(t\to s\) due to \(u\in C([0,T_{*}];V)\). Thus, we have showed that \(\mathcal{A}u\in C([0,T_{*}];V^{*})\) and hence \(\partial_{t}u\in C([0,T_{*}];V^{*})\).
To show the needed regularity of \(\partial_{t}\rho\), it suffices to show that \(-u\cdot\nabla\rho+\Delta\rho-\operatorname{div}(\rho\nabla(-\Delta)^{-1}\rho) \in C([0,T_{*}];H^{-1}_{0})\). Similarly, we prove strong continuity for the most singular term \(u\cdot\nabla\rho\). The rest of the terms will follow from a similar argument. Let \(t,s\in[0,T_{*}]\). Picking \(\varphi\in H^{1}_{0}\) and integrating
by parts, we have
\[\int_{\Omega}(u(t,x)\cdot\nabla\rho(t,x)-u(s,x)\cdot\nabla\rho(s,x)) \varphi(x)dx\] \[\quad=\int_{\Omega}(u(t,x)-u(s,x))\cdot\nabla\rho(t,x)\varphi dx+ \int_{\Omega}u(s,\cdot)\cdot\nabla(\rho(t,x)-\rho(s,x))\varphi(x)dx\] \[\quad=\int_{\Omega}\operatorname{div}((u(t,x)-u(s,x))\rho(t,x)) \varphi dx+\int_{\Omega}\operatorname{div}(u(s,\cdot)(\rho(t,x)-\rho(s,x))) \varphi(x)dx\] \[\quad=-\int_{\Omega}((u(t,x)-u(s,x))\rho(t,x))\cdot\nabla\varphi dx -\int_{\Omega}(u(s,\cdot)(\rho(t,x)-\rho(s,x)))\cdot\nabla\varphi(x)dx.\]
The first term on RHS can be estimated by:
\[\int_{\Omega}((u(t,x)-u(s,x))\rho(t,x))\cdot\nabla\varphi dx \leq\|u(t,\cdot)-u(s,\cdot)\|_{L^{3}}\|\rho(t,\cdot)\|_{L^{6}}\| \varphi\|_{1}\] \[\lesssim\|u(t,\cdot)-u(s,\cdot)\|_{1}\|\rho(t,\cdot)\|_{1}\| \varphi\|_{1}\] \[\leq C(\rho_{0},u_{0},g)\|u(t,\cdot)-u(s,\cdot)\|_{1}\|\varphi\| _{1}.\]
Note that we used Sobolev embedding in the second inequality and the uniform bound of \(\rho\) in \(L^{\infty}((0,T_{*});H^{1}_{0})\) norm in the last inequality. Similarly, we can estimate the second term on RHS by:
\[\int_{\Omega}(u(s,\cdot)(\rho(t,x)-\rho(s,x)))\cdot\nabla\varphi(x)dx\leq C( \rho_{0},u_{0},g)\|\rho(t,\cdot)-\rho(s,\cdot)\|_{1}\|\varphi\|_{1}\]
thanks to \(u\in L^{\infty}((0,T);V)\). Combining the two estimates above and using duality, we conclude that
\[\|u(t,\cdot)\cdot\nabla\rho(t,\cdot)-u(s,\cdot)\cdot\nabla\rho(s,\cdot)\|_{ H^{-1}_{0}}\leq C(\rho_{0},u_{0},g)(\|u(t,\cdot)-u(s,\cdot)\|_{1}+\|\rho(t, \cdot)-\rho(s,\cdot)\|_{1})\to 0\]
as \(t\to s\) due to \(u\in C([0,T_{*}];V)\) and \(\rho\in C([0,T_{*}];H^{1}_{0})\). This verifies \(\partial_{t}\rho\in C([0,T_{*}];H^{-1}_{0})\), and we have proved (2.13).
### Higher Order _a priori_ Estimates
Our next task is to establish the smoothness of a solution \((\rho,u)\) for positive times, namely
\[\rho\in C^{\infty}((0,T_{*}]\times\Omega),\ u\in C^{\infty}((0,T_{*}]\times \Omega),\]
via energy estimates in higher order Sobolev norms. We would like to remark on the following caveat: with Dirichlet boundary condition imposed on both \(\rho\) and \(u\), one cannot obtain higher order Sobolev estimates by commuting the differential operator \(\partial^{s}\) with the equation, where \(\partial^{s}\) denotes a general \(s\)-th order spatial derivative. The main reason is that when we treat the dissipation term, integration by parts incurs a boundary term that is difficult to control. To remedy this issue, we commute time derivatives \(\partial^{k}_{t}\) through the equation. It is clear that no boundary terms are generated since \(\partial_{t}\) preserves Dirichlet boundary condition. By applying this strategy, we can improve regularity in time, after which spatial regularity can be upgraded using elliptic estimates.
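As a toy illustration of this strategy (not needed for the actual proof), consider the heat equation \(\partial_{t}\rho=\Delta\rho\) with \(\rho|_{\partial\Omega}=0\). Since the boundary condition is independent of time, \(\partial_{t}^{k}\rho|_{\partial\Omega}=0\) for every \(k\), so energy estimates for \(\partial_{t}^{k}\rho\) produce no boundary terms; once \(\partial_{t}^{k}\rho\) is controlled, rewriting the equation as \(-\Delta\partial_{t}^{k-1}\rho=-\partial_{t}^{k}\rho\) and applying elliptic regularity trades each time derivative for two spatial derivatives. The same mechanism, with lower-order nonlinear remainders, is what drives Step 2 of the proof of Proposition 2.3 below.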
Again, to obtain the claimed regularity we should proceed by the Galerkin scheme and perform the estimates in Proposition 2.3 for the approximated solutions. Since this step is similar to that in Corollary 2.1, we omit this tedious part and will proceed with only _a priori_ estimates as follows.
**Proposition 2.3**.: _Assume \((\rho,u)\) is a regular solution to problem (1.1) with initial condition \(\rho_{0}\in H^{1}_{0},u_{0}\in V\). Then the following bounds hold:_
\[t^{k}\left(\|\partial_{t}^{l}\rho(t,\cdot)\|_{1+k-2l}^{2}+\| \partial_{t}^{l}u(t,\cdot)\|_{1+k-2l}^{2}\right) \leq C(\rho_{0},u_{0},g,k), \tag{2.14}\] \[t^{k}\int_{t}^{T_{*}}\left(\|\partial_{t}^{l}\rho(\tau,\cdot)\|_ {2+k-2l}^{2}+\|\partial_{t}^{l}u(\tau,\cdot)\|_{2+k-2l}^{2}\right)d\tau \leq C(\rho_{0},u_{0},g,k), \tag{2.15}\]
_for any \(t\in(0,T_{*}]\), \(k\in\mathbb{N}\), \(0\leq l\leq\lfloor\frac{k+1}{2}\rfloor,\) where \(\lfloor\cdot\rfloor\) denotes the floor function._
Proof.: We prove the proposition by inducting on \(k\). Since \(k=0\) case is already proved by Proposition 2.2, we now assume that the statement holds up to index \(k-1\). We will discuss two cases based on the parity of \(k\). We also remind the readers that the constant \(C(\rho_{0},u_{0},g,k)\) might change from line to line.
1. \(k\) **is odd.** Let us write \(S=\frac{k+1}{2}\), and define the \(s\)-energy \[E_{s}(\tau)=\|\partial_{t}^{s}\rho(\tau,\cdot)\|_{L^{2}}^{2}+\|\partial_{t}^{ s}u(\tau,\cdot)\|_{L^{2}}^{2}\] for any \(0\leq s\leq S\). From now on, we fix arbitrary \(t\in(0,T_{*}]\). This case can be detailed into the following steps. **Step 1: show (2.14), (2.15) with \(l=S\).** Commuting \(\partial_{t}^{s}\) with (1.1) for \(0\leq s\leq S\), we obtain that \[\partial_{t}\partial_{t}^{s}\rho-\Delta\partial_{t}^{s}\rho+\sum_{r=0}^{s} \binom{s}{r}\bigg{[}(\partial_{t}^{r}u\cdot\nabla)\partial_{t}^{s-r}\rho+ \partial_{t}^{s-r}\rho\partial_{t}^{r}\rho+\nabla\partial_{t}^{s-r}\rho\cdot \nabla(-\Delta)^{-1}(\partial_{t}^{r}\rho)\bigg{]}=0,\] (2.16a) \[\partial_{t}\partial_{t}^{s}u+\mathcal{A}\partial_{t}^{s}u=g\mathbb{P}( \partial_{t}^{s}\rho e_{z}),\] (2.16b) equipped with boundary conditions \(\partial_{t}^{s}\rho|_{\partial\Omega}=0\), \(\partial_{t}^{s}u|_{\partial\Omega}=0\). Testing (2.16b) with \(s=S\) by \(\partial_{t}^{S}u\), we obtain that \[\frac{1}{2}\frac{d}{dt}\|\partial_{t}^{S}u\|_{L^{2}}^{2}+\|\nabla\partial_{t}^ {S}u\|_{L^{2}}^{2}\leq\frac{g}{2}\left(\|\partial_{t}^{S}u\|_{L^{2}}^{2}+\| \partial_{t}^{S}\rho\|_{L^{2}}^{2}\right).\] Testing (2.16a) with \(s=S\) by \(\partial_{t}^{S}\rho\): \[\frac{1}{2}\frac{d}{dt}\|\partial_{t}^{S}\rho\|_{L^{2}}^{2}+\| \nabla\partial_{t}^{S}\rho\|_{L^{2}}^{2}=\sum_{r=0}^{S}\binom{S}{r}(I_{r}+J_{r }+K_{r}),\] (2.17) where \[I_{r}=\int_{\Omega}(\partial_{t}^{S}\rho)(\partial_{t}^{r}u\cdot\nabla) \partial_{t}^{S-r}\rho,\;J_{r}=\int_{\Omega}(\partial_{t}^{S}\rho)\partial_{ t}^{S-r}\rho(\partial_{t}^{r}\rho),\] \[K_{r}=\int_{\Omega}(\partial_{t}^{S}\rho)\nabla\partial_{t}^{S-r}\rho\cdot \nabla(-\Delta)^{-1}(\partial_{t}^{r}\rho).\] To estimate \(I_{r}\), first note that \(I_{0}=0\) by incompressibility and integration by parts. For \(1\leq r\leq S-1\), we integrate \(I_{r}\) by parts once to obtain: \[I_{r}=-\int_{\Omega}\partial_{j}\partial_{t}^{S}\rho\partial_{t}^{r}u_{j} \partial_{t}^{S-r}\rho,\]
where we also used the incompressibility of \(\partial_{t}^{r}u\). Thus, we can estimate:
\[I_{r}\leq\|\nabla\partial_{t}^{S}\rho\|_{L^{2}}\|\partial_{t}^{r}u\|_{L^{3}}\| \partial_{t}^{S-r}\rho\|_{L^{6}}\leq\delta\|\nabla\partial_{t}^{S}\rho\|_{L^{2 }}^{2}+C(\delta)\|\partial_{t}^{r}u\|_{L^{3}}^{2}\|\partial_{t}^{S-r}\rho\|_{1 }^{2},\]
for some \(\delta>0\). If \(r=S\), we instead estimate:
\[I_{S}\leq\|\nabla\partial_{t}^{S}\rho\|_{L^{2}}\|\partial_{t}^{S}u\|_{L^{2}}\| \rho\|_{L^{\infty}}\leq\delta\|\nabla\partial_{t}^{S}\rho\|_{L^{2}}^{2}+C( \delta)\|\rho\|_{2}^{2}\|\partial_{t}^{S}u\|_{L^{2}}^{2}.\]
This concludes the estimates of \(I_{r}\). To estimate \(J_{r}\), we note that if \(r=0\) or \(r=S\), we have
\[J_{r}\leq\|\partial_{t}^{S}\rho\|_{L^{2}}^{2}\|\rho\|_{L^{\infty}}\lesssim\| \partial_{t}^{S}\rho\|_{L^{2}}^{2}\|\rho\|_{2}\]
If \(1\leq r\leq S-1\), then we have
\[J_{r}\leq\frac{1}{2}\|\partial_{t}^{S}\rho\|_{L^{2}}^{2}+\frac{1}{2}\| \partial_{t}^{r}\rho\|_{1}^{2}\|\partial_{t}^{S-r}\rho\|_{1}^{2}.\]
Now we estimate \(K_{r}\). If \(r=0\), we use the standard elliptic estimate and Young's inequality to obtain:
\[K_{0}\leq\delta\|\nabla\partial_{t}^{S}\rho\|_{L^{2}}^{2}+C(\delta)\|\rho\|_{ 1}^{2}\|\partial_{t}^{S}\rho\|_{L^{2}}^{2},\]
where \(\delta>0\). If \(r=S\), we apply elliptic estimate and Sobolev embedding:
\[K_{S}\leq\|\nabla\rho\|_{L^{3}}\|\partial_{t}^{S}\rho\|_{L^{2}}\|\nabla(- \Delta)^{-1}\partial_{t}^{S}\rho\|_{L^{6}}\lesssim\|\nabla\rho\|_{1}\| \partial_{t}^{S}\rho\|_{L^{2}}^{2}.\]
If \(1\leq r\leq S-1\), we can estimate
\[K_{r}\leq\|\nabla\partial_{t}^{S-r}\rho\|_{L^{3}}\|\partial_{t}^{S}\rho\|_{L^ {2}}\|\nabla(-\Delta)^{-1}\partial_{t}^{r}\rho\|_{L^{6}}\leq\frac{1}{2}\| \partial_{t}^{S}\rho\|_{L^{2}}^{2}+C\|\nabla\partial_{t}^{S-r}\rho\|_{1}^{2} \|\partial_{t}^{r}\rho\|_{L^{2}}^{2}.\]
After choosing \(\delta>0\) to be sufficiently small, the above estimates yield the following differential inequality: for \(\tau\in(0,T_{*})\),
\[\frac{dE_{S}}{d\tau}+\|\nabla\partial_{t}^{S}\rho(\tau,\cdot)\|_ {L^{2}}^{2}+ \|\nabla\partial_{t}^{S}u(\tau,\cdot)\|_{L^{2}}^{2}\leq C(k)\bigg{[} \left(1+g+\|\rho\|_{2}+\|\rho\|_{2}^{2}\right)E_{S}(\tau)\] \[+\sum_{r=1}^{S-1}\left(\|\nabla\partial_{t}^{S-r}\rho\|_{1}^{2}\| \partial_{t}^{r}\rho\|_{L^{2}}^{2}+\|\partial_{t}^{r}\rho\|_{1}^{2}\|\partial _{t}^{S-r}\rho\|_{1}^{2}+\|\partial_{t}^{r}u\|_{L^{3}}^{2}\|\partial_{t}^{S-r} \rho\|_{1}^{2}\right)\bigg{]}\] \[=C(k)\left(F(\tau)E_{S}(\tau)+\sum_{r=1}^{S-1}G_{r}(\tau)\right) \tag{2.18}\]
with
\[F(\tau)=1+g+\|\rho(\tau,\cdot)\|_{2}+\|\rho(\tau,\cdot)\|_{2}^{2},\] \[G_{r}(\tau)=\|\nabla\partial_{t}^{S-r}\rho\|_{1}^{2}\|\partial_{t }^{r}\rho\|_{L^{2}}^{2}+\|\partial_{t}^{r}\rho\|_{1}^{2}\|\partial_{t}^{S-r} \rho\|_{1}^{2}+\|\partial_{t}^{r}u\|_{L^{3}}^{2}\|\partial_{t}^{S-r}\rho\|_{1} ^{2}.\]
To proceed, we need the following useful lemma:
**Lemma 2.1**.: _There exists \(\tau_{0}\in[t/2,t]\) such that \(E_{S}(\tau_{0})\leq C(\rho_{0},u_{0},g,k)t^{-k}\)._
Proof.: Let us consider (2.16) with \(s=S-1\). For any \(\tau\in[t/2,t]\), we note that by (2.16b),
\[\|\partial_{t}^{S}u(\tau)\|_{L^{2}}^{2}\lesssim\|\mathcal{A}\partial_{t}^{S-1}u \|_{L^{2}}^{2}+g^{2}\|\partial_{t}^{S-1}\rho\|_{L^{2}}^{2}\leq\|\partial_{t}^{S -1}u\|_{2}^{2}+g^{2}\|\partial_{t}^{S-1}\rho\|_{L^{2}}^{2}.\]
Integrating over \([t/2,t]\) and using (2.15) at index \(k-1\) (which is valid as this is part of the induction hypothesis), we obtain
\[\int_{t/2}^{t}\|\partial_{t}^{S}u(\tau)\|_{L^{2}}^{2}d\tau\leq\int_{t/2}^{T_{ \star}}\|\partial_{t}^{S}u(\tau)\|_{L^{2}}^{2}d\tau\leq C(\rho_{0},u_{0},g,k)t ^{1-k}. \tag{2.19}\]
Similarly, applying Holder inequality to (2.16a), we have
\[\|\partial_{t}^{S}\rho\|_{L^{2}}^{2} \lesssim\|\partial_{t}^{S-1}\rho\|_{2}^{2}+\sum_{r=0}^{S-1}C(k) \bigg{(}\|\partial_{t}^{r}u\|_{1}^{2}\|\nabla\partial_{t}^{S-1-r}\rho\|_{1}^{2}\] \[+\|\partial_{t}^{S-1-r}\rho\|_{1}^{2}\|\partial_{t}^{r}\rho\|_{1 }^{2}+\|\nabla\partial_{t}^{S-1-r}\rho\|_{1}^{2}\|\partial_{t}^{r}\rho\|_{L^{ 2}}^{2}\bigg{)}. \tag{2.20}\]
Observe that given the induction hypothesis, applying (2.15) with index \(k-1\), we have
\[\int_{t/2}^{t}\|\partial_{t}^{S-1}\rho(\tau)\|_{2}^{2}d\tau\leq C(\rho_{0},u_ {0},g,k)t^{1-k}.\]
Also, for \(r=0,\dots,S-1\),
\[\int_{t/2}^{t}\|\partial_{t}^{r}u(\tau)\|_{1}^{2}\|\nabla\partial_{t}^{S-1-r} \rho(\tau)\|_{1}^{2}d\tau\leq C(\rho_{0},u_{0},g,k)t^{-2r}t^{-2(S-r-1)}=C(\rho _{0},u_{0},g,k)t^{1-k},\]
where we applied (2.14) with index \(2r\) to \(\|\partial_{t}^{r}u(\tau)\|_{1}\) and (2.15) with index \(2(S-r-1)\) to \(\|\nabla\partial_{t}^{S-1-r}\rho(\tau)\|_{1}\). In a similar fashion, we can also obtain the following bound:
\[\int_{t/2}^{t}\left[\|\partial_{t}^{S-1-r}\rho(\tau)\|_{1}^{2}\|\partial_{t}^ {r}\rho(\tau)\|_{1}^{2}+\|\nabla\partial_{t}^{S-1-r}\rho(\tau)\|_{1}^{2}\| \partial_{t}^{r}\rho(\tau)\|_{L^{2}}^{2}\right]d\tau\leq C(\rho_{0},u_{0},g,k)t ^{1-k}.\]
Collecting the estimates above and combining with (2.20), we have
\[\int_{t/2}^{t}\|\partial_{t}^{S}\rho(\tau)\|_{L^{2}}^{2}d\tau\leq C(\rho_{0}, u_{0},g,k)t^{1-k}. \tag{2.21}\]
Combining (2.19) and (2.21), we have
\[\int_{t/2}^{t}\left(\|\partial_{t}^{S}u(\tau)\|_{L^{2}}^{2}+\|\partial_{t}^{S }\rho(\tau)\|_{L^{2}}^{2}\right)d\tau\leq C(\rho_{0},u_{0},g,k)t^{1-k}.\]
By mean value theorem, we can find a \(\tau_{0}\in(t/2,t)\) such that
\[E_{S}(\tau_{0})=\|\partial_{t}^{S}u(\tau_{0})\|_{L^{2}}^{2}+\|\partial_{t}^{S }\rho(\tau_{0})\|_{L^{2}}^{2}\leq C(\rho_{0},u_{0},g,k)t^{-k},\]
and this concludes the proof.
We also need another lemma that treats the terms \(G_{r}\).
**Lemma 2.2**.: _Let \(\tau_{0}\) be chosen as in Lemma 2.1. Then for any \(r=1,\ldots,S-1\), we have_
\[\int_{\tau_{0}}^{T_{*}}G_{r}(\tau)d\tau\leq C(\rho_{0},u_{0},g,k)t^{-k}.\]
Proof.: We fix \(r=1,\ldots,S-1\). By definition of \(G_{r}\), we can write
\[\int_{\tau_{0}}^{T_{*}}G_{r}(\tau)d\tau =\int_{\tau_{0}}^{T_{*}}\left(\|\nabla\partial_{t}^{S-r}\rho\|_{1 }^{2}\|\partial_{t}^{r}\rho\|_{L^{2}}^{2}+\|\partial_{t}^{r}\rho\|_{1}^{2}\| \partial_{t}^{S-r}\rho\|_{1}^{2}+\|\partial_{t}^{r}u\|_{L^{3}}^{2}\|\partial_ {t}^{S-r}\rho\|_{1}^{2}\right)d\tau\] \[=:\int_{\tau_{0}}^{T_{*}}\left(G_{r}^{1}(\tau)+G_{r}^{2}(\tau)+G_ {r}^{3}(\tau)\right)d\tau.\]
Applying (2.14) with index \(2r-1\) and (2.15) with index \(k-2r+1\) to terms \(\|\partial_{t}^{r}\rho\|_{L^{2}}^{2}\) and \(\|\nabla\partial_{t}^{S-r}\rho\|_{1}^{2}\) respectively, we observe that
\[\int_{\tau_{0}}^{T_{*}}G_{r}^{1}(\tau)d\tau \leq C(\rho_{0},u_{0},g,k)\tau_{0}^{1-2r}\int_{\tau_{0}}^{T_{*}}\| \partial_{t}^{S-r}\rho(\tau)\|_{2}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)\tau_{0}^{-(2r-1)}\tau_{0}^{-(k-2r+1)}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-k},\]
where we used the fact that \(\tau_{0}>t/2\).
To study the term involving \(G_{r}^{2}\), we will apply (2.14) with index \(2r\) and (2.15) with index \(k-2r\) to terms \(\|\partial_{t}^{r}\rho\|_{1}^{2}\) and \(\|\partial_{t}^{S-r}\rho\|_{1}^{2}\) respectively. This yields:
\[\int_{\tau_{0}}^{T_{*}}G_{r}^{2}(\tau)d\tau \leq C(\rho_{0},u_{0},g,k)\tau_{0}^{-2r}\int_{\tau_{0}}^{T_{*}}\| \partial_{t}^{S-r}\rho(\tau)\|_{1}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)\tau_{0}^{-2r}\tau_{0}^{-(k-2r)}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-k},\]
Finally, using Sobolev embedding, Gagliardo-Nirenberg-Sobolev inequality, and Cauchy-Schwarz inequality,
\[\int_{\tau_{0}}^{T_{*}}G_{r}^{3}(\tau)d\tau \leq\int_{\tau_{0}}^{T_{*}}\|\partial_{t}^{r}u\|_{L^{3}}^{2}\| \partial_{t}^{S-r}\rho\|_{1}^{2}d\tau\lesssim\int_{\tau_{0}}^{T_{*}}\| \partial_{t}^{r}u\|_{L^{2}}\|\nabla\partial_{t}^{r}u\|_{L^{2}}\|\partial_{t}^ {S-r}\rho\|_{1}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)\tau_{0}^{-\frac{2r-1}{2}}\tau_{0}^{-r }\int_{\tau_{0}}^{T_{*}}\|\partial_{t}^{S-r}\rho\|_{1}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-\frac{1}{2})}\leq C(\rho_{0},u_{ 0},g,k)t^{-k}.\]
Note that we applied (2.14) with index \(2r-1\) to \(\|\partial_{t}^{r}u\|_{L^{2}}\), (2.14) with index \(2r\) to \(\|\nabla\partial_{t}^{r}u\|_{L^{2}}\), and (2.15) with index \(k-2r\) to the other \(\|\partial_{t}^{S-r}\rho\|_{1}^{2}\). We also used \(\tau_{0}\leq T_{*}\leq 1\) in the final inequality. The proof is thus completed after we combine the estimates above.
Using the induction hypothesis at \(k=0\), we have \(F\in L^{1}(0,T_{*})\) with the bound \(\|F\|_{L^{1}(0,T_{*})}\leq C(u_{0},\rho_{0},g)\). We may thus apply the Gronwall inequality to (2.18) on the time interval \([\tau_{0},t]\), where \(\tau_{0}\) is selected as in Lemma 2.1 above. Using the two lemmas above, we have
\[E_{S}(t) \leq C(k)\left(E_{S}(\tau_{0})+\sum_{r=1}^{S-1}\int_{\tau_{0}}^{ t}G_{r}(\tau)d\tau\right)\exp\left(\|F\|_{L^{1}(0,T_{*})}\right)\] \[\leq C(\rho_{0},u_{0},g,k)t^{-k}, \tag{2.22}\]
where we recall that \(T_{*}\) depends only on \(\rho_{0}\). This verifies (2.14). To verify (2.15), we integrate (2.18) on interval \([t,T_{*}]\), which yields:
\[\int_{t}^{T_{*}}\left(\|\nabla\partial_{t}^{S}\rho(\tau)\|_{L^{2}}^{2}+\|\nabla \partial_{t}^{S}u(\tau)\|_{L^{2}}^{2}\right)d\tau\leq E_{S}(t)+C(k)\bigg{(} \int_{t}^{T_{*}}F(\tau)E_{S}(\tau)d\tau+\sum_{r=1}^{S-1}\int_{t}^{T_{*}}G_{r}( \tau)d\tau\bigg{)}.\]
Using (2.22), Lemma 2.2, and the fact that \(\frac{t}{2}<\tau_{0}<t\), we can estimate the above by:
\[\int_{t}^{T_{*}}(\|\nabla\partial_{t}^{S}\rho(\tau)\|_{L^{2}}^{2} +\|\nabla\partial_{t}^{S}u(\tau)\|_{L^{2}}^{2})d\tau\leq C(\rho_{ 0},u_{0},g,k)t^{-k}\] \[+C(\rho_{0},u_{0},g,k)(t^{-k}\|F\|_{L^{1}}+t^{-k})\] \[\leq C(\rho_{0},u_{0},g,k)t^{-k}.\]
This concludes the proof of (2.15) with \(l=S\).
**Step 2: show (2.14), (2.15) with \(l<S\).** We will show how we obtain the case when \(l=S-1\). Then the rest just follows from another induction on \(l=1,\ldots,S\) backwards.
We may rewrite the equations (2.16) with \(s=S-1\) as
\[-\Delta\partial_{t}^{S-1}\rho =-\partial_{t}^{S}\rho-\sum_{r=0}^{S-1}\binom{S-1}{r}\bigg{[} \partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-1-r}\rho+\partial_{t}^{S-1-r}\rho \partial_{t}^{r}\rho+\nabla\partial_{t}^{S-1-r}\rho\cdot\nabla(-\Delta)^{-1} (\partial_{t}^{r}\rho)\bigg{]}\] \[=-\partial_{t}^{S}\rho+R_{1} \tag{2.23a}\] \[\mathcal{A}\partial_{t}^{S-1}u =-\partial_{t}^{S}u+g\mathbb{P}(\partial_{t}^{S-1}\rho e_{z})=- \partial_{t}^{S}u+R_{2} \tag{2.23b}\]
Here, \(R_{1},R_{2}\) are the remainder terms which are essentially of lower order. We will see that these terms can be treated by the induction hypothesis on \(k\). To illustrate this, we show that the following estimates hold:
**Lemma 2.3**.: _For any \(t\in(0,T_{*}]\),_
\[t^{k-\frac{1}{4}}(\|R_{1}(t)\|_{L^{2}}^{2}+\|R_{2}(t)\|_{L^{2}}^{2})\leq C( \rho_{0},u_{0},g,k),\]
\[t^{k-\frac{1}{4}}\int_{t}^{T_{*}}\left(\|R_{1}(\tau)\|_{1}^{2}+\|R_{2}(\tau)\| _{1}^{2}\right)d\tau\leq C(\rho_{0},u_{0},g,k).\]
Proof.: First, it is straightforward to obtain the following bounds for \(R_{2}\) by directly imposing the induction hypothesis at index \(k-1\):
\[t^{k-1}\|R_{2}(t)\|_{L^{2}}^{2}+t^{k-1}\int_{t}^{T_{*}}\|R_{2}(t)\|_{1}^{2}dt \leq C(\rho_{0},u_{0},g,k). \tag{2.24}\]
Prior to estimating \(R_{1}\), we first need an improved bound for \(\|u\|_{2}\): invoking (2.24) with \(k=1\), we have
\[\|R_{2}(t)\|_{L^{2}}^{2}\leq C(\rho_{0},u_{0},g).\]
Since \(S=1\) when \(k=1\) by definition, we apply the Stokes estimate to (2.23b) with \(S=1\) to see that
\[\|u\|_{2}^{2}\lesssim\|\partial_{t}u\|_{L^{2}}^{2}+\|R_{2}\|_{L^{2}}^{2}\leq C (\rho_{0},u_{0},g)(t^{-1}+1)\leq C(\rho_{0},u_{0},g)t^{-1}, \tag{2.25}\]
where we used Step 1 with \(k=1\) above. Now, we are ready to estimate \(R_{1}\). We first note that it involves 3 typical terms, namely
\[R_{11}^{r}:=\partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-1-r}\rho,\ R_{12}^{r}:= \partial_{t}^{S-1-r}\rho\partial_{t}^{r}\rho,\ R_{13}^{r}:=\nabla\partial_{t}^ {S-1-r}\rho\cdot\nabla(-\Delta)^{-1}(\partial_{t}^{r}\rho),\]
where \(0\leq r\leq S-1\). We will prove suitable bounds for \(R_{11}^{r}\), and the rest can be bounded more easily since these terms involve fewer derivatives. If \(1\leq r\leq S-1\), then by Holder inequality:
\[\|R_{11}^{r}\|_{L^{2}}^{2} \leq\|\partial_{t}^{r}u\|_{L^{6}}^{2}\|\nabla\partial_{t}^{S-1-r }\rho\|_{L^{3}}^{2}\lesssim\|\partial_{t}^{r}u\|_{1}^{2}\|\partial_{t}^{S-1-r }\rho\|_{1}\|\partial_{t}^{S-1-r}\rho\|_{2}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-2r}t^{-\frac{k-2r-1}{2}}t^{-\frac{ k-2r}{2}}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-\frac{1}{2})},\]
where we used (2.14) at indices \(2r,k-2r-1,k-2r\) respectively.
If \(r=0\), then we observe that \(R_{11}^{0}=u\cdot\nabla\partial_{t}^{S-1}\rho\). We estimate as follows:
\[\|R_{11}^{0}\|_{L^{2}}^{2} \leq\|u\|_{L^{\infty}}^{2}\|\partial_{t}^{S-1}\rho\|_{1}^{2}\leq \|u\|_{L^{2}}^{1/2}\|u\|_{2}^{3/2}\|\partial_{t}^{S-1}\rho\|_{1}^{2}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-3/4}t^{-(k-1)}=C(\rho_{0},u_{0},g,k )t^{-(k-1/4)}\]
where we used Agmon's inequality in 3D:
\[\|u\|_{L^{\infty}}^{2}\lesssim\|u\|_{L^{2}}^{1/2}\|u\|_{2}^{3/2}\]
in the second inequality. We also invoked (2.14) with index \(0\) to estimate \(\|u\|_{L^{2}}\), (2.14) with index \(k-1\) to bound \(\|\partial_{t}^{S-1}\rho\|_{1}\), and (2.25) to control \(\|u\|_{2}\).
Turning to the second inequality, since \(\partial_{t}^{r}u=0\) on \(\partial\Omega\), we can invoke the Poincare inequality to obtain:
\[\int_{t}^{T_{*}}\|R_{11}^{r}\|_{1}^{2}d\tau \lesssim\int_{t}^{T_{*}}\|\nabla R_{11}^{r}\|_{L^{2}}^{2}d\tau\] \[\lesssim\int_{t}^{T_{*}}\left(\|\nabla\partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-1-r}\rho\|_{L^{2}}^{2}+\|\partial_{t}^{r}u\cdot\nabla^{2}\partial_{t}^{S-1-r}\rho\|_{L^{2}}^{2}\right)d\tau\] \[=:R_{111}^{r}+R_{112}^{r}.\]
If \(1\leq r\leq S-1\), using Holder inequality and Gagliardo-Nirenberg-Sobolev inequalities, we can estimate \(R_{111}^{r}\) by
\[R_{111}^{r} \leq\int_{t}^{T_{*}}\|\nabla\partial_{t}^{r}u\|_{L^{3}}^{2}\| \nabla\partial_{t}^{S-1-r}\rho\|_{L^{6}}^{2}d\tau\lesssim\int_{t}^{T_{*}}\| \nabla\partial_{t}^{r}u\|_{L^{2}}\|\nabla^{2}\partial_{t}^{r}u\|_{L^{2}}\| \nabla\partial_{t}^{S-1-r}\rho\|_{1}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-r}t^{-(k-2r)}\int_{t}^{T_{*}}\| \nabla^{2}\partial_{t}^{r}u\|_{L^{2}}\|\nabla\partial_{t}^{S-1-r}\rho\|_{1}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-r}t^{-\frac{k-2r}{2}}t^{-r}t^{- \frac{k-2r-1}{2}}=C(\rho_{0},u_{0},g,k)t^{-(k-\frac{1}{2})}.\]
If \(r=0\), then we apply Holder inequality and a Gagliardo-Nirenberg-Sobolev inequality to estimate that
\[R_{111}^{0} \leq\int_{t}^{T_{*}}\|\nabla u\|_{L^{3}}^{2}\|\nabla\partial_{t}^ {S-1}\rho\|_{L^{6}}^{2}d\tau\leq\int_{t}^{T_{*}}\|u\|_{1}\|u\|_{2}\|\partial_{t }^{S-1}\rho\|_{2}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-1/2}\int_{t}^{T_{*}}\|\partial_{t}^ {S-1}\rho\|_{2}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-1/2}t^{-(k-1)}\leq C(\rho_{0},u_{0}, g,k)t^{-(k-1/2)},\]
where we used the bound (2.25) and (2.15) with index \(0\) and \(k-1\) above.
Now we discuss the bound for \(R^{r}_{112}\). For \(1\leq r\leq S-1\), we have
\[R^{r}_{112} \leq\int_{t}^{T_{*}}\|\partial_{t}^{r}u\|_{L^{3}}^{2}\|\nabla^{2} \partial_{t}^{S-1-r}\rho\|_{L^{6}}^{2}d\tau\] \[\leq\int_{t}^{T_{*}}\|\partial_{t}^{r}u\|_{L^{2}}\|\partial_{t}^{ r}u\|_{1}\|\partial_{t}^{S-1-r}\rho\|_{3}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{2r-1}{2}}t^{-r}\int_{t}^{T_{* }}\|\partial_{t}^{S-1-r}\rho\|_{3}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-1/2)},\]
where we used (2.14) with indices \(2r-1\) and \(2r\) in the third inequality, and (2.15) with index \(k-2r\) in the last inequality. If \(r=0\), then we take advantage of Agmon's inequality in 3D again to obtain:
\[R^{0}_{112} \leq\int_{t}^{T_{*}}\|u\|_{L^{\infty}}^{2}\|\nabla^{2}\partial_{t }^{S-1}\rho\|_{L^{2}}^{2}d\tau\] \[\leq\int_{t}^{T_{*}}\|u\|_{L^{2}}^{1/2}\|u\|_{2}^{3/2}\|\partial_{ t}^{S-1}\rho\|_{2}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-3/4}\int_{t}^{T_{*}}\|\partial_{t}^ {S-1}\rho\|_{2}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-3/4}t^{-(k-1)}\] \[=C(\rho_{0},u_{0},g,k)t^{-(k-1/4)}.\]
Therefore, we arrive at the bound:
\[\int_{t}^{T_{*}}\|R^{r}_{11}\|_{1}^{2}d\tau\leq C(\rho_{0},u_{0},g,k)t^{-(k-1/ 4)}.\]
Proceeding in a similar fashion, we can acquire similar bounds for the \(R^{r}_{12}\) and \(R^{r}_{13}\). The proof of the lemma is thus complete after we sum up the estimates above.
By Step 1, we know that for any \(t\in(0,T_{*}]\),
\[t^{k}\left(\|\partial_{t}^{S}\rho(t)\|_{L^{2}}^{2}+\|\partial_{t}^{S}u(t)\|_{ L^{2}}^{2}\right)\leq C(\rho_{0},u_{0},g,k),\]
\[t^{k}\int_{t}^{T_{*}}\left(\|\partial_{t}^{S}\rho(\tau)\|_{1}^{2}+\|\partial_{ t}^{S}u(\tau)\|_{1}^{2}\right)d\tau\leq C(\rho_{0},u_{0},g,k).\]
Combining Lemma 2.3 with equations (2.23a), (2.23b), and using elliptic estimates, we conclude that for \(t\in(0,T_{*}]\)
\[\|\partial_{t}^{S-1}\rho(t)\|_{2}^{2}+\|\partial_{t}^{S-1}u(t)\|_{2}^{2}\leq C (\rho_{0},u_{0},g,k)t^{-k},\]
\[\int_{t}^{T_{*}}\left(\|\partial_{t}^{S-1}\rho(\tau)\|_{3}^{2}+\|\partial_{t}^ {S-1}u(\tau)\|_{3}^{2}\right)d\tau\leq C(\rho_{0},u_{0},g,k)t^{-k},\]
which finishes the case when \(l=S-1\). The rest will follow from an induction in \(l\), and we omit the details here. Hence, we have concluded the case where \(k\) is odd.
2. \(k\) **is even**. Since we have proved the \(k=0\) case, we may write \(k=2S\), \(S\geq 1\), and define \[\tilde{E}_{s}(t)=\|\nabla\partial_{t}^{s}\rho\|_{L^{2}}^{2}+\|\nabla\partial_{t} ^{s}u\|_{L^{2}}^{2}\] for \(0\leq s\leq S\). Notice that \(\tilde{E}_{s}(t)\sim\|\partial_{t}^{s}\rho\|_{1}^{2}+\|\partial_{t}^{s}u\|_{1} ^{2}\) in view of the Poincare inequality. The scheme of the proof in this case is the same double induction argument (in forward \(k\) and for each \(k\) backwards in \(l\)), and we will follow the same outline as in the odd case. Considering (2.16) for \(s=1,\ldots,S\), we test (2.16a), (2.16b) with \(s=S\) by \(-\Delta\partial_{t}^{S}\rho,\mathcal{A}\partial_{t}^{S}u\) respectively, which yields: \[\frac{1}{2}\frac{d}{dt}\|\nabla\partial_{t}^{S}\rho\|_{L^{2}}^{2}+\|\Delta \partial_{t}^{S}\rho\|_{L^{2}}^{2}=\sum_{r=0}^{S}{S\choose r}(\tilde{I}_{r}+ \tilde{J}_{r}+\tilde{K}_{r}),\] \[\frac{1}{2}\frac{d}{dt}\|\nabla\partial_{t}^{S}u\|_{L^{2}}^{2}+\|\mathcal{A} \partial_{t}^{S}u\|_{L^{2}}^{2}=g\int_{\Omega}\mathcal{A}\partial_{t}^{S}u \mathbb{P}(\partial_{t}^{S}\rho e_{z})\leq\frac{g}{2}\tilde{E}_{S},\] where for \(r=0,\ldots,S\), \[\tilde{I}_{r}=\int_{\Omega}\Delta\partial_{t}^{S}\rho(\partial_{t}^{r}u\cdot \nabla)\partial_{t}^{S-r}\rho,\;\tilde{J}_{r}=\int_{\Omega}\Delta\partial_{t} ^{S}\rho\partial_{t}^{r}\rho\partial_{t}^{S-r}\rho,\] \[\tilde{K}_{r}=\int_{\Omega}\Delta\partial_{t}^{S}\rho\nabla\partial_{t}^{S-r} \rho\cdot\nabla(-\Delta)^{-1}\partial_{t}^{r}\rho.\] To estimate \(\tilde{I}_{r}\), we first observe that \[\tilde{I}_{r}\leq\|\Delta\partial_{t}^{S}\rho\|_{L^{2}}\|\partial_{t}^{r}u\|_ {L^{6}}\|\nabla\partial_{t}^{S-r}\rho\|_{L^{3}}\leq\|\Delta\partial_{t}^{S} \rho\|_{L^{2}}\|\partial_{t}^{r}u\|_{1}\|\nabla\partial_{t}^{S-r}\rho\|_{L^{2 }}^{1/2}\|\nabla^{2}\partial_{t}^{S-r}\rho\|_{L^{2}}^{1/2}.\] Hence if \(r\neq 0\), we may estimate \(\tilde{I}_{r}\) as follows: for any \(\epsilon>0\), \[\tilde{I}_{r}\leq\epsilon\|\Delta\partial_{t}^{S}\rho\|_{L^{2}}^{2}+C(\epsilon )\|\partial_{t}^{S-r}\rho\|_{1}\|\partial_{t}^{S-r}\rho\|_{2}\|\partial_{t}^{ r}u\|_{1}^{2}.\] If \(r=0\), we estimate \[\tilde{I}_{0}=\int_{\Omega}\Delta\partial_{t}^{S}\rho(u\cdot\nabla)\partial_{ t}^{S}\rho\leq\|\Delta\partial_{t}^{S}\rho\|_{L^{2}}\|u\|_{L^{\infty}}\| \nabla\partial_{t}^{S}\rho\|_{L^{2}}\leq\epsilon\|\Delta\partial_{t}^{S} \rho\|_{L^{2}}^{2}+C(\epsilon)\|u\|_{2}^{2}\|\nabla\partial_{t}^{S}\rho\|_{L^ {2}}^{2}.\] To estimate \(\tilde{J}_{r}\), we have: \[\tilde{J}_{r}\leq\epsilon\|\Delta\partial_{t}^{S}\rho\|_{L^{2}}^{2}+C( \epsilon)\|\partial_{t}^{r}\rho\|_{1}^{2}\|\partial_{t}^{S-r}\rho\|_{1}^{2},\] where \(\epsilon>0\). Finally, to estimate of \(\tilde{K}_{r}\), we evoke elliptic estimate to obtain \[\tilde{K}_{r} \leq\|\Delta\partial_{t}^{S}\rho\|_{L^{2}}\|\partial_{t}^{S-r} \rho\|_{1}\|\nabla(-\Delta)^{-1}\partial_{t}^{r}\rho\|_{L^{\infty}}\lesssim\| \Delta\partial_{t}^{S}\rho\|_{L^{2}}\|\partial_{t}^{S-r}\rho\|_{1}\|\nabla(- \Delta)^{-1}\partial_{t}^{r}\rho\|_{2}\] \[\leq\epsilon\|\Delta\partial_{t}^{S}\rho\|_{L^{2}}^{2}+C(\epsilon )\|\partial_{t}^{r}\rho\|_{1}^{2}\|\partial_{t}^{S-r}\rho\|_{1}^{2},\] for any \(\epsilon>0\). 
Combining the estimates above yields the following energy inequality: for \(\tau\in(0,T_{*})\), \[\frac{d\tilde{E}_{S}}{d\tau}+\|\partial_{t}^{S}\rho(\tau,\cdot)\|_{ 2}^{2}+ \|\partial_{t}^{S}u(\tau,\cdot)\|_{2}^{2}\leq C(k)\bigg{[}\left(g+\|u \|_{2}^{2}+\|\rho\|_{2}^{2}\right)\tilde{E}_{S}(\tau)\] \[+\sum_{r=1}^{S-1}\left(\|\partial_{t}^{S-r}\rho\|_{1}\|\partial_{t} ^{S-r}\rho\|_{2}\|\partial_{t}^{r}u\|_{1}^{2}+\|\partial_{t}^{S-r}\rho\|_{1}^{2 }\|\partial_{t}^{r}\rho\|_{1}^{2}\right)\bigg{]}\] \[\leq C(k)\left(\tilde{F}(\tau)\tilde{E}_{S}(\tau)+\sum_{r=1}^{S-1 }\tilde{G}_{r}(\tau)\right),\] (2.26)
where \(\tilde{F}\in L^{1}(0,T_{*})\) due to the induction hypothesis at \(k=0\). Now, we would like to follow the same plan as that in the odd case. This motivates us to prove lemmas similar to Lemmas 2.1, 2.2, and 2.3, adapted to the even case. First, we show the following lemma that parallels Lemma 2.1:
**Lemma 2.4**.: _There exists \(\tau_{0}\in[t/2,t]\) such that \(\tilde{E}_{S}(\tau_{0})\leq C(\rho_{0},u_{0},g,k)t^{-k}\)._
Proof.: We consider (2.16) with \(s=S-1\). In view of (2.16b), we have
\[\|\partial_{t}^{S}u\|_{1}^{2}\lesssim\|\mathcal{A}\partial_{t}^{S-1}u\|_{1}^{2 }+g^{2}\|\partial_{t}^{S-1}\rho\|_{1}^{2}\leq\|\partial_{t}^{S-1}u\|_{3}^{2}+ g^{2}\|\partial_{t}^{S-1}\rho\|_{1}^{2},\]
for any \(\tau\in[t/2,t]\). Integrating over \([t/2,t]\) and using (2.15) with index \(k-1\), we obtain
\[\int_{t/2}^{t}\|\partial_{t}^{S}u(\tau)\|_{1}^{2}d\tau\leq\int_{t/2}^{T_{*}} \|\partial_{t}^{S}u(\tau)\|_{1}^{2}d\tau\leq C(\rho_{0},u_{0},g,k)t^{1-k}. \tag{2.27}\]
To estimate \(\|\nabla\partial_{t}^{S}\rho\|_{L^{2}}\), we apply \(\nabla\) to both sides of (2.16a) with \(s=S-1\), and then use Holder's inequality:
\[\|\nabla\partial_{t}^{S}\rho\|_{L^{2}}^{2} \lesssim\|\partial_{t}^{S-1}\rho\|_{3}^{2}+\sum_{r=0}^{S-1}C(k) \bigg{(}\|\nabla(\partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-r-1}\rho)\|_{L^ {2}}^{2}\] \[+\|\nabla(\partial_{t}^{S-r-1}\rho\partial_{t}^{r}\rho)\|_{L^{2}} ^{2}+\|\nabla(\nabla\partial_{t}^{S-r-1}\rho\cdot\nabla(-\Delta)^{-1}( \partial_{t}^{r}\rho))\|_{L^{2}}^{2}\bigg{)}. \tag{2.28}\]
To save space, we only consider the most singular term, namely \(\|\nabla(\partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-r-1}\rho)\|_{L^{2}}^{2}\), and show that
\[\int_{t/2}^{t}\|\nabla(\partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-r-1}\rho) \|_{L^{2}}^{2}d\tau\leq C(\rho_{0},u_{0},g,k)t^{1-k}. \tag{2.29}\]
The estimates on the rest of the terms follow from a similar argument. To show (2.29), we first compute that
\[\nabla(\partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-r-1}\rho)=\nabla\partial_{ t}^{r}u\cdot\nabla\partial_{t}^{S-r-1}\rho+\partial_{t}^{r}u\cdot\nabla^{2} \partial_{t}^{S-r-1}\rho.\]
The first term can be estimated by
\[\|\nabla\partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-r-1}\rho\|_{L ^{2}}^{2} \leq\|\nabla\partial_{t}^{r}u\|_{L^{4}}^{2}\|\nabla\partial_{t}^{ S-r-1}\rho\|_{L^{4}}^{2}\] \[\lesssim\|\nabla\partial_{t}^{r}u\|_{1}^{2}\|\nabla\partial_{t}^{ S-r-1}\rho\|_{1}^{2}\] \[\leq\|\partial_{t}^{r}u\|_{2}^{2}\|\partial_{t}^{S-r-1}\rho\|_{2} ^{2}.\]
Similarly, we may estimate the second term above by
\[\|\partial_{t}^{r}u\cdot\nabla^{2}\partial_{t}^{S-r-1}\rho\|_{L^{2}}^{2} \lesssim\|\partial_{t}^{r}u\|_{1}^{2}\|\partial_{t}^{S-r-1}\rho\|_{3}^{2}\]
Thus for \(r=0,\ldots,S-1\),
\[\int_{t/2}^{t}\|\nabla(\partial_{t}^{r}u\cdot\nabla\partial_{t}^ {S-r-1}\rho)\|_{L^{2}}^{2}d\tau \lesssim\int_{t/2}^{t}\left(\|\partial_{t}^{r}u\|_{2}^{2}\|\partial _{t}^{S-r-1}\rho\|_{2}^{2}+\|\partial_{t}^{r}u\|_{1}^{2}\|\partial_{t}^{S-r-1} \rho\|_{3}^{2}\right)d\tau\] \[\leq C(\rho_{0},u_{0},g,k)(t^{-(2r+1)}t^{-(k-2r-2)}+t^{-2r}t^{-(k -2r-1)})\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-1)}\]
where we applied (2.14) with index \(2r+1\) to \(\|\partial_{t}^{r}u\|_{2}\), (2.15) with index \(k-2r-2\) to \(\|\partial_{t}^{S-1-r}\rho\|_{2}\), (2.14) with index \(2r\) to \(\|\partial_{t}^{r}u\|_{1}\), and (2.15) with index \(k-2r-1\) to \(\|\partial_{t}^{S-1-r}\rho\|_{3}\). In a similar fashion, we can also obtain the following bound:
\[\int_{t/2}^{t}\left[\|\nabla(\partial_{t}^{S-r-1}\rho\partial_{t}^{r}\rho)\|_{ L^{2}}^{2}+\|\nabla(\nabla\partial_{t}^{S-r-1}\rho\cdot\nabla(-\Delta)^{-1}( \partial_{t}^{r}\rho))\|_{L^{2}}^{2}\right]d\tau\leq C(\rho_{0},u_{0},g,k)t^{1 -k}.\]
Collecting the estimates above and combining with (2.28), we have
\[\int_{t/2}^{t}\|\nabla\partial_{t}^{S}\rho(\tau)\|_{L^{2}}^{2}d\tau\leq C(\rho _{0},u_{0},g,k)t^{1-k}. \tag{2.30}\]
Combining (2.27) and (2.30), we have
\[\int_{t/2}^{t}\tilde{E}_{S}(\tau)d\tau\leq C(\rho_{0},u_{0},g,k)t^{1-k}.\]
By the mean value theorem, we can find a \(\tau_{0}\in(t/2,t)\) such that
\[\tilde{E}_{S}(\tau_{0})\leq C(\rho_{0},u_{0},g,k)t^{-k},\]
and this concludes the proof.
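For clarity, the mean value step can be made explicit: since the interval \([t/2,t]\) has length \(t/2\), the bound \(\int_{t/2}^{t}\tilde{E}_{S}(\tau)d\tau\leq C(\rho_{0},u_{0},g,k)t^{1-k}\) gives \[\inf_{\tau\in(t/2,t)}\tilde{E}_{S}(\tau)\leq\frac{2}{t}\int_{t/2}^{t}\tilde{E}_{S}(\tau)d\tau\leq 2C(\rho_{0},u_{0},g,k)t^{-k},\] and \(\tau_{0}\) may be taken to be any time at which \(\tilde{E}_{S}\) does not exceed this average.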
Then we show a counterpart to Lemma 2.2.
**Lemma 2.5**.: _Let \(\tau_{0}\) be chosen as in Lemma 2.4. Then for any \(r=1,\ldots,S-1\), we have_
\[\int_{\tau_{0}}^{T_{*}}\tilde{G}_{r}(\tau)d\tau\leq C(\rho_{0},u_{0},g,k)t^{-( k-\frac{1}{2})}.\]
Proof.: Observe that for \(r=1,\ldots,S-1\),
\[\tilde{G}_{r}=\|\partial_{t}^{S-r}\rho\|_{1}\|\partial_{t}^{S-r}\rho\|_{2}\| \partial_{t}^{r}u\|_{1}^{2}+\|\partial_{t}^{S-r}\rho\|_{1}^{2}\|\partial_{t}^ {r}\rho\|_{1}^{2}=:\tilde{G}_{r}^{1}+\tilde{G}_{r}^{2}.\]
To estimate \(\tilde{G}_{r}^{2}\), apply (2.14) with index \(2r\) to \(\|\partial_{t}^{r}\rho\|_{1}^{2}\) and (2.15) with index \(k-2r-1\) to \(\|\partial_{t}^{S-r}\rho\|_{1}^{2}:\)
\[\int_{\tau_{0}}^{T_{*}}\tilde{G}_{r}^{2}(\tau)d\tau \leq C(\rho_{0},u_{0},g,k)\tau_{0}^{-2r}\int_{\tau_{0}}^{T_{*}}\| \partial_{t}^{S-r}\rho\|_{1}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)\tau_{0}^{-2r}\tau_{0}^{-(k-2r-1)}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-1)}.\]
To treat the term \(\tilde{G}_{r}^{1}\), we use the induction hypothesis to obtain that
\[\int_{\tau_{0}}^{T_{*}}\tilde{G}_{r}^{1}(\tau)d\tau =\int_{\tau_{0}}^{T_{*}}\|\partial_{t}^{S-r}\rho\|_{1}\|\partial_ {t}^{r}u\|_{1}\|\partial_{t}^{S-r}\rho\|_{2}\|\partial_{t}^{r}u\|_{1}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)\tau_{0}^{-\frac{k-2r}{2}}\tau_{0}^{-r} \int_{\tau_{0}}^{T_{*}}\|\partial_{t}^{S-r}\rho\|_{2}\|\partial_{t}^{r}u\|_{1}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)\tau_{0}^{-\frac{k-2r}{2}}\tau_{0}^{-r} \tau_{0}^{-\frac{k-2r}{2}}\tau_{0}^{-\frac{2r-1}{2}}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-\frac{1}{2})}\]
Summing up the two estimates above completes the proof of the lemma.
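(For bookkeeping: in the estimate of \(\tilde{G}_{r}^{1}\) the exponents of \(\tau_{0}\) combine as \(-\tfrac{k-2r}{2}-r-\tfrac{k-2r}{2}-\tfrac{2r-1}{2}=-(k-\tfrac{1}{2})\), and \(\tau_{0}\geq t/2\) is used to convert powers of \(\tau_{0}\) into powers of \(t\).)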
Finally, we show a result parallel to Lemma 2.3.
**Lemma 2.6**.: _For any \(t\in(0,T_{*}]\),_
\[t^{k-\frac{1}{4}}(\|R_{1}(t)\|_{1}^{2}+\|R_{2}(t)\|_{1}^{2})\leq C(\rho_{0},u_{0 },g,k),\]
\[t^{k-\frac{1}{4}}\int_{t}^{T_{*}}\left(\|R_{1}(\tau)\|_{2}^{2}+\|R_{2}(\tau)\|_ {2}^{2}\right)d\tau\leq C(\rho_{0},u_{0},g,k),\]
_where \(R_{1},R_{2}\) are defined as in (2.23)._
Proof.: First, we note that by applying (2.14) and (2.15) with index \(k-2\), we have
\[t^{k-2}\left(\|R_{2}(t)\|_{1}^{2}+\int_{t}^{T_{*}}\|R_{2}(\tau)\|_{2}^{2}d\tau \right)\leq C(\rho_{0},u_{0},g,k).\]
Then it suffices for us to show suitable bounds for \(R_{1}\). Similarly to the proof of Lemma 2.3, we need to control the following typical terms:
\[R_{11}^{r}:=\partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-1-r}\rho,\ R_{12}^{r}: =\partial_{t}^{S-1-r}\rho\partial_{t}^{r}\rho,\ R_{13}^{r}:=\nabla \partial_{t}^{S-1-r}\rho\cdot\nabla(-\Delta)^{-1}(\partial_{t}^{r}\rho),\]
For simplicity, we will only consider in detail the most singular term \(R_{11}^{r}\), as the estimates for the remaining two terms will follow similarly.
We first study \(\|R_{11}^{r}\|_{1}^{2}\), and it suffices for us to consider the leading order contribution i.e. \(\|\nabla R_{11}^{r}\|_{L^{2}}^{2}\). Recall from the proof of Lemma 2.3 that
\[\|\nabla R_{11}^{r}\|_{L^{2}}^{2}\lesssim\|\nabla\partial_{t}^{r}u\cdot\nabla \partial_{t}^{S-1-r}\rho\|_{L^{2}}^{2}+\|\partial_{t}^{r}u\cdot\nabla^{2} \partial_{t}^{S-1-r}\rho\|_{L^{2}}^{2}=:R_{111}^{r}+R_{112}^{r}.\]
To treat \(R_{111}^{r}\), we see that for any \(0\leq r\leq S-1\), an application of Holder inequality, Sobolev embedding, and Gagliardo-Nirenberg Sobolev inequality yields:
\[R_{111}^{r} \leq\|\nabla\partial_{t}^{r}u\|_{L^{3}}^{2}\|\nabla\partial_{t}^ {S-r-1}\rho\|_{L^{6}}^{2}\] \[\lesssim\|\nabla\partial_{t}^{r}u\|_{L^{2}}\|\nabla\partial_{t}^ {r}u\|_{1}\|\nabla\partial_{t}^{S-r-1}\rho\|_{1}^{2}\] \[\lesssim\|\partial_{t}^{r}u\|_{1}\|\partial_{t}^{r}u\|_{2}\| \partial_{t}^{S-r-1}\rho\|_{2}^{2}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-r}t^{-\frac{2r+1}{2}}t^{-(k-2r-1)}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-1/2)},\]
where we used (2.14) with indices \(2r\), \(2r+1\), \(k-2r-1\) respectively in the second to the last inequality above. To treat \(R_{112}^{r}\), we first discuss the case when \(1\leq r\leq S-1\):
\[R_{112}^{r} \leq\|\partial_{t}^{r}u\|_{L^{3}}^{2}\|\nabla^{2}\partial_{t}^{S- r-1}\rho\|_{L^{6}}^{2}\] \[\leq\|\partial_{t}^{r}u\|_{L^{2}}\|\partial_{t}^{r}u\|_{1}\| \partial_{t}^{S-r-1}\rho\|_{3}^{2}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{2r-1}{2}}t^{-r}t^{-(k-2r)}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-1/2)},\]
where we used (2.14) with indices \(2r-1\), \(2r\), \(k-2r\) respectively in the second to the last inequality above. In the case where \(r=0\), we instead estimate as follows using Agmon's
inequality:
\[R^{0}_{112}\leq\|u\|_{L^{\infty}}^{2}\|\nabla^{2}\partial_{t}^{S-1}\rho\|_{L^{2}}^{2}\] \[\lesssim\|u\|_{L^{2}}^{1/2}\|u\|_{2}^{3/2}\|\partial_{t}^{S-1}\rho\|_{2}^{2}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{3}{4}}t^{-(k-1)}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-1/4)},\]
where we used (2.14) with indices \(0\), \(1\), and \(k-1\) in the third inequality. Combining the estimates above yields
\[t^{k-1/4}\|R^{r}_{11}(t)\|_{1}^{2}\leq C(\rho_{0},u_{0},g,k).\]
Now we shall study \(\|R^{r}_{11}\|_{2}^{2}\). We still consider the leading order contribution, namely \(\|\nabla^{2}R^{r}_{11}\|_{L^{2}}^{2}\). A straightforward computation yields:
\[\|\nabla^{2}R^{r}_{11}\|_{L^{2}}^{2} \lesssim\|\nabla^{2}\partial_{t}^{r}u\cdot\nabla\partial_{t}^{S-1 -r}\rho\|_{L^{2}}^{2}+\|\nabla\partial_{t}^{r}u\cdot\nabla^{2}\partial_{t}^{S -1-r}\rho\|_{L^{2}}^{2}+\|\partial_{t}^{r}u\cdot\nabla^{3}\partial_{t}^{S-1-r} \rho\|_{L^{2}}^{2}\] \[=:\tilde{R}^{r}_{111}+\tilde{R}^{r}_{112}+\tilde{R}^{r}_{113}.\]
To control \(\tilde{R}^{r}_{111}\), we have for any \(t\in(0,T_{*}]\):
\[\tilde{R}^{r}_{111} \leq\|\nabla^{2}\partial_{t}^{r}u\|_{L^{3}}^{2}\|\nabla\partial_ {t}^{S-1-r}\rho\|_{L^{6}}^{2}\] \[\lesssim\|\nabla^{2}\partial_{t}^{r}u\|_{L^{2}}\|\nabla^{2} \partial_{t}^{r}u\|_{1}\|\nabla\partial_{t}^{S-1-r}\rho\|_{1}^{2}\] \[\lesssim\|\partial_{t}^{r}u\|_{2}\|\partial_{t}^{r}u\|_{3}\| \partial_{t}^{S-1-r}\rho\|_{2}^{2}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{2r+1}{2}}t^{-\frac{k-2r-1}{2 }}\|\partial_{t}^{r}u\|_{3}\|\partial_{t}^{S-1-r}\rho\|_{2}\] \[=C(\rho_{0},u_{0},g,k)t^{-\frac{k}{2}}\|\partial_{t}^{r}u\|_{3}\| \partial_{t}^{S-1-r}\rho\|_{2}\]
where we used (2.14) with indices \(2r+1\) and \(k-2r-1\) above. Integrating in time, we obtain
\[\int_{t}^{T_{*}}\tilde{R}^{r}_{111}d\tau \leq C(\rho_{0},u_{0},g,k)t^{-\frac{k}{2}}\int_{t}^{T_{*}}\| \partial_{t}^{r}u\|_{3}\|\partial_{t}^{S-1-r}\rho\|_{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{k}{2}}\left(\int_{t}^{T_{*}} \|\partial_{t}^{r}u\|_{3}^{2}d\tau\right)^{1/2}\left(\int_{t}^{T_{*}}\| \partial_{t}^{S-1-r}\rho\|_{2}^{2}d\tau\right)^{1/2}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{k}{2}}t^{-\frac{2r+1}{2}}t^{- \frac{k-2r-2}{2}}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-(k-1/2)},\]
where we used (2.15) with indices \(2r+1\) and \(k-2r-2\). A similar argument switching the estimates of \(u\) and \(\rho\) terms yields the same bound for \(\tilde{R}^{r}_{112}\):
\[\int_{t}^{T_{*}}\tilde{R}^{r}_{112}d\tau\leq C(\rho_{0},u_{0},g,k)t^{-(k-1/2)}.\]
To estimate \(\tilde{R}^{r}_{113}\), we first note that for \(1\leq r\leq S-1\),
\[\tilde{R}^{r}_{113} \leq\|\partial_{t}^{r}u\|_{L^{3}}^{2}\|\nabla^{3}\partial_{t}^{S- r-1}\rho\|_{L^{6}}^{2}\] \[\leq\|\partial_{t}^{r}u\|_{L^{2}}\|\partial_{t}^{r}u\|_{1}\| \partial_{t}^{S-r-1}\rho\|_{4}^{2}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{2r-1}{2}}t^{-r}\|\partial_{t }^{S-r-1}\rho\|_{4}^{2}\]
where we used (2.14) with indices \(2r-1\) and \(2r\) respectively in the last inequality above. Integrating in time, we get: \[\int_{t}^{T_{*}}\tilde{R}^{r}_{113}d\tau \leq C(\rho_{0},u_{0},g,k)t^{-\frac{2r-1}{2}}t^{-r}\int_{t}^{T_{*}}\|\partial_{t}^{S-r-1}\rho\|_{4}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{2r-1}{2}}t^{-r}t^{-(k-2r)}\] \[=C(\rho_{0},u_{0},g,k)t^{-(k-1/2)},\] where we used (2.15) with index \(k-2r\) above. In the case where \(r=0\), we instead estimate as follows using Agmon's inequality: \[\tilde{R}^{0}_{113} \leq\|u\|_{L^{\infty}}^{2}\|\nabla^{3}\partial_{t}^{S-1}\rho\|_{L^{2}}^{2}\] \[\lesssim\|u\|_{L^{2}}^{1/2}\|u\|_{2}^{3/2}\|\partial_{t}^{S-1}\rho\|_{3}^{2}\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{3}{4}}\|\partial_{t}^{S-1}\rho\|_{3}^{2}\] where we used (2.14) with indices \(0\) and \(1\). Integrating in time yields: \[\int_{t}^{T_{*}}\tilde{R}^{0}_{113}d\tau \leq C(\rho_{0},u_{0},g,k)t^{-\frac{3}{4}}\int_{t}^{T_{*}}\|\partial_{t}^{S-1}\rho\|_{3}^{2}d\tau\] \[\leq C(\rho_{0},u_{0},g,k)t^{-\frac{3}{4}}t^{-(k-1)}\] \[=C(\rho_{0},u_{0},g,k)t^{-(k-1/4)},\] where we used (2.15) with index \(k-1\) above. Collecting the estimates above yields \[t^{k-1/4}\int_{t}^{T_{*}}\|\nabla^{2}R^{r}_{11}\|_{L^{2}}^{2}d\tau\leq C(\rho_{0},u_{0},g,k).\] The proof is therefore completed.
From this point on, an argument similar to the odd case, combined with the three lemmas above, finishes the proof for the even case. We leave the details to the interested reader.
Finally, by combining Corollary 2.1 and Proposition 2.3 and using Sobolev embeddings, we infer the existence of a regular solution to (1.1).
### Uniqueness
In this section, we show the uniqueness of regular solutions to problem (1.1).
**Proposition 2.4**.: _Given initial data \(\rho_{0}\in H^{1}_{0}\), \(u_{0}\in V\), there exist a \(T_{*}>0\) depending only on \(\rho_{0}\), and a unique regular solution to problem (1.1) on \([0,T_{*}]\)._
Proof.: Assume \((\rho_{i},u_{i})\), \(i=1,2\), to be two regular solutions to problem (1.1) with initial condition \(\rho_{0},u_{0}\). Write \(r=\rho_{1}-\rho_{2}\), \(w=u_{1}-u_{2}\). A straightforward computation yields the following equations satisfied by \(r,w\):
\[\begin{cases}\partial_{t}r-\Delta r+u_{1}\cdot\nabla r+w\cdot\nabla\rho_{2}+ \operatorname{div}(r\nabla(-\Delta)^{-1}\rho_{1}-\rho_{2}\nabla(-\Delta)^{-1} r)=0,\\ \partial_{t}w+\mathcal{A}w=g\mathbb{P}(re_{z}),\end{cases}\]
with boundary conditions \(r|_{\partial\Omega}=0\), \(w|_{\partial\Omega}=0\) and zero initial condition. Testing the \(r\)-equation by \(r\), we obtain
\[\frac{1}{2}\frac{d}{dt}\|r\|_{L^{2}}^{2}+\|\nabla r\|_{L^{2}}^{2} =-\int_{\Omega}ru_{1}\cdot\nabla r-\int_{\Omega}r(w\cdot\nabla \rho_{2})+\int_{\Omega}r\nabla r\cdot\nabla(-\Delta)^{-1}\rho_{1}\] \[-\int_{\Omega}\rho_{2}\nabla r\cdot\nabla(-\Delta)^{-1}r=I_{1}+I _{2}+I_{3}+I_{4}.\]
Using incompressibility of \(u_{1}\), we immediately have \(I_{1}=0\) via integration by parts. Using Holder inequality and Sobolev embedding, we can estimate \(I_{2}\) by:
\[I_{2}\leq\|r\|_{L^{2}}\|w\|_{L^{6}}\|\nabla\rho_{2}\|_{L^{3}}\lesssim\|r\|_{L^ {2}}\|w\|_{1}\|\rho_{2}\|_{2}\leq\epsilon\|w\|_{1}^{2}+C(\epsilon)\|\rho_{2} \|_{2}^{2}\|r\|_{L^{2}}^{2}\]
for any \(\epsilon>0\). Using elliptic estimates, Sobolev embedding, and Gagliardo-Nirenberg-Sobolev inequalities, we may estimate \(I_{3}\) by:
\[I_{3} \leq\|\nabla r\|_{L^{2}}\|r\|_{L^{3}}\|\nabla(-\Delta)^{-1}\rho_{1}\|_{L^{6}}\lesssim\|\nabla r\|_{L^{2}}\|r\|_{L^{2}}^{1/2}\|\nabla r\|_{L^{2}}^{1/2}\|\rho_{1}\|_{L^{2}}\] \[\lesssim\|\rho_{1}\|_{L^{2}}\|\nabla r\|_{L^{2}}^{3/2}\|r\|_{L^{2}}^{1/2}\leq\epsilon\|\nabla r\|_{L^{2}}^{2}+C(\epsilon)\|\rho_{1}\|_{L^{2}}^{4}\|r\|_{L^{2}}^{2}.\]
Similarly, we can estimate \(I_{4}\) by
\[I_{4}\lesssim\|\rho_{2}\|_{L^{\infty}}\|r\|_{L^{2}}\|\nabla r\|_{L^{2}} \lesssim\|\rho_{2}\|_{2}\|r\|_{L^{2}}\|\nabla r\|_{L^{2}}\leq\epsilon\|\nabla r \|_{L^{2}}^{2}+C(\epsilon)\|\rho_{2}\|_{2}^{2}\|r\|_{L^{2}}^{2}.\]
On the other hand, we test the \(w\)-equation by \(w\):
\[\frac{1}{2}\frac{d}{dt}\|w\|_{L^{2}}^{2}+\|\nabla w\|_{L^{2}}^{2}=g\int_{ \Omega}w\cdot re_{z}\leq\frac{1}{2}\|w\|_{L^{2}}^{2}+\frac{g^{2}}{2}\|r\|_{L^{ 2}}^{2}.\]
Consider \(E(t):=\|w\|_{L^{2}}^{2}+\|r\|_{L^{2}}^{2}\). Collecting the estimates above and choosing \(\epsilon>0\) to be sufficiently small, we have the following inequality:
\[\frac{dE}{dt}\leq C(\|\rho_{2}\|_{2}^{2}+\|\rho_{1}\|_{L^{2}}^{4}+g^{2})E(t)=: Cf(t)E(t).\]
Note that as \((\rho_{i},u_{i})\) are regular solutions for \(i=1,2\), we have in particular that \(\rho_{1}\in C([0,T_{*}];V)\) and \(\rho_{2}\in L^{2}((0,T_{*});H^{2}\cap V)\). Hence \(f\in L^{1}(0,T_{*})\). Since \((r,w)\) has zero initial data, we have \(E(0)=0\). Then an application of Gronwall's inequality implies
\[E(t)=0,\;t\in[0,T_{*}],\]
and uniqueness is proved.
### Regularity Criterion
In this section, we aim to prove Theorem 1.2. We first need the following fact on the monotonicity of \(L^{1}\) norm of cell density \(\rho\):
**Lemma 2.7**.: _Assume \(\Omega\) to be a smooth domain in either \(\mathbb{R}^{2}\) or \(\mathbb{R}^{3}\). Let \((\rho,u)\) be a smooth solution to problem (1.1) on \([0,T]\). Suppose also that \(\rho_{0}\) is nonnegative. Then for any \(t\in[0,T]\), we have_
\[\frac{d}{dt}\|\rho(t)\|_{L^{1}}\leq 0.\]
Proof.: First, we note that by the parabolic maximum principle, we must have \(\rho(t,x)\geq 0\) in \([0,T]\times\Omega\). Using (1.1), we compute that
\[\frac{d}{dt}\|\rho(t,\cdot)\|_{L^{1}} =\frac{d}{dt}\int_{\Omega}\rho(t,x)dx=\int_{\Omega}\left(-u\cdot \nabla\rho+\Delta\rho-\operatorname{div}(\rho\nabla(-\Delta)^{-1}\rho)\right)dx\] \[=\int_{\Omega}\operatorname{div}(\nabla\rho-\rho\nabla(-\Delta)^ {-1}\rho)dx=\int_{\partial\Omega}\frac{\partial\rho}{\partial n}-\rho\frac{ \partial}{\partial n}(-\Delta)^{-1}\rho dS\] \[=\int_{\partial\Omega}\frac{\partial\rho}{\partial n}dS,\]
where \(\frac{\partial}{\partial n}\) denotes the outward normal derivative and \(dS\) denotes the surface measure. We also used the incompressibility of \(u\), the divergence theorem, and the Dirichlet boundary condition in the derivation above. In view of the parabolic maximum principle, we must have
\[\frac{\partial\rho}{\partial n}\big{|}_{\partial\Omega}\leq 0.\]
Hence, we conclude that
\[\frac{d}{dt}\|\rho(t,\cdot)\|_{L^{1}}\leq 0,\;t\in[0,T].\]
Now, we are ready to give a proof of the \(L^{2}\) regularity criterion:
Proof of Theorem 1.2.: Assume \((\rho,u)\) is a solution to (1.1) with smooth data \((\rho_{0},u_{0})\). Let \(T_{0}>0\) be its maximal lifespan.
1. \(d=2\). Suppose \(T_{0}<\infty\) and \[\lim_{t\nearrow T_{0}}\int_{0}^{t}\|\rho\|_{L^{2}}^{2}ds=M<\infty.\] First, we test the \(u\)-equation in (1.1) by \(\mathcal{A}u\), which yields: \[\frac{1}{2}\frac{d}{dt}\|\nabla u\|_{L^{2}}^{2}+\|\mathcal{A}u\|_{L^{2}}^{2}=g \int_{\Omega}\mathcal{A}u\cdot\rho e_{2}\leq\frac{1}{2}\|\mathcal{A}u\|_{L^{2 }}^{2}+\frac{g^{2}}{2}\|\rho\|_{L^{2}}^{2},\;t\in[0,T_{0}).\] Rearranging the above inequality, using Gronwall inequality, Theorem A.1 and the assumption, we obtain that \[\sup_{t\in[0,T_{0}]}\|u\|_{1}^{2}+\int_{0}^{T_{0}}\|u\|_{2}^{2}ds\leq\|u_{0}\|_ {1}^{2}+\frac{g^{2}M}{2}<\infty.\] (2.31) Testing \(\rho\)-equation by \(-\Delta\rho\), one obtains that \[\frac{1}{2}\frac{d}{dt}\|\nabla\rho\|_{L^{2}}^{2}+\|\Delta\rho\|_ {L^{2}}^{2} =\int_{\Omega}\Delta\rho u\cdot\nabla\rho-\int_{\Omega}\Delta\rho \rho^{2}+\int_{\Omega}\Delta\rho\nabla\rho\cdot\nabla(-\Delta)^{-1}\rho\] \[=:Q_{1}+Q_{2}+Q_{3}.\] Similarly to the estimate (2.9), we have for any \(\epsilon>0\) \[Q_{1} \leq\|\Delta\rho\|_{L^{2}}\|\nabla\rho\|_{L^{2}}\|u\|_{L^{\infty }}\leq\epsilon\|\Delta\rho\|_{L^{2}}^{2}+C(\epsilon)\|\nabla\rho\|_{L^{2}}^{2} \|u\|_{2}^{2},\] \[Q_{2} \leq\epsilon\|\Delta\rho\|_{L^{2}}^{2}+C(\epsilon)\|\rho\|_{L^{4} }^{4}\leq\epsilon\|\Delta\rho\|_{L^{2}}^{2}+C(\epsilon)\|\rho\|_{L^{2}}^{2}\| \nabla\rho\|_{L^{2}}^{2}.\]
The term that we have to treat differently is \(Q_{3}\). Using Holder inequality, Sobolev embedding, and an \(L^{p}\)-based elliptic estimate, we have: \[Q_{3} \leq\|\Delta\rho\|_{L^{2}}\|\nabla\rho\|_{L^{3}}\|\nabla(-\Delta)^{-1}\rho\|_{L^{6}}\lesssim\|\Delta\rho\|_{L^{2}}\|\nabla\rho\|_{L^{3}}\|\nabla(-\Delta)^{-1}\rho\|_{1,\frac{3}{2}}\] \[\lesssim\|\Delta\rho\|_{L^{2}}\|\nabla\rho\|_{L^{3}}\|\rho\|_{L^{3/2}}\lesssim\|\Delta\rho\|_{L^{2}}\|\rho\|_{L^{2}}^{1/3}\|\nabla^{2}\rho\|_{L^{2}}^{2/3}\|\rho\|_{L^{1}}^{2/3}\|\nabla\rho\|_{L^{2}}^{1/3}\] \[\lesssim\|\Delta\rho\|_{L^{2}}^{5/3}\|\rho\|_{L^{2}}^{1/3}\|\rho\|_{L^{1}}^{2/3}\|\nabla\rho\|_{L^{2}}^{1/3}\leq\epsilon\|\Delta\rho\|_{L^{2}}^{2}+C(\epsilon)\|\rho\|_{L^{2}}^{2}\|\rho\|_{L^{1}}^{4}\|\nabla\rho\|_{L^{2}}^{2},\] where we used the Gagliardo-Nirenberg-Sobolev inequalities \[\|f\|_{L^{3/2}}\leq C\|f\|_{L^{1}}^{2/3}\|\nabla f\|_{L^{2}}^{1/3},\;\|\nabla f\|_{L^{3}}\leq C\|f\|_{L^{2}}^{1/3}\|\nabla^{2}f\|_{L^{2}}^{2/3},\] in the fourth inequality, and Young's inequality in the last step. By Lemma 2.7, we know that for \(t\in[0,T_{0})\), \(\|\rho(t,\cdot)\|_{L^{1}}\leq\|\rho_{0}\|_{L^{1}}\). Then we have \[Q_{3}\leq\epsilon\|\Delta\rho\|_{L^{2}}^{2}+C(\rho_{0},\epsilon)\|\rho\|_{L^{2}}^{2}\|\nabla\rho\|_{L^{2}}^{2}.\] Choosing \(\epsilon>0\) sufficiently small and using the estimates of \(Q_{i}\) above, the \(\rho\)-estimate can be rearranged as: \[\frac{d}{dt}\|\nabla\rho\|_{L^{2}}^{2}+\|\Delta\rho\|_{L^{2}}^{2}\leq C(\rho_{0})(\|u\|_{2}^{2}+\|\rho\|_{L^{2}}^{2})\|\nabla\rho\|_{L^{2}}^{2}.\] (2.32) Using Gronwall inequality, we have: \[\sup_{0\leq t\leq T_{0}}\|\nabla\rho(t,\cdot)\|_{L^{2}}^{2}+\int_{0}^{T_{0}}\|\rho\|_{2}^{2}ds \lesssim\|\nabla\rho_{0}\|_{L^{2}}^{2}\exp\left(C(\rho_{0})\int_{0}^{T_{0}}(\|u\|_{2}^{2}+\|\rho\|_{L^{2}}^{2})ds\right)\] \[\leq C(\rho_{0},u_{0},M,g,T_{0}),\] where we used the assumption, (2.31), and elliptic estimate. But this implies that one can extend the solution \((\rho,u)\) beyond the supposed lifespan \(T_{0}\) by Theorem 1.1. This yields a contradiction.
2. \(d=3\). Suppose \(T_{0}<\infty\) and \[\lim_{t\nearrow T_{0}}\int_{0}^{t}\|\rho\|_{L^{2}}^{4}ds=M<\infty.\] Testing the \(u\)-equation in (1.1) by \(Au\) and deploying estimates similar to the \(d=2\) case, we have \[\sup_{t\in[0,T_{0}]}\|\nabla u\|_{L^{2}}^{2}+\int_{0}^{T_{0}}\|u\|_{2}^{2}ds \leq\|u_{0}\|_{1}^{2}+\frac{g^{2}\sqrt{MT_{0}}}{2}<\infty.\] A derivation identical to (2.9) yields: \[\frac{d}{dt}\|\nabla\rho\|_{L^{2}}^{2}+\|\Delta\rho\|_{L^{2}}^{2}\lesssim\left( \|\rho\|_{L^{2}}^{4}+\|u\|_{2}^{2}\right)\|\nabla\rho\|_{L^{2}}^{2}.\] Applying Gronwall inequality and combining the two estimates above, we have for \(t\in[0,T_{0}]\) that \[\|\nabla\rho(t,\cdot)\|_{L^{2}}^{2}+\int_{0}^{T_{0}}\|\rho\|_{2}^{2}ds \lesssim\|\rho_{0}\|_{1}^{2}\exp\left(C(\rho_{0})\int_{0}^{T_{0}}(\|\rho\|_{L^ {2}}^{4}+\|u\|_{2}^{2})ds\right)\leq C(\rho_{0},u_{0},M,g,T_{0}).\] And this contradicts the assumption that \(T_{0}\) is the maximal lifespan in view of Theorem 1.1. The proof is thus completed.
Proof of the Main Theorem: Suppression of Chemotactic Blowup
In this section, our goal is to prove Theorem 1.3, namely that (1.1) is globally regular in the regime of sufficiently large \(g\). In particular, we will see that the coupling of the Keller-Segel equation to the Stokes flow with a sufficiently robust buoyancy term is regularizing, in the sense that the solution \(\rho(t,x)\) approaches zero exponentially fast when \(g\) is sufficiently large. For the rest of the section, \(\Omega\) denotes any smooth, bounded domain in either 2D or 3D.
### Velocity Control
In this subsection, we remark on two controls on the velocity field \(u\) in (1.1) that will be instrumental in our main proof. The first lemma is in fact a standard \(H^{1}_{t,x}\) control of \(u\), which is hidden in our proof of energy estimate in Proposition 2.3. We give a brief derivation here for clarity.
**Lemma 3.1**.: _Let \((\rho,u)\) be a regular solution to problem (1.1) with initial data \(\rho_{0}\in H^{1}_{0}\), \(u_{0}\in V\). We have_
\[\|u\|^{2}_{H^{1}([0,T_{*}]\times\Omega)}\leq C(\rho_{0},u_{0})(g^{2}+1). \tag{3.1}\]
Proof.: In view of the estimate (2.5) in Proposition 2.1, it suffices to show that
\[\int_{0}^{T_{*}}\|\partial_{t}u(t)\|^{2}_{L^{2}}dt\leq C(\rho_{0},u_{0})(g^{2 }+1). \tag{3.2}\]
Testing the \(u\) equation in (1.1) by \(\partial_{t}u\), we have
\[\|\partial_{t}u\|^{2}_{L^{2}}+\frac{1}{2}\frac{d}{dt}\|\nabla u\|^{2}_{L^{2}}= g\int_{\Omega}\partial_{t}u\cdot(\rho e_{z})dx\leq\frac{1}{2}\|\partial_{t}u\|^{2}_ {L^{2}}+\frac{g^{2}}{2}\|\rho\|^{2}_{L^{2}},\]
where we used the incompressibility of \(u\) and the Cauchy-Schwarz inequality above. Rearranging, integrating in time, and using (2.4) we obtain
\[\int_{0}^{t}\|\partial_{t}u(s)\|^{2}_{L^{2}}ds+\|\nabla u(t)\|^{2 }_{L^{2}} \leq g^{2}\int_{0}^{t}\|\rho(s)\|^{2}_{L^{2}}ds+\|\nabla u_{0}\|^ {2}_{L^{2}}\] \[\leq g^{2}(2T_{*}\|\rho_{0}\|^{2}_{L^{2}})+\|u_{0}\|^{2}_{1}\] \[\leq C(\rho_{0},u_{0})(g^{2}+1).\]
By taking the supremum over \(t\in[0,T_{*}]\), we arrive at the estimate (3.2).
The following lemma yields a key additional control over the velocity field by genuinely exploiting the buoyancy forcing structure of the fluid equation in (1.1):
**Lemma 3.2**.: _Let \((\rho,u)\) be a regular solution to problem (1.1) with initial data \(\rho_{0}\in H^{1}_{0}\), \(u_{0}\in V\). Then_
\[\int_{0}^{T_{*}}\|u(t)\|^{2}_{L^{2}}dt\leq C(\Omega,\rho_{0},u_{0})(g+1). \tag{3.3}\]
**Remark 3.1**.: _Note that a straightforward \(L^{2}\) estimate of \(u\) only yields a bound \(\int_{0}^{T_{*}}\|u(t)\|^{2}_{L^{2}}dt\lesssim g^{2}\). What we display in the lemma is that the structure of buoyancy forcing "gains a \(g^{-1}\)"._
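Indeed, testing the \(u\)-equation of (1.1) by \(u\) and using the Cauchy-Schwarz, Poincare, and Young inequalities gives \[\frac{d}{dt}\|u\|_{L^{2}}^{2}+\|\nabla u\|_{L^{2}}^{2}\lesssim g^{2}\|\rho\|_{L^{2}}^{2},\] which, after integration in time, the Poincare inequality, and (2.4), only yields \(\int_{0}^{T_{*}}\|u(t)\|_{L^{2}}^{2}dt\leq C(\Omega,\rho_{0},u_{0})(g^{2}+1)\). The buoyancy structure exploited in the proof below improves this to the bound (3.3), which is linear in \(g\).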
Proof.: Without loss of generality, assume that \(\Omega\) contains the origin. Denote \(L:=\operatorname{diam}(\Omega)>0\). Multiplying the \(\rho\)-equation of (1.1) by \(z-L\) (recall that \(z=x_{d}\) when \(\Omega\subset\mathbb{R}^{d}\), \(d=2,3\)) and integrating over \(\Omega\), we have
\[\frac{d}{dt}\int_{\Omega}(z-L)\rho dx+\int_{\Omega}(z-L)(u\cdot\nabla\rho)dx- \int_{\Omega}(z-L)\Delta\rho dx+\int_{\Omega}(z-L)\operatorname{div}(\rho \nabla(-\Delta)^{-1}\rho)dx=0.\]
Moreover using the Dirichlet conditions \(\rho|_{\partial\Omega}=0\) and \(u|_{\partial\Omega}=0\), we note that via integration by parts:
\[\int_{\Omega}(z-L)(u\cdot\nabla\rho)dx=-\int_{\Omega}\rho u_{z}dx+\int_{\partial\Omega}(z-L)\rho u_{n}dS=-\int_{\Omega}\rho u_{z}dx,\] \[-\int_{\Omega}(z-L)\Delta\rho dx=\int_{\Omega}\partial_{z}\rho dx-\int_{\partial\Omega}(z-L)\frac{\partial\rho}{\partial n}dS,\] \[\int_{\Omega}(z-L)\operatorname{div}(\rho\nabla(-\Delta)^{-1}\rho)dx=-\int_{\Omega}\rho\partial_{z}(-\Delta)^{-1}\rho dx,\]
where \(u_{n}\) denotes the normal component of \(u\) along \(\partial\Omega\), and \(dS\) denotes the surface measure induced on \(\partial\Omega\). Collecting the above computations, we have
\[\int_{\Omega}\rho u_{z}dx=\frac{d}{dt}\int_{\Omega}(z-L)\rho dx+\int_{\Omega} \partial_{z}\rho dx-\int_{\partial\Omega}(z-L)\frac{\partial\rho}{\partial n }dS-\int_{\Omega}\rho\partial_{z}(-\Delta)^{-1}\rho dx. \tag{3.4}\]
On the other hand, testing the \(u\)-equation of (1.1) by \(u\), we also have
\[\frac{1}{2}\frac{d}{dt}\|u\|_{L^{2}}^{2}+\|\nabla u\|_{L^{2}}^{2}=g\int_{ \Omega}\rho u_{z}. \tag{3.5}\]
From Lemma 2.7, we also know that \(\partial\rho/\partial n\leq 0\) on \(\partial\Omega\) in \([0,T_{*}]\). Hence, we have \(\int_{\partial\Omega}(z-L)\frac{\partial\rho}{\partial n}dS\geq 0\) by definition of \(L\). Combining this fact with (3.4), (3.5), and integrating on \([0,T_{*}]\), we have
\[\|u(t)\|_{L^{2}}^{2}-\|u_{0}\|_{L^{2}}^{2} \leq 2g\bigg{[}\int_{\Omega}(z-L)(\rho(t,x)-\rho_{0}(x))\,dx+\int _{0}^{t}\int_{\Omega}\partial_{z}\rho\,dx-\int_{0}^{t}\int_{\Omega}\rho \partial_{z}(-\Delta)^{-1}\rho\,dx\bigg{]}\] \[\leq C(\Omega)g(\|\rho_{0}\|_{L^{1}}+\sqrt{T_{*}}\left(\int_{0}^{ T_{*}}\|\nabla\rho\|_{L^{2}}^{2}dt\right)^{1/2}+\int_{0}^{T_{*}}\|\rho\|_{L^{2}}^{2}dt)\] \[\leq C(\Omega,\rho_{0})g,\]
where we used elliptic estimate in the second inequality, and (2.4) in the final inequality. The proof is therefore completed after integrating in time again.
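(Concretely, the last integration reads: the pointwise bound \(\|u(t)\|_{L^{2}}^{2}\leq\|u_{0}\|_{L^{2}}^{2}+C(\Omega,\rho_{0})g\) holds for every \(t\in[0,T_{*}]\), so integrating over \([0,T_{*}]\) and recalling that \(T_{*}\) depends only on \(\rho_{0}\) gives \(\int_{0}^{T_{*}}\|u(t)\|_{L^{2}}^{2}dt\leq T_{*}\big{(}\|u_{0}\|_{L^{2}}^{2}+C(\Omega,\rho_{0})g\big{)}\leq C(\Omega,\rho_{0},u_{0})(g+1)\).)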
### A Key Theorem
In this part, we prove a quantitative characterization of the regularizing effect of the Stokes-Boussinesq flow in (1.1). With a rigidity-type argument inspired by [7], we show that the flow with sufficiently large \(g\) can suppress the \(L^{2}\) energy of \(\rho\) to be arbitrarily small within the time scale of local existence, as elucidated in the following theorem:
**Theorem 3.1**.: _Let \(\rho_{0}\in H^{1}_{0},u_{0}\in V\) be initial conditions for the problem (1.1), and consider \((\rho,u)\) to be the regular solution. For arbitrary \(\epsilon>0\), there exists \(g_{*}=g_{*}(\rho_{0},u_{0},\epsilon)\) such that for any \(g\geq g_{*}\),_
\[\inf_{t\in[0,T_{*}]}\|\rho(t,\cdot)\|_{L^{2}}\leq\epsilon.\]
Proof.: Suppose for the sake of contradiction that there exists \(\epsilon_{0}>0\) such that there is a sequence \(\{(\rho_{n},u_{n},g_{n})\}_{n}\) which are regular solutions to (1.1) with \(\rho=\rho_{n},u=u_{n},g=g_{n}\) and \(g_{n}\to+\infty\) (corresponding to initial data \(\rho_{0},u_{0}\)). Indeed, we may without loss of generality assume that the sequence \(\{g_{n}\}_{n}\) is increasing by picking a subsequence if necessary. Also, for any \(t\in[0,T_{*}]\), and for all \(n\)
\[\|\rho_{n}(t,\cdot)\|_{L^{2}}>\epsilon_{0}. \tag{3.6}\]
Note that indeed we can use the uniform choice of time \(T_{*}\) here, since \(T_{*}\) only depends on \(\rho_{0}\). Moreover, we consider the normalized velocity \(\bar{u}_{n}=u_{n}/g_{n}\). We will divide the proof into the following steps:
* **Step 1: Convergence properties of \((\rho_{n},u_{n})\).** From (3.1), we have \(\|\bar{u}_{n}\|_{H^{1}([0,T_{*}]\times\Omega)}\leq C(\rho_{0},u_{0})\). Using weak compactness and the Sobolev compact embedding theorem, we obtain that there exists \(\bar{u}_{\infty}\in H^{1}([0,T_{*}]\times\Omega)\) such that \[\bar{u}_{n}\rightharpoonup\bar{u}_{\infty}\ \ \mbox{in}\ H^{1}([0,T_{*}]\times \Omega),\ \ \mbox{and}\ \ \bar{u}_{n}\to\bar{u}_{\infty}\ \ \mbox{in}\ L^{2}([0,T_{*}]\times \Omega).\] In fact, observe that from the estimate (3.3) of Lemma 3.2 it follows that \(\|\bar{u}_{n}\|_{L^{2}([0,T_{*}]\times\Omega)}\to 0\) as \(n\to\infty\), so \(\bar{u}_{\infty}=0.\) In addition, from the energy estimate (2.4), we may pick a further subsequence, still indexed by \(n\), such that there exists \(\rho_{\infty}\in L^{2}(0,T_{*};H^{1}_{0}(\Omega))\) and \[\rho_{n}\rightharpoonup\rho_{\infty}\ \ \mbox{in}\ L^{2}(0,T_{*};H^{1}_{0}(\Omega)).\]
* **Step 2: Derivation of the limiting fluid equation.** Since \((\rho_{n},u_{n})\) is a regular solution to (1.1) with parameter \(g_{n}\) on \([0,T_{*}]\), \(u_{n}\) in particular solves the fluid equation in (1.1) weakly. That is, \[-\int_{0}^{T_{*}}\int_{\Omega}(\partial_{t}\phi)\bar{u}_{n}dxdt+\int_{0}^{T_{* }}\int_{\Omega}(\mathcal{A}\phi)\bar{u}_{n}dxdt=\int_{0}^{T_{*}}\int_{\Omega} \rho_{n}(\phi\cdot e_{z})dxdt,\] for any smooth vector field \(\phi\in C_{c}^{\infty}([0,T_{*}]\times\Omega)\) with \(\mbox{div}\,\phi=0\). By the convergence properties of \(\rho_{n}\), \(u_{n}\) as shown in Step 1, and by Lemma 3.2 we find that \[\rho_{\infty}e_{z}=\nabla p_{\infty},\ \left(t,x\right)\in[0,T_{*}]\times\Omega\] (3.7) holds in a weak sense.
* **Step 3: Nontriviality of \(\rho_{\infty}\).** By maximum principle, we know that \(\rho_{n}\), and thus \(\rho_{\infty}\), is nonnegative. We would also like to claim that \(\rho_{\infty}\not\equiv 0\). To show this fact, we need the following proposition. **Proposition 3.1**.: _Let \(\Omega\subset\mathbb{R}^{d}\), \(d=2,3\), be a smooth, bounded domain. Assume \((\rho,u)\) to be the regular solution of problem (1.1) on \([0,T_{*}]\) with initial condition \(\rho_{0}\geq 0\in H^{1}_{0},\,u_{0}\in V\). If there exists \(M>0\) such that \(\sup_{0\leq t\leq T_{*}}\|\rho(t)\|_{L^{2}}\leq M\), then we have_ \[\sup_{0\leq t\leq T_{*}}\|\rho(t)\|_{L^{\infty}}\leq CM^{\frac{4}{4-d}}.\] _Here \(C\) is a constant that may only depend on \(d\) and \(\Omega\)._ A variant of this result has been proved in [19] (Proposition 9.1), in a two dimensional periodic setting. The proof of Proposition 3.1 is similar and for the sake of completeness will be provided in the appendix. Next, we need the following lemma.
**Lemma 3.3**.: _Let \(D\subset\mathbb{R}^{d}\), \(d\in\mathbb{N}\), be a bounded domain, and let \(\{f_{n}\}_{n}\subset L^{2}(D)\) be a sequence of nonnegative functions that weakly converges to a function \(f\in L^{2}(D)\). Assume that there exist \(M,\epsilon>0\) such that \(\|f_{n}\|_{L^{2}}>\epsilon\), \(\|f_{n}\|_{L^{\infty}}\leq M\) for all \(n\). Then \(f\not\equiv 0\)._
Proof.: Suppose for the sake of contradiction that \(f\equiv 0\). Consider the characteristic function \(\phi=\chi_{D}\). Since \(D\) is bounded, \(\phi\in L^{2}(D)\). Then the weak convergence informs us that
\[\lim_{n\to\infty}\int_{D}f_{n}=0.\]
As \(f_{n}\geq 0\) for all \(n\), this is equivalent to \(\lim_{n\to\infty}\|f_{n}\|_{L^{1}}=0\). Since \(\|f_{n}\|_{L^{\infty}}\leq M\), by interpolation we have
\[\|f_{n}\|_{L^{2}}^{2}\leq\|f_{n}\|_{L^{\infty}}\|f_{n}\|_{L^{1}}\to 0\]
as \(n\to\infty\). But this contradicts the assumption that \(\|f_{n}\|_{L^{2}}>\epsilon\).
Observe that from (2.4), we know that \(\|\rho_{n}(t,\cdot)\|_{L^{2}}\leq 4\|\rho_{0}\|_{L^{2}}\) for all \(t\in[0,T_{*}]\) and all \(n\). Thus applying Proposition 3.1 to \(\rho_{n}\) we get that \(\|\rho_{n}(t,\cdot)\|_{L^{\infty}}\leq M\) for all \(t\in[0,T_{*}]\), and all \(n\), where \(M=C(d,\Omega)\|\rho_{0}\|_{L^{2}}^{\frac{4}{4-d}}.\) Then Lemma 3.3 implies that \(\rho_{\infty}\not\equiv 0\).
* **Step 4: Derivation of a contradiction.** Let us consider
\[\psi_{n}(x):=\int_{0}^{T_{*}}\rho_{n}(t,x)dt,\;\psi_{\infty}(x):=\int_{0}^{T_ {*}}\rho_{\infty}(t,x)dt.\]
In particular, \(\psi_{\infty}\not\equiv 0\) and \(\psi_{\infty}\geq 0\) by Step 3. On one hand, picking arbitrary \(\eta\in L^{2}(\Omega)\), we have
\[\bigg{|}\int_{\Omega}\eta(x)(\psi_{n}(x)-\psi_{\infty}(x))dx\bigg{|} =\bigg{|}\int_{0}^{T_{*}}\int_{\Omega}\eta(x)(\rho_{n}(t,x)-\rho_ {\infty}(t,x))dxdt\bigg{|}\] \[=\bigg{|}\int_{0}^{T_{*}}\int_{\Omega}\eta(x)\chi_{[0,T_{*}]}(t)( \rho_{n}(t,x)-\rho_{\infty}(t,x))dxdt\bigg{|},\]
which converges to \(0\) as \(\rho_{n}\rightharpoonup\rho_{\infty}\) in \(L^{2}([0,T_{*}]\times\Omega)\). This implies that \(\psi_{n}\rightharpoonup\psi_{\infty}\) in \(L^{2}(\Omega)\). On the other hand, we note that by Minkowski inequality and Holder inequality,
\[\|\nabla\psi_{n}\|_{L^{2}}\leq\int_{0}^{T_{*}}\|\nabla\rho_{n}\|_{L^{2}}dt \leq\sqrt{T_{*}}\|\nabla\rho_{n}\|_{L^{2}([0,T_{*}]\times\Omega)}\leq C(\rho_ {0}),\]
where we used (2.4) in the last step. Since \(\rho_{n}|_{\partial\Omega}=0\), we know that \(\psi_{n}\in H^{1}_{0}(\Omega)\) with a uniform \(H^{1}\)-norm bound from above. Hence by weak compactness and Sobolev compact embedding theorem, there exists a subsequence, still denoted by \(\psi_{n}\), and \(\tilde{\psi}_{\infty}\in H^{1}_{0}(\Omega)\) such that
\[\psi_{n}\rightharpoonup\tilde{\psi}_{\infty}\;\;\text{in}\;H^{1}_{0}(\Omega), \;\psi_{n}\to\tilde{\psi}_{\infty}\;\;\text{in}\;L^{2}(\Omega).\]
Indeed, we must have \(\tilde{\psi}_{\infty}=\psi_{\infty}\) due to the uniqueness of weak limit, and hence \(\psi_{\infty}\in H^{1}_{0}(\Omega)\). But now, integrating (3.7) with respect to time, we have
\[\nabla P=\psi_{\infty}e_{z},\]
where \(P(x):=\int_{0}^{T_{*}}p_{\infty}(t,x)dt\). But this implies that \(\psi_{\infty}(x)=h(z)\), where \(h\) is some single-variable function. Moreover, we know that \(\psi_{\infty}\in H^{1}_{0}(\Omega)\). These two facts imply that \(\psi_{\infty}\equiv 0\). However, this contradicts the fact that \(\psi_{\infty}\not\equiv 0\) established in Step 3. This completes the proof of the theorem.
### Proof of Global Well-Posedness with Large \(g\)
Now we are ready to prove the main theorem. As we will see below, \(\|\rho\|_{L^{2}}\) enjoys a Riccati-type differential inequality which preserves smallness. This structure combined with Theorem 3.1 gives the boundedness of \(\|\rho(t)\|_{L^{2}}^{2}\) globally in time (and actually smallness in large time). The proof is thus done after we invoke Theorem 1.2.
Proof of Theorem 1.3.: Say the solution \((\rho,u)\) is regular up to a maximal time \(T_{0},\) which may or may not be infinite. Suppose first that \(T_{0}<\infty\). Similarly to the proof of Proposition 2.1, using the energy estimate of \(\rho,\) a Gagliardo-Nirenberg-Sobolev inequality, Young's inequality and Poincare inequality, we have for \(t\in(0,T_{0})\) that
\[\frac{1}{2}\frac{d}{dt}\|\rho\|_{L^{2}}^{2} \leq-\|\nabla\rho\|_{L^{2}}^{2}+\frac{1}{2}\|\nabla\rho\|_{L^{2}} ^{2}+C\|\rho\|_{L^{2}}^{\frac{12-2d}{4-d}}\] \[\leq-\frac{1}{2C_{p}}\|\rho\|_{L^{2}}^{2}+C\|\rho\|_{L^{2}}^{ \frac{12-2d}{4-d}}=:f_{d}(\|\rho\|_{L^{2}}^{2}), \tag{3.8}\]
where \(C_{p}\) denotes the Poincare constant that only depends on domain \(\Omega\). Since \(2<\frac{12-2d}{4-d}\) when \(d=2,3\), we fix \(\epsilon\in(0,1)\) sufficiently small that \(f_{d}(\epsilon)<-\frac{1}{4C_{p}}\epsilon\). Note that such choice of \(\epsilon\) only depends on domain \(\Omega\). By Theorem 3.1, there exists \(g_{*}=g_{*}(\rho_{0},u_{0})\) such that there exists \(\tau\in[0,T_{*}]\) with \(\|\rho(\tau)\|_{L^{2}}^{2}\leq\epsilon\) for any \(g\geq g_{*}\). Now we consider the problem (1.1) starting from \(t=\tau\). Then from the inequalities above, we note that \(\frac{d}{dt}\|\rho(t,\cdot)\|_{L^{2}}^{2}|_{t=\tau}<0;\) by (3.8) this inequality also holds for all \(t\in[\tau,T_{0}].\) Hence, there exists \(M>0\) depending on \(\rho_{0}\) such that \(\sup_{t\in[0,T_{0}]}\|\rho(t,\cdot)\|_{L^{2}}^{2}\leq M,\) which yields
\[\int_{0}^{T_{0}}\|\rho(t,\cdot)\|_{L^{2}}^{\frac{4}{4-d}}dt\leq M^{\frac{2}{4 -d}}T_{0}<\infty.\]
By the regularity criterion, this contradicts the definition of \(T_{0}.\) Therefore, we conclude that \(T_{0}=\infty\) and the solution is globally regular. To prove (1.3), we note from above that \(\sup_{t\geq\tau}\|\rho(t,\cdot)\|_{L^{2}}^{2}\leq\epsilon.\) In fact, by our choice of \(\epsilon\) and (3.8), we have
\[\frac{d}{dt}\|\rho\|_{L^{2}}^{2}\leq-\frac{1}{4C_{p}}\|\rho\|_{L^{2}}^{2},\;t \geq\tau.\]
Using Gronwall inequality, \(\|\rho(\tau)\|_{L^{2}}^{2}\leq\epsilon<1,\) and \(\tau\leq T_{*}\leq 1,\) we have: for \(t\geq\tau,\)
\[\|\rho(t)\|_{L^{2}}^{2} \leq\|\rho(\tau)\|_{L^{2}}^{2}e^{-\frac{1}{4C_{p}}(t-\tau)}\leq e ^{-\frac{1}{4C_{p}}t}e^{\frac{1}{4C_{p}}\tau}\] \[\leq Ce^{-\frac{1}{4C_{p}}t},\]
where \(C=e^{1/4C_{p}}\) is a constant that only depends on domain \(\Omega\). This yields (1.3) after rearranging the inequality above.
## Appendix A Appendix
In the appendix, we first remark on a regularity estimate for the Stokes operator \(\mathcal{A}\) that plays an essential role in our energy estimates. This is followed by a proof of Proposition 3.1, which appears in the proof of the main lemma.
The regularity result for the Stokes operator stated below is standard; proofs can be found for example in [6]:
**Theorem A.1**.: _Let \(\Omega\) be a bounded \(C^{2}\) domain. Then there exists a constant \(C=C(\Omega)\) such that for all \(u\in D(\mathcal{A})=H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\),_
\[\|u\|_{2}\leq C(\Omega)\|\mathcal{A}u\|_{L^{2}}.\]
_Moreover, there exist constants \(c_{0},C_{0}\) only depending on domain \(\Omega\) such that_
\[c_{0}\|\nabla u\|_{L^{2}}\leq\|\mathcal{A}^{1/2}u\|_{L^{2}}\leq C_{0}\|\nabla u \|_{L^{2}}\]
We will now give a proof for Proposition 3.1; its statement is reiterated below.
**Proposition A.1**.: _Let \(\Omega\subset\mathbb{R}^{d}\), \(d=2,3\), be a smooth, bounded domain. Assume \(\rho,u\) to be the regular solution of problem (1.1) on \([0,T]\) with initial condition \(\rho_{0}\in H^{1}_{0}\) and \(\rho_{0}\geq 0\). If there exists \(M>0\) such that \(\sup_{0\leq t\leq T}\|\rho(t)\|_{L^{2}}\leq M\), we then have_
\[\sup_{0\leq t\leq T}\|\rho(t)\|_{L^{\infty}}\leq CM^{\frac{4}{4-d}},\]
_where \(C\) is a constant only depending on domain \(\Omega\)._
Proof.: In the proof, we shall suppress the variable \(t\). Let \(p\geq 1\) be an integer. We start with the following computation using (1.1):
\[\frac{d}{dt}\|\rho\|_{L^{2p}}^{2p}=2p\int_{\Omega}\rho^{2p-1}(-(u\cdot\nabla) \rho-\operatorname{div}(\rho\nabla(-\Delta)^{-1}\rho)+\Delta\rho)\,dx=2p(I+J+ K).\]
Using incompressibility of \(u\), we can compute that
\[I=-\int_{\Omega}\rho^{2p-1}(u\cdot\nabla)\rho\,dx=-\frac{1}{2p}\int_{\Omega} u_{j}\partial_{j}\rho^{2p}=0.\]
Integrating by parts, we have
\[J=(2p-1)\int_{\Omega}\rho^{2p-1}\partial_{j}\rho\partial_{j}(-\Delta)^{-1} \rho dx=\frac{2p-1}{2p}\int_{\Omega}\partial_{j}(\rho^{2p})\partial_{j}(- \Delta)^{-1}\rho dx=\frac{2p-1}{2p}\int_{\Omega}\rho^{2p+1}dx\]
Using chain rule, we also have
\[K=-(2p-1)\int_{\Omega}\rho^{2p-2}\partial_{j}\rho\partial_{j}\rho dx=-\frac{2 p-1}{p^{2}}\int_{\Omega}|\nabla\rho^{p}|^{2}dx.\]
Collecting all computations above, we observe that
\[\frac{d}{dt}\|\rho\|_{L^{2p}}^{2p}=(2p-1)\|\rho\|_{L^{2p+1}}^{2p+1}-\left(4- \frac{2}{p}\right)\|\nabla\rho^{p}\|_{L^{2}}^{2}.\] (A.1)
Now we shall estimate \(\|\rho\|_{L^{2^{n}}}\) inductively on \(n\). The base case \(n=1\) is dealt with by our assumption. Assume for \(t\in[0,T]\) we have the bound
\[\|\rho\|_{L^{2^{n}}}\leq B_{n},\quad B_{n}\geq 1\]
for any \(t\in[0,T]\). Defining \(f=\rho^{2^{n}}\) and applying \(p=2^{n}\) in (A.1), we obtain that
\[\frac{d}{dt}\int_{\Omega}f^{2}dx\leq-2\|\nabla f\|_{L^{2}}^{2}+2^{n+1}\|f\|_{L ^{2+2^{-n}}}^{2+2^{-n}}.\] (A.2)
Applying a Gagliardo-Nirenberg-Sobolev inequality (see [1], for example), we can estimate using Young's inequality that
\[\|f\|_{L^{2+2^{-n}}}^{2+2^{-n}}\lesssim\|\nabla f\|_{L^{2}}^{d2^{-n-1}}\|f\|_{L^{2}}^{2+2^{-n}-d2^{-n-1}}\leq d2^{-n-2}\|\nabla f\|_{L^{2}}^{2}+C\|f\|_{L^{2}}^{\frac{2+2^{-n}-d2^{-n-1}}{1-d2^{-n-2}}},\] (A.3) \[\|f\|_{L^{2}}\lesssim\|\nabla f\|_{L^{2}}^{\frac{d}{d+2}}\|f\|_{L^{1}}^{\frac{2}{d+2}}.\] (A.4)
The constants in the above inequalities do not depend on \(n\). Plugging (A.3), (A.4) to (A.2), we obtain
\[\frac{d}{dt}\int_{\Omega}f^{2}dx \leq-2\|\nabla f\|_{L^{2}}^{2}+\frac{d}{2}\|\nabla f\|_{L^{2}}^{2 }+C_{2}2^{n+1}\|f\|_{L^{2}}^{\frac{2+2^{-n}-d2^{-n-1}}{1-d2^{-n-2}}}\] \[\leq-C_{1}\|f\|_{L^{2}}^{\frac{2d+4}{d}}\|f\|_{L^{1}}^{-\frac{4} {d}}+C_{2}2^{n+1}\|f\|_{L^{2}}^{\frac{2+2^{-n}-d2^{-n-1}}{1-d2^{-n-2}}},\] (A.5)
where \(C_{1}\), \(C_{2}\) are constants only depending on \(d\). Note that given \(d=2,3\), we have \(\frac{2+2^{-n}-d2^{-n-1}}{1-d2^{-n-2}}<\frac{2d+4}{d}\) for \(n\geq 1\). Moreover, observe that
\[\|f\|_{L^{1}}\leq B_{n}^{2^{n}}<\infty.\]
Then for each \(n\geq 1\), the right hand side of (A.5) becomes negative when \(\|f\|_{L^{2}}\) is sufficiently large. In particular, one can compute that \(\|\rho\|_{L^{2^{n+1}}}\) will never reach the value \(B_{n+1}\), where \(B_{n+1}\) is defined by the following recursive relation:
\[\log B_{n+1}=\frac{2^{n+2}-d}{2^{n+2}-2d}\log B_{n}+\frac{d}{2^{n}}\left[\log C +(n+1)\log 2\right],\]
where \(C\) is a constant independent of \(n\). Note that we have
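For the reader's convenience, we record the computation behind this recursion; here we write \(\alpha_{n}:=\frac{2+2^{-n}-d2^{-n-1}}{1-d2^{-n-2}}\) for the exponent appearing in (A.5), and we track constants only up to factors independent of \(n\). The right hand side of (A.5) is negative whenever \(C_{1}\|f\|_{L^{2}}^{\frac{2d+4}{d}}\|f\|_{L^{1}}^{-\frac{4}{d}}>C_{2}2^{n+1}\|f\|_{L^{2}}^{\alpha_{n}}\), and since \(\|f\|_{L^{1}}\leq B_{n}^{2^{n}}\), this holds as soon as \[\|f\|_{L^{2}}^{\frac{2d+4}{d}-\alpha_{n}}>\frac{C_{2}}{C_{1}}2^{n+1}B_{n}^{2^{n+2}/d}.\] Hence \(\|f\|_{L^{2}}=\|\rho\|_{L^{2^{n+1}}}^{2^{n}}\) cannot grow past this threshold once it lies below it. Taking logarithms, dividing by \(2^{n}\), and using the identity \(\frac{2d+4}{d}-\alpha_{n}=\frac{8(2^{n+1}-d)}{d(2^{n+2}-d)}\) turns the threshold into the recursion for \(\log B_{n+1}\) displayed above, up to adjusting the \(n\)-independent constant \(C\).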
\[\prod_{j=1}^{n}\frac{2^{j+2}-d}{2^{j+2}-2d}=\frac{4-d2^{-n}}{4-d}\to\frac{4}{4-d}\]
as \(n\to\infty\), where in the first equality we used the telescoping nature of the product. Then via an inductive argument, there exists some dimensional constant \(C>0\) such that for all \(n\geq 1\),
\[B_{n}\leq CM^{\frac{4}{4-d}}.\]
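(Explicitly, the telescoping step used above reads \[\prod_{j=1}^{n}\frac{2^{j+2}-d}{2^{j+2}-2d}=\frac{1}{2^{n}}\prod_{j=1}^{n}\frac{2^{j+2}-d}{2^{j+1}-d}=\frac{1}{2^{n}}\cdot\frac{2^{n+2}-d}{2^{2}-d}=\frac{4-d2^{-n}}{4-d},\] since \(2^{j+2}-2d=2(2^{j+1}-d)\) and consecutive factors cancel.)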
As \(\Omega\) is bounded, we have
\[\|\rho\|_{L^{\infty}}=\lim_{n\to\infty}\|\rho\|_{L^{2^{n}}}\leq CM^{\frac{4}{4 -d}},\]
and the proof of the proposition is complete.
|
2310.00058 | Kinetic relaxation and nucleation of Bose stars in self-interacting wave
dark matter | We revisit kinetic relaxation and soliton/Boson star nucleation in fuzzy
scalar dark matter featuring short-ranged self-interactions $\mathcal{H}_{\rm
int} = -\lambda|\psi|^4/2m^2$, alongside gravitational self-interactions. We
map out the full curve of nucleation timescale for both repulsive ($\lambda <
0$) and attractive ($\lambda > 0$) short-ranged self-interaction strength, and
in doing so reveal two new points. Firstly, besides the two usual terms,
$\propto G^2$ and $\propto \lambda^2$, in the total relaxation rate
$\Gamma_{\rm relax}$, there is an additional cross term $\propto G\lambda$
arising due to interference between gravitational and short-ranged
self-interaction scattering amplitudes. This yields a critical repulsive
interaction strength $\lambda_{\rm cr} \simeq - 2\pi Gm^2/v_{0}^2$, at which
the relaxation rate is smallest and serves as the transition point between
typical net attractive self-interaction ($\lambda \gtrsim \lambda_{\rm cr}$),
and net repulsive self-interaction ($-\lambda \gtrsim -\lambda_{\rm cr}$).
Secondly, while in the net attractive regime, nucleation time scale is similar
to inverse relaxation time scale $\tau_{\rm nuc} \sim \Gamma^{-1}_{\rm relax}$,
in the net repulsive regime nucleation occurs at a delayed time $\tau_{\rm nuc}
\sim (\lambda/\lambda_{\rm cr})\Gamma^{-1}_{\rm relax}$. We confirm our
analytical understanding by performing 3D field simulations with varying
average mass density $\bar{\rho}$, box size $L$ and grid size $N$. | Mudit Jain, Wisha Wanichwecharungruang, Jonathan Thomas | 2023-09-29T18:02:03Z | http://arxiv.org/abs/2310.00058v1 | # Kinetic relaxation and nucleation of Bose stars in self-interacting wave dark matter
###### Abstract
We revisit kinetic relaxation and soliton/Boson star nucleation in fuzzy scalar dark matter featuring short-ranged self-interactions \(\mathcal{H}_{\rm int}=-\lambda|\psi|^{4}/2m^{2}\), alongside gravitational self-interactions. We map out the full curve of nucleation timescale for both repulsive (\(\lambda<0\)) and attractive (\(\lambda>0\)) short-ranged self-interaction strength, and in doing so reveal two new points. Firstly, besides the two usual terms, \(\propto G^{2}\) and \(\propto\lambda^{2}\), in the total relaxation rate \(\Gamma_{\rm relax}\), there is an additional cross term \(\propto G\lambda\) arising due to interference between gravitational and short-ranged self-interaction scattering amplitudes. This yields a critical repulsive interaction strength \(\lambda_{cr}\simeq-2\pi Gm^{2}/v_{0}^{2}\), at which the relaxation rate is smallest and serves as the transition point between typical attractive self-interaction (\(\lambda\gtrsim\lambda_{cr}\)), and net repulsive self-interaction (\(-\lambda\gtrsim-\lambda_{cr}\)). Secondly, while in the net attractive regime, nucleation time scale is similar to inverse relaxation time scale \(\tau_{\rm nuc}\sim\Gamma_{\rm relax}^{-1}\), in the net repulsive regime nucleation occurs at a delayed time \(\tau_{\rm nuc}\sim(\lambda/\lambda_{cr})\Gamma_{\rm relax}^{-1}\). We confirm our analytical understanding by performing 3D field simulations with varying average mass density \(\bar{\rho}\), box size \(L\) and grid size \(N\).
###### Contents
* I Introduction
* II Model
* III Wave kinetics and relaxation
* IV Nucleation and behavior of solitons
* IV.1 Nucleation
* IV.2 Eventual behavior
* IV.2.1 Attractive short-ranged interactions
* IV.2.2 Repulsive short-ranged interactions \(\lambda<0\)
* V Field simulations
* V.1 Net attractive interactions (\(\lambda\gtrsim\lambda_{\rm cr}\)).
* V.2 Net repulsive interactions (\(-\lambda\gtrsim-\lambda_{\rm cr}\))
* VI Summary and Discussion
* VI.1 Comparison with earlier work
* VI.2 Implications
## I Introduction
Understanding the nature of dark matter (DM) is one of the main quests of modern cosmology. It could be multi-faceted in the sense that there are many degrees of freedom in the whole dark sector, for instance the String theory Axiverse [1; 2; 3] or other confined sector(s) (e.g. see [4; 5], and also [6]). Or it could be that there is a dominant degree of freedom, such as the QCD axion [7; 8; 9; 10; 11; 12], that comprises all (or most) of the dark matter. Furthermore, while the DM appears to interact only gravitationally with the Standard Model degrees of freedom (or very weakly if it does otherwise), it can still have appreciable non-gravitational self-interactions (nGSI) besides the usual gravitational self-interactions (GSI). Such is the case even for the above mentioned examples.
For bosonic particles (of any integer spin) and high enough occupation numbers, which is indeed the case for particle masses below a few eV, classical description of the associated field suffices and the dynamics is described by a non-linear Schrodinger equation in the non-relativistic regime. The non-linear Schrodinger equation entails novel wave dynamics owing to the de-Broglie scale becoming manifestly important. As a few examples, suppression of structure on small scales [13; 14], turbulence [15], superradiance [16], vortices [17], bound states called solitons/Bose stars [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28], interference patterns [29; 30; 28], field correlation scales depending upon the nature of self-interaction [31], etc. For comprehensive recent reviews in the case of scalar DM, see [32; 33].
Of particular interest to us in this paper, is the phenomenon of kinetic relaxation and associated nucleation of Bose stars within a bath of DM waves [34; 35; 36; 37; 38; 39; 40; 41; 42]. The term "kinetic" implies two key aspects: (a) The self-interactions in the field are small, allowing wave modes to freely evolve (at leading order) with the
non-relativistic dispersion relation \(\omega_{k}=k^{2}/2m\). This enables a kinetic treatment of the mode occupation number function; (b) the size of the 'box' (\(\sim\) the size of a DM halo for practical purposes), is much larger than the typical fluctuation scale \(\ell_{\rm dB}\sim\pi/\bar{k}\sim\pi/(mv_{0})\) in the bath of DM waves. The process of kinetic relaxation is attributed to these self-interactions of the DM field which although small, over large time scales \(\tau_{\rm relax}\gg\omega_{\bar{k}}^{-1}\) drive the occupation number function to develop increasing support towards smaller wavenumbers \(\mathbf{k}\to 0\). See [34] and [41] for a relevant discussion for the cases of point-like quartic self-interactions and gravitational self-interactions respectively. Once enough particles condense into lower momentum states, their collective _net attractive_ self-interaction becomes strong enough to counter balance their wave pressure resulting in the nucleation of a Bose star.
In this paper, we focus on investigating kinetic relaxation and subsequent Bose star nucleation for a single scalar Schrodinger field with both GSI and point-like quartic nGSI. Employing wave-kinetic Boltzmann analysis and 3D simulations, we demonstrate the presence of a previously overlooked cross term \(\propto G\lambda\) in the rate of _relaxation_\(\Gamma_{\rm relax}\). (Here \(G\) denotes Newton's constant and \(\lambda\) represents the point-like nGSI strength). It arises due to interference between the gravitational and point-like self-interaction scattering amplitudes. The presence of this cross term gives rise to a critical nGSI (repulsive) strength \(\lambda_{\rm cr}\simeq-(2\pi G)m^{2}/v_{0}^{2}\), at which the rate of relaxation reaches its minimum value (corresponding to maximum nucleation time). This critical value also serves as the transition point from typical net (contributions from both gravity and short-ranged self-interactions) attractive to repulsive self-interactions.
Because of the presence of gravitational self-interaction, kinetic relaxation is generally accompanied with _nucleation_ of spatially localized clumps/Bose stars, with their nucleation times dependent on the nature of the short-ranged self-interactions - attractive or repulsive. For \(\lambda\gtrsim\lambda_{\rm cr}\), the net typical self-interaction is attractive, and nucleation happens quickly after relaxation. On the other hand for \(\lambda\lesssim\lambda_{\rm cr}\), the net typical self-interaction is repulsive and nucleation gets delayed. We will study relaxation and nucleation of Bose stars, and also discuss their eventual fate.1 However we will not dwell into a careful analysis of the growth rate of these nucleated stars. See [43; 44; 45] for the gravity only (\(\lambda=0\)) case.
Footnote 1: Following conventional nomenclature, we shall use the words relaxation and condensation interchangeably, but it is to be stressed that nucleation (of a bound state) is not always equivalent to relaxation/condensation. As we shall see, it is equivalent to the other two in the net attractive regime, whereas different in the net repulsive regime.
The rest of the paper is organized as follows: Starting with the basic model of fuzzy scalar DM carrying both GSI and point-like nGSI in sec. II, we describe the associated wave kinetic Boltzmann equation for the evolution of the occupation number function in sec. III. Highlighting the presence of the cross term (that gives rise to \(\lambda_{\rm cr}\)), we estimate the total rate of kinetic relaxation/condensation. In sec. IV we discuss the two cases of \(\lambda\gtrsim\lambda_{\rm cr}\) and \(-\lambda\gtrsim-\lambda_{\rm cr}\) and write down the associated nucleation time scales of spatially localized bound objects. In sec. V we discuss our 3D simulations and compare our analytical estimates with them. We also discuss eventual behavior of Bose clumps observed in simulations. Finally in sec.VI, we summarize our work and also compare our results with the existing literature on this subject. In appendix A we discuss statistical convergence of our simulations, and in appendix B we discuss a peculiarity observed in the case of repulsive short-ranged self-interactions, over longer time scales as compared to nucleation.
**Conventions**: Unless stated otherwise, we will work in units where \(\hbar=c=1\).
## II Model
Ignoring Hubble flow (for we are interested in sufficiently sub-horizon dynamics), the evolution of the cold/non-relativistic fuzzy scalar dark matter with both GSI and short-ranged quartic nGSI, can be described using mean field theory. The dark matter field \(\psi\) obeys the following non-linear Schrodinger (Gross-Pitaevskii) equation:
\[i\frac{\partial}{\partial t}\psi=-\frac{1}{2m}\nabla^{2}\psi+\psi\Big{(}4\pi Gm ^{2}\nabla_{/0}^{-2}-\frac{\lambda}{m^{2}}\Big{)}\psi^{*}\psi\,. \tag{1}\]
Here \(G\) is the Newton's constant, and \(\lambda\) is the point like self-interaction strength. In our convention, \(\lambda>0\) and \(\lambda<0\) dictate attractive and repulsive self-interaction respectively. To obtain the form Eq. (1), we have plugged the self-gravitational potential, \(\Phi=4\pi G\,\nabla^{-2}(m\psi^{*}\psi-\bar{\rho})\equiv 4\pi Gm\nabla_{/0}^{-2} \psi^{*}\psi\), in the usual Schrodinger-Poisson system of equations. The \(\nabla_{/0}^{-2}\) denotes exclusion of the homogeneous part of the number density field \(\psi^{*}\psi\). In Fourier space with the decomposition \(\psi(\mathbf{x},t)=(2\pi)^{-3}\int\mathrm{d}\mathbf{k}\,e^{-i\mathbf{k}\cdot\mathbf{x}}\,\Psi_ {\mathbf{k}}(t)\), the Schrodinger equation becomes
\[i\dot{\Psi}_{\mathbf{k}}=\frac{k^{2}}{2m}\Psi_{\mathbf{k}}+\int\frac{ \mathrm{d}\mathbf{p}}{(2\pi)^{3}}\frac{\mathrm{d}\mathbf{q}}{(2\pi)^{3}}\frac{\mathrm{ d}\mathbf{\ell}}{(2\pi)^{3}}\mathcal{T}_{\mathbf{k},\mathbf{p},\mathbf{q},\mathbf{\ell}}\,\Psi_{\mathbf{p}}^{*} \Psi_{\mathbf{q}}\Psi_{\mathbf{\ell}}\\ \times\,(2\pi)^{3}\delta^{(3)}(\mathbf{k}+\mathbf{p}-\mathbf{q}-\mathbf{\ell})\,, \tag{2}\]
where
\[\mathcal{T}_{\mathbf{k},\mathbf{p},\mathbf{q},\mathbf{\ell}}= -\frac{4\pi Gm^{2}}{|\mathbf{k}-\mathbf{\ell}|^{2}}-\frac{\lambda}{m^{2}}\,, \tag{3}\]
and it is understood that \(\mathbf{k}\neq\mathbf{\ell}\neq 0\) in the above. For later convenience, it is also useful to write down the Hamiltonian density (in physical space) for the mean field \(\psi\):
\[\mathcal{H}=\frac{1}{2m}|\nabla\psi|^{2}+m\Phi|\psi|^{2}-\frac{\lambda}{2m^{2}} |\psi|^{4}\,. \tag{4}\]
Here the different terms in the above can be attributed to the wave-pressure \(\mathcal{H}_{\text{wp}}=|\nabla\psi|^{2}/2m\), gravitational self-interaction \(\mathcal{H}_{\text{gr}}=m\Phi|\psi|^{2}\), and short-ranged self-interaction \(\mathcal{H}_{\text{self}}=-\lambda|\psi|^{4}/2m^{2}\).
The Gross-Pitaevskii (GP) equation, being non-linear, is difficult to analyze in full generality. However, for the purposes of kinetic relaxation leading to nucleation of localized Bose clumps, a wave-kinetic Boltzmann analysis can be performed, which we discuss next. To test and verify our analytical understanding, we perform 3D field simulations, which we discuss in a later section.
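For orientation, a minimal pseudo-spectral "kick-drift-kick" integrator for Eq. (1) can be sketched as follows. The scheme, grid parameters, couplings, and initial data below are illustrative assumptions made only for this sketch (they are not the setup of the simulations discussed later); the update rule is the standard split-step treatment of the kinetic term in Fourier space and the interaction term in position space.

```python
import numpy as np

# Illustrative parameters only (not taken from the text); units with hbar = 1.
m, G, lam = 1.0, 1.0, 0.0           # particle mass, Newton's constant, quartic coupling lambda
L, N, dt, nsteps = 10.0, 64, 1e-3, 1000

dx = L / N
k1d = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
K2_safe = np.where(K2 > 0, K2, 1.0)

# Random Gaussian initial field, an illustrative stand-in for a box of interfering DM waves.
rng = np.random.default_rng(0)
psi = rng.normal(size=(N, N, N)) + 1j * rng.normal(size=(N, N, N))

def interaction_potential(psi):
    """V = m*Phi - (lambda/m^2)|psi|^2, with Phi solving the Poisson equation
    sourced by the density contrast (the k = 0 mode is excluded, as in Eq. (1))."""
    rho = m * np.abs(psi) ** 2
    rho_k = np.fft.fftn(rho - rho.mean())
    phi_k = np.where(K2 > 0, -4.0 * np.pi * G * rho_k / K2_safe, 0.0)
    phi = np.real(np.fft.ifftn(phi_k))
    return m * phi - (lam / m**2) * np.abs(psi) ** 2

for _ in range(nsteps):
    psi *= np.exp(-0.5j * dt * interaction_potential(psi))                 # half kick
    psi = np.fft.ifftn(np.exp(-0.5j * dt * K2 / m) * np.fft.fftn(psi))     # full drift
    psi *= np.exp(-0.5j * dt * interaction_potential(psi))                 # half kick
```

In practice one would also monitor diagnostics such as the mode occupation function introduced in the next section and the maximum of \(|\psi|^{2}\), the latter being a natural way to detect the nucleation of a localized clump.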
## III Wave kinetics and relaxation
For kinetic relaxation in wave dynamics, we can study the evolution of the mode occupation number function \(f_{\mathbf{k}}=|\Psi_{\mathbf{k}}|^{2}/V\) (\(V\) is the volume), which is nothing but the Fourier transform of the 2-point volume averaged field correlator \(\zeta(\mathbf{x},t)=V^{-1}\int\mathrm{d}\mathbf{y}\,\psi^{*}(\mathbf{y},t)\,\psi(\mathbf{y}+ \mathbf{x},t)\). Under random phase approximation with weak interactions, the relevant wave-kinetic Boltzmann equation can be derived. See for instance [46]. For a derivation for the general case of arbitrary number of fields and 2 body interactions, see [41]. For the scalar case at hand, characterizing the dependence of the occupation number functions on wavenumbers as \(f_{\mathbf{k}/m}\), the wave-kinetic equation takes the familiar form
\[\frac{\partial f_{\mathbf{k}/m}}{\partial t}=\int\frac{\mathrm{d}\mathbf{ p}}{(2\pi)^{3}}\,\mathrm{d}\sigma_{\mathbf{k}+\mathbf{p}\to\mathbf{q}+\mathbf{\ell}}\,|\mathbf{v}- \tilde{\mathbf{v}}|\left[(f_{\mathbf{k}/m}+f_{\mathbf{p}/m})f_{\mathbf{q}/m}f_{\mathbf{\ell}/m}-(f _{\mathbf{q}/m}+f_{\mathbf{\ell}/m})f_{\mathbf{k}/m}f_{\mathbf{p}/m}\right],\\ \text{where}\quad\mathrm{d}\sigma_{\mathbf{k}+\mathbf{p}\to\mathbf{q}+\mathbf{ \ell}}=\frac{1}{2|\mathbf{v}-\tilde{\mathbf{v}}|}\frac{\mathrm{d}\mathbf{q}}{(2\pi)^{3}} \frac{\mathrm{d}\mathbf{\ell}}{(2\pi)^{3}}\Big{(}\mathcal{T}_{\mathbf{k},\mathbf{p},\mathbf{q },\mathbf{\ell}}+\mathcal{T}_{\mathbf{k},\mathbf{p},\mathbf{\ell},\mathbf{q}}\Big{)}\Big{(} \mathcal{T}_{\mathbf{k},\mathbf{p},\mathbf{q},\mathbf{\ell}}+\mathcal{T}_{\mathbf{k},\mathbf{p},\mathbf{ \ell},\mathbf{q}}\Big{)}^{*}\times\\ (2\pi)^{4}\delta^{(3)}(\mathbf{k}+\mathbf{p}-\mathbf{q}-\mathbf{\ell})\,\delta(E _{\mathbf{k}}+E_{\mathbf{p}}-E_{\mathbf{q}}-E_{\mathbf{\ell}})\,. \tag{5}\]
Here \(\mathbf{v}=\mathbf{k}/m\) and \(\tilde{\mathbf{v}}=\mathbf{p}/m\) are the incoming "velocities" in the 2-wave interaction, and the quantities in the 1-dimensional Dirac delta function are the free wave energies \(E_{\mathbf{k}}=k^{2}/2m\). The quantity \(\mathrm{d}\sigma\) is the effective differential cross section. The cubic nature of the terms in the right hand side bracket (\(\sim f_{\mathbf{x}}f_{\mathbf{y}}f_{\mathbf{z}}\)), usually understood as Boltzmann enhancement terms, arise due to the wave-mechanical nature of the system (1) and are crucial for the phenomenon of Bose condensation. Last but not the least, it is the form of the differential cross section that appears in the wave-kinetic equation, \(\sim|\mathcal{T}|^{2}\), that is of utmost importance for our discussion. The scattering amplitudes due to the different kinds of 2-body interactions (here gravity and point-like self-interactions), are _added first and then squared_: What appears in the differential cross section is \(|\mathcal{T}|^{2}\) where \(\mathcal{T}=\mathcal{T}_{G}+\mathcal{T}_{\lambda}\) (c.f. Eq. (3)), and \(|\mathcal{T}_{G}+\mathcal{T}_{\lambda}|^{2}\neq|\mathcal{T}_{G}|^{2}+| \mathcal{T}_{\lambda}|^{2}\) (since both \(\mathcal{T}_{G}\propto-4\pi Gm^{2}\) and \(\mathcal{T}_{\lambda}=-\lambda/m^{2}\) are real). This can be attributed to the wave dynamical nature of the GP system. The above equation (5), after integration over the Dirac deltas, can be re-written in terms of the incoming and outgoing relative velocities \(\mathbf{u}=u\hat{\mathbf{n}}\) and \(\mathbf{u}^{\prime}=u\hat{\mathbf{n}}^{\prime}\) respectively2, by redefining \(\mathbf{p}/m=\mathbf{k}/m-u\hat{\mathbf{n}}\) and \(\mathbf{q}/m=\mathbf{\ell}/m-u\hat{\mathbf{n}}^{\prime}\):
Footnote 2: Note that the magnitude of the relative velocity does not change the sign of the wave-kinetic equation.
\[\frac{\partial f_{\mathbf{v}}}{\partial t}=m^{3}\int\frac{\mathrm{d} \mathbf{u}}{(2\pi)^{3}}\,\mathrm{d}\sigma\,u\Bigg{[}(f_{\mathbf{v}}+f_{\tilde{\mathbf{v}} })f_{\tilde{\mathbf{v}}-\mathbf{w}}f_{\mathbf{v}+\mathbf{w}}-(f_{\tilde{\mathbf{v}}-\mathbf{w}}+f_{\bm {v}+\mathbf{w}})f_{\mathbf{v}}f_{\tilde{\mathbf{v}}}\Bigg{]}\,,\\ \text{where}\quad\mathrm{d}\sigma=\frac{\mathrm{d}\Omega_{n^{ \prime}}}{32\pi^{2}m^{2}}\left[\Big{(}\frac{16\pi m^{2}G}{u^{2}|\hat{\mathbf{n}}^{ \prime}-\hat{\mathbf{n}}|^{2}}+\lambda\Big{)}^{2}+\Big{(}\frac{16\pi m^{2}G}{u^{2}| \hat{\mathbf{n}}^{\prime}+\hat{\mathbf{n}}|^{2}}+\lambda\Big{)}^{2}+2\Big{(}\frac{16 \pi m^{2}G}{u^{2}|\hat{\mathbf{n}}^{\prime}-\hat{\mathbf{n}}|^{2}}+\lambda\Big{)} \Big{(}\frac{16\pi m^{2}G}{u^{2}|\hat{\mathbf{n}}^{\prime}+\hat{\mathbf{n}}|^{2}}+ \lambda\Big{)}\right]. \tag{6}\]
Let us briefly discuss the different terms in the differential cross section explicitly. Broadly speaking, there are two types of interference terms that arise. One is the interference between the \(t\) and \(u\) channels (relevant mainly for the gravitational interaction), and the second is the interference between the two different types of interactions (gravitational and short-ranged). See fig. 1 for a pictorial representation.
For the GSI-only (\(\lambda=0\)) case, the contributions from the \(t\) and \(u\) channels are the first two terms \(\propto G^{2}\,|\hat{\mathbf{n}}^{\prime}-\hat{\mathbf{n}}|^{-4}\) and \(\propto G^{2}\,|\hat{\mathbf{n}}^{\prime}+\hat{\mathbf{n}}|^{-4}\), whereas the cross term \(\propto G^{2}\,|\hat{\mathbf{n}}^{\prime}-\hat{\mathbf{n}}|^{-2}\,|\hat{\mathbf{n}}^{\prime}+\hat{\mathbf{n}}|^{-2}\) is due to their mutual interference (as also discussed in [41]). Note that the sole contributions from the \(t\) and \(u\) channels are identical: The full integral with the \(|\hat{\mathbf{n}}^{\prime}+\hat{\mathbf{n}}|^{-4}\) term is identical to that with the \(|\hat{\mathbf{n}}^{\prime}-\hat{\mathbf{n}}|^{-4}\) term. The sole contributions give rise to the Rutherford scattering cross section, carrying a logarithmic IR divergence (aka the Coulomb logarithm), while the interference term becomes sub-dominant in the large log limit and can be omitted.
For the nGSI-only (\(G=0\)) case, the contributions from the \(t\) and \(u\) channels are identical to their mutual interference term, and all go as \(\lambda^{2}\). This is simply due to the interaction being a contact/point interaction.
Importantly, when both of the interactions are present, their respective scattering amplitudes (for either of the two channels) are added first and then squared. All the terms \(\propto G\lambda\), while giving identical contributions, characterize the interference between the two types of interactions. Splitting the contributions from GSI, nGSI, and their interference, we have the following wave-kinetic Boltzmann equation
\[\frac{\partial f_{\mathbf{v}}}{\partial t} =\mathcal{C}_{\rm GSI}+\mathcal{C}_{\rm cross}+\mathcal{C}_{\rm n GSI}\] \[\text{where}\quad\mathcal{C}_{\rm GSI} =\frac{\Lambda(4\pi G)^{2}m^{5}}{4\pi}\nabla_{vi}\Bigg{[}\frac{1}{ 2}\nabla_{vi}f_{\mathbf{v}}\int\frac{\mathrm{d}\tilde{\mathbf{v}}}{(2\pi)^{3}}\,f_{ \mathbf{v}}\,\frac{\delta_{ij}-\hat{u}_{i}\hat{u}_{j}}{u}\,f_{\hat{\mathbf{v}}}+f_{\bm {v}}\,f_{\mathbf{v}}\int\frac{\mathrm{d}\tilde{\mathbf{v}}}{(2\pi)^{3}}\,\frac{\hat{u }_{i}}{u^{2}}\,f_{\hat{\mathbf{v}}}\Bigg{]},\] \[\mathcal{C}_{\rm cross} =\frac{(4\pi G)\lambda m^{3}}{4\pi}\int\frac{\mathrm{d}\Omega_{n }}{4\pi}\frac{\mathrm{d}u\,u^{2}}{2\pi^{2}}\frac{\mathrm{d}\Omega_{n^{\prime}} }{4\pi}\frac{u}{|\mathbf{w}|^{2}}\Bigg{[}(f_{\mathbf{v}}+f_{\hat{\mathbf{v}}})f_{\mathbf{v}+ \mathbf{w}}f_{\hat{\mathbf{v}}-\mathbf{w}}-(f_{\mathbf{v}+\mathbf{w}}+f_{\hat{\mathbf{v}}-\mathbf{w}})f_{ \mathbf{v}}f_{\hat{\mathbf{v}}}\Bigg{]},\] \[\mathcal{C}_{\rm nGSI} =\frac{\lambda^{2}m}{2\pi}\int\frac{\mathrm{d}\Omega_{n}}{4\pi} \frac{\mathrm{d}u\,u^{2}}{2\pi^{2}}\frac{\mathrm{d}\Omega_{n^{\prime}}}{4\pi} \,u\left[(f_{\mathbf{v}}+f_{\hat{\mathbf{v}}})f_{\mathbf{v}+\mathbf{w}}f_{\hat{\mathbf{v}}-\mathbf{w}} -(f_{\mathbf{v}+\mathbf{w}}+f_{\hat{\mathbf{v}}-\mathbf{w}})f_{\mathbf{v}}f_{\hat{\mathbf{v}}}\right]. \tag{7}\]
Here \(\mathbf{w}=u(\hat{\mathbf{n}}^{\prime}-\hat{\mathbf{n}})/2\), and \(\Lambda=\log(mv_{0}L)\) is the aforementioned Coulomb logarithm with \(v_{0}\) and \(L\) equal to typical velocity and box size (or halo size for physical considerations) respectively. While the cross term and nGSI term follow straightforwardly from Eq. (6), the Rutherford scattering collision term \(\mathcal{C}_{\rm GSI}\) is obtained after an eikonal approximation and was derived explicitly in [41]. Also see [46; 47] for the same equation for a scalar field.
Now in order to get a typical estimate for the total relaxation rate \(\Gamma_{\rm relax}\equiv\frac{1}{f_{\mathbf{v}}}\frac{\partial f_{\mathbf{v}}}{ \partial t}\), we can replace different quantities in the three collision terms with their appropriate scalings. Replacing angular volume \(\int\mathrm{d}\Omega\to 4\pi\), typical relative velocity \(|\hat{\mathbf{n}}^{\prime}-\hat{\mathbf{n}}|\to\sqrt{2}\), velocity derivative \(\nabla_{v}\to 1/v_{0}\), velocity integral \(\int\mathrm{d}u\,u^{n-1}\to v_{0}^{n}/n\), and finally the occupation number function \(f_{\mathbf{v}}\to(2\pi)^{3/2}\bar{\rho}/(m^{4}v_{0}^{3})\), the total relaxation rate is parameterized as
\[\Gamma_{\rm relax}\simeq\alpha_{1}\frac{(4\pi G)^{2}\bar{\rho}^{2}\Lambda}{4m^ {3}v_{0}^{6}}+\alpha_{12}\frac{(4\pi G)\lambda\bar{\rho}^{2}}{m^{5}v_{0}^{4}}+ \alpha_{2}\frac{\lambda^{2}\bar{\rho}^{2}}{m^{7}v_{0}^{2}}\,. \tag{8}\]
Our scaling of the occupation number is dictated by the Gaussian initial condition (see Eq. (12) ahead) which we shall use to perform simulations, described in the next section. In general, \(\alpha_{1}\), \(\alpha_{12}\), and \(\alpha_{2}\) are positive \(\mathcal{O}(1)\) coefficients that would depend on the specific initial conditions. Eq. (8) is our master formula for the relaxation rate. The value of \(\lambda\) around which the relaxation rate becomes smallest is easily estimated to be
\[\lambda_{\rm cr}=-\beta\frac{2\pi G\,m^{2}}{v_{0}^{2}}\sim 10^{-57}\left( \frac{10^{-4}}{v_{0}}\right)^{2}\left(\frac{m}{10^{-5}\,{\rm eV}}\right)^{2}\,, \tag{9}\]
where \(\beta=\alpha_{12}/\alpha_{2}\sim\mathcal{O}(1)\), and the associated (minimum) rate is3
Footnote 3: In the large Coulomb logarithm limit (relevant for realistic scenarios), \(\Lambda=\log(mv_{0}L)\gg 1\), and the rate is always positive.
\[\Gamma_{\rm relax}\left(\lambda_{\rm cr}\right)\simeq\frac{(4\pi G)^{2}\bar{ \rho}^{2}}{4m^{3}v_{0}^{6}}\left(\alpha_{1}\Lambda-\frac{\alpha_{12}^{2}}{ \alpha_{2}}\right)\,.\]
Notice that this critical value \(\lambda_{\rm cr}\) can also be obtained from the GP equation (1) by balancing the gravitational term against the self-interaction term, together with replacing the exchange momentum by its typical value \(|\mathbf{k}-\mathbf{\ell}|^{2}\sim 2(mv_{0})^{2}\). This criticality marks the transition point from attractive to repulsive net typical self-interactions: For \(\lambda\gtrsim\lambda_{\rm cr}\), typical interactions within the bath of DM waves are attractive since the typical \(\mathcal{T}\) is negative, whereas for \(-\lambda\gtrsim-\lambda_{\rm cr}\) they are repulsive since the typical \(\mathcal{T}\) is positive.
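As a quick numerical check of the estimate in Eq. (9), \(\lambda_{\rm cr}\) can be evaluated directly in natural units. The following minimal sketch assumes \(\hbar=c=1\), \(G=1/m_{\rm pl}^{2}\) with \(m_{\rm pl}\approx 1.22\times 10^{19}\,\)GeV, and \(\beta=1\); agreement with the value quoted in Eq. (9) is therefore only expected at the order-of-magnitude level.

```python
import math

# Natural units: hbar = c = 1, energies in GeV; beta is an assumed O(1) coefficient.
m_pl = 1.22e19                 # Planck mass [GeV], so G = 1/m_pl^2
G = 1.0 / m_pl**2
m = 1e-5 * 1e-9                # boson mass: 10^-5 eV expressed in GeV
v0 = 1e-4                      # typical velocity in units of c
beta = 1.0

lam_cr = -beta * 2.0 * math.pi * G * m**2 / v0**2
print(f"lambda_cr ~ {lam_cr:.1e}")   # ~ -4e-58, i.e. |lambda_cr| of order 10^-57 to 10^-58
```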
## IV Nucleation and behavior of solitons
### Nucleation
In general, the process of kinetic relaxation is characterized by an increasing support of the occupation number function \(f_{\mathbf{k}}\) at vanishing wavenumber. (For instance see [34] and [41] for discussions of the nGSI and GSI cases respectively). This implies increasing field correlation over larger length scales with diminishing density fluctuations, i.e. field homogenization. A heuristic understanding of the subsequent nucleation of a spatially localized and bound clump can perhaps be gained most easily from a particle physics perspective, together with recalling that \(\lambda_{\rm cr}\) also marks the transition from typical net attractive self-interaction to typical net repulsive self-interaction. As particles lose kinetic energy on account of self-interactions and move towards smaller momenta (condensate state), there comes a time when, within some region, the collective net potential (due to both self-gravitational and short-ranged interactions) becomes comparable to the wave pressure. The time scale of this process is nothing but the inverse relaxation rate Eq. (8), which in the case of a net attractive self-potential \(\lambda\gtrsim\lambda_{\rm cr}\) leads to 'immediate locking' of such a region into a bound clump (having negative energy). That is, \(\tau_{\rm nuc}\simeq\Gamma_{\rm relax}^{-1}\). Strictly speaking, this can be taken as a definition of \(\tau_{\rm nuc}\) with \(\Gamma_{\rm nuc}=\Gamma_{\rm relax}\), in which case the different \(\alpha\) constant coefficients in the rate Eq. (8) are understood as such.
On the other hand for \(-\lambda\gtrsim-\lambda_{\rm cr}\), relaxation cannot immediately lead to nucleation of a localized bound clump. This is because the net typical interaction is repulsive: the collective self-potential within density fluctuation regions is not binding yet. Over time though, more particles get driven towards the condensate phase, and eventually there arises a potential for a bound object to nucleate (within which net gravity can now compensate for both repulsive short-ranged interaction and wave-pressure). This gives \(\tau_{\rm nuc}>\Gamma_{\rm relax}^{-1}\). In general, we can therefore write the following
\[\tau_{\rm nuc}\simeq\frac{1}{\Gamma_{\rm relax}}\begin{cases}1&\lambda\gtrsim \lambda_{\rm cr}\\ h(\lambda)&-\lambda\gtrsim-\lambda_{\rm cr}\end{cases} \tag{10}\]
where \(h(\lambda)\) is a threshold function (or the delay factor) that relates nucleation times to relaxation rates. As mentioned earlier, relaxation means field homogenization, and we expect the rate at which the system relaxes to be comparable to the rate at which density fluctuations decrease. The delay factor can then be estimated as the ratio of the typical density fluctuation at relaxation to that at nucleation, \(h\sim\delta\rho_{\rm relax}/\delta\rho_{\rm nuc}\). While we expect this to be of order unity for the net attractive case (first case of Eq. 10), for large repulsive strengths it should increase with increasing \(-\lambda\). Below we estimate this scaling.
Consider a region of typical size \(\sim(mv_{0})^{-1}\) where the field would have 'locked' itself into a bound configuration upon relaxation/condensation, had the net potential been binding. However this is not the case yet, and we may balance the typical self-interaction energy density (mostly due to short-ranged interactions) \(\mathcal{H}_{\rm self}\sim-\lambda\,\delta\rho_{\rm relax}^{2}/2m^{4}\), with the wave pressure within \(\mathcal{H}_{\rm wp}\sim v_{0}^{2}\delta\rho_{\rm relax}/2\). This gives \(\delta\rho_{\rm relax}\sim m^{4}v_{0}^{2}/(-\lambda)\). As relaxation continues (meaning more particles are driven towards low-momentum states), the values of both the density fluctuations \(\delta\rho\) and the typical size of fluctuation regions \((mv)^{-1}\) change. The former decreases and the latter increases so as to maintain \(\mathcal{H}_{\rm self}\sim\mathcal{H}_{\rm wp}\). Then, nucleation is expected to occur when gravity can compensate for both the wave pressure and the repulsive short-ranged self-interaction. That is, we may balance (the magnitudes of) all the three energy densities, \(\mathcal{H}_{\rm wp}\sim v_{\rm nuc}^{2}\delta\rho_{\rm nuc}/2\), \(|\mathcal{H}_{\rm gr}|\sim 2\pi G\delta\rho_{\rm nuc}^{2}/(mv_{\rm nuc})^{2}\), and \(\mathcal{H}_{\rm self}\sim-\lambda\delta\rho_{\rm nuc}^{2}/2m^{4}\), to give \(\delta\rho_{\rm nuc}\sim m^{2}v_{\rm nuc}^{4}/(4\pi G)\) and \(v_{\rm nuc}\sim(4\pi Gm^{2}/(-\lambda))^{1/2}\). Eliminating \(v_{\rm nuc}\) from \(\delta\rho_{\rm nuc}\) gives the following estimate
Figure 1: Pictorial / Feynman graph representation of all the terms appearing in the differential cross section in Eq. (6). The total contribution to the interaction rate (left hand side of the equality) is the square of the sum of both the gravitational amplitude (top two graphs) and the point self-interaction amplitude (bottom graph \(\times 2\)). For gravitational interaction, there are two distinct channels (\(t\) and \(u\)). Their mutual interference, as compared to their sole contributions, becomes subdominant in the large log limit (leading to the Rutherford scattering result). More importantly, interference between the scattering amplitudes of the two different interactions matters. See main text for details.
for the delay factor
\[h(\lambda)\sim\frac{\delta\rho_{\rm relax}}{\delta\rho_{\rm nuc}}\to\alpha_{3} \left(\frac{\lambda}{\lambda_{\rm cr}}\right)\,. \tag{11}\]
Here we have inserted another constant coefficient \(\alpha_{3}\) that depends upon the initial conditions. Through simulations, we will confirm our estimate Eq. (10) (together with Eq. (8) and Eq. (11)), and also extract the different \(\alpha\) coefficients for Gaussian initial conditions.
### Eventual behavior
Once a spatially-localized Bose clump/soliton emerges, its subsequent evolution and long term dynamics depends on whether the short-ranged self-interactions are attractive or repulsive. The full spectrum of such solitons is extensively discussed in the literature. See e.g. [48; 49; 50; 20]. To recapitulate some of the basic points that may suffice for our purposes, consider the energy landscape for objects of radius \(r_{s}\) and mass \(M_{s}\) in the theory. Using Eq. (4), the wave pressure energy, self-gravitational potential energy, and short-ranged self-interaction potential energy are \(H_{\rm wp}=aM_{s}/(m^{2}r_{s}^{2})\), \(H_{\rm gr}=-b(4\pi G)M_{s}^{2}/(r_{s})\), and \(H_{\rm self}=-c\lambda M_{s}^{2}/(m^{4}r_{s}^{3})\) respectively, with \(a\), \(b\), and \(c\) some positive coefficients that depend upon the exact profile of the object. The total energy is the sum of all three.
#### iv.2.1 Attractive short-ranged interactions \(\lambda>0\)
For this case, the energy vs radius curve (for a given mass \(M_{s}\)) has a _local_ minimum that corresponds to quasi-stable negative energy (bound) states / solitons. It is separated from the runaway behavior towards small radii, \(\sim-\lambda/r_{s}^{3}\), by a barrier whose height decreases with increasing \(M_{s}\). The barrier disappears at a critical mass \(M_{s,\rm crit}\propto(\lambda G)^{-1/2}\), beyond which the theory does not admit any quasi-stable bound states anymore. Starting in the kinetic regime and upon relaxation, a quasi-stable Bose clump nucleates and starts to accrete mass from its surroundings. Ultimately once it accumulates enough mass such that the energy barrier gets sufficiently low, and/or it 'breathes' rapidly enough so as to be able to probe beyond the energy barrier, it 'collapses'. This is because the region now prefers to lower its energy by transitioning on the runaway \(\sim r_{s}^{-3}\) curve. This is sometimes referred to as "Bosenova" (owing to its analogy with a type II supernova). While subsequent evolution of the object beyond this criticality requires a fully relativistic analysis and has been pursued in the literature [51] (also see [52] for an associated astrophysical phenomenology), the evolution leading up to this criticality (and even beyond, until the wave pressure starts to become comparable to the rest mass energy) is well captured by the non-relativistic treatment. For large attractive strengths \(\lambda\gtrsim-\lambda_{\rm cr}\), the barrier is less significant and the object quickly collapses upon nucleation. In our simulations we indeed observe this phenomenon (see fig. 3 in sec. V ahead).
#### iv.2.2 Repulsive short-ranged interactions \(\lambda<0\)
In the repulsive scenario there is no runaway domain since the energy for low radii is now \(-\lambda/r_{s}^{3}>0\). This renders the previous local minimum stable (hence now a _global_ minimum), corresponding to bound soliton states. The critical mass \(M_{s,\rm crit}\propto(-\lambda G)^{-1/2}\) serves as the transition point into the Thomas-Fermi regime [53; 54; 20]. This is where the mass of solitons gets large enough to admit comparable amounts of self-gravitational and short-ranged interaction energy densities, with gradient pressure becoming sub-dominant. As a result, the radius starts to approach a constant \(r_{\rm s}\sim\sqrt{-\lambda}/(m^{2}\sqrt{4\pi G})\) (with the mass dependent correction term dying out as \(\sim M_{s}^{-1}\)). Up until the mass becomes sufficiently large where \(GM_{s,\rm relv}\sim r_{s}\) and relativistic effects start to become important (see [55; 56]), the theory then admits a set of "Chandrasekhar solitons" with masses ranging anywhere between \(M_{s,\rm crit}\) and \(M_{s,\rm relv}\), and radii approximately around \(r_{\rm s}\sim\sqrt{-\lambda}/(m^{2}\sqrt{4\pi G})\).4
Footnote 4: The reason we call them "Chandrasekhar" solitons (also see [57]) is because of the scaling of their maximum mass \(M_{s,\rm relv}\). It behaves similarly to the Chandrasekhar limit for degenerate stars \(M\propto G^{-3/2}m^{-2}\), and can be attributed to the fact that the Fermi pressure essentially gets replaced with repulsive short-ranged self-interactions [53; 54; 20; 58].
Though the theory admits these stable Chandrasekhar solitons, understanding their evolution and long term behavior within the bath of DM waves is crucial, and has been extensively studied in the literature. See [59; 60; 61; 62; 63] for simulation setups using the fluid/Madelung equations instead of the Schrodinger field equation for scalar wave dark matter (with repulsive short-ranged self-interaction). For our Fourier split simulation technique (which we discuss in the next section), we find that over _longer_ time scales (after the nucleation of Bose clumps), the system reaches some sort of criticality when high frequency modes (near cutoff) start to appear in the simulation box. This leads to a breakdown of the simulation (along with the disruption of the clump), visible in the form of a checkerboard-like pattern. We present this peculiar artifact from our simulations in appendix B, although a detailed investigation of it is left for future work.
## V Field simulations
To verify our analytical understanding of kinetic relaxation and associated nucleation of bound Bose stars, we have carried out a large suite (\(\sim 500\)) of 3D simulations
of the GP system (1) with varying values of the nGSI strength \(\lambda\). We evolve the GP system (1) with the following initial Gaussian function for the \(\mathbf{k}\)-space Schrodinger field
\[V^{-1/2}\left.\Psi_{\mathbf{k}/m}\right|_{t=0} =e^{i\theta_{\mathbf{k}/m}}\sqrt{f_{\mathbf{v}}}\Big{|}_{t=0}\] \[=e^{i\theta_{\mathbf{k}/m}}\left[\frac{(2\pi)^{3/2}\bar{\rho}}{m(mv_{0} )^{3}}\,e^{-\frac{v^{2}}{2v_{0}^{2}}}\right]^{1/2}, \tag{12}\]
where \(\theta_{\mathbf{k}/m}\) are random phases, uniformly distributed in \((0,2\pi)\), for every wavenumber \(\mathbf{k}\). Our numerical algorithm is based on the well-known split-Fourier technique / pseudo-spectral method [64; 65; 66; 15; 67; 68], and we have used both Python-based and Matlab-based codes to generate our simulation data.
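For illustration, a minimal sketch of how the initial condition Eq. (12) can be realized on a discrete grid is given below. The grid size, box size and mean density are placeholders, \(m\) is set to unity (as in the dimensionless units introduced below), and the FFT sign convention is immaterial here since the phases are random; this is not the production code used for the runs reported in this work.

```python
import numpy as np

# Illustrative parameters (dimensionless units, m = 1); values are placeholders.
N, L = 64, 40.0
m, v0, rho_bar = 1.0, 1.0/np.sqrt(2.0), 1e-3
V = L**3

k1d = 2.0*np.pi*np.fft.fftfreq(N, d=L/N)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
k2 = kx**2 + ky**2 + kz**2

# Occupation number of Eq. (12): Gaussian in velocity v = k/m
f_k = (2.0*np.pi)**1.5 * (rho_bar/m) / (m*v0)**3 * np.exp(-k2/(2.0*(m*v0)**2))

# |Psi_k| = sqrt(V f_k) with uniformly random phases theta_k in (0, 2*pi)
theta = np.random.uniform(0.0, 2.0*np.pi, size=f_k.shape)
Psi_k = np.sqrt(V*f_k) * np.exp(1j*theta)

# psi(x) = V^{-1} sum_k Psi_k exp(-i k.x): an inverse FFT up to normalization
psi = np.fft.ifftn(Psi_k) * N**3 / V

print("mean mass density:", m*np.mean(np.abs(psi)**2), " target:", rho_bar)
```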
As mentioned earlier, in order to be in the kinetic regime we require (i) interactions to be tiny as compared with the typical free wave evolution (occurring over time scales \(\sim 2/mv_{0}^{2}\)), and (ii) the box size to be larger than the typical field fluctuation scale \(\pi(mv_{0})^{-1}\). Furthermore, we also impose the box size to be smaller than the gravitational Jeans scale associated with an incoherent bound halo \(\ell_{J}\sim v_{0}(\pi/G\bar{\rho})^{1/2}\), in order to avoid its formation within our simulation box. In this sense, our simulation box of a collection of DM waves with typical fluctuation scale \(\sim\pi(mv_{0})^{-1}\) may be regarded as a region within a DM halo. In summary we require the following to hold true
\[\text{Kinetic\,regime}: L\gg\pi(mv_{0})^{-1}\quad\&\quad\Gamma_{\text{relax}}\ll mv_{0}^{2 }/2\,,\] \[\text{sub\,Jeans\,scale}: L<\ell_{J}\sim v_{0}(\pi/G\bar{\rho})^{1/2}\,. \tag{13}\]
In our simulations, we work with dimensionless quantities, for which purpose we set \(G=1/(8\pi)\) and \(m=1\). More explicitly, one can rescale different quantities in the fashion \(t\to t/\mathcal{E}\), \(\mathbf{x}\to\mathbf{x}/\sqrt{m\,\mathcal{E}}\), \(\psi\to\psi\,\mathcal{E}/\sqrt{8\pi Gm}\) and \(\lambda\to\lambda\mathcal{E}/(8\pi Gm^{3})\) to get Eq. (1) with both \(8\pi G\) and \(m\) replaced by unity. Here \(\mathcal{E}\) is a reference energy scale in the system (for instance \(\mathcal{E}=mv_{0}^{2}/2\)). The discretization in space is simply \(\Delta x=L/(N_{x}-1)\) where \(L\) and \(N_{x}^{3}\) are the box size and number of grid points respectively, and the time discretization is \(\Delta t=2\pi(\Delta x)^{2}m/(3\eta)\) with \(\eta\geq 1\).5 In the split Fourier technique, the field evolution is split into a drift part where it is evolved solely due to the gradient term (free field evolution), and a kick part where it is evolved solely due to interactions. The Courant-Friedrichs-Lewy (CFL) condition ensures that the fastest process in the dynamics is captured appropriately. Hence, the fastest amongst the kick and drift processes, at any time iteration, sets the time discretization \(\Delta t\) (e.g. see [67; 68] for details). In the kinetic regime, the time discretization is always set by the free field evolution term \(\sim(\Delta x)^{-2}/2m\), and hence by the space discretization as given above.
Footnote 5: The \(\eta\geq 1\) makes sure that there is at least one time point within the full \(2\pi\) rotation of the fastest oscillating mode \(k_{\text{max}}\sim 2\pi/\Delta x\). Since any faithful dynamics of the system should not be sensitive to high frequencies (corresponding to the box discretization scale), \(\eta\) can even be smaller than unity. For all our simulations, \(\eta\) is at least as big as unity.
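A minimal sketch of one kick-drift-kick update of this split-Fourier scheme is given below, assuming a periodic box, the code units \(G=1/(8\pi)\), \(m=1\) described above, and the GP equation in the form \(i\dot{\psi}=-\nabla^{2}\psi/2m+m\Phi\psi-(\lambda/m^{2})|\psi|^{2}\psi\) implied by Eqs. (1) and (4); the gravitational potential is obtained spectrally with the homogeneous (\(k=0\)) mode excluded. This is only an illustration of the scheme, not the production code.

```python
import numpy as np

def gp_step(psi, dt, L, G=1.0/(8.0*np.pi), m=1.0, lam=0.0):
    """One kick-drift-kick step of the split-Fourier (pseudo-spectral) scheme.

    Sketch only: for clarity the k-grid is rebuilt on every call and no
    adaptive (CFL-based) time stepping is included.
    """
    N = psi.shape[0]
    k1d = 2.0*np.pi*np.fft.fftfreq(N, d=L/N)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2

    def kick_potential(psi):
        # Poisson equation: Phi_k = -4*pi*G*m*(|psi|^2)_k / k^2, k = 0 mode excluded
        rho_k = np.fft.fftn(np.abs(psi)**2)
        with np.errstate(divide='ignore', invalid='ignore'):
            Phi_k = np.where(k2 > 0.0, -4.0*np.pi*G*m*rho_k/k2, 0.0)
        Phi = np.real(np.fft.ifftn(Phi_k))
        # gravity plus short-ranged self-interaction (lam > 0 attractive)
        return m*Phi - (lam/m**2)*np.abs(psi)**2

    psi = psi*np.exp(-0.5j*dt*kick_potential(psi))                    # half kick
    psi = np.fft.ifftn(np.exp(-1j*dt*k2/(2.0*m))*np.fft.fftn(psi))    # drift
    psi = psi*np.exp(-0.5j*dt*kick_potential(psi))                    # half kick
    return psi
```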
In all of our simulations we set \(v_{0}=1/\sqrt{2}\), and choose the box size and average mass density such that we are deep in the kinetic regime. Most of the simulations were performed with \(L=40\), \(L=45\), and \(L=50\) box sizes, and the average mass densities were chosen to be small enough such that the factor \(\Gamma_{\text{relax}}^{-1}mv_{0}^{2}/2\) was at least as large as \(\sim 250\), going all the way up to even \(\sim 4500\). For robustness, we have performed simulations with different grid sizes \(N_{x}=\{192,216,256\}\), scanning over different \(\lambda/\lambda_{\text{cr}}\) values. We also performed simulations with \(N_{x}=150\) and \(N_{x}=300\) to test the convergence of our results (see Appendix A).
To capture the formation of localized Bose clumps, we keep track of the mass density in the box, radially averaged (in \(\mathbf{k}\) space) occupation number function \(f_{k}\), the associated volume averaged correlation function \(\zeta(r)\), and the maximum mass density in the box \(\rho_{\text{max}}\).6 Nucleation of a localized clump can be characterized by a change of trend of \(\rho_{\text{max}}\), wherein it starts to monotonically increase beyond just the statistical fluctuations that happen over short time scales. We record the corresponding times in all of our simulations, both by direct inspection and statistical methods such as moving average.7
Footnote 6: In all of our simulations, we confirm the behavior of \(f_{k}\), in that it develops increasing support towards smaller \(k\) values, at-least up until nucleation.
Footnote 7: We note that this is not the only way to know whether a bound clump has formed or not. For instance one can alternatively construct an energy spectral function as in [35], to extract the time scale when the function develops a support towards negative \(\omega\).
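A sketch of how the nucleation-time criterion described above can be automated is given below; the moving-average window, the threshold, and the assumption that the beginning of the run is pre-nucleation are all illustrative, and in practice the extracted times were cross-checked by direct inspection.

```python
import numpy as np

def nucleation_time(times, rho_max, window=50, n_sigma=3.0):
    """Estimate the nucleation time from a rho_max(t) series (heuristic sketch).

    The series is smoothed with a moving average; nucleation is flagged at the
    first time after which the smoothed curve stays above the pre-nucleation
    level by more than n_sigma times the pre-nucleation scatter.
    """
    kernel = np.ones(window)/window
    smooth = np.convolve(rho_max, kernel, mode='valid')
    t_al = times[window-1:]                       # times aligned with the smoothed series
    base, scatter = smooth[:window].mean(), smooth[:window].std()
    threshold = base + n_sigma*scatter

    below = np.where(smooth <= threshold)[0]
    i0 = 0 if below.size == 0 else below[-1] + 1  # first index of the sustained rise
    return t_al[i0] if i0 < len(t_al) else None   # None: no nucleation in this run
```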
In order to gauge the validity of our analytical estimate of nucleation times (cf. Eq. (10) with Eq. (8)), and to extract the different \(\alpha\) coefficients, we split the data set into two, with \(\lambda=-2\pi Gm^{2}/v_{0}^{2}\) being the splitting point. Below we elaborate on the statistical analysis we performed in the two regimes.
#### iii.1.1 Net attractive interactions (\(\lambda\gtrsim\lambda_{cr}\)).
In order to test the \(\lambda\) dependence of our estimate, we construct the quantity \(r(\lambda)=\log(mv_{0}L)(\tau_{\text{nuc}}(0)-\tau_{\text{nuc}}(\lambda))/2\tau_ {\text{nuc}}(\lambda)\) using Eq. (8) and Eq. (10). This gets rid of the \(\bar{\rho}\) and \(L\) dependence, giving
\[r(\lambda)=\frac{\alpha_{12}}{\alpha_{1}}\left(\frac{\lambda\,v_{0}^{2}}{2\pi Gm ^{2}}\right)+\frac{\alpha_{2}}{2\alpha_{1}}\left(\frac{\lambda\,v_{0}^{2}}{2 \pi Gm^{2}}\right)^{2}\.\]
Not only is the curve simple enough to do statistics with, but in this way we can also combine all of our simulation data (with different \(\bar{\rho}\) and \(L\)). The analogous quantity for simulations is
\[\hat{r}(\lambda)=\left[\frac{\langle\hat{\tau}_{\rm nuc}(0)\rangle-\hat{\tau}_{ \rm nuc}(\lambda)}{2\langle\hat{\tau}_{\rm nuc}(\lambda)\rangle}\,\right]\log(mv _{0}L)\,,\]
where hats denote simulation data and angle brackets denote averaging over all of the data (for a given \(\lambda\) value). To extract the ratios \(\alpha_{12}/\alpha_{1}\) and \(\alpha_{2}/\alpha_{1}\) for the theory curve, we construct the cost function
\[\text{cost}\left(\frac{\alpha_{12}}{\alpha_{1}},\frac{\alpha_{2}}{\alpha_{1}} \right)=\sum_{\lambda}^{\sim\lambda_{\rm cr}}\frac{1}{N_{\lambda}}\sum_{i=1}^{ N_{\lambda}}\Biggl{[}\frac{\hat{r}_{i}(\lambda)-r(\lambda)}{r(\lambda)}\Biggr{]}^{2} \tag{14}\]
for least square fitting. Here \(N_{\lambda}\) is the number of different simulations performed for a given \(\lambda\) value. Minimizing this cost function then fetches the optimal values for \(\alpha_{12}/\alpha_{1}\), and \(\alpha_{2}/\alpha_{1}\). For \(\alpha_{1}\), we simply find the average of \(\hat{\tau}_{\rm nuc}(0)/\tau_{\rm nuc}(0)\), which we then use to get \(\alpha_{12}\) and \(\alpha_{2}\) from the previous two ratios. For our Gaussian initial condition (12), we found \(\alpha_{1}\simeq 0.8\), \(\alpha_{12}\simeq 1.2\), and \(\alpha_{2}\simeq 1.2\).
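Schematically, the extraction of the two ratios can be carried out as below. This is a sketch that assumes the simulation data have already been reduced to \(\hat{r}(\lambda)\) as defined above (attractive branch, \(\lambda\neq 0\)); the use of scipy's Nelder-Mead minimizer and the starting point are purely illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def fit_alpha_ratios(x_vals, r_hat_by_x):
    """Fit (alpha12/alpha1, alpha2/alpha1) by minimizing the cost of Eq. (14).

    x_vals     : values of x = lambda*v0^2/(2*pi*G*m^2) on the attractive branch
    r_hat_by_x : dict mapping each x to the array of r-hat values from the
                 N_lambda simulations performed at that x
    """
    def r_theory(x, a12_over_a1, a2_over_a1):
        return a12_over_a1*x + 0.5*a2_over_a1*x**2

    def cost(params):
        a12_over_a1, a2_over_a1 = params
        total = 0.0
        for x in x_vals:
            r_th = r_theory(x, a12_over_a1, a2_over_a1)
            total += np.mean(((r_hat_by_x[x] - r_th)/r_th)**2)
        return total

    return minimize(cost, x0=[1.0, 1.0], method='Nelder-Mead').x
```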
Figure 2: _Upper Panel_: Our main figure showing the nucleation time \(\tau_{\rm nuc}\) (normalized by the gravity only case) as a function of short-ranged self-interaction strength \(\lambda\) (normalized by the critical factor \(2\pi Gm^{2}/v_{0}^{2}\)). Solid gray curves are from the theory estimate Eq. (16) (cf. Eq. (10) with Eq. (8) and Eq. (11)), where the different \(\alpha\) coefficients are obtained from least square fitting as described in the main text. The different colored error bars are from simulations (performed with Gaussian initial conditions (12)), with box sizes \(L=36\) (brown), \(L=40\) (pink), \(L=45\) (magenta), and \(L=50\) (blue), and varying average densities \(\bar{\rho}\). Here we have only plotted one theory curve for \(L=50\) (all curves for the four different box sizes lie practically on top of each other since the \(L\) dependence is quite mild). To show the effect of the interference term in the relaxation rate and the delay factor in the nucleation time, we have also plotted dotted and dashed gray curves. Respectively, these correspond to when the interference term from the relaxation rate is set to zero, and the delay factor in the nucleation time scale is set to unity. This delay factor is only relevant in the \(\lambda\lesssim-2\pi Gm^{2}/v_{0}^{2}\) case, and the dashed gray curve in the left panel is simply the extension of the main solid gray curve in the right panel. _Bottom panel_: Normalized \(\rho_{\rm max}\) (by their respective initial values) vs time curves for six different \(\lambda\) values, highlighted by colored points in the upper panel. The points of "sudden" rise correspond to nucleation of respective localized Bose clumps.
#### iii.1.2 Net repulsive interactions (\(-\lambda\gtrsim-\lambda_{\rm cr}\))
In this case, we expect nucleation to happen later than relaxation, given by Eq. (10) with the delay factor \(h(\lambda)\) in Eq. (11). We can use the previous case relationship \(\tau_{\rm nuc}(\lambda\gtrsim\lambda_{\rm cr})\simeq\Gamma_{\rm relax}^{-1}\), to test the scaling of \(h(\lambda)\) for the \(-\lambda\gtrsim-\lambda_{\rm cr}\) case. From simulations, we construct \(\hat{\tau}_{\rm nuc}\Gamma_{\rm relax}\) with the three \(\alpha\)s in the relaxation rate set to the ones obtained above. We then perform least square fitting by constructing the cost function similar to the previous case
\[\mathrm{cost}(\alpha_{3})=\sum_{\lambda}^{\sim\lambda_{\rm cr}}\frac{1}{N_{ \lambda}}\sum_{i=1}^{N_{\lambda}}\!\left[\frac{\hat{\tau}_{\rm nuc}\Gamma_{\rm relax }-h(\lambda)}{h(\lambda)}\right]^{2}, \tag{15}\]
and minimizing it. For Gaussian initial conditions, we found \(\alpha_{3}\simeq 1\).
With the above analysis and all the four \(\alpha\) values obtained, the upper panel of fig. 2 shows our main plot. We plot \(\tau_{\rm nuc}(\lambda)\) (normalized by \(\tau_{\rm nuc}(0)\)) as a function of \(\lambda(v_{0}^{2}/2\pi Gm^{2})\):
\[\frac{\tau_{\rm nuc}}{\tau_{\rm nuc,0}}(x)=\frac{2\alpha_{1} \Lambda\,h(x)}{2\alpha_{1}\Lambda+4\alpha_{12}x+2\alpha_{2}x^{2}}\,;\quad x \equiv\frac{\lambda v_{0}^{2}}{2\pi Gm^{2}}\,, \tag{16}\]
along with our simulation data. Here \(\Lambda=\log(mv_{0}L)\) is the Coulomb logarithm, and \(h(x)\) is unity for \(x\gtrsim-1\) (right upper panel of fig. 2) while linearly increasing for \(x\lesssim-1\) (left upper panel of fig. 2). Note that we have only plotted one curve for \(L=50\) (solid gray), since the dependence on \(L\) is very mild and renders different curves for different values of \(L\) practically on top of each other. The error bars correspond to 1-\(\sigma\) fluctuations (owing to random and different initial conditions for every simulation seed), with different colors corresponding to the different box sizes considered. In general, the agreement between analytical estimates and simulations is evident. Let us highlight our two main results: (a) The rising feature as \(\lambda\) goes from positive to negative, with a peak occurring around \(\lambda\simeq-2\pi Gm^{2}/v_{0}^{2}\simeq\lambda_{\rm cr}\), is clear evidence of the interference term in the relaxation rate. To represent the effect of the interference term visually, we have also plotted a dotted gray curve (in the upper right panel of fig. 2), which is equal to the inverse of the relaxation rate with the interference term dropped. That is, the inverse of Eq. (8) with the term \(\propto G\lambda\) set to zero; (b) To the left of the peak and increasing \(-\lambda\), nucleation happens later than just the inverse relaxation rate. The delay factor \(h\) and the relaxation rate \(\Gamma_{\rm relax}\) scale as \(\sim-\lambda\) and \(\lambda^{2}\) (to leading order) respectively, resulting in the scaling of the nucleation time as \((-\lambda)^{-1}\) (and not \(\lambda^{-2}\)) to leading order. To highlight this, we have augmented the upper left panel of fig. 2 with just the relaxation time curve, i.e. \(\Gamma_{\rm relax}^{-1}\), shown in dashed gray. (This is nothing but the solid gray curve on the right upper panel, extended towards the left upper panel).
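For completeness, the plotted master curve can be evaluated directly from Eqs. (8), (10) and (11), as in the sketch below (code units and an illustrative \(\bar{\rho}\) are assumed; the \(\bar{\rho}\) dependence drops out of the ratio \(\tau_{\rm nuc}/\tau_{\rm nuc,0}\) since \(\Gamma_{\rm relax}\propto\bar{\rho}^{2}\)).

```python
import numpy as np

# Fitted O(1) coefficients for the Gaussian initial condition (see text)
a1, a12, a2, a3 = 0.8, 1.2, 1.2, 1.0
G, m, v0, L, rho = 1.0/(8.0*np.pi), 1.0, 1.0/np.sqrt(2.0), 50.0, 1e-3   # code units; rho illustrative
Lam = np.log(m*v0*L)                        # Coulomb logarithm
lam_cr = -(a12/a2)*2.0*np.pi*G*m**2/v0**2   # Eq. (9) with beta = alpha12/alpha2

def gamma_relax(lam):
    """Total relaxation rate, Eq. (8)."""
    return (a1*(4*np.pi*G)**2*rho**2*Lam/(4*m**3*v0**6)
            + a12*(4*np.pi*G)*lam*rho**2/(m**5*v0**4)
            + a2*lam**2*rho**2/(m**7*v0**2))

def tau_nuc(lam):
    """Nucleation time, Eq. (10) with the delay factor of Eq. (11)."""
    h = 1.0 if lam >= lam_cr else a3*lam/lam_cr
    return h/gamma_relax(lam)

x = np.linspace(-6.0, 6.0, 241)             # x = lambda*v0^2/(2*pi*G*m^2)
lam = x*2.0*np.pi*G*m**2/v0**2
curve = np.array([tau_nuc(l) for l in lam])/tau_nuc(0.0)
# the maximum of this curve lies at x of order -1 (cf. the upper panel of fig. 2)
```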
The lower panel of fig. 2 shows moving averaged \(\rho_{\rm max}\) vs time curves for six simulations with different \(\lambda\) values. The unambiguous "sudden" rise in \(\rho_{\rm max}\) marks the nucleation of a localized object within which density grows over time.8
Footnote 8: In all of our simulations, we have explicitly verified, by visually tracking the simulation box, that this rising feature indeed corresponds to appearance of an overdense region.
As visual examples, in fig. 3 we also present density projection snapshots for six different \(\lambda\) values at later times, showing the presence of nucleated Bose stars. For attractive short-ranged self-interaction \(\lambda>0\), nucleated Bose clumps eventually collapse into a Bosenova. This happens when it reaches the critical mass where it can no longer remain stable (see section IV.2).
## VI Summary and discussion
In this paper we have investigated kinetic relaxation and associated nucleation times of Bose stars in scalar fuzzy dark matter with short-ranged 2-body self-interactions. Starting with the wave-kinetic Boltzmann equation for the mode occupation number function (which we derived in an earlier work), we first highlighted the presence of a cross/interference term \(\propto G\lambda\) in the rate of relaxation \(\Gamma_{\rm relax}\), alongside the usual two terms \(\propto G^{2}\) and \(\lambda^{2}\) due to both gravitational and short-ranged self-interaction individually. This is because of the wave-mechanical nature of the system: The rate depends on the total cross section, which is not just the sum of individual cross sections due to the different processes. Rather, the scattering amplitudes due to all the processes must be added first, and then the absolute square of the sum used to get the cross section and the associated rate of relaxation/condensation.
The presence of this cross term gives rise to a critical repulsive self-interaction strength \(\lambda_{\rm cr}\approx-2\pi Gm^{2}/v_{0}^{2}\), around which the typical net self-interaction (due to both gravitational and short-ranged self-interaction), transitions between being attractive and repulsive, and the relaxation rate becomes smallest. Here \(k_{0}=mv_{0}\) is the typical wave-mode present in the system initially.
For nucleation times as a function of \(\lambda\), we found that for the case of net attractive self-interaction \(\lambda\gtrsim\lambda_{\rm cr}\), nucleation happens quickly upon relaxation, giving rise to the relationship \(\tau_{\rm nuc}\simeq\Gamma_{\rm relax}^{-1}\). On the other hand, for net repulsive self-interaction \(-\lambda\gtrsim-\lambda_{\rm cr}\), nucleation is delayed. This is because upon relaxation, the short-ranged self-interaction dominates over the gravitational self-interaction, preventing nucleation of a bound object. Over time as more particles are driven towards the condensate phase (equivalently, as the field correlation length scale increases along with diminishing density fluctuations), a potential arises for the formation of a bound region where gravity can now overcome both
the wave pressure and the short-ranged self-interaction. The associated delay factor rises linearly with \(-\lambda\), giving the nucleation time scale as \(\tau_{\rm nuc}\simeq(\lambda/\lambda_{\rm cr})\Gamma_{\rm relax}^{-1}\). In summary, Eq. (10) along with Eq. (8) (with the delay factor given in Eq. (11)) is our main analytical estimate for the nucleation timescale of Bose stars, as a function of the short-ranged self-interaction strength \(\lambda\).
To analyze this, we performed a large suite of 3+1 dimensional simulations of the Schrodinger-Poisson / Gross-Pitaevskii system (Eq. (1)), for many different values of \(\lambda\) and different parameters such as the box size \(L\) and average mass density \(\bar{\rho}\). All of our simulations were carried out with a Maxwell-Boltzmann distribution, with random phases for each value of the wavemode \(\mathbf{k}\) (Eq. (12)). Throughout most of our simulations, we kept track of the max density in the box, the occupation number function \(f_{k}\) (radially averaged \(f_{\mathbf{k}}\) in \(\mathbf{k}\) space), the associated correlation function \(\zeta(r)\), and the projected mass density along some direction. By reading the times at which \(\rho_{\rm max}\) starts to monotonically rise beyond just the statistical fluctuations (together with making sure that a localized overdense region does appear in the simulation box around this time), we record the times of nucleation. The upper panel of Fig. 2 presents the comparison between simulations and the analytical estimate. As examples, the figure is also appended (lower panel) with \(\rho_{\rm max}\) vs time curves for six different \(\lambda\) values.
While in this paper we have not analyzed our simulation data for the rate at which Bose stars accrete mass from their surroundings, we kept track of the eventual behavior of these objects (post nucleation) for many of our simulations. For the attractive case (\(\lambda>0\)), we confirmed that the nucleated Bose stars eventually decay away. This is expected since there exists a maximum critical mass beyond which the star becomes unstable and collapses into a Bosenova. For instance see the upper right snapshot in Fig. 3, when the nucleated star 'immediately' collapses. While the study of the eventual dynamics and fate of such regions requires a full relativistic treatment, the field dynamics up to this point is well described by the non-relativistic GP equation (e.g. see [51]).
For the repulsive case \(\lambda<0\), we found a peculiar decay behavior. We find that the nucleated clump _eventually_ (over time scales longer than the nucleation time) reaches
Figure 3: Density projection snapshots for 6 different \(\lambda\) values, at different times \(\hat{t}\) in the respective simulations. _Upper Panel_: Snapshots for three \(\lambda\) values in the typical _net_ attractive regime \(\lambda\gtrsim\lambda_{\rm cr}\simeq-2\pi Gm^{2}/v_{0}^{2}\). In the rightmost snapshot, for \(\lambda\approx-3\,\lambda_{\rm cr}\), the nucleated Bose clump quickly collapses (within \(5dt\)), shown in the smaller right corner image. _Bottom panel_: Snapshots for three different \(\lambda\) values for the other case of typical _net_ repulsive self-interactions \(-\lambda\gtrsim-\lambda_{\rm cr}\simeq 2\pi Gm^{2}/v_{0}^{2}\).
a type of criticality at which point very high frequency modes, passing through the clump and travelling along the three directions of the simulation box, appear in the system. See appendix B for some discussion. This could be an artifact of the periodic boundary conditions of the split-Fourier simulation setup, and if so, it would bring into question the use of such simulation setups to study the long term dynamics of fuzzy dark matter with repulsive short-ranged self-interactions. We leave a detailed investigation of this behavior for a separate work.
### Comparison with earlier work
Let us now compare our results with some of the earlier work on the subject of kinetic nucleation of Bose stars. First, our results encompass the result of [35] for the gravity only (\(\lambda=0\)) case, and are even in very good agreement with the order unity coefficient \(\alpha_{1}\) in the rate expression (besides the overall scaling with \(\bar{\rho}\), \(m\), \(v_{0}\), \(L\) and \(G\)), obtained for the Gaussian initial condition. Upon inclusion of short-ranged self-interaction (\(\lambda\neq 0\) case), our results differ significantly from the existing literature [37; 38; 39]. First, we find that there exists an interference term \(\propto G\lambda\) in the _relaxation_ rate, which in fact is the leading order \(\lambda\) dependent term when the short-ranged self-interaction is not dominating over the gravitational self-interaction. Only in the scenario when the former is dominant does the relaxation rate go as \(\lambda^{2}\) to leading order. Secondly, the _nucleation_ time scale is not always equal to the inverse relaxation rate. While for \(\lambda\gtrsim\lambda_{\rm cr}\) the nucleation time scale is just the inverse relaxation rate, for the strong repulsive self-interaction \(-\lambda\gtrsim-\lambda_{\rm cr}\), the nucleation time is delayed by an extra factor of (\(\lambda/\lambda_{\rm cr}\)). Therefore for the purposes of nucleation of Bose stars, only in the case of strong attractive short-ranged self-interaction, \(\lambda\gtrsim-\lambda_{\rm cr}\), is it true that the nucleation time goes as \(\lambda^{-2}\) to leading order. In the opposite case of strong repulsive short-ranged self-interaction, \(-\lambda\gtrsim-\lambda_{\rm cr}\), the nucleation time scale goes as \((-\lambda)^{-1}\) to leading order instead.
### Implications
Our results could have important implications in the context of self-interacting fuzzy dark matter and the various interesting phenomena that it entails. The appearance of the interference term in the relaxation rate, and hence in the nucleation time scale of Bose stars, may modify results for some of these phenomena, such as recurrent axinovae [52], de-stabilization of gravitational atoms [69], among others.
In general, irrespective of the nature (attractive or repulsive) of the point-like self-interaction, the interference term becomes the leading order \(\lambda\) dependent term (and hence extremely important) when \(|\lambda|\) is at most comparable to the critical value \(|\lambda_{\rm cr}|\). As an example, even for the QCD axion we have \(\lambda_{\rm qcd}/|\lambda_{\rm cr}|\simeq 1.3(v_{0}^{2}m_{\rm pl}^{2}/f_{a}^{2})\), hence \(\lambda_{\rm qcd}\) becomes comparable to, or less than, \(|\lambda_{\rm cr}|\) in cosmological environments with \(v_{0}\lesssim(f_{a}/m_{\rm pl})\sim 10^{-5}\). For instance this could be important in the study of axion miniclusters [70].
In this paper we have focused on kinetic nucleation via both gravitational and short-ranged self-interactions for a single scalar field. A natural generalization is to include multiple scalar fields with naturally different masses and 4-point interactions, or a single spin-1 field including density-density and spin-spin interactions [57; 71], or even multiple spin-1 fields with extra Yang-Mills interactions [57], or a combination thereof. While there would necessarily be interference terms \(\propto G\lambda\), and we expect similar scaling of nucleation time scales as presented in this work (as a function of \(\lambda\)), a detailed analysis of such cases is left for future work.9
Footnote 9: We thank Benjamin Schussler for carrying out some preliminary simulations for the self-interacting vector case, confirming the presence of the interference term in the relaxation rate.
## Acknowledgements
We thank Mustafa Amin, Mark Hertzberg, Andrew Long, and David J.E. Marsh for many helpful discussions and also their comments on this manuscript. MJ is partially supported by a DOE grant DE-SC0021619, and partly supported by a Leverhulme Trust Research Project (RPG-2022-145). JT and WW acknowledge undergraduate summer support from the Department of Physics and Astronomy at Rice University.
|
2309.15250 | Transformation of polar nematic phases in the presence of electric field | Only a few years have passed since discovery of polar nematics, and now they
are becoming the most actively studied liquid crystal materials. Despite
numerous breakthrough findings made recently, a theoretical systematization is
still lacking. In the present paper we are making a step on the way of
systematization. The powerful technique of molecular-statistical physics has
been applied to an assembly of polar molecules influenced by electric field.
Totally, the three polar nematic phases were found to be stable at various
conditions: the double-splay ferroelectric nematic $N_F^{2D}$ (observed in the
lower-temperature range in the absence or at low electric field), the
double-splay antiferroelectric nematic $N_{AF}$ (observed at intermediate
temperature in the absence or at low electric field) and the single-splay
ferroelectric nematic $N_F^{1D}$ (observed at moderate electric field at any
temperature below transition into paraelectric nematic $N$ and in the
higher-temperature range (also below $N$) at low electric field or without it).
A paradoxical transition from $N_F^{1D}$ to $N$ induced by application of higher
electric field has been found and explained. A transformation of the structure
of polar nematic phases at application of electric field has also been
investigated by Monte Carlo simulations and experimentally by observation of
POM images. In particular, it has been realized that, at planar anchoring,
$N_{AF}$ in the presence of moderate out-of-plane electric field exhibits the
twofold splay modulation: antiferroelectric in the plane of the substrate and
ferroelectric in the plane normal to the substrate. Several additional
sub-transitions related to fitting confined geometry of the cell by the
structure of polar phases were detected. | A. V. Emelyanenko, V. Yu. Rudyak, F. Araoka, H. Nishikawa, K. Ishikawa | 2023-09-26T20:25:05Z | http://arxiv.org/abs/2309.15250v1 | # Transformation of polar nematic phases in the presence of electric field
###### Abstract
Only a few years have passed since the discovery of polar nematics, and they are already becoming the most actively studied liquid crystal materials. Despite numerous breakthrough findings made recently, a theoretical systematization is still lacking. In the present paper we make a step towards such a systematization. The powerful technique of molecular-statistical physics has been applied to an assembly of polar molecules influenced by electric field. In total, three polar nematic phases were found to be stable under various conditions: the double-splay ferroelectric nematic \(N_{F}^{2D}\) (observed in the lower-temperature range in the absence of electric field or at low field), the double-splay antiferroelectric nematic \(N_{AF}\) (observed at intermediate temperature in the absence of electric field or at low field) and the single-splay ferroelectric nematic \(N_F^{1D}\) (observed at moderate electric field at any temperature below the transition into the paraelectric nematic \(N\), and in the higher-temperature range (also below \(N\)) at low electric field or without it). A paradoxical transition from \(N_{F}^{1D}\) to \(N\) induced by application of higher electric field has been found and explained. The transformation of the structure of polar nematic phases upon application of electric field has also been investigated by Monte Carlo simulations and experimentally by observation of POM images. In particular, it has been realized that, at planar anchoring, \(N_{AF}\) in the presence of a moderate out-of-plane electric field exhibits a twofold splay modulation: antiferroelectric in the plane of the substrate and ferroelectric in the plane normal to the substrate. Several additional sub-transitions, related to the fitting of the structure of the polar phases to the confined geometry of the cell, were detected.
## I Introduction
One of the main trends in modern science today is the development of new materials, which can be effectively manipulated by electric field for various human needs, from displays to medicine. Liquid crystals (LCs) fulfill many demands. It was noticed that LCs possessing spontaneous polarization can be better candidates for novel applications: from fast energy-saving and compact electronics to artificial muscles. However, the layered structures of smectics (the only class of LCs previously known to possess spontaneous polarization) are poorly resistant to mechanical stress. Over the past few years, several new classes of nematic LCs (which are resilient to mechanical stress) with unique properties originating from the unique symmetry of individual molecules have been discovered.
For the last few decades, the formation of spontaneous polarization in nematic materials has been actively discussed [1; 2; 3; 4]. Liquid crystals consisting of bent-core molecules were considered as the main candidates, since they have a significantly (several orders of magnitude) higher flexoelectric coefficient [5]. Indeed, nanosized polar clusters in the nematic phase were found for this kind of mesogens [6]. However, these materials do not possess macroscopic polarization in the absence of external field. Meanwhile, proper ferroelectricity was found in columnar phases composed of umbrella-shaped mesogens [7; 8; 9] and in re-entrant smectic phases [10; 11; 12; 13; 14].
In 2017, two scientific groups independently reported the existence of polar nematic phases in LCs composed of wedge-shaped molecules [15; 16; 17]. In Refs. [18; 19] it was realized that the polar nematic phases can demonstrate spontaneous splay and flexoelectricity, while the existence of splay flexoelectricity in nematic LCs was predicted earlier theoretically in Ref. [20]. It was also noticed in Refs. [21; 22; 23; 24; 25] that minor changes in the molecular shape can substantially modify the phase sequence.
In Ref. [15] it was demonstrated that the DIO material possesses at least three nematic phases: a conventional paraelectric nematic phase (\(N\) or \(M1\)) at higher temperature, a ferroelectric nematic phase (\(N_{F}\) or \(MP\)) at lower temperature and some intermediate phase (\(N_{X}\) or \(M2\)) in between them. In Refs. [16; 17] and later in Ref. [26] it was confirmed that the RM-734 material demonstrates \(N_{F}\), but does not demonstrate \(N_{X}\). Anomalously high dielectric permittivity and dielectric anisotropy were found in the \(N_{F}\) phase in both DIO and RM-734 materials [27]. The value of spontaneous polarization in \(N_{F}\) is comparable with that of solid-state ferroelectrics. At present, many other polar nematic (including chiral and biaxial) phases have been found in different materials and mixtures [28; 29; 30; 31]. Our theoretical studies
presented in Ref. [32] suggest that the intermediate \(M2\) (or \(N_{X}\)) phase observed in DIO can be the antiferroelectric double-splay nematic phase (in correspondence with the definition and in complete agreement with Refs. [33; 34]) forming periodic 2D-splay domains of several micrometers in size. The same conclusions follow from dielectric measurements and POM observations in Refs. [15; 35] and from measurements of spontaneous polarization and PFM observations in Ref. [26].
Experimentally, it is becoming more and more evident [36] that the ferroelectric nematic phase (\(MP\) or \(N_{F}\)) also possesses splay domains. For uniformity of description, here and below we are going to use the \(N_{F}^{1D}\) and \(N_{F}^{2D}\) notations, combining "\(N_{F}\)" with the "single-splay" or "double-splay" definitions introduced in Refs. [33; 34]. To be consistent furthermore, we are going to use the \(N_{AF}\) notation for \(M2\) (\(N_{X}\)), which is antiferroelectric.
There are many expectations about applications of nematic ferroelectrics (NFs). Generally, their behavior at application of electric field is not trivial, in particular in combination with surface-related effects [37]. NFs are good candidates for nonlinear optics [38]. Interesting effects related to the motion of ferroelectric nematic droplets in isotropic melts are considered in Ref. [39], and light-induced branched structures of NF droplets on surfaces are observed in Ref. [40]. Various polarization topologies in confined NFs were discussed in Ref. [41].
The discussion on how many polar nematic phases can exist, which of them are splayed and which are uniform, which of them are proper and which are improper, is still ongoing. Recently, the existence of three kinds of NF phases was reported in Ref. [42]. In the present paper we justify the existence of three (macroscopically uniaxial and achiral) polar nematic phases. In particular, we expect that all three polar phases can be observed in the DIO material. We are going to present theoretical explanations of why all three polar phases are splayed and improper ferroelectric. At the same time, since the splay domain size can reach several micrometers, in some temperature ranges the splay director deformation can be suppressed by the surfaces if the cell thickness is smaller than the domain size.
The paper is organized as follows. In Sec. II the structures of polar nematic phases merged from theory, computer simulations and experiment will be outlined and systematized. In Sec. III the transformations of polar nematic phases induced by variation of temperature and electric field will be investigated. In Sec. IV the theoretical approaches used for analysis of the structures of polar nematic phases will be presented. Finally, in Sec. V the conclusions will be made.
## II The structures of polar nematic phases merged from theory, computer simulations and experiment
### Generalization of elastic free energy for the presence of flexoelectric and induced polarizations
It is known that the flexoelectric effect is crucial for the formation of various polar nematic phases. We consider polar molecules similar to those presented in Ref. [32]. Technically, the flexoelectric term in the free energy can be obtained from consideration of the specific symmetry of the pair molecular potential. In particular, the effective pair
Figure 1: A pair of interacting polar molecules. Adapted from Ref. [32].
molecular interaction potential \(U_{12}^{ef}({\bf a}_{1},{\bf a}_{2},{\bf r}_{12})\) can be approximated by spherical invariants \(T_{\ell\,L\,\lambda}({\bf a}_{1},{\bf u}_{12},{\bf a}_{2})\), where \({\bf a}_{1}\) and \({\bf a}_{2}\) are the principal axes of molecules 1 and 2 located at points \({\bf r}_{1}\) and \({\bf r}_{2}\), respectively, and \({\bf u}_{12}\equiv{\bf r}_{12}/|r_{12}|\) is the unit intermolecular vector, \({\bf r}_{12}\equiv{\bf r}_{2}-{\bf r}_{1}\) (Fig. 1):
\[U_{12}^{ef}({\bf a}_{1},{\bf a}_{2},{\bf r}_{12})=-\sum_{\ell,L,\lambda}J_{ \ell L\lambda}(r_{12})T_{\ell L\lambda}({\bf a}_{1},{\bf u}_{12},{\bf a}_{2}) \quad. \tag{1}\]
Introducing the polar \(P({\bf r})\) and non-polar \(S({\bf r})\) orientational order parameters
\[P({\bf r})\equiv\int f[({\bf a}\cdot{\bf n}),{\bf r}]P_{1}({\bf a}\cdot{\bf n} )d^{2}{\bf a}\quad,\quad S({\bf r})\equiv\int f[({\bf a}\cdot{\bf n}),{\bf r}] P_{2}({\bf a}\cdot{\bf n})d^{2}{\bf a}\quad, \tag{2}\]
where \(f[({\bf a}\cdot{\bf n}),{\bf r}]\) is the orientational distribution function for molecules having principal axes \({\bf a}\) with respect to director \({\bf n}\) at point \({\bf r}\), and using the gradient expansion of the director [43; 44], we obtain the flexoelectric term as the average of \(T_{110}({\bf a}_{1},{\bf u}_{12},{\bf a}_{2})\) and \(T_{011}({\bf a}_{1},{\bf u}_{12},{\bf a}_{2})\) polar invariants [32]:
\[\langle J_{110}(r_{12})T_{110}({\bf a}_{1},{\bf u}_{12},{\bf a}_{2})+J_{011}(r _{12})T_{011}({\bf a}_{1},{\bf u}_{12},{\bf a}_{2})\rangle\Longrightarrow \lambda P({\mathbf{\nabla}}\cdot{\bf n})\quad, \tag{3}\]
where \(\lambda\) is proportional to the flexoelectric coefficient. The elastic free-energy density can be generalized by inclusion of the flexoelectric splay term and the term related to the presence of an external electric field:
\[\frac{\partial F_{\bf n}}{\partial V}=\frac{1}{2}K_{11}\left\{{\bf n}\left({ \mathbf{\nabla}}\cdot{\bf n}\right)-\lambda{\bf P}\right\}^{2}+\frac{1}{2}\,K_{2 2}({\bf n}\cdot[{\mathbf{\nabla}}\times{\bf n}])^{2}+\frac{1}{2}K_{33}[{\bf n} \times[{\mathbf{\nabla}}\times{\bf n}]]^{2}-\varepsilon_{a}({\bf E}\cdot{\bf P}) \quad, \tag{4}\]
where \({\bf P}({\bf r})\) is the vector having absolute value \(P({\bf r})\) [see definition in Eq. (2)] and (at positive \(P\)) parallel to particular direction (one of the two opposite directions) of pseudovector \({\bf n}({\bf r})\), \(K_{11}\), \(K_{22}\) and \(K_{33}\) are the splay, twist and bend elastic constants, respectively, \(K_{11}\lambda\) is the flexoelectric constant [Eq. (3)], and \(\varepsilon_{a}\) is the dielectric anisotropy of the material. At positive \(\lambda\), polarization \({\bf P}\) is parallel to director \({\bf n}\) at positive splay (\({\mathbf{\nabla}}\cdot{\bf n}\)), and is anti-parallel to \({\bf n}\) at negative splay. Here we should note that Eq. (4) is only a part of the free-energy density, which explicitly depends on director \({\bf n}\), but it does not contain all the terms depending on the polarization value \(P\). The total free-energy density will be considered in Sec. IV.
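As a concrete illustration of Eq. (4), the sketch below evaluates the director-dependent free-energy density on a periodic grid by finite differences. This is a generic sketch rather than the simulation code used in this work; the field shapes, grid spacing and material constants are placeholders.

```python
import numpy as np

def frank_flexo_energy_density(n, P, E, K11, K22, K33, lam, eps_a, dx):
    """Free-energy density of Eq. (4) on a periodic grid (finite differences).

    n : director field, shape (3, Nx, Ny, Nz), assumed unit-normalized
    P : polarization vector field, same shape
    E : uniform applied electric field, shape (3,)
    """
    grad = lambda f, ax: (np.roll(f, -1, axis=ax) - np.roll(f, 1, axis=ax))/(2.0*dx)

    div_n = sum(grad(n[i], i) for i in range(3))
    curl_n = np.stack([grad(n[2], 1) - grad(n[1], 2),
                       grad(n[0], 2) - grad(n[2], 0),
                       grad(n[1], 0) - grad(n[0], 1)])

    splay_vec = n*div_n - lam*P                                   # n (div n) - lambda P
    n_dot_curl = np.einsum('i...,i...->...', n, curl_n)
    n_cross_curl = np.cross(n, curl_n, axis=0)

    return (0.5*K11*np.einsum('i...,i...->...', splay_vec, splay_vec)
            + 0.5*K22*n_dot_curl**2
            + 0.5*K33*np.einsum('i...,i...->...', n_cross_curl, n_cross_curl)
            - eps_a*np.einsum('i,i...->...', np.asarray(E, dtype=float), P))
```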
### Equilibrium structures of polar nematic phases
To obtain the equilibrium structures of the polar nematic material at various conditions, we should minimize the total free energy independently with respect to the director \({\bf n}({\bf r})\) and the orientational distribution function \(f[({\bf a}\cdot{\bf n}),{\bf r}]\). The whole director-dependent part of the free-energy density is presented in Eq. (4), while the total free-energy density
Figure 2: Director distribution in \(N_{AF}\) (a), \(N_{F}^{2D}\) (b) and \(N_{F}^{1D}\) (c). Green color corresponds to positive splay and polarization, red color corresponds to negative splay and polarization; the \({\bf x}\)-axis is either along the domain symmetry axis in (a) and (b) or along the symmetry plane in (c); the \({\bf r}\)-axis is perpendicular to the \({\bf x}\)-axis; \(\theta\) is the angle between the local director \({\bf n}\) and the \({\bf x}\)-axis; \({\bf P}\) is the local polarization.
depending explicitly on the orientational distribution function \(f[({\bf a}\cdot{\bf n}),{\bf r}]\) will be considered in the framework of the molecular-statistical theory in Sec. IV A. One notes, however, that Eq. (4) also contains the polarization \({\bf P}({\bf r})\), which is determined by the function \(f[({\bf a}\cdot{\bf n}),{\bf r}]\) in accordance with Eq. (2), and therefore the director and the orientational distribution function are correlated. This correlation will be considered in the framework of perturbation theory in Sec. IV B.
The theoretical treatment requires, however, some geometrical simplification, such as the assumption of axial or planar symmetry. Within these symmetry restrictions, at various conditions we find three basic structures, which are presented in Fig. 2. At the same time, computer simulations (Sec. IV C) provide more detailed information about the transformations between the structures presented in Fig. 2, and the transient structures obviously have more complex geometry. The first basic structure is the double-splay antiferroelectric nematic phase \(N_{AF}\) (designated also \(N_{X}\) or \(M2\) elsewhere) with alternating signs of the splay and polarization in space. The other two basic structures, stable at different conditions, are the double- and single-splay ferroelectric nematics, \(N_{F}^{2D}\) and \(N_{F}^{1D}\), respectively. \(N_{F}^{2D}\) and \(N_{AF}\) are composed of quasi-cylindrical periodical domains, while \(N_{F}^{1D}\) is composed of planar periodical domains. For each structure presented in Fig. 2, the \({\bf x}\)-axis can be introduced, to which the director is parallel in the middle of each domain. In \(N_{F}^{2D}\) and \(N_{AF}\), the director varies along the radius \({\bf r}\) of the cylinder, while in \(N_{F}^{1D}\) the director varies along a single space direction (for uniformity of equations, also designated as \({\bf r}\)). In all cases, \({\bf r}\) is perpendicular to \({\bf x}\). In the ferroelectric phases, \(N_{F}^{2D}\) and \(N_{F}^{1D}\), the projection of polarization on the \({\bf x}\)-axis does not alternate in sign, while in the antiferroelectric \(N_{AF}\), polarization alternates periodically in sign along each Cartesian coordinate.
The electric field - temperature phase diagram is presented in Fig. 3 (a), while the temperature dependencies of the domain radius and characteristic polar order parameter at \(E=0\) are presented in Figs. 3 (b) and (c), respectively.
Figure 3: Electric field - temperature phase diagram (a); temperature dependencies of the domain radius (b) and characteristic polar order parameter (c) at \(E=0\). The green arrow in (a) tentatively corresponds to the phase sequence under temperature variation at fixed \(E\neq 0\). Red arrows with numbers in (a) and (b) correspond to the temperatures of specific phase transitions observed experimentally [Sec. III B]. The dashed blue line in (b) corresponds to the half-thickness of the cell.
From theory (Sec. IV B) it follows that, within each polar phase, the domain radius \(r_{m}\) increases and the polarization \(P^{*}\) decreases with increasing temperature, while the product \(r_{m}\,P^{*}\) remains constant. At zero electric field, \(N_{F}^{2D}\) minimizes the free energy at lower temperature, the \(N_{F}^{1D}\) phase minimizes the free energy at higher temperature, and \(N_{AF}\) minimizes the free energy in between. The domains in \(N_{F}^{2D}\) and \(N_{AF}\) are visible in the microscope [see Figs. 4 (a) and (b), respectively]; their typical size is several micrometers. In the absence of electric field, the domains in \(N_{F}^{1D}\) at planar anchoring [Fig. 4 (c)] are not visible, because the energy-optimal configuration of the domains makes no optical difference between any points on the glass substrate. In the absence of electric field, the antiferroelectric single-splay phase [Fig. 4 (d)] possesses the same free energy as the ferroelectric one. The plane of each arc in Figs. 4 (c) and (d) can be vertical or tilted. At moderate values of electric field, all the splay nematic phases transform into \(N_{F}^{1D}\) [the orientations of arcs in \(N_{F}^{1D}\) in the presence of moderate electric field are shown in Fig. 4 (e); they can be vertical (\(\beta=0\)) or tilted (\(\beta\neq 0\)) to fit the cell gap], and at higher values of electric field into the paraelectric \(N\) having uniform director orientation.
Figure 4: POM images of quasi-cylindrical domains in \(N_{F}^{2D}\) (a) and \(N_{AF}\) (b); orientation of planar domains (schematic illustration) in \(N_{F}^{1D}\) (c) and \(N_{AF}^{1D}\) (d) at \(E=0\); orientation of planar domains in \(N_{F}^{1D}\) at \(E\neq 0\) (e). In (a) and (b) the DIO material is used, cell thickness 10 \(\mu\)m, scale bar 100 \(\mu\)m. Images (a) and (b) are reproduced with permission from Ref. [15]. Copyright WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim, 2017.
## III Transformations of polar nematic phases induced by variation of temperature and electric field
### Transformation of \(N_{AF}\) and \(N_{F}^{2D}\) into \(N_{F}^{1D}\) in electric field
Let us consider the variation of \(N_{AF}\) in the electric field, as it is seen from POM and computer simulations (Sec. IV C). A \(5\,\mu m\)-thick planar cell was filled with the DIO liquid crystal in the isotropic state. After that, the sample was examined with a polarizing optical microscope (Nikon V100N Pol, Japan) equipped with a heating stage (TMS-93 Stage Temp Controller and THMS 600 microscope stage, UK). The voltage from a waveform generator (Agilent 33220A, USA) was applied to the ITO-coated cell substrates.
On cooling in the absence of electric field, \(N_{AF}\) is observed between \(68.8^{\circ}C\) and \(84.5^{\circ}C\)[15]. A \(1\,kHz\) electric field of various amplitudes was applied at several temperatures within this range. The images of the structure variation upon application of the electric field are presented in Fig. 5. From computer simulations (Sec. IV C) it follows that the antiferroelectric splay remains in the plane of the substrate and gradually disappears when the voltage increases, while the ferroelectric splay arises in the direction perpendicular to the glass and gradually increases. Starting from a particular voltage, the stripes corresponding to the periodical director modulation in space rotate from longitudinal (parallel to the rubbing direction) to transverse (perpendicular to the rubbing direction). In the \(5\,\mu m\)-thick planar cell, in accordance with Fig. 3 (b), the domains are bigger than the cell thickness almost in the whole temperature range of \(N_{AF}\) and are therefore suppressed in the ground state (at \(E=0\)) by the surfaces. From birefringence measurements in Ref. [32] we also conclude that, at \(0\,V\), the conventional paraelectric nematic phase is observed. Temperature \(74^{\circ}C\) (Row 1 in Fig. 5) corresponds to the middle of the temperature range of \(N_{AF}\) in the infinite bulk. At \(0.6\,V\) a structure with longitudinal stripes, similar to the one obtained in computer simulations, arises. When the voltage increases, one can observe the gradual disappearance of the longitudinal stripes and the appearance of the transverse ones. At \(70^{\circ}C\) (Row 2 in Fig. 5) the AF to F transition threshold [Fig. 3 (a)] corresponds to a lower voltage; therefore \(0.6\,V\) is a sufficiently large voltage to cross over directly to the ferroelectric state, and only the transverse stripes are observed. One notes that, at each transverse stripe, the middle of each arc presented in Fig. 4 (e) fully satisfies the planar alignment at vertical orientation of the director variation plane (\(\beta=0\)). When the voltage increases, the director modulation first increases, but at higher voltage starts decreasing again. At \(81^{\circ}C\) (Row 3 in Fig. 5) the situation is generally the same. First, the longitudinal stripes appear, then the transverse ones. Surprisingly, at any temperature, at higher voltage the structure tends to become planar paraelectric again. This mainly happens due to the imbalance between the induced and flexoelectric polarizations, which is discussed in detail
Figure 5: POM images of the DIO planar cell (cell thickness is \(5\,\mu m\)) at several voltages (sinusoidal signal, \(1\,kHz\)) and temperature \(T=74^{\circ}C\) (Row 1), \(70^{\circ}C\) (Row 2) and \(81^{\circ}C\) (Row 3). P and A are directions of polarizer and analyzer, R is the rubbing direction.
in Sec. IV B.
From experimental observation it also follows that the transformation from \(N_{F}^{2D}\) to \(N_{F}^{1D}\) induced by electric field also happens continuously, while all the phase borders presented in Fig. 3 (a) follow from consideration of simplified geometries (either planar or cylindrical) and rather indicate tentative places on the diagram where the continuous transformations between phases should happen. In Fig. 6, POM images of a planar cell of the DIO material at \(67^{\circ}C\) (just below the temperature of the transition from \(N_{F}^{2D}\) to \(N_{AF}\)) are presented under an applied \(0.4\,V\), \(1\,kHz\) electric field. The major part of the cell [Fig. 6 (a)] represents a conjugation of \(N_{F}^{2D}\) cylindrical domains with each other, similar to those presented in Fig. 3 (b). Another part of the cell [Fig. 6 (b)] demonstrates a continuous transformation from \(N_{F}^{2D}\) to \(N_{F}^{1D}\): the quasi-cylindrical domains continuously transform into elongated ones and then into linear stripes. Here the darker dots and lines correspond to \(\theta\to 0\) and \(\pi\) (the places where the director is parallel to the electric field, i.e., at the cylinder axes or in the middle planes of planar domains; see the definition of angle \(\theta\) in Fig. 2). The brighter surrounding corresponds to \(\theta\rightarrow\pi/2\) (the places where the director is parallel to the substrate, i.e., at the domain periphery). At higher voltage, the whole system transforms into \(N_{F}^{1D}\) and then into the paraelectric \(N\).
### Variation of the structure of DIO on cooling at applied voltage
Let us consider the temperature-induced phase transitions in the polar nematic in the presence of electric field. The \(5\,\mu m\)-thick planar cell filled with the DIO material was cooled from \(99^{\circ}C\) down to \(27^{\circ}C\) under an applied \(1\,kHz\) electric field with constant amplitude \(0.9\,V\). The images in crossed polarizers were registered every half-degree. The phase sequence generally appears to be completely different from that observed without electric field. Particular key images are presented in Figs. 7 and 8. The images practically do not change between \(99^{\circ}C\) and \(87^{\circ}C\) (see Fig. 7). Presumably we observe the uniform paraelectric nematic with planar orientation of the director in this temperature range. One notes that our theoretical phase diagram presented in Fig. 3 (a) predicts the existence of the \(N_{F}^{1D}\) polar phase below \(93^{\circ}C\). However, in accordance with Fig. 3 (b), the equilibrium domain size (corresponding to the infinite bulk of LC) in the temperature range between \(93^{\circ}C\) and \(87^{\circ}C\) is predicted to be huge, and, in the realistic confined geometry, the splay domains are most likely suppressed by the substrates. However, below \(87^{\circ}C\) the images start gradually becoming darker, and some longitudinal stripes arise.
The red arrows with numbers presented in Figs. 3 (a) and (b) reflect the temperatures of particular phase transitions observed in DIO at applied fixed voltage, and Arrow 1 tentatively corresponds to the realistic temperature of the transition from the paraelectric \(N\) to \(N_{F}^{1D}\) in DIO confined between parallel glasses (\(87^{\circ}C\)). At \(87^{\circ}C\) the equilibrium domain size in \(N_{F}^{1D}\) is still greater than the cell thickness. However, we expect that highly tilted (almost parallel to the substrate, \(\beta\rightarrow\pi/2\)) single-splay domains can already exist. The electric field tends to make the director variation plane vertical [perpendicular to the substrates, \(\beta=0\) in Fig. 4 (e)], but in this case the splay domains would not fit the gap between the glasses. In this situation, both arms of each arc in Fig. 4 (e) choose the longitudinal (parallel to the rubbing) orientation, and this could be the origin of the longitudinal stripes observed in the temperature range between \(87^{\circ}C\) and \(82^{\circ}C\). When the temperature decreases down to \(82^{\circ}C\), the domain size decreases continuously [see Fig. 3 (b)], therefore the arcs presented in Fig. 4 (e) gain a larger and larger vertical projection (which is favorable for the coupling of the flexoelectric polarization with the electric field), and the images in crossed polarizers become progressively darker.
Surprisingly, the structure does not demonstrate any irregular variation near the transition into \(N_{AF}\) registered at \(84.5^{\circ}C\) in Ref. [15] by DSC measurements in the absence of electric field, which indirectly indicates
Figure 6: POM images of the two parts of the cell with DIO at \(67^{\circ}C\) under \(0.4\,V\), \(1\,kHz\) electric field: (a) mostly \(N_{F}^{2D}\); (b) transformation from \(N_{F}^{2D}\) to \(N_{F}^{1D}\).
that the structure of the LC does not have any tendency to return to the ground state between pulses of the high-frequency electric field. An easy explanation for this effect is that the switch-off relaxation time should be much longer than the inverse frequency of the electric field in this temperature range. This could be related to the fact that flexoelectric polarization inversion requires the director splay inversion in the whole space. However, the director is trapped by its own periodical structure. The alternative to the continuous director motion is the total director disruption in the whole space, whose energy cost is much higher. Since the director is defined on a scale much larger than the molecular size, the director motion is analogous to that of Brownian particles, whose velocity is much slower than the molecular velocity. Our expectation following from the Einstein-Smoluchowski equation is that the director should not move faster than several micrometers per second, which means that application of a \(1\,kHz\) electric field should definitely trap the director distribution within the 5 \(\mu m\)-thick cell, since the director can only move a few nanometers per pulse. If the frequency of the electric field is between the inverse \(\tau_{\rm on}\) and \(\tau_{\rm off}\), the director should stay in the position corresponding to a particular (say, positive) pulse and should not return to the ground state between pulses. Detailed analysis of all the images of the DIO material obtained under the high-frequency electric field demonstrates good correlation with our theoretical predictions obtained under the assumption of a constant electric field used in Sec. IV.
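The order-of-magnitude estimate behind the statement above can be made explicit; in the short sketch below the director speed is an assumed illustrative value of a few micrometers per second, and only the orders of magnitude matter.

```python
# Illustrative numbers behind the "a few nanometers per pulse" estimate.
director_speed = 3e-6        # m/s, assumed upper estimate of the director motion speed
field_frequency = 1e3        # Hz, frequency of the applied electric field
half_period = 0.5 / field_frequency              # duration of a single pulse, s
shift_per_pulse = director_speed * half_period   # director displacement per pulse, m
cell_thickness = 5e-6                            # m
print(f"director shift per pulse: {shift_per_pulse*1e9:.1f} nm "
      f"(~{shift_per_pulse/cell_thickness:.0e} of the cell thickness)")
```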
The next phase transition [marked with Arrow 2 in Fig. 3 (b)] takes place at some temperature between \(82^{\circ}C\) and \(81^{\circ}C\) (see Fig. 7), at which the transverse (perpendicular to the rubbing direction) stripes appear, and the whole image becomes darker step-wise. This transition is most likely of the first order and is related to the reorientation of the director variation plane [reorientation of each arc presented in Fig. 4 (e) around the vertical axis]. The reason is that the vertical projection of each arc is already sufficiently great at \(82^{\circ}C\), and the middle of each arc tends to be oriented along the rubbing direction, while the arms of each arc, on the contrary, are not biased anymore.
From observation of the images presented in Fig. 8 it follows that between \(80^{\circ}C\) and \(79^{\circ}C\) [Arrow 3 in Fig. 3 (a)] the
Figure 7: POM images of the DIO planar cell (cell thickness is 5 \(\mu m\)) at applied fixed electric field (\(0.9V\), sinusoidal signal, \(1\,kHz\)) during the cooling cycle (temperatures are indicated in the bottom right corners). P and A are directions of polarizer and analyzer, R is the rubbing direction.
transition from \(N_{F}^{1D}\) to the phase corresponding to \(N_{AF}\) deformed in the electric field (see discussion in Sec. III A) takes place. Tentatively, the structure variation in this temperature range follows the green arrow in Fig. 3 (a), which is inclined because the splay elastic constant (participating in the denominator of the value plotted on the vertical axis) should tentatively increase with decreasing temperature, at least within the range of a single phase. One can see islands of \(N_{AF}\) inside \(N_{F}^{1D}\) at \(80^{\circ}C\), and therefore the \(N_{F}^{1D}\) to deformed \(N_{AF}\) phase transition is also of the first order.
At about \(74^{\circ}C\) the structure comes out of the \(N_{AF}\) range [Arrow 4 in Fig. 3 (a)]. Formally, the structure should return to \(N_{F}^{1D}\). However, it was demonstrated in Sec. III A that, at this temperature and voltage, the director splay modulation is not very deep, and the structure rather resembles the paraelectric nematic with planar director orientation. The origin of this effect will be discussed in Sec. IV B.
At \(71.5^{\circ}C\) a new phase transition happens [Arrow 5 in Fig. 3 (b)]. From our observations in Ref. [32] and also from Fig. 3 (b) it follows that the size of the domains in \(N_{AF}\) becomes comparable to the cell thickness at around \(71.5^{\circ}C\)
Figure 8: POM images of the DIO planar cell (cell thickness is 5 \(\mu m\)) at applied fixed electric field (\(0.9V\), sinusoidal signal, \(1\,kHz\)) during the cooling cycle (continuation of Fig. 7).
and thus, the stripes corresponding to the splay domains in \(N_{AF}\) would arise at \(E=0\). Above \(71.5^{\circ}C\) the structures of DIO at applied voltage and in the ground state are the same (the planar paraelectric nematic), while below \(71.5^{\circ}C\) they become different again. The observed structure returns to the one resembling \(N_{F}^{1D}\) observed between \(82^{\circ}C\) and \(87^{\circ}C\), with partial inclusions of the \(N_{F}^{2D}\) domains, whose axes (visible as reflecting dots) are oriented parallel to the electric field and perpendicular to the substrates. When the temperature decreases further (see Fig. 8), the images become darker, and the number of reflecting dots increases. The structure variation completely ignores the \(N_{AF}\) to \(N_{F}^{2D}\) transition registered at \(68.8^{\circ}C\) by DSC measurements in Ref. [15] in the absence of electric field, and thus, the structure does not return to the ground state again, similarly to that in the temperature range between \(82^{\circ}C\) and \(87^{\circ}C\).
At \(57^{\circ}C\) the material structure is already close to the nominal transition into \(N_{F}^{2D}\). At \(27^{\circ}C\), a structure arises corresponding to the quasi-ideal \(N_{F}^{2D}\) with islands of crystal and also with some domains similar to those reported in Ref. [45], consideration of which is beyond the scope of the present paper. At different temperatures we observe similar domains at application of much higher voltage, at which we already expect an induction of the paraelectric nematic phase by the electric field (see discussion in Secs. III A and IV B). One notes that, at planar boundary conditions, in the presence of electric field we obtained an image of \(N_{F}^{2D}\) similar to that obtained at homeotropic boundary conditions without electric field in Ref. [15].
## IV Theoretical approaches
### Molecular-statistical theory: temperature and electric field dependent distributions of \(S\) and \(P\) order parameters in space
Let us consider a system of elongated polar molecules (with longitudinal electric dipoles \(\mu\)) interacting with each other and with an external electric field \({\bf E}\) (Fig. 9). Formally, a constant electric field enters all the equations below. Having in mind our discussion in Sec. III B about the slow relaxation of the director splay, we expect that the structures arising under a high-frequency electric field do not differ very much from those obtained at constant electric field. In the general case, the director field \({\bf n}({\bf r})\) is inhomogeneous, and the free-energy density \(\partial F/\partial V\) can be written in the following form [43]:
\[4\pi V_{0}\frac{\partial F({\bf r}_{1})}{\partial V}=k_{B}T\int d ^{2}{\bf a}_{1}f[({\bf a}_{1}\cdot{\bf n}_{1}),{\bf r}_{1}]\ln f[({\bf a}_{1 }\cdot{\bf n}_{1}),{\bf r}_{1}]\] \[+\frac{\sigma_{0}}{8\pi V_{0}}\int d^{2}{\bf a}_{1}\int d^{2}{ \bf a}_{2}\int d^{3}{\bf r}_{12}f[({\bf a}_{1}\cdot{\bf n}_{1}),{\bf r}_{1}] f[({\bf a}_{2}\cdot{\bf n}_{2}),{\bf r}_{2}]U_{12}^{ef}({\bf a}_{1},{\bf a}_{2},{ \bf r}_{12})\] \[-4\pi\mu(\sigma_{0}+1)\int d^{2}{\bf a}_{1}f[({\bf a}_{1}\cdot{ \bf n}_{1}),{\bf r}_{1}]({\bf a}_{1}\cdot{\bf E})\quad, \tag{5}\]
where \(V_{0}\) is the volume occupied by a molecule located at point \({\bf r}_{1}\) and all its nearest neighbors, \(\sigma_{0}\) is the average number of neighbors for each molecule, \(f[({\bf a}\cdot{\bf n}),{\bf r}]\) is the orientational distribution function for molecules having principal axes \({\bf a}\) with respect to director \({\bf n}\) at point \({\bf r}\), \({\bf r}_{i}\) (\(i=1,2\)) are the coordinates of points 1 and 2, where molecules 1 and 2 are located, \({\bf r}_{12}\) is the vector connecting points 1 and 2, \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature, \(U_{12}^{ef}({\bf a}_{1},{\bf a}_{2},{\bf r}_{12})\) is the effective pair interaction potential for two molecules with long axes \({\bf a}_{1}\) and \({\bf a}_{2}\) located at points 1 and 2, respectively, while \({\bf n}_{1}\) is the director at point 1 and \({\bf n}_{2}\) is the director at point 2. The first term in Eq. (5) is the entropy, the second term is the internal energy, and the third term is the energy of interaction of the molecular longitudinal dipoles with the electric field. At any point \({\bf r}\), the orientational distribution function \(f[({\bf a}\cdot{\bf n}),{\bf r}]\) in Eq. (5) satisfies the normalizing constraint:
\[\int d^{2}{\bf a}f[({\bf a}\cdot{\bf n}({\bf r})),{\bf r}]=1\quad. \tag{6}\]
Minimizing the free energy (5) with respect to the orientational distribution function \(f[({\bf a}\cdot{\bf n}),{\bf r}]\) under constraint (6), one obtains:
\[f[({\bf a}\cdot{\bf n}),{\bf r}]=\frac{1}{I_{0}({\bf r})}\exp\biggl{\{}-\frac {U_{MF+E}[({\bf a}\cdot{\bf n}),{\bf r}]}{k_{B}T}\biggr{\}}\quad, \tag{7}\]
where \({\bf r}\equiv{\bf r}_{1}\), \({\bf n}\equiv{\bf n}_{1}\), \(I_{0}({\bf r})\) is the normalizing constant, and \(U_{MF+E}[({\bf a}\cdot{\bf n}),{\bf r}]\) is the potential of a molecule located at point \({\bf r}\) with orientation \({\bf a}\equiv{\bf a}_{1}\) of its principal axis in the combination of the mean molecular field and electric field:
\[U_{MF+E}[({\bf a}\cdot{\bf n}),{\bf r}]\equiv\frac{\sigma_{0}}{4\pi V_{0}}\int d ^{3}{\bf r}_{12}\int d^{2}{\bf a}_{2}f[({\bf a}_{2}\cdot{\bf n}_{2}),{\bf r}_{ 2}]\biggl{[}U_{12}^{ef}({\bf a}_{1},{\bf a}_{2},{\bf r}_{12})-\mu({\bf a}\cdot {\bf n})({\bf n}\cdot{\bf E})\biggr{]}\quad. \tag{8}\]
Approximating the pair potential by spherical invariants [Eq. (1)], substituting Eq. (1) into Eq. (8), introducing coefficients
\[J_{\ell L\lambda}^{(i)}\equiv\frac{\sigma_{0}}{4\pi V_{0}}\int\limits_{0}^{ \infty}dr_{12}r_{12}^{i+2}J_{\ell L\lambda}(r_{12}) \tag{9}\]
and using only the \(T_{101}\), \(T_{110}\), \(T_{011}\) and \(T_{202}\) spherical invariants, which result on average in mean-field terms depending on powers of the operator \(\mathbf{\nabla}\) not higher than one, one finally obtains the following expression for the potential of a molecule with orientation \({\bf a}\) affected by a combination of the mean molecular field and electric field:
\[-U_{MF+E}(t,{\bf r})=J_{101}^{(0)}P({\bf r})P_{1}(t)+J_{202}^{(0)}S({\bf r})P_{ 2}(t)+\biggl{\{}\frac{1}{6}\biggl{[}J_{110}^{(1)}+J_{011}^{(1)}\biggr{]}(\mathbf{ \nabla}\cdot{\bf n})+\mu({\bf n}\cdot{\bf E})\biggr{\}}P_{1}(t)\quad, \tag{10}\]
where \(t\equiv({\bf a}\cdot{\bf n})\), \(P_{1}(t)\equiv t\) and \(P_{2}(t)\equiv 3/2\,t^{2}-1/2\) are the first and the second Legendre polynomials. Eq. (10) corresponds to the first (simplest) approximation reflecting the modulation of the \(S\) and \(P\) order parameters caused by the modulation of splay and describing the major tendency: both parameters \(S\) and \(P\) should be higher (lower) at the places where the splay is higher (lower). The first two terms in Eq. (10) are the polar and non-polar anisotropies, while the two terms in curly brackets are due to the flexoelectric effect and the electric field. From Eqs. (2), (7) and (10) one readily obtains the following recurrence equations for determination of the \(P({\bf r})\) and \(S({\bf r})\) order parameters at each temperature \(T\) and electric field \(E\) for any given \({\bf n}({\bf r})\) distribution:
\[P({\bf r})=\frac{I_{1}({\bf r})}{I_{0}({\bf r})}\quad,\quad S({\bf r})=\frac{I _{2}({\bf r})}{I_{0}({\bf r})}\quad, \tag{11}\]
Figure 9: A trial (blue) polar molecule in a combination of the mean molecular field and electric field \({\bf E}\). Here \({\bf n}\) is the local director at the point where the trial molecule is located.
where integrals \(I_{m}({\bf r})\) are defined as follows:
\[I_{m}({\bf r})\equiv\int\limits_{-1}^{1}P_{m}(t)\exp\biggl{\{}-\frac{U_{MF+E}(t,{ \bf r})}{k_{B}T}\biggr{\}}dt\quad, \tag{12}\]
where \(U_{MF+E}\) is determined by Eq. (10). Substituting solution (7)-(10) back into Eq. (5), one obtains for the equilibrium free-energy density \(\partial F_{\rm eq}/\partial V\):
\[4\pi V_{0}\frac{\partial F_{\rm eq}({\bf r})}{\partial V}=-k_{B}T\ln I_{0}({ \bf r})+\frac{1}{2}J_{101}^{(0)}P^{2}({\bf r})+\frac{1}{2}J_{202}^{(0)}S^{2}({ \bf r}), \tag{13}\]
where the normalizing integral \(I_{0}({\bf r})\) should be calculated using Eqs. (10) and (12). Eq. (13) should be used for comparison of the free energies of the neighboring phases in the phase diagram.
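Equations (10)-(12) define a self-consistency problem for \(S\) and \(P\) at a given temperature, splay and field. The following is a minimal numerical sketch of such an iteration for a single spatial point: the coupling constants are taken of the order of the values quoted in the caption of Fig. 12 (in temperature units, i.e. divided by \(k_{B}\)), while the combined splay-plus-field term \(h\) is an arbitrary illustrative placeholder, so the printed numbers do not reproduce the figures of the paper.

```python
import numpy as np

# Self-consistent S and P from Eqs. (10)-(12): iterate P = I1/I0, S = I2/I0.
J101 = 362.0    # ~ sigma0 * J_101^(0) / k_B, K (order of the Fig. 12 caption value)
J202 = 2032.0   # ~ sigma0 * J_202^(0) / k_B, K (order of the Fig. 12 caption value)
h    = 30.0     # { [J110+J011]/6 * (div n) + mu (n.E) } / k_B, K (placeholder)

t = np.linspace(-1.0, 1.0, 2001)
dt = t[1] - t[0]
P1, P2 = t, 1.5 * t**2 - 0.5            # first and second Legendre polynomials

def solve_SP(T, n_iter=300):
    S, P = 0.5, 0.1                      # initial guesses
    for _ in range(n_iter):
        u = (J101 * P * P1 + J202 * S * P2 + h * P1) / T   # -U_MF+E/(k_B T), Eq. (10)
        w = np.exp(u - u.max())          # unnormalised Boltzmann weight, Eq. (7)
        I0 = w.sum() * dt                # integrals I_m of Eq. (12)
        I1 = (P1 * w).sum() * dt
        I2 = (P2 * w).sum() * dt
        P, S = I1 / I0, I2 / I0          # recurrence of Eq. (11)
    return S, P

for T in (330.0, 350.0, 370.0):
    S, P = solve_SP(T)
    print(f"T = {T:.0f} K:  S = {S:.3f},  P = {P:.3f}")
```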
As mentioned in Sec. II, Eq. (4) [multiplied by \(4\pi V_{0}\)] is the part of Eq. (5) that explicitly depends on the director \({\bf n}\). Indeed, if one extends the gradient expansion in Eq. (10) up to the terms depending on the second power of the operator \(\mathbf{\nabla}\) (for this purpose, invariants \(T_{220}\), \(T_{022}\), \(T_{222}\), \(T_{422}\) and \(T_{224}\) should also be considered in approximation Eq. (1); this is done in Ref. [43]) and substitutes Eq. (10) into the second and third terms of Eq. (5) [definition Eq. (8) should also be used], then one obtains Eq. (4). In particular, the flexoelectric and electric-field-dependent terms (which explicitly depend on both \(P\) and \({\bf n}\)) coincide in Eqs. (4) and (5) upon substitution of \(4\pi V_{0}K_{11}\lambda=[J_{110}^{(1)}+J_{011}^{(1)}]/6\) and \(V_{0}\varepsilon_{a}=\mu(\sigma_{0}+1)\). By the same substitution of Eq. (10) into Eq. (5), one obtains the \(P^{2}\) term introduced phenomenologically in Ref. [35]. In the same manner, it is possible to obtain additional contributions to the elastic constants depending on the polar order parameter \(P\), studied phenomenologically in Refs. [35; 18] (see also Ref. [32]).
### Perturbation elastic continuum theory reflecting space variation of \(S\) and \(P\) order parameters
Let us now consider the director distribution in the polar phases in the presence of electric field, having in mind the results obtained in Sec. IV A. In the cases presented in Figs. 2 (a) and (b), we expect that the director lies (mostly) in the radial planes (the planes parallel to the cylinder axis \({\bf x}\) and radius \({\bf r}\)). Thus, in all cases presented in Fig. 2, the director mainly has two nonzero components, similarly to that in Ref. [46]:
\[n_{x}=\cos\theta(r)\quad,\quad n_{r}=\sin\theta(r)\quad. \tag{14}\]
One notes that all the structures presented in Fig. 2 can be described in a unified way, and here we also introduce the parameter \(\delta\) to distinguish between the double- and single-splay structures (\(\delta\) is set to one in the case of 2D-splay and to zero in the case of 1D-splay). From Eq. (14) it follows:
\[(\mathbf{\nabla}\cdot{\bf n})=\frac{\delta}{r}\sin\theta+\cos\theta\,\frac{d\theta }{dr}\quad,\quad[{\bf n}\times[\mathbf{\nabla}\times{\bf n}]]^{2}=\sin^{2}\theta \biggl{(}\frac{d\theta}{dr}\biggr{)}^{2}\quad. \tag{15}\]
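The splay expression in Eq. (15) for the 2D-splay case (\(\delta=1\)) can be checked symbolically from the cylindrical divergence of the parameterization (14); the planar 1D-splay case follows by dropping the \(1/r\) term. A minimal SymPy sketch, verifying only the splay part:

```python
import sympy as sp

# Symbolic check of the splay in Eq. (15) for delta = 1:
# n_x = cos(theta(r)), n_r = sin(theta(r)), no x- or azimuthal dependence,
# so div n = (1/r) d(r n_r)/dr in cylindrical coordinates.
r = sp.symbols('r', positive=True)
theta = sp.Function('theta')(r)

div_n = sp.expand(sp.diff(r * sp.sin(theta), r) / r)
print(div_n)   # equals sin(theta)/r + cos(theta)*dtheta/dr, i.e. Eq. (15) with delta = 1
```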
Following Ref. [32], let us consider the one-constant approximation \(K_{11}=K_{33}\equiv K\) for simplicity. An equilibrium director \({\bf n}({\bf r})\) distribution should be obtained by independent minimization of the free-energy density (4), while an equilibrium polarization \(P({\bf r})\) distribution has already been obtained by independent minimization of the free-energy density (5) and is given by Eq. (11).
Precise minimization of Eq. (4) with constraint (11) appears to be complicated. The complexity lies in the fact that the minimizations with respect to the director \({\bf n}\) and the polarization value \(P\) should be done independently, while the spatially varying polarization \(P\) (whose variation is unknown before the distribution of \({\bf n}\) is known) enters the differential Eq. (4), from which this distribution of \({\bf n}\) is to be obtained. For this purpose, let us consider a perturbation theory based on the assumption that the variation of the order parameters in space is small. In the framework of perturbation theory, let us first consider the uniform polarization \(P({\bf r})=P^{*}\) in Eq. (4). In both cases of ferroelectric domains presented in Figs. 2 (b) and (c), let us consider the variation of angle \(\theta\) from zero (at the \({\bf x}\)-axis of one domain) to \(\pi\) (at the \({\bf x}\)-axis of the neighboring domain). Then the same simplification for the free-energy density is valid for all the structures presented in Fig. 2: all the terms proportional either to \(d\theta/dr\) or to \(\lambda\) have opposite signs in the neighboring domains, and the corresponding terms vanish on average. Taking this into account, substituting Eq. (15) into Eq. (4), and minimizing the free-energy density (4) with respect to \(\theta\) and \(d\theta/dr\), as presented, for example, in Ref. [47], Appendix A, one obtains the following equation of state:
\[\left(\frac{d\theta}{dr}\right)^{2}+\tau^{2}|\cos\theta|-\frac{\delta}{r^{2}}\sin^{2}\theta=\frac{\tau^{2}}{k^{2}}\quad, \tag{16}\]
where \(\tau\equiv\sqrt{2\varepsilon_{a}EP^{*}/K}\) is the reduced electric field, and \(k\) is a constant independent of the angle \(\theta\), which should be obtained by independent minimization of the free-energy density. Introducing the new dimensionless variable \(\psi\equiv\tau\,r/k\), one obtains from Eq. (16):
\[\frac{d\theta}{d\psi}=\sqrt{1-k^{2}|\cos\theta|+\delta/\psi^{2}\,\sin^{2} \theta}\quad. \tag{17}\]
The radius \(r_{m}\) of the domain can now be found by minimization of the free energy with respect to the parameter \(k\). This, however, can be done in a more precise way, partially taking into account the non-uniformity of \(P({\bf r})\). Indeed, from Eq. (11) it approximately follows [after expansion of the exponent in Eq. (12) in a Taylor series with substitution of Eq. (10)] that \(P({\bf r})\sim(\mathbf{\nabla}\cdot{\bf n})+\varepsilon_{a}E\cos\theta/(K\lambda)\), where the first term is the flexoelectric polarization and the second term is the polarization induced by the electric field. One notes from Eq. (4) that any rescaling of the coordinate \(r\), under which \(r\lambda P\) and \(\tau/(\lambda P)\) remain constant, does not change the free-energy density. This means in the end that the distribution of polarization is determined only by the distribution of the angle \(\theta\) in space. Let us therefore write the following trial approximation for the polar order parameter:
\[P(\theta)=P^{*}r_{m}\{(\mathbf{\nabla}\cdot{\bf n})+\tau^{2}\cos\theta/(2\lambda P ^{*})\}=P^{*}\psi_{m}\!\left\{\frac{\delta}{\psi}\sin\theta+\cos\theta\! \left(\frac{d\theta}{d\psi}+\frac{1}{2}k\bar{\tau}\right)\right\}\quad. \tag{18}\]
One notes from Eq. (18) that, in the case of \(N_{F}^{2D}\), the \(P^{*}\) proportionality coefficient coincides with \(P\) at \(\theta=\pi/2\) (the polar order parameter at the periphery \(r_{m}\) of the ferroelectric domain). In the cases of \(N_{F}^{1D}\) and \(N_{AF}\), the \(P^{*}\) coefficient formally corresponds to a different place \(r^{*}\) within the domain, other than the periphery \(r_{m}\). Regardless of the kind of the domain, however, the expression in curly brackets in Eq. (18) is equal to \(1/r_{m}\) at \(r^{*}\). Substituting Eqs. (15), (17) and (18) into Eq. (4), integrating the free-energy density along the radius of the domain (with the \(r\,dr\) Jacobian for 2D-splay or with the \(dr\) Jacobian for 1D-splay) and dividing the result by the cross-section area (for 2D-splay) or by the length of the domain (for 1D-splay), one obtains the expression for the average free-energy density, which should then be minimized with respect to the parameter \(k\). This could be done for each polar nematic phase similarly to that presented in Ref. [32] for \(N_{AF}\). Subsequently, at any value of \(\bar{\tau}\), the \(r(\theta)\) dependence can be obtained, and, in particular, the radius \(r_{m}\) of the domain can be obtained. In \(N_{AF}\), \(r_{m}\lambda P^{*}\approx 1.55\) and the maximum tilt is \(\theta_{m}\approx 64^{\circ}\). In \(N_{F}^{1D}\) and \(N_{F}^{2D}\), the radius \(r_{m}\) of the domain generally depends on the applied electric field, while the maximum tilt is always equal
Figure 12: Distribution of the \(S\) [(a),(c),(e)] and \(P\) [(b),(d),(f)] orientational order parameters within the domains in \(N_{F}^{2D}\) [(a),(b)], \(N_{AF}\) [(c),(d)] and \(N_{F}^{1D}\) [(e),(f)] at \(T=57^{\circ}C\) (1); \(62^{\circ}C\) (2); \(67^{\circ}C\) (3); \(69^{\circ}C\) (4); \(76^{\circ}C\) (5); \(81^{\circ}C\) (6); \(83^{\circ}C\) (7); \(85^{\circ}C\) (8); \(88^{\circ}C\) (9); \(90^{\circ}C\) (10); \(92^{\circ}C\) (11) at \(E=0\), \(\sigma_{0}J_{202}^{(0)}/k_{B}=2032\,K\), \(\sigma_{0}J_{101}^{(0)}/k_{B}=362\,K\), \(\lambda=2\,\mu m^{-1}\), \(J_{A}/k_{B}=113\,K\mu m\) and \(K\,V_{0}=5\times 10^{-35}N\,m^{3}\). Radius \(r\) is defined in Fig. 2 for all the polar phases.
to \(\theta=\pi/2\). Several \(\theta(r)\) and \((\mathbf{\nabla}\cdot\mathbf{n})\) dependencies at several particular values of the dimensionless electric field \(\bar{\tau}\) are presented in Figs. 10 (a) and (b), respectively, for \(N_{F}^{2D}\) [blue curves (1) and (2)] and \(N_{F}^{1D}\) [red curves (3), (4) and (5)]. One notes that the tilt of the director varies almost linearly in both \(N_{F}^{2D}\) and \(N_{F}^{1D}\), with a slight tendency to greater variation in the middle of each domain in \(N_{F}^{2D}\) and, oppositely, at the domain periphery (\(r=r_{m}\)) in \(N_{F}^{1D}\). From Fig. 10 it follows that, at moderate values of electric field, the maximum splay deformation in \(N_{F}^{1D}\) is achieved at \(\theta=0\), at which the director is parallel to the electric field. This is the configuration at which both flexoelectric and induced polarizations give the optimal combined contribution to the free energy. Therefore both \(N_{F}^{2D}\) and \(N_{AF}\) exhibit a transition into \(N_{F}^{1D}\) at application of electric field. However, there always exists an imbalance between the induced and flexoelectric polarizations. Indeed, the flexopolarization can exist only in the presence of director deformation. However, at application of electric field, the structure becomes more uniform, and the splay deformation reduces. Therefore, at higher electric field, the maximum in Fig. 10 (b) reduces and shifts to the position where the director is not parallel to the electric field. At \(\bar{\tau}\approx 0.843\), the splay phase becomes unstable, and a transition into the paraelectric nematic phase happens. The electric field dependencies of the characteristic polar order parameter \(P^{*}\) and domain radius \(r_{m}\) at particular fixed temperatures within \(N_{F}^{1D}\) are presented in Figs. 11 (a) and (b), respectively. Both dependencies are generally not monotonic because of the nontrivial correlation between splay and electric field. At higher values of electric field, \(P^{*}\) greatly decreases and \(r_{m}\) greatly increases just before the transition into \(N\).
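The tilt profiles discussed above follow from Eq. (17), which is a first-order equation that can be integrated directly. The sketch below is a minimal forward-Euler integration of Eq. (17) from \(\theta=0\) at the domain axis up to \(\theta=\pi/2\); the value of \(k\) is a trial parameter here, whereas in the text it is fixed by minimizing the average free energy, so the printed numbers are illustrative only.

```python
import numpy as np

# Forward-Euler integration of Eq. (17) for theta(psi) inside a single splay
# domain: delta = 0 (single splay) and delta = 1 (double splay).
def theta_profile(k, delta, dpsi=1e-4, theta_stop=np.pi / 2):
    psi, theta = dpsi, 0.0                 # start near the domain axis, theta = 0
    while theta < theta_stop and psi < 50.0:
        rhs2 = 1.0 - k**2 * abs(np.cos(theta)) + delta * np.sin(theta)**2 / psi**2
        theta += np.sqrt(max(rhs2, 0.0)) * dpsi
        psi += dpsi
    return psi, theta

for delta, label in ((0, "1D splay"), (1, "2D splay")):
    psi_m, _ = theta_profile(k=0.5, delta=delta)   # k = 0.5 is a trial value
    print(f"{label}: theta reaches pi/2 at psi_m ~ {psi_m:.2f}")
```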
Knowing the director distribution in space in \(N_{F}^{2D}\), \(N_{AF}\) and \(N_{F}^{1D}\), one obtains the distributions of the \(S\) and \(P\) order parameters in each phase using Eqs. (10)-(12). For this purpose, one should substitute approximation (18) into Eq. (10). In particular, at the specific places \(r^{*}\) within each domain, where the polar order parameter \(P\) coincides with the coefficient \(P^{*}\), one immediately obtains that the whole expression in curly brackets in Eq. (10) is equal to \(\left[J_{110}^{(1)}+J_{011}^{(1)}\right]/(6r_{m})\). From the recurrence Eqs. (11)-(12) one obtains \(S^{*}\) and \(P^{*}\) first, and then the whole distributions of \(S(r)\) and \(P(r)\) within the domain, which are presented in Fig. 12. One notes that both \(S\) and \(P\) generally decrease with increasing temperature. The maximal values of both parameters are observed close to the middle of the domain in \(N_{F}^{2D}\) and \(N_{AF}\) and exactly in the middle in \(N_{F}^{1D}\). The polar order parameter reaches zero at the periphery of each domain in \(N_{F}^{1D}\), while in \(N_{F}^{2D}\) and \(N_{AF}\) it does not, which means that the flexopolarization exhibits a step-wise reversal between the domains without director disruption. Distributions of \(S(r)\) and \(P(r)\) at several non-zero values of electric field are also presented in Fig. 13. One notes that the profiles of both \(S\) and \(P\) first tend to become sharper at moderate electric field and then smoother at higher electric field.
### Computer simulations
To calculate the director distribution in a polar nematic film under the action of an electric field, we have modified the existing extended Frank elastic continuum approach [32], previously used for calculations of the polar nematic material. The original approach takes into account the effects of director field distortion with the \(\lambda(\mathbf{n}\cdot\mathbf{p})\) term included, as well as the formation of defects and the finite energy of the surface boundaries. In this paper, we have modified the
free energy to take into account the action of an electric field:
\[F=\frac{1}{2}\int\limits_{V}\biggl{\{}K_{11}({\bf n}(\mathbf{\nabla} \cdot{\bf n})-\lambda\,{\bf p})^{2}+K_{22}({\bf n}\cdot[\mathbf{\nabla}\times{\bf n }])^{2}+K_{33}[{\bf n}\times[\mathbf{\nabla}\times{\bf n}]]^{2}\biggr{\}}dV\] \[-\varepsilon_{a}\int\limits_{V}({\bf n}\cdot{\bf p})({\bf n}\cdot {\bf E})dV+\frac{W}{2}\int\limits_{\Omega}(1-\cos^{2}\gamma)d\Omega+F_{\rm def }\quad, \tag{19}\]
where \(K_{11}\), \(K_{22}\) and \(K_{33}\) are the splay, twist and bend elastic constants, respectively, \(K_{11}\lambda\) is the flexoelectric constant, \({\bf p}\) is the polarizability direction vector, \({\bf E}\) is the electric field intensity, \(V\) is the bulk of the sample having surface \(\Omega\), \(W\) is the surface anchoring energy density, \(\gamma\) is the angle between the local director and the normal to the surface, and \(F_{\rm def}\) is the energy of defects calculated by summation of the point and linear defect energies (see the details in Ref. [48]). The details of the optimization are presented in Ref. [32]. For simplicity, the polarizability direction vector \({\bf p}\) is supposed to be a unit vector parallel or anti-parallel to \({\bf n}\) at each point. In addition, in correspondence with the theoretical part of the paper, the algorithm accepted only those steps with \(({\bf n}\cdot{\bf p})({\bf n}\cdot{\bf E})\geq 0\). As a result, our simulated annealing procedure leads
Figure 14: Principal geometry of the simulation box and the polar nematic film. Black arrows show the rubbing direction of the planar alignment of the film. The orange arrow shows the electric field direction. Yellow dashed arrows show the periodic boundary condition directions of the simulation box.
Figure 15: Dependence of the total free energy on the value of the dimensionless electric field \(e\) and the rubbing direction orientation \(\varphi\). The violet dashed line traces the energy minimum over \(e\).
to minimization of the free energy over both director \(\mathbf{n}\) and polarizability direction \(\mathbf{p}\) distributions in a self-consistent way.
The one-constant approximation was used for simplicity: \(K_{11}:K_{22}:K_{33}=1:1:1\), and the value of \(\lambda\) was set to \(10\). To take into account the potential formation of disclination lines, the linear energy density of their cores was set to \(f_{core}^{line}=10K_{11}\). The simulation box of size \(0.125\times 2\times 2\) was rendered into a \(4\times 64\times 64\) lattice. For the \(x\) and \(y\) facets, periodic boundary conditions were applied. For the \(z\) facets, planar aligned boundary conditions were set with the rubbing direction making an angle \(\varphi\in[0^{\circ};90^{\circ}]\) with the \(x\) axis and \(\mu_{1}=Wd/K_{11}=400\), where \(d\) is the film thickness (see Fig. 14). The electric field \(\mathbf{E}\) was oriented perpendicular to the film plane, and the value of the dimensionless electric field intensity \(e=Ed(\frac{\varepsilon_{a}}{K_{11}})^{1/2}\) was varied from \(0.1\) to \(30\). For each \(e\), we performed a \(6.1\times 10^{10}\)-step (\(3\times 10^{7}\) parallel multisteps) Monte-Carlo annealing optimization with \(4\) independent runs to find the energy-optimal structures.
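A heavily simplified, one-dimensional illustration of such a Metropolis annealing is sketched below: a chain of director tilts and polarizability signs, a caricature of the elastic, flexoelectric and field terms of Eq. (19), and the \(({\bf n}\cdot{\bf p})({\bf n}\cdot{\bf E})\geq 0\) acceptance constraint. All numerical values are illustrative; the sketch shows only the structure of the procedure and is not the simulation code used to produce Figs. 15 and 16.

```python
import numpy as np

# Toy 1D annealing: director n_i = (sin(theta_i), cos(theta_i)) on a chain
# along y, polarizability sign p_i = +/-1, field E along z.
rng = np.random.default_rng(0)
N, dy = 64, 1.0 / 64              # lattice sites and spacing (dimensionless)
lam, e_field = 10.0, 5.0          # flexo parameter and reduced field (illustrative)

theta = rng.uniform(-0.1, 0.1, N)  # small random initial tilts
p = np.ones(N)                     # initial polarizability signs (satisfy constraint)

def energy(theta, p):
    splay = np.gradient(np.sin(theta), dy)             # d(n_y)/dy
    elastic = 0.5 * np.sum((splay - lam * p) ** 2) * dy
    field = -e_field * np.sum(p * np.cos(theta)) * dy  # -(n.p)(n.E) per site
    return elastic + field

E_tot = energy(theta, p)
for T in np.geomspace(1.0, 1e-3, 200_000):             # annealing schedule
    i = rng.integers(N)
    new_theta, new_p = theta.copy(), p.copy()
    new_theta[i] += rng.normal(0.0, 0.2)                # small director rotation
    if rng.random() < 0.1:
        new_p[i] *= -1.0                                # occasional polarity flip
    if new_p[i] * np.cos(new_theta[i]) < 0.0:           # enforce (n.p)(n.E) >= 0 at site i
        continue
    E_new = energy(new_theta, new_p)
    if E_new < E_tot or rng.random() < np.exp((E_tot - E_new) / T):
        theta, p, E_tot = new_theta, new_p, E_new

print("final energy:", E_tot)
```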
The resulting structures strongly depend on the value of the electric field and the rubbing direction \(\varphi\). Fig. 15 shows the dependence of the total free energy of the system on these two parameters. At low values of electric field (\(e\) from \(0.1\) to \(14\)), the energy-optimal structure corresponds to the antiferroelectric splay in the \(xy\) plane (supposed to be the plane of the substrate in the real experiment) with alternating sign of polarizability. The average director orientation is almost parallel to the rubbing direction. Some slight ferroelectric modulation is also present: a projection of the director along the electric field arises in the middle of each antiferroelectric domain, so that the projection of the director lines on the \(zy\) plane (perpendicular to the rubbing) gains the shape of periodical arcs. This structure is visualized as stripes longitudinal to the rubbing direction (Fig. 16). Above the threshold value \(e^{*}\approx 15\), the system undergoes a transition related to the reorientation of the arcs along the rubbing direction, and the arcs themselves become much bigger, while the antiferroelectric modulation in the \(xy\) plane disappears. This structure (at \(e>15\)) is visualized as stripes transverse to the rubbing direction (Fig. 16). At further increasing electric field, the layer period starts growing [similarly to that in theory, see Fig. 11 (b)], and the director divergence in the middle of each domain decreases [similarly to that in theory, see Fig. 10 (b)]. Computer simulations describe well the transformation from the antiferroelectric to the ferroelectric structure with the reorientation of stripes, which is observed experimentally and presented in Sec. III A.
## V Conclusion
The origin and structures of ferroelectric and antiferroelectric splay nematic phases are outlined. The double-splay ferroelectric \(N_{F}^{2D}\) and antiferroelectric \(N_{AF}\) nematic phases are composed of quasi-cylindrical periodical domains.
Figure 16: Director \(\mathbf{n}\) and polarizability (\(\mathbf{n}\cdot\mathbf{p}\)) distributions at various dimensionless electric field \(e\) values in the central cross-cuts perpendicular to the film plane (top) and along the film plane (bottom). Director distributions are shown in color, corresponding to the direction of \(\mathbf{n}\) (\(x\) - red, \(y\) - green, \(z\) - blue). Polarizability is shown in orange [\((\mathbf{n}\cdot\mathbf{p})=1\)] and violet [\((\mathbf{n}\cdot\mathbf{p})=-1\)].
Without electric field, \(N_{F}^{2D}\) and \(N_{AF}\) are observed in the lower-temperature range. The single-splay ferroelectric \(N_{F}^{1D}\) nematic phase is composed of planar periodical domains. Without electric field, \(N_{F}^{1D}\) is observed in the higher-temperature range. In the presence of electric field, all the splay nematic phases first (at moderate electric field) transform into \(N_{F}^{1D}\) and then (at higher electric field) into the paraelectric nematic phase \(N\) having uniform director orientation. The origin of all the splay nematic phases is the flexoelectric effect due to the polarity of the molecules. The origin of the transformations between phases in electric field is the non-trivial interplay between the flexoelectric and induced polarizations. The distributions of the director and of both the polar \(P\) and non-polar \(S\) orientational order parameters within the domains of all the splay nematic phases are found. The variation of the structure and properties of the splay nematic phases with temperature and electric field is investigated. The electric field - temperature phase diagram is obtained. The equilibrium domain size was found to increase and the polarization was found to decrease in each polar phase with increasing temperature. Several additional phase transitions related to optimization of the domains within the cell gap were found and explained.
###### Acknowledgements.
A.V.E. and V.Yu.R. thank the Russian Foundation for Basic Research (project No. 21-53-50008) for the financial support of the theoretical investigation presented in this work. F.A., H.N. and K.I. thank the Japan Society for the Promotion of Science (project No. JPJSBP120214814) for the financial support of the experimental investigation presented in this work. The research was carried out using the equipment of the shared research facilities of HPC computing resources at Lomonosov Moscow State University. The authors are grateful to S.A. Shvetsov for help.
|
2309.07654 | Towards Robust and Unconstrained Full Range of Rotation Head Pose
Estimation | Estimating the head pose of a person is a crucial problem for numerous
applications that is yet mainly addressed as a subtask of frontal pose
prediction. We present a novel method for unconstrained end-to-end head pose
estimation to tackle the challenging task of full range of orientation head
pose prediction. We address the issue of ambiguous rotation labels by
introducing the rotation matrix formalism for our ground truth data and propose
a continuous 6D rotation matrix representation for efficient and robust direct
regression. This allows to efficiently learn full rotation appearance and to
overcome the limitations of the current state-of-the-art. Together with new
accumulated training data that provides full head pose rotation data and a
geodesic loss approach for stable learning, we design an advanced model that is
able to predict an extended range of head orientations. An extensive evaluation
on public datasets demonstrates that our method significantly outperforms other
state-of-the-art methods in an efficient and robust manner, while its advanced
prediction range allows the expansion of the application area. We open-source
our training and testing code along with our trained models:
https://github.com/thohemp/6DRepNet360. | Thorsten Hempel, Ahmed A. Abdelrahman, Ayoub Al-Hamadi | 2023-09-14T12:17:38Z | http://arxiv.org/abs/2309.07654v1 | # Towards Robust and Unconstrained Full Range of Rotation Head Pose Estimation
###### Abstract
Estimating the head pose of a person is a crucial problem for numerous applications that is yet mainly addressed as a subtask of frontal pose prediction. We present a novel method for unconstrained end-to-end head pose estimation to tackle the challenging task of full range of orientation head pose prediction. We address the issue of ambiguous rotation labels by introducing the rotation matrix formalism for our ground truth data and propose a continuous 6D rotation matrix representation for efficient and robust direct regression. This allows to efficiently learn full rotation appearance and to overcome the limitations of the current state-of-the-art. Together with new accumulated training data that provides full head pose rotation data and a geodesic loss approach for stable learning, we design an advanced model that is able to predict an extended range of head orientations. An extensive evaluation on public datasets demonstrates that our method significantly outperforms other state-of-the-art methods in an efficient and robust manner, while its advanced prediction range allows the expansion of the application area. We open-source our training and testing code along with our trained models: [https://github.com/thohemp/6DRepNet360](https://github.com/thohemp/6DRepNet360).
head pose estimation, full range of rotation, rotation matrix, 6D representation, geodesic loss
## 1 Introduction
Head pose estimation follows the objective of predicting the human head orientation from images and is a crucial step in many computer vision algorithms. Applications are wide-ranging and include attention estimation [1, 2, 3], face recognition [4, 5], and the estimation of facial attributes [6, 7], which again are vital features in driver assistance systems [8, 9, 10], augmented reality [11, 12], and human-robot interaction [13, 14, 15]. The vast majority of present methods [16, 17, 18, 19, 20, 21, 22, 23] narrow down the research issue to the estimation of solely frontal poses with a limited rotation range. This favors the leverage of the facial feature-richness and suitable, widely available training datasets. However, in uncontrolled application scenarios [24, 25, 26] head orientations are likely to surpass the narrow angle range that most methods are trained for and, consequently, produce random and inaccurate head pose predictions. In view of extending the prediction to the full area of rotation range, the current state of research is challenged by two key limitations. The first is the absence of comprehensive datasets that cover the full range of head orientations [27]. The second equally decisive and often neglected factor is an appropriate rotation representation, as it significantly impacts the model's ability to effectively learn the connection between visual pose appearance and corresponding parameterization [28]. For instance, the commonly used Euler angle and quaternion representation suffer from ambiguity and discontinuity problems that lead to an unstable training process and a mediocre prediction performance if plainly applied [16, 19, 23, 29]. This behavior even intensifies for stronger rotations in the narrow range spectrum.
We overcome these limitations by proposing a rotation matrix-based 6D representation for efficient and unconstrained network training that we further enhance with a geodesic based loss. Additionally, we take up the ambitious challenge of predicting the full range of rotation by agglomerating new training data with enhanced pose variation. For this matter, we utilize the CMU Panoptic [30] dataset and apply an automatic head pose labeling process to generate head pose samples with focus on the back of the head. We combine these samples with the popular 300W-LP [31] head pose dataset and, together, receive a large scaled dataset with greatly expanded head rotation variations. Finally, the training of our proposed model on this new agglomerated data enables us to predict a significantly extended range of head orientations. We examine our approach in multiple experiments on public datasets that testify our method state-of-the-art accuracy and remarkable robustness in predicting challenging poses. At the same time, it is able to handle a many times greater range of head pose orientations com
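The two ingredients named above, the continuous 6D rotation representation and the geodesic loss, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (their training and testing code is in the repository linked above); it is only a standard construction of a rotation matrix from a 6D vector via Gram-Schmidt orthogonalisation, and the geodesic angle between two rotation matrices usable as a loss. The example inputs are arbitrary.

```python
import numpy as np

def rotation_from_6d(x):
    """x: array of shape (6,) -> 3x3 rotation matrix via Gram-Schmidt."""
    a1, a2 = x[:3], x[3:]
    b1 = a1 / np.linalg.norm(a1)
    a2 = a2 - np.dot(b1, a2) * b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)
    return np.stack([b1, b2, b3], axis=1)   # columns form an orthonormal, right-handed frame

def geodesic_distance(R1, R2):
    """Angle (radians) of the relative rotation R1^T R2."""
    cos = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))

R_pred = rotation_from_6d(np.array([1.0, 0.1, 0.0, -0.2, 1.0, 0.05]))  # arbitrary 6D output
R_gt = np.eye(3)                                                       # ground-truth rotation
print("geodesic error (deg):", np.degrees(geodesic_distance(R_pred, R_gt)))
```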
Fig. 1: Example images of predicted orientations of various rotated heads. |
2309.08567 | Charting the Realms of Mesoscale Cloud Organisation using Unsupervised
Learning | Quantifying the driving mechanisms and effect on Earth's energy budget, of
mesoscale shallow cloud organisation, remains difficult. Partly because
quantifying the atmosphere's organisational state through objective means
remains challenging. We present the first map of the full continuum of
convective organisation states by extracting the manifold within an
unsupervised neural networks's internal representation. On the manifold
distinct organisational regimes, defined in prior work, sit as waymarkers in
this continuum. Composition of reanalysis and observations onto the manifold,
shows wind-speed and water vapour concentration as key environmental
characteristics varying with organisation. We show, for the first time, that
mesoscale shallow cloud organisation produces $\pm 1.4\%$ variations in albedo
in addition to variations from cloud-fraction changes alone. We further
demonstrate how the manifold's continuum representation captures the temporal
evolution of organisation. By enabling study of states and transitions in
organisation (in simulations and observations) the presented technique paves
the way for better representation of shallow clouds in simulations of Earth's
future climate. | Leif Denby | 2023-09-15T17:34:29Z | http://arxiv.org/abs/2309.08567v3 | # Charting the Realms of Mesoscale Cloud Organisation using Unsupervised Learning
###### Abstract
We present a new approach to the analysis of the real-world data in the real-world data. We present a new approach to the analysis of the real-world data in the real-world data. We present a new approach to the analysis of the real-world data in the real-world data. We present a new approach to the analysis of the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in 
the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the 
real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world 
data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-world data in the real-in real-world data in the real-world
###### Abstract
Quantifying the driving mechanisms of mesoscale shallow cloud organisation, and its effect on Earth's energy budget, remains difficult, partly because quantifying the atmosphere's organisational state through objective means remains challenging. We present the first map of the full continuum of convective organisation states by extracting the manifold within an unsupervised neural network's internal representation. On the manifold, distinct organisational regimes defined in prior work sit as waymarkers in this continuum. Compositing reanalysis and observations onto the manifold shows wind-speed and water vapour concentration to be key environmental characteristics varying with organisation. We show, for the first time, that mesoscale shallow cloud organisation produces \(\pm 1.4\%\) variations in albedo in addition to variations from cloud-fraction changes alone1. We further demonstrate how the manifold's continuum representation captures the temporal evolution of organisation. By enabling the study of states and transitions in organisation (in simulations and observations), the presented technique paves the way for better representation of shallow clouds in simulations of Earth's future climate.
Footnote 1: In the final stages of preparing this manuscript we became aware of independent, related work by Alinaghi et al. (2023)
## 1 Introduction
From satellite imagery of Earth it is immediately clear that clouds often organise into spatial patterns. Names have been given to the most prominent patterns we recognise (e.g. fronts, cyclones, cellular cloud-decks, etc) and we use this classification to study the impacts of clouds on Earth's weather and climate through their precipitation and interaction with radiation and atmospheric circulation.
One particular form of cloud, shallow trade-wind cumuli, is of profound importance in Earth's climate system due to its ubiquity and net cooling effect, stemming from these clouds reflecting more incoming short-wave radiation from the sun than the outgoing long-wave radiation they permit (Bony et al., 2004). Differing predictions of how these clouds will respond to a warming climate account for most of the variation in climate sensitivity between climate models (Bony & Dufresne (2005), Webb et al. (2006), Medeiros et al. (2008), Vial et al. (2013)), which highlights the urgent need
to better understand how these clouds form and interact with their environment (one of the World Climate Research Programme's Grand Science Challenges, Bony et al. (2015)).
One particular aspect of shallow convective clouds that is still poorly understood is their mesoscale organisation: which regimes occur, the driving mechanisms behind them, and the extent to which mesoscale cloud organisation impacts Earth's climate. This interest in convective organisation stems from the observation that in high-resolution (Large Eddy) simulations, clouds cluster (self-aggregate) under certain conditions (Wing et al., 2018; Muller et al., 2022), which results in a change in cloud-fraction and net radiation for the same domain-mean state. Given the necessarily coarse resolution of climate models (\(O(10km)\)), neither the processes driving organisation nor the organisational regimes can be explicitly resolved, and so the behaviour of these shallow clouds (and their radiative impact) must be parameterised.
In the context of the EUREC\({}^{4}\)A field campaign (Stevens et al., 2021), work by Stevens et al. (2019) [S19] developed a set of four classifications for shallow cloud organisation by manual examination of visual satellite imagery. These classes were motivated by the physical processes expected to be important in different regimes, and have since formed the framing for many studies investigating the conditions under which different forms of organisation occur, both in observations (Bony et al., 2020; Schulz et al., 2021) [B21, S21] and in simulations (Dauhut et al., 2023), with particular focus on the transition from small to larger isolated detraining shallow clouds (Narenpitak et al., 2021a; Saffin et al., 2023a).
An alternative approach was taken by Denby (2020) [D20], who developed an unsupervised machine learning approach to autonomously discover the possible states of mesoscale organisation without imposing specific classes. This approach of unsupervised learning was also taken by Kurihana, Foster, et al. (2022); Kurihana, Moyer, Willett, et al. (2022); Kurihana, Moyer, & Foster (2022), using clustering to produce individual classes of cloud patterns. Differently again, Janssens et al. (2021) [J21] utilised the framework of traditional metrics used for measuring clouds and their organisation (rather than imposing specific classes), concluding that rather than occurring in isolated regimes, cloud organisation exists in a continuum (at least when viewed through these metrics).
By building on D20 we demonstrate in this work how the continuum of convective organisation states is captured as an emergent property of the internal _embedding space_
representation learnt by a neural network through unsupervised learning. Specifically, we will extract the low-dimensional manifold within the high-dimensional embedding space on which all possible states of convective organisation lie, and explore this manifold through the metrics and classes of [J21] and [S19]. Through composition of reanalysis and observations onto this manifold we characterise how environmental conditions vary with convective organisation, we quantify the effect of organisation on radiation correcting for changes in cloud-fraction, and we further demonstrate how transitions between organisational states can be studied with the manifold.
## 2 Methods
The primary tool used in this work is a convolutional neural network which takes as input a 2D image-tile containing cloud imagery (derived here from satellite observations) and produces a point in a high-dimensional _embedding_ space (here 100-dimensional, as in [D20]), a so-called _embedding vector_. During training the neural network has learnt to place tiles with similar cloud structures nearby in the embedding space. This is achieved by training the network on contrastive tile triplets of similar and dissimilar cloud-patterns (produced by sampling from satellite imagery both spatially closely-overlapping _anchor-neighbor_ and randomly distributed _distant_ tiles).
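The training setup described above can be sketched as follows. This is a minimal illustration under stated assumptions rather than the exact D20 implementation: the backbone (a small ResNet), triplet margin and toy batch are assumptions, while the 100-dimensional embedding and the anchor/neighbor/distant triplet objective follow the description in the text.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TileEmbedder(nn.Module):
    """Maps a 3x256x256 cloud image tile to a 100-d embedding vector."""
    def __init__(self, embedding_dim=100):
        super().__init__()
        backbone = models.resnet18()  # backbone choice is an assumption
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)

def triplet_loss(anchor, neighbor, distant, margin=1.0):
    """Pull spatially overlapping anchor/neighbor tiles together and push
    randomly sampled distant tiles apart in the embedding space."""
    d_pos = (anchor - neighbor).pow(2).sum(dim=1)
    d_neg = (anchor - distant).pow(2).sum(dim=1)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

model = TileEmbedder()
a, n, d = (torch.randn(4, 3, 256, 256) for _ in range(3))  # toy triplet batch
loss = triplet_loss(model(a), model(n), model(d))
loss.backward()
```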
The image-tiles utilised here were generated by performing locally planar projections of observations from the geostationary GOES-16 satellite to produce 256\(\times\)256 pixel input tiles spanning \(200km\times 200km\). Extending D20's use of truecolor RGB-composite tiles, we here also generate a set of tiles using the three _water-vapour window_ infrared channels (11, 14 & 15) and train a separate IR-tiles model. This enables characterisation of convective organisation throughout the diurnal cycle (including nighttime), and thereby analysis of cloud evolution over multiple days (see subsection 3.4).
We use the Level-2 \(\Delta x\approx 4km\) 1-hourly CERES radiation products derived from the GOES-16 geostationary satellite (NASA/LARC/SD/ASDC, 2018)2. Environmental characteristics associated with different states of organisation were extracted by resampling ERA5 reanalysis (Hersbach et al., 2020) onto the dataset tiles. For cloud-fraction and cloud metric calculations (Denby & Janssens, 2022) we use the shallow cloud-mask
as defined by Bony et al. (2020), thresholding on GOES-16 channel 13 (\(10.35\,\mu m\)) brightness temperature (\(280K<T_{b}^{ch13}<290K\)). To aid comparison with prior work we have chosen to restrict our analysis to tiles with minimum cloud top-temperature above freezing level \(T_{c}>273K\).
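As a small illustration of the masking and filtering criteria above (array names and the synthetic data are placeholders, and the brightness temperature is used as a stand-in for cloud-top temperature purely for illustration):

```python
import numpy as np

def shallow_cloud_mask(tb_ch13):
    """Shallow cloud mask of Bony et al. (2020): 280 K < T_b(ch13) < 290 K."""
    return (tb_ch13 > 280.0) & (tb_ch13 < 290.0)

def keep_tile(cloud_top_temp_k):
    """Keep only tiles whose minimum cloud-top temperature exceeds freezing (273 K)."""
    return np.nanmin(cloud_top_temp_k) > 273.0

tb = 270.0 + 25.0 * np.random.rand(256, 256)     # synthetic brightness temperatures [K]
cloud_fraction = shallow_cloud_mask(tb).mean()   # fraction of shallow-cloud pixels in the tile
print(cloud_fraction, keep_tile(tb))
```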
## 3 Results
### The manifold of mesoscale cloud organisation
In this section we will demonstrate how the continuum of cloud organisation states can be extracted as an emergent property of the embedding space utilised by a trained neural network. We do this by examining the topological structure of the embeddings produced by the neural network applied to a dataset of 10,000 triplet tiles.
As the neural network has learnt during training to place similar cloud structures close in the embedding space, the occurrence of evolution between organisation states will manifest as a continuum of points in the embedding space (assuming a dataset large enough to contain the in-between states). The extent to which this point-cloud maps out either isolated regions, clusters connected by isolated paths, or full manifolds of smooth evolution, tells us not only what kinds of regimes exist, but also how distinct they are and what transitions between regimes are actually observed in nature. Prior work by [S19] suggests that organisation in the tropical Atlantic comes in four distinct forms, which would manifest as the embeddings lying in four isolated clusters. However, clustering analysis shows no evidence of isolated clusters in the embedding space. Indeed, should we expect the atmosphere to gravitate to only a few distinct forms of cloud organisation? If the atmosphere is constantly evolving between different forms of organisation, why would it stay "stuck" in specific isolated regimes of organisation rather than spending as much time transitioning between regimes? Since attempts to find isolated clusters failed, we moved to techniques which are able to maintain the continuum representation that embedding vectors of organisation provide. By applying manifold extraction techniques to the embedding space we found that the tile embeddings do indeed appear to lie on a low-dimensional manifold within the high-dimensional embedding space.
We applied the Isomap manifold extraction method (Tenenbaum et al., 2000) to transform the 100D embedding-vector point cloud into a 2D plane, the result of which is visualised in Figure 1a by rendering tiles for individual points across the manifold. Isomap
Figure 1: Convective organisation embedding manifold a) visualised by the anchor tile from the closest anchor-neighbour pair within a fixed interval across the manifold inset with i) mean cloud-top height and ii) cloud-fraction, together with conventional cloud metrics of J21 b) characteristic cloud-size (\(L_{c}\)), c) contiguous open-sky fraction (\(f_{os}\)), d) directional alignment of liquid water path (\(a_{LWP}\)), e) standard deviation of cloud-top height \(\sigma_{CTH}\) and h) per-class distribution of S19 organisation classes.
was selected because, by constructing a nearest-neighbour graph over the entire tile embedding point-cloud, it ensures that the topology of the manifold is unchanged (i.e. it does not introduce or remove paths between points). The manifold extraction allows us to perform dimensionality reduction of the embedding space, by extracting only the part of the embedding space that the neural network has actually utilised. Using a manifold extraction method, rather than for example Principal Component Analysis (PCA), avoids assuming that the embedding points lie on a high-dimensional plane, since Isomap follows the curvature of the manifold spanned by the tile embedding points.
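A minimal sketch of this step using scikit-learn's Isomap; the number of neighbours is an assumption, as the value used is not stated in the text.

```python
import numpy as np
from sklearn.manifold import Isomap

# embeddings: one 100-d vector per tile from the trained network
embeddings = np.random.randn(10_000, 100)        # placeholder for the real embeddings

# The nearest-neighbour graph approximates geodesic distances along the manifold;
# n_components=2 gives the 2D "map" of organisation used throughout this work.
iso = Isomap(n_neighbors=10, n_components=2)
manifold_xy = iso.fit_transform(embeddings)      # (N, 2) manifold coordinates
```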
To visualise the embedding manifold we select the closest anchor-neighbor pair for each fixed-width bin across the embedding manifold, and render the anchor tile from that pair. This produces tiles with a clearly discernible pattern by exploiting the fact that where the trained neural network is unable to place two anchor-neighbor tiles in close proximity (in the embedding space), either a) these tiles are very different in the cloud structures present (unlikely given the spatial overlap of anchor-neighbor tiles) or b) it is not possible for the neural network to characterise the organisational state of these two tiles, and so these tiles are not in a clearly discernible organisational state. Conversely, closely-spaced anchor-neighbor tiles are not only similar, but in an organisational state clearly identifiable by the trained neural network.
We can by eye immediately identify differences in the morphological features of clouds in different parts of the manifold, with smaller isolated clouds concentrated in the lower left, larger isolated clouds in the upper corner and the bottom right populated by cellular cloud structures (cloud-size and cloud-fraction in Figure 1a insets i and ii respectively). Having produced this 2D "map" of cloud organisation through extraction of the underlying manifold utilised by the neural network, we next turn to examining the organisational states spanned within this manifold, both in terms of the cloud metrics and organisation classes identified by J21 and S19 respectively, and later by quantifying the environmental conditions associated with different forms of organisation and the radiative effects of organisation.
This is done by aggregating observations and reanalysis datasets (coincident in time and space with the sampled tiles) onto the 2D manifold by binning each physical variable in turn by the 2D manifold dimensions.
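In code, this compositing amounts to a 2D binned mean over the manifold coordinates; a sketch with placeholder data (the bin count is an assumption):

```python
import numpy as np
from scipy.stats import binned_statistic_2d

manifold_xy = np.random.randn(10_000, 2)   # manifold coordinates per tile (placeholder)
wind_speed = np.random.rand(10_000)        # e.g. ERA5 wind speed resampled onto each tile

stat, x_edges, y_edges, _ = binned_statistic_2d(
    manifold_xy[:, 0], manifold_xy[:, 1], wind_speed,
    statistic="mean", bins=30,
)
# `stat` is a 30x30 grid of bin-mean values that can be drawn over the manifold (cf. Figure 2).
```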
#### 3.1.1 Cloud-metrics on manifold
We next set the extracted manifold in a more familiar context by computing the set of four cloud property metrics identified by J21 as collectively being able to describe the majority of variation between organisational regimes: b) the cloud length-scale (\(L_{c}\)), c) open-sky amount (\(f_{os}\), Antonissen, 2019), d) directional anisotropy (\(WOI_{3}\)) of liquid water path (\(a_{LWP}\), Brune et al., 2021) and e) standard deviation of cloud-top height (\(\sigma_{CTH}\)). The variation of these metrics across the embedding manifold is shown in the respective subfigures of Figure 1, which depict the mean value for each variable across all tiles falling within a given bin on the 2D manifold.
Visually, it is immediately clear that using just these metrics there are many ways to split up the embedding manifold into regions of similar characteristics, with both smooth and abrupt changes in the cloud property metrics across the manifold. As cloud-size is a very familiar characteristic that relates to how a cloud was formed and the radiative impact it has, we will concentrate on separately examining the large regions of the manifold where the characteristic cloud-size remains near-constant. For the very smallest clouds (\(L_{c}<20km\) in Figure 1b) the principal variation is the decrease in the characteristic open-sky amount (Figure 1c) with increasing cloud-top height (Figure 1a inset i) (and a lesser increase in cloud-fraction, Figure 1a inset ii), which describes the transition from scattered isolated clouds (left in the manifold) to more cellular (cold-pool arc) cloud formations (bottom right of the manifold). The largest clouds (\(L_{c}>20km\)) are found in more varied configurations; specifically, either a) with very variable cloud-top height (Figure 1e), intermediate cloud-fraction and large regions of open-sky (the large isolated clouds in the top left of the manifold), or b) with less variation in cloud-top height and lower cloud-fraction (the top right of the manifold).
It is worth pausing here to consider how a 2D plane appears to be adequate to map all regimes of shallow cumulus organisation when J21 showed that a full set of four metrics is needed. We conjecture that this is principally because J21 used PCA and so explicitly assumed that variations in mesoscale organisation can be described by a linear basis defined by the metrics used. However, as noted by J21, it is not clear that the linear decomposition is physically meaningful for cloud organisation, evidenced by the metrics not being linearly separable. Said differently, it is quite possible that the full set of four metrics is only required to distinguish a subset of organisational states and
for other regimes the metrics actually covary. We see this behaviour in our analysis as the first two metrics (\(L_{c}\) and \(f_{os}\)) show a unidirectional gradient across the entire embedding manifold, whereas the last two metrics, \(a_{LWP}\) and \(\sigma_{CTH}\) (Figure 1d and 1e), primarily vary within smaller regions where the first two metrics are near-constant.
The extent to which the space of possible organisational states is indeed inherently two-dimensional will be investigated using topological analysis tools in future work. However, through compositing cloud and environmental characteristics (the latter in subsection 3.2) on the 2D embedding manifold we do find physically meaningful interpretations of the observed variability, and so we expect that, to leading order, the 2D manifold presented here is a useful tool for understanding what kinds of organisation form and why they form.
#### 3.1.2 Finding Sugar, Fish, Flowers and Gravel
We next turn to examining the extent to which the convective organisation classes (_Sugar, Gravel, Fish_ and _Flowers_) described by S19 appear as distinct regions on the embedding manifold, examining whether the self-supervised neural network has "discovered" these manually defined classes of organisation. This is done by producing embeddings with the trained neural network at the same locations which have been manually labelled as belonging to one of the four classes and, for each of the four classes in turn, plotting the distribution of these embeddings over the manifold (see Figure 1f), thereby showing where the trained neural network would place a tile with a given manual classification. The manually-labelled dataset was produced for the EUREC\({}^{4}\)A field campaign (Stevens et al., 2021) by Schulz (2022); it gives, at \(0.01^{\circ}\times 0.01^{\circ}\) resolution for 1/7/2020 to 2/3/2022 in the tropical Atlantic, the number of people labelling a given location as each of the four classes. As in Schulz (2022), any location with over 60% agreement in labelling is designated as belonging to a specific class. The per-class distribution is visualised by the 50% density contour (showing the spread) and the 90% density contour (containing the peak), computed using Kernel Density Estimation (KDE).
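One way to obtain such density contours is to evaluate a KDE at the labelled samples and take the density level whose super-level set encloses the desired fraction of samples; a sketch with hypothetical coordinates (whether the contour levels are defined by enclosed mass or by fraction of peak density is an assumption here):

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_levels(points_2d, fractions=(0.5, 0.9)):
    """KDE density thresholds whose contours enclose roughly the given fractions
    of the labelled samples (highest-density regions)."""
    kde = gaussian_kde(points_2d.T)
    dens = kde(points_2d.T)                      # density evaluated at each sample
    # the contour enclosing a fraction f of samples sits at the (1 - f) quantile of dens
    return {f: np.quantile(dens, 1.0 - f) for f in fractions}

sugar_xy = np.random.randn(500, 2)   # manifold coordinates of tiles labelled "Sugar" (placeholder)
levels = density_levels(sugar_xy)    # pass sorted(levels.values()) to plt.contour
print(levels)
```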
The distributions of tiles labelled as _Sugar_, _Gravel_ and _Flowers_ peak in distinct regions of the embedding manifold, suggesting these three classes of organisation have distinctive characteristics identifiable using unsupervised learning (in contrast to _Fish_, discussed below). The spatial regions labelled _Sugar_ are concentrated in the region of the embedding space where scattered isolated cumuli with low vertical extent are concentrated, _Gravel_ is associated with (often multi-) cellular cloud structures formed by cold-pool arcs, and _Flowers_ with the region of larger, deeper isolated cumuli, all in agreement with S19.
However, although the distributions of these three classes peak in isolated regions of the embedding manifold, they also show a broad and varying overlap between organisation classes. The _Sugar_ class appears the most separated from the rest, fitting with it being the archetypical organisation comprising randomly scattered small shallow cumuli. _Gravel_ organisation extends into the region of _Sugar_, fitting with _Gravel_ being characterised by cloud-free voids (driven by cold-pools) within regions of shallow scattered cumuli. For tiles labelled as _Flower_ organisation, the distribution extends to encompass the peak of the _Gravel_ cloud distribution, which is consistent with these larger isolated clouds in some cases being associated with cold-pool arcs (Cui et al., 2023). Finally, we consider the distribution of tile samples from regions labelled as _Fish_, which is concentrated in the same part of the embedding manifold as the _Sugar_ class and extends to include all three of the other classes. We conjecture that this is due to the large characteristic length-scale of _Fish_ organisation (visually often O(\(1000km\))). Specifically, on the length-scale of the tile-size used in this work it appears that _Fish_ organisation is comprised of smaller patterns of mesoscale organisation rather than being distinct.
As mentioned above, the overlaps in the distribution across the embedding manifold of regions manually labelled as belonging to separate kinds of organisation can to some degree be explained by the fact that some kinds of mesoscale cloud structures could be expected to occur together, based on the physical mechanisms expected to be associated with their formation. The isolated nature of the peaks of the manifold distribution of the three _Sugar, Gravel and Flower_ classes above supports the findings of Stevens et al. (2019) that these kinds of organisation have distinct characteristics. However, looking across all tiles, these same classes do not show up as isolated distributions across the embedding manifold. Said another way, these three classes of organisation should be considered as useful waymarkers in the full continuous state-space of mesoscale convective organisation, rather than defining the only distinct kinds of organisation to which most kinds of shallow cumulus cloud formations belong.
Figure 2: Bin-mean values of environmental characteristics derived from ERA5 reanalysis across the embedding manifold
### Environmental characteristics of mesoscale organisation
Next we will examine how the environment varies with different forms of organisation by compositing ERA5 reanalysis on the embedding manifold. Across the manifold we will quantify how organisation varies with environmental characteristics (Figure 2) and contrast these relationships with findings in prior work.
In agreement with B21 and S21, in conditions with the lowest wind-speeds (Figure 2a) and relatively warm sea-surface temperatures (Figure 2f), we find the archetypical scattered shallow cumuli, and in conditions with the lowest sea-surface temperatures and highest wind-speeds, the shallow, more cellular cloud structures formed by evaporation-driven cold-pools (these forms of organisation are also called _Sugar_ and _Gravel_, see subsection 3.1.2). Contrasting the manifold regions with the smallest (and shallowest, \(z_{CTH}\;<\;2km\)) and largest (and deepest, \(z_{CTH}\;\approx\;4km\)) isolated clouds, we find the environmental factor which most clearly differentiates these regions to be sub-cloud and cloud-level moisture (Figure 2e and 2b), with higher cloud-level and lower sub-cloud moisture in the latter case. In fact, the moisture and wind-speed appear nearly orthogonal in their variation across the embedding manifold, so that for contours of fixed moisture, the wind-speed uniquely defines a location on the manifold and thus the kind of organisation typically associated with these conditions. In contrast to B21 and S21, we do not find the strongest stability conditions (Figure 2c) to be associated with large and deep isolated clouds, but rather with the strongest degree of cellular organisation.
### Radiative impact of mesoscale organisation
The principal reason for being interested in convective organisation is the possible impact of the organisation of shallow convective clouds on Earth's radiation budget. Using the four classes of organisation by S19, B21 concluded that _"for a given low-cloud amount [we] did not find a significant effect of cloud organisation on the shortwave albedo."_ From B21 it is clear that, to leading order, mesoscale albedo is controlled by cloud-fraction; however, as the four regimes of organisation studied show little overlap in cloud-fraction, we wonder how strong the effect of organisation is for a given cloud-fraction, and so attempt to separate the effects of cloud-fraction and organisation on albedo.
We do this by first fitting a simple model (a random forest, in effect an optimal-binning algorithm producing step-wise predictions) to predict the tile-mean shortwave
Figure 3: Variation of shortwave albedo a) across the embedding manifold, with cloud-fraction based model miss-fit both b) point-wise and c) across the manifold (distribution in inset) showing the effect of organisation alone (discarding cloud-fraction) together with cloud properties affecting albedo d), e) and f)
albedo (Figure 3a) from the tile cloud-fraction. The result of this fit can be seen in Figure 3b and shows the same monotonic increase in albedo with cloud-fraction as B21. The figure also highlights the significant spread in albedo for a given cloud-fraction. We next plot the mean error of this simple model across the embedding manifold (Figure 3c), thereby obtaining a direct visualisation of the extent to which organisation alone affects albedo. In contrast to B21, we do find that organisation affects the shortwave albedo beyond the change in cloud-fraction, with a model albedo miss-fit of \(\sigma_{\Delta\alpha}\approx 1.4\%\).
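A sketch of this two-step analysis; the hyperparameters and the synthetic data are assumptions, and the text only specifies that a random forest is fit to predict tile-mean albedo from cloud-fraction and that the residual is then composited on the manifold.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

n = 5_000
cloud_fraction = np.random.rand(n)                                 # per-tile cloud fraction (placeholder)
albedo = 0.05 + 0.5 * cloud_fraction + 0.02 * np.random.randn(n)   # per-tile shortwave albedo (placeholder)

# Step-wise fit of albedo from cloud-fraction alone (random forest as an "optimal binning")
rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=50)
rf.fit(cloud_fraction[:, None], albedo)

# The residual isolates albedo variation not explained by cloud-fraction, i.e. the
# effect of organisation; it can then be bin-averaged over the manifold (cf. Figure 3c).
residual = albedo - rf.predict(cloud_fraction[:, None])
print("albedo miss-fit std:", residual.std())
```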
To set the effect of organisation on radiation in context we have included in Figure 3d, 3e and 3f, the mean optical depth, cloud particle radius and liquid water path respectively. Organisation dominated by small scattered cumuli has an anomalously low albedo (compared to what would be predicted from cloud-fraction alone) appearing to result from low cloud optical depth, which in turn results from larger drops and lower total amount of liquid water than other states of organisation across the manifold. In contrast, regimes with higher albedo generally have smaller droplets and higher liquid water path (in cellular organisation) or very small droplets and high liquid water path (large isolated cumuli).
The assumption that cloud organisation only affects the amount of reflected shortwave through changes in cloud-fraction is, in effect, the assumption that all shallow convective cloudy pixels have the same albedo and that the only important factor is how many pixels contain cloud. We have shown here that properties of the cloudy pixels do indeed affect albedo, and that the connection between mesoscale organisation and reflected shortwave radiation is more than just cloud-fraction. To fully unpick the role of cloud-properties changing with organisation, further work should be done with in-situ observations and modelling studies to complement the remote-sensing retrievals used here.
### Mapping the temporal evolution of organisation
In the final use of the embedding manifold we demonstrate how it can be used to study the temporal evolution of cloud organisation. By applying the trained neural network on tiles sampled along a trajectory that follows clouds as they evolve, we can map out transitions between organisational states and thereby investigate the drivers behind these transitions. We use a 4-day trajectory (created with _lagtraj_, Denby & Boeing, 2022), which follows a cloud-layer airmass across the Atlantic as organisation develops from initial isolated scattered cumuli (_Sugar_) into larger isolated cumuli (_Flowers_) (February 2nd 2020, studied by Saffin et al., 2023b; Narenpitak et al., 2021b). Tiles along the trajectory (examples in Figure 4b) are then mapped onto the embedding manifold (Figure 4a). The evolution on the manifold clearly shows, as is also indicated by the tile samples shown, that over the first three days organisation exhibits diurnal cycling that is eventually broken overnight, ending in a drastically different regime on February 2\({}^{\text{nd}}\).
## 4 Conclusions
In this work we have demonstrated that a map of all possible states of mesoscale organisation exists as an emergent property of the internal embedding space used by an unsupervised neural network, trained to group tiles of similar cloud patterns together.
Examining this embedding manifold map we find that across the manifold the visual variation in cloud patterns matches values of traditional metrics used to measure clouds. Although we find that traditional metrics used for measuring organisation are
Figure 4: Evolution of cloud organisation along 4-day Lagrangian trajectory arriving at Barbados on 2nd Feb 2020 visualised a) on the embedding manifold and b) with tile samples along the trajectory using IR-tiles (note manifold is different to Figure 1 which is created from model trained on RGB-tiles).
able to capture the continuum of organisation variation, their varying co-linearity may make them challenging to use in isolation to understand processes of mesoscale organisation. We find that the unsupervised neural network "rediscovers" three (_Sugar_, _Flowers_ and _Gravel_) of the four classes of organisation defined by Stevens et al. (2019), demonstrated by well-separated peaks in their manifold distribution. However, rather than appearing as isolated regions, these classes appear as useful waymarkers in the full map of cloud organisation produced in this work. By compositing ERA5 reanalysis onto the embedding manifold we find broad agreement with prior work in stronger winds and lower sea-surface temperature being associated with more cellular organisation. However, we also find that the larger isolated clouds are principally found in conditions with high cloud-level and low sub-cloud moisture (rather than lower tropospheric stability being key). Using the continuum representation of organisation we are able to show that cloud organisation _does_ affect shortwave albedo beyond simply controlling cloud-fraction. We find that, as expected, this is primarily due to changes in the optical depth, the drivers of which can now be examined using the manifold of organisation. And finally, we have demonstrated how the ability to represent and measure the continuum of organisation allows for the study of how convective organisation develops; examining the evolution of _Sugar_ (scattered small cumuli) to _Flowers_ (larger isolated cumuli) and capturing the breakaway from diurnal cycling in organisation.
With the urgency of increased capacity to model Earth's future climate, this novel technique to produce a _map_ of all states of convective organisation provides a new avenue for understanding the processes of cloud organisation, both in models and observations, and paves the way for better representation in simulations of Earth's climate.
The author acknowledges funding from the Paracon GENESIS (NERC NE/N013840/1) and EUREC\({}^{4}\)A-UK (NERC NE/S015868/1) projects. GOES-16 data is available on the Amazon Open Data Registry at [https://registry.opendata.aws/noaa-goes/](https://registry.opendata.aws/noaa-goes/). The Synoptic Radiative Fluxes and Clouds data sets (SYN1deg-1Hour_Terra-Aqua, Edition 4A, [https://doi.org/10.5067/TERR/1HOUR_L3.004A](https://doi.org/10.5067/TERR/1HOUR_L3.004A) and CER_GEO_E44_GOE16_NH_V01.2, [https://doi.org/10.5067/GOES16/CERES/GEO_ED4_NH_V](https://doi.org/10.5067/GOES16/CERES/GEO_ED4_NH_V)) are made available by the NASA CERES group. |
2309.10328 | Computational Approaches for App-to-App Retrieval and Design Consistency
Check | Extracting semantic representations from mobile user interfaces (UI) and
using the representations for designers' decision-making processes have shown
the potential to be effective computational design support tools. Current
approaches rely on machine learning models trained on small-sized mobile UI
datasets to extract semantic vectors and use screenshot-to-screenshot
comparison to retrieve similar-looking UIs given query screenshots. However,
the usability of these methods is limited because they are often not
open-sourced and have complex training pipelines for practitioners to follow,
and are unable to perform screenshot set-to-set (i.e., app-to-app) retrieval.
To this end, we (1) employ visual models trained with large web-scale images
and test whether they could extract a UI representation in a zero-shot way and
outperform existing specialized models, and (2) use mathematically founded
methods to enable app-to-app retrieval and design consistency analysis. Our
experiments show that our methods not only improve upon previous retrieval
models but also enable multiple new applications. | Seokhyeon Park, Wonjae Kim, Young-Ho Kim, Jinwook Seo | 2023-09-19T05:21:22Z | http://arxiv.org/abs/2309.10328v1 | # Computational Approaches for App-to-App Retrieval
###### Abstract
Extracting semantic representations from mobile user interfaces (UI) and using the representations for designers' decision-making processes have shown the potential to be effective computational design support tools. Current approaches rely on machine learning models trained on small-sized mobile UI datasets to extract semantic vectors and use screenshot-to-screenshot comparison to retrieve similar-looking UIs given query screenshots. However, the usability of these methods is limited because they are often not open-sourced and have complex training pipelines for practitioners to follow, and are unable to perform screenshot set-to-set (_i.e._, app-to-app) retrieval. To this end, we (1) employ visual models trained with large web-scale images and test whether they could extract a UI representation in a zero-shot way and outperform existing specialized models, and (2) use mathematically founded methods to enable app-to-app retrieval and design consistency analysis. Our experiments show that our methods not only improve upon previous retrieval models but also enable multiple new applications.
To evaluate our method, we gathered a new dataset from Mobbin2, a curated mobile UI screenshot hub, where each app includes 126 screenshots on average.
Footnote 2: [https://mobbin.com/](https://mobbin.com/)
Another crucial task enabled by the analysis of set-level UI representation is the automated validation of the UI design consistency, a domain that has yet to be extensively explored. Early studies (Mahajan and Shneiderman, 1997; Ivory and Hearst, 2001) along with recent ones (Yang et al., 2021; Burny and Vanderdonckt, 2022) attempted to predict whether queried UIs violated heuristic design guidelines. While valuable, these guidelines often prove insufficient as field designers prioritize the company's own guidelines. Consequently, academic guidelines are often dismissed as they might conflict with design intentions (Colusso et al., 2017), suggesting a promising direction for future research.
We posit that it is vital to employ data-driven design rules directly from designers' queries (_i.e._, pre-existing UIs in the app) without resorting to heuristics. By doing so, the _extracted_ semantics can be more closely aligned with the designers' intention. To achieve such a goal, we exploit the metric of _uniformity_ (Wang and Isola, 2020) to measure the consistency among UIs in a specific set (_i.e._, UIs in the app). With the uniformity metric, practitioners can easily measure the effect of newly added sets of UIs and compare different alternatives.
For both retrieval and consistency check tasks, it is required to acquire the _semantic representation_ of the graphical UI. Recent works (Li et al., 2021; Bunian et al., 2021) employ machine learning (ML) models to produce semantic vectors from UI screens, typically trained on small-sized datasets like Rico (Deka et al., 2017). However, advancements in foundation models (Bommasani et al., 2021) demonstrate that models trained on extensive web-scale datasets can surpass those trained on smaller, meticulously curated datasets, as evidenced by the success of GPT-3 (Brown et al., 2020) and CLIP (Radford et al., 2021).
Since it is still unclear whether models trained on uncurated web-crawled datasets can outperform specialized models trained for UI retrieval using well-curated UI screens, we conducted qualitative and quantitative studies to evaluate our model. As a result, we found that the model actually grasps the semantics of UIs in a zero-shot way and its semantic vectors result in more preferred UI retrieval compared to specialized models when tested with human crowd workers.
We summarize our contributions as follows:
* We extend screenshot-to-screenshot retrieval to app-to-app retrieval with an optimal transport method, enabling new applications of the data-driven UI design inspiration process.
* We rethink the task of design consistency check and enable data-driven consistency check with the metric of uniformity.
* We investigate the zero-shot applicability of visual foundation models for UI semantics through comprehensive experiments.
## 2 Background
In this section, we cover related work in the areas of (1) computational UI understanding and its adoption; (2) optimal transport for an app-to-app retrieval; and (3) uniformity for design consistency check.
### Computational UI understanding and its adoption
Multiple studies (Bonnardel, 1999; Wu et al., 2021; Lu et al., 2022; Herring et al., 2009) report that designers prefer to get inspiration from pre-existing design examples. Lee et al. (2010) showed that designers use various examples during the design process, and even showed that interfaces created through the process involving multiple references are preferred over those without references. To meet such needs, the HCI community has studied computational approaches for understanding user interfaces (Jiang et al., 2022). From traditional computer vision algorithms (Kumar et al., 2013; Ritchie et al., 2011) to deep learning models (Huang et al., 2019; Chen et al., 2020; Li et al., 2021; Bunian et al., 2021; Liu et al., 2018), the community tried to distill the semantics of given UI screenshots into usable representations.
Although beneficial _prima facie_, these tools frustrate practitioners for multiple reasons (Lu et al., 2022; Colusso et al., 2017). As Lu et al. (2022) pointed out, one of the most common pitfalls of these models is their lack of ability to understand an app's problem domains and functionalities. The problem arises as these models only allow queries with a single UI screenshot, and it is nearly impossible to infer the app's intention from a single screenshot. In this work, we seek ways to support querying and retrieving a similar _app_ instead of a single UI screenshot.
### Optimal transport for an app-to-app retrieval
Defining an app as a set of UI screenshots, we can derive an app's problem domains and functionalities from the relations between screenshots from different apps. For example, in Figure 1, if designers use the _set destination_ UI screenshot of the Uber app as a query, its cosine distances to the Google Maps and Lyft apps' set destination screenshots are equal. However, considering screens for the other functionalities of the Uber app (_e.g._, _confirm ride_, _reserve car_, ...) comprehensively, we can more reliably infer that Lyft is semantically closer to Uber than Google Maps. This process is revisited later in Figure 5.
Comparing multiple screenshots congruently can be viewed as the transportation of virtual masses from a query set of UIs to a target set of UIs. As deep-learning-based models make UIs into vectors on the n-dimensional Euclidean space \(\mathbb{R}^{n}\), we can imagine virtual masses distributed over the set of vectors of the app. Then, there exist multiple transportation plans that transport these masses to the target app's UI vectors, and each plan can be written as a doubly stochastic matrix as it should preserve the total amount of the masses. Considering both the transportation plan and the distance between each pair, we can compute the optimal transport (OT) plan (Villani, 2009; Peyre et al., 2019; Santambrogio, 2015), which minimizes the transportation cost. Such cost is a scalar value that describes the distance between two apps and can be used to enable app-to-app retrieval. In this work, we leverage the OT plan to efficiently compute the relationship between apps to boost up both the latency and quality of the retrieval task.
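A sketch of an app-to-app distance computed this way, using the POT (Python Optimal Transport) library. The cosine-distance cost, uniform per-screenshot masses, and the exact EMD solver are reasonable choices but are assumptions about the implementation; an entropic Sinkhorn solver could equally be used for speed.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def app_distance(app_a, app_b):
    """Optimal-transport distance between two apps, each an (n_i, d) array of
    L2-normalised UI screenshot embeddings."""
    cost = 1.0 - app_a @ app_b.T                  # pairwise cosine distances
    w_a = np.full(len(app_a), 1.0 / len(app_a))   # uniform mass on each screenshot
    w_b = np.full(len(app_b), 1.0 / len(app_b))
    return ot.emd2(w_a, w_b, cost)                # minimal transport cost (a scalar)

def normalise(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

uber = normalise(np.random.randn(120, 512))       # placeholder screenshot embeddings
lyft = normalise(np.random.randn(90, 512))
print(app_distance(uber, lyft))
```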
### Uniformity for design consistency check
Beyond inspiration, checking the overall consistency of screens for the same app (_e.g._, checking the consistency of a new design draft against the existing screens) is also one of the primary tasks for app designers (Wu et al., 2021). Traditionally, the HCI community has focused on producing general guidelines predominantly targeted at inclusiveness (_e.g._, accessibility) and on detecting violations of interface guidelines in an automated manner (Mahajan and Shneiderman, 1997; Ivory and Hearst, 2001; Yang et al., 2021; Burny and Vanderdonckt, 2022). Despite the usefulness of guidelines, it is not easy for industrial designers to comply with them because they might already have the company's own design guidelines, not to mention that they have to reinterpret and adapt them to their situation. In this work, we aim to address this gap by measuring the consistency of an app and treating a guideline violation as a decrement in consistency. Although our approach does not provide explicitly documented guidelines, it is generally applicable to all situations as long as a reference set of UI screens (_i.e._, an app) exists.
Given a set of vectors, we can consider the Gaussian potential kernel that maps a pair of vectors to a positive scalar. If we normalize vectors onto the (n-1)-dimensional unit hypersphere \(\mathcal{S}^{n-1}\), distributions of vectors minimizing the average pairwise Gaussian potential (i.e., uniformity) weakly converge to the uniform distribution (Wang and Isola, 2020). That is, if we use contrastive representation models such as CLIP (Radford et al., 2021), we can measure how uniformly the vectors in a set are distributed, as the contrastive loss contains a term minimizing uniformity. This property can directly be used for measuring the consistency of an app: lower uniformity means the app consists of similar UIs and is thus highly consistent.
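Concretely, the uniformity of a set of normalised vectors \(\{z_i\}\) can be written as \(\mathcal{L}_{\text{uniform}}=\log\mathbb{E}_{i\neq j}\,e^{-t\|z_i-z_j\|_2^2}\) (Wang and Isola, 2020). A minimal PyTorch sketch follows; the temperature \(t=2\) mirrors their paper, and the embedding dimension and set size are placeholders.

```python
import torch
import torch.nn.functional as F

def uniformity(embeddings, t=2.0):
    """Log of the mean pairwise Gaussian potential over L2-normalised vectors.
    More negative = vectors spread more uniformly (less consistent app);
    closer to zero = more similar UIs (more consistent app)."""
    z = F.normalize(embeddings, dim=1)
    sq_dists = torch.pdist(z, p=2).pow(2)
    return sq_dists.mul(-t).exp().mean().log()

app_embeddings = torch.randn(126, 512)   # one embedding per screenshot of an app (placeholder)
print(uniformity(app_embeddings).item())
```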
## 3 Methods
In this section, we describe our methodologies for collecting a mobile app screenshot dataset, calculating vector representations from screenshots, and the optimal transport to implement an app-to-app retrieval task.
### Dataset
A variety of mobile UI screenshot datasets have been proposed as the basis for data-driven interface studies. However, these datasets are mostly devised for screenshot-level analysis. For example, the widely used Rico dataset (Deka et al., 2017) is of limited use for app-level analysis, as more than 75% of its apps have fewer than 10 screens. Other datasets also have limitations for our analysis, such as limited public availability or size.
Figure 1: UI screenshots of various functionality from the Uber, Lyft, and Google Maps apps. The red number is a cosine distance between the representations of two screenshots. For _set destination_ UI screenshot, we can see the distances are the same between the Uber and Google Maps pair and the Uber and Lyft pair.
Figure 2: Number of applications in Mobbin dataset by app category and platform
To properly evaluate our methods for app-level UI analysis, we collected a new dataset from Mobbin, a UI curation service providing up-to-date app-wise screenshots. The dataset was obtained as of June 2022 and contains a total of 320 unique mobile apps; each app has its own sets of screenshots based on the platform (iOS or Android) and the app version (date), which makes a total of 783 sets of screenshots, 558 for iOS and 225 for Android. It has a total of 99,228 screenshots (62,315 for iOS and 36,913 for Android), and each app has an average of 126 screenshots, significantly larger than Rico in both the total screenshot count and the average number of screenshots per app. The Mobbin dataset consists of apps from 19 categories, and the number of apps for each category and platform is shown in Figure 2.
### UI Representations
To ease the use of machine learning models for practitioners, we employed OpenAI's CLIP model (Radford et al., 2021). CLIP consists of a visual encoder and a text encoder, both of which are trained on a huge dataset of image-text pairs by optimizing a contrastive loss. CLIP thus ensures that the encoded vectors of positive pairs (i.e., aligned image-text pairs) are closer together than those of negative pairs. As a result, images and texts with similar semantics are embedded near each other in the joint representation space. Radford et al. (2021) further used these encoders on unseen datasets like ImageNet (Krizhevsky et al., 2017) for classification without any fine-tuning of the model, by first embedding all the ImageNet labels and then retrieving the closest label for a given image query. Such application of the model has been named _zero-shot_ classification and is a core functionality of foundation models (Bommasani et al., 2021). We expected that CLIP could represent UI screenshots well, given how well it generalizes to images not seen during training.
As a sanity check, we first used DALL-E 2 (Ramesh et al., 2022), an image-generation AI that internally uses a noisy version of CLIP, to assess how well CLIP understands the semantics of UIs given appropriate text prompts associated with UI screenshots. When prompting DALL-E 2 with text such as _"A mobile user interface image of shopping application"_, we observed that the generated mobile UI images usually come with a mobile device _mockup_ rather than just by themselves (_e.g._, No Augmentation in Figure 3). Based on this observation, we augmented images with mockups as shown in Figure 3 to yield better representations from CLIP. We encoded all images with the publicly available CLIP ViT-L/14@336px model (see Footnote 3).
Footnote 3: [https://github.com/openai/CLIP](https://github.com/openai/CLIP)
**Uniformity of contrastive representation.** Since CLIP embeds images onto the unit hypersphere, we can compute the uniformity loss (Wang and Isola, 2020) of an app's screenshots. Let \(I_{e}\in\mathbb{R}^{n\times d}\) be the set of embedding vectors of the UI screenshots that make up an app, and let \(I_{e}^{(i)}\in\mathbb{R}^{d}\) be the embedding of the \(i\)-th screenshot in the set, with a norm of 1. Then, we can define the uniformity loss \(L_{u}(I_{e})\) of an app as follows:
\[L_{u}(I_{e})\triangleq\log\frac{1}{n^{2}}\sum_{i,j}G_{t}(I_{e}^{(i)},I_{e}^{(j)}) \tag{1}\]
\[=\log\frac{1}{n^{2}}\sum_{i,j}e^{2t\cdot I_{e}^{(i)\intercal}I_{e}^{(j)}-2t},\quad t>0, \tag{2}\]
where \(G_{t}\) is a Gaussian potential, and we set \(t=2\).
The uniformity loss \(L_{u}\) ranges from -4 to 0 for \(t=2\). Since the uniformity loss is the negative of the uniformity value, a lower uniformity loss means the set of UI representations is more uniformly distributed. Because semantically consistent UI screenshots are not uniformly distributed, a lower \(L_{u}\) also indicates lower consistency of the set, and conversely, a higher \(L_{u}\) indicates higher consistency of the screenshot set.
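As a concrete illustration, the uniformity loss of Equation (2) can be computed directly from an app's embedding matrix. The following is a minimal sketch (not the authors' code), assuming the embeddings are already L2-normalized:

```python
import numpy as np

def uniformity_loss(I_e: np.ndarray, t: float = 2.0) -> float:
    """Uniformity loss of Equation (2); I_e is an (n, d) matrix of unit-norm embeddings."""
    # Gaussian potential G_t for every pair (i, j); on the unit sphere,
    # exp(-t * ||x - y||^2) = exp(2t * x.y - 2t)
    G = np.exp(2.0 * t * (I_e @ I_e.T) - 2.0 * t)
    # Average over all n^2 pairs and take the log
    return float(np.log(G.mean()))
```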
### Optimal Transport
We employ optimal transport (OT) for app-to-app retrieval, where a transportation plan \(\mathbf{T}\in\mathbb{R}_{+}^{n_{a}\times n_{b}}\) is computed to optimize the alignment between two apps \(\mathbf{a}\) and \(\mathbf{b}\). We consider apps \(\mathbf{a}\) and \(\mathbf{b}\) as two discrete distributions \(\alpha\) and \(\beta\), formulated as \(\alpha=\sum_{i=1}^{n_{a}}\mathbf{n}_{i}\delta_{\mathbf{a}_{i}}\) and \(\beta=\sum_{j=1}^{n_{b}}\mathbf{m}_{j}\delta_{\mathbf{b}_{j}}\), where \(\mathbf{a}_{i}\) and \(\mathbf{b}_{j}\) are the embeddings of the screenshots making up apps \(\mathbf{a}\) and \(\mathbf{b}\), and \(\delta\) is the Dirac delta function centered on each screenshot vector. The marginal distributions \(\mathbf{n}\) and \(\mathbf{m}\) belong to the \(n_{a}\)- and \(n_{b}\)-dimensional simplexes (i.e., \(\sum_{i=1}^{n_{a}}\mathbf{n}_{i}=\sum_{j=1}^{n_{b}}\mathbf{m}_{j}=1\)). The OT distance between apps \(\mathbf{a}\) and \(\mathbf{b}\) is then defined as:
\[\mathcal{D}_{ot}(\mathbf{a},\mathbf{b})=\underset{\mathbf{T}\in\Pi(\mathbf{n},\mathbf{m})}{\text{min}}\sum_{i=1}^{n_{a}}\sum_{j=1}^{n_{b}}\mathbf{T}_{ij} \cdot c(\mathbf{a}_{i},\mathbf{b}_{j}), \tag{3}\]
where \(\Pi(\mathbf{n},\mathbf{m})=\{\mathbf{T}\in\mathbb{R}_{+}^{n_{a}\times n_{b}}| \mathbf{T}\mathbf{1}_{n_{b}}=\mathbf{n},\mathbf{T}^{\intercal}\mathbf{1}_{n_{ a}}=\mathbf{m}\}\) is a coupling of \(\mathbf{n}\) and \(\mathbf{m}\), and \(c(\cdot,\cdot)\) is the cosine distance.
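In practice, Equation (3) with a cosine ground cost can be solved exactly with an off-the-shelf OT solver. The following is a minimal sketch (not the authors' code) using the POT library, assuming unit-norm CLIP embeddings for each app:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (Flamary et al., 2021)

def app_distance(A: np.ndarray, B: np.ndarray) -> float:
    """D_ot between two apps given (n_a, d) and (n_b, d) unit-norm screenshot embeddings."""
    C = 1.0 - A @ B.T                          # pairwise cosine distances c(a_i, b_j)
    n = np.full(A.shape[0], 1.0 / A.shape[0])  # uniform marginal over app a's screenshots
    m = np.full(B.shape[0], 1.0 / B.shape[0])  # uniform marginal over app b's screenshots
    return float(ot.emd2(n, m, C))             # minimal transport cost of Equation (3)
```

The transportation plan itself, used later for interpreting which screenshots are matched across apps, can be obtained analogously with `ot.emd(n, m, C)`.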
Figure 3: Example of our image augmentation for mobile screenshots. We applied the appropriate mockup device according to the resolution and the platform of the screenshot.
## 4 Experiments
### Supremacy of Foundation Models
To assess how well the foundation model understands the UI semantics, we examined its ability to classify the app category of individual screenshots. Furthermore, we conducted a Mechanical Turk study, following the settings of Li et al. (2021), to compare the retrieval performance of the foundation model against the existing UI semantic representation models.
#### 4.1.1 Zero-shot App Category Classification
As mentioned earlier, the foundation model we used (CLIP) can perform image classification in a zero-shot manner without additional training. The zero-shot classification proceeds in the following order: first, we prepare appropriate text prompts that match each class (_e.g._, A photo of a \(\{\text{class}\}\)). Second, we extract text features for each prompt using the CLIP text encoder. Lastly, the top-k labels are predicted by the cosine similarity between their text features and the query image feature.
Each screenshot in the Mobbin dataset is classified by CLIP according to the app categories shown in Figure 2. Furthermore, to demonstrate the effectiveness of the data augmentation in Figure 3, we compared the classification accuracy between the original screenshots, the screenshots with an added mockup template, and the square-resized versions of them. There are 19 app categories in the Mobbin dataset, so the by-chance accuracy is 5.26% for top-1 and 26.31% for top-5 classification. The original CLIP paper uses seven text prompts 4 (_e.g._, _tap of a \(\{\text{category}\}\)._, _a photo of the [small/large] \(\{\text{category}\}\)._) for ImageNet zero-shot classification. Since those prompts are engineered for ImageNet's images, which are mainly photos of objects, we designed twelve prompts for UI screenshots, each of which includes _user interface_, _UI_, _mobile screen_, or _screenshot_ (_e.g._, _a screenshot of \(\{\text{category}\}\) app._, _A user interface of \(\{\text{category}\}\) application._, _UI of \(\{\text{category}\}\) app._).
Footnote 4: [https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb](https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb)
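The zero-shot procedure described above can be sketched with the public CLIP codebase as follows. This is a minimal illustration rather than the authors' exact pipeline; the category list, prompt templates, and image path are placeholders:

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14@336px", device=device)

categories = ["finance", "travel", "health and fitness"]  # 19 categories in the Mobbin dataset
templates = ["a screenshot of {} app.", "a user interface of {} application.", "UI of {} app."]

with torch.no_grad():
    # Build one text embedding per category by averaging over the prompt templates
    class_feats = []
    for c in categories:
        tokens = clip.tokenize([t.format(c) for t in templates]).to(device)
        emb = model.encode_text(tokens)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        class_feats.append(emb.mean(dim=0))
    class_feats = torch.stack(class_feats)
    class_feats = class_feats / class_feats.norm(dim=-1, keepdim=True)

    # Encode a mockup-augmented screenshot and rank categories by cosine similarity
    image = preprocess(Image.open("screenshot_with_mockup.png")).unsqueeze(0).to(device)
    img_feat = model.encode_image(image)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    scores = (img_feat @ class_feats.T).squeeze(0)
    top1 = categories[scores.argmax().item()]
```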
Table 1 shows the overall zero-shot app category classification results on the Mobbin dataset. Based on each augmentation result, we can confirm that CLIP's understanding of UI semantics improved when our augmentations, mockup templates and square-resizing, were applied. The same classification with the custom text prompts designed for UI screenshots performed better than with the prompts designed for natural photos (ImageNet). Without any additional training, CLIP's top-1 accuracy was approximately six times (with the naive CLIP setting) to eight times (with our prompt and augmentation engineering) better than chance on the original screenshots. This result indicates that CLIP has a certain level of understanding of user interfaces and application categories.
#### 4.1.2 Screenshot Retrieval Comparison
To find out whether the foundation model's retrieval results are preferred by humans or not, we conducted a comparative Mechanical Turk user study proposed by Li et al. (2021) and compared the results with Screen2Vec (Li et al., 2021). To carry out the experiment fairly, we used the Rico dataset for the retrieval experiment following the settings and baselines in Li et al. (2021). The models used for their Mechanical Turk experiments are as follows:
**Screen2Vec.** Screen2Vec uses Rico's screenshots and their metadata. The GUI components and their location information are encoded into a 768-dimensional vector. In addition, the application information is extracted using Sentence-BERT, resulting in another 768-dimensional vector; the two vectors are then concatenated into a 1536-dimensional GUI screen embedding vector.
**TextOnly.** This variant reproduces the models proposed in Li et al. (2020). It extracts text features of all texts in a screenshot using pre-trained Sentence-BERT (Reimers and Gurevych, 2019) and averages the features into a single 768-dimensional vector.
**LayoutOnly.** This variant reproduces the autoencoder model proposed in Deka et al. (2017). It embeds the layout of a screenshot into a 64-dimensional feature.
CLIP image features were extracted both with mockup templates and square-resized augmentation to quantitatively prove the effect of image augmentation in retrieval tasks. We randomly selected 50 screenshots from Enrico (Leiva et al., 2020), which is a curated subset of Rico. Using these screenshots as the queries, the top-5 screenshots were retrieved from the entire Rico dataset for each model. Since each of the five models (_i.e._, CLIP, CLIP+aug, Screen2Vec, TextOnly, LayoutOnly) retrieves five screenshots, a total of
\begin{table}
\begin{tabular}{l l l l} \hline \hline & & Top-1 (\(\uparrow\)) & Top-5 (\(\uparrow\)) \\ \hline \multirow{3}{*}{ImageNet Prompts} & Random & 5.26 & 26.31 \\ \cline{2-4} & No augmentation & 32.82 & 65.32 \\ & +Mockup & 34.33 & 66.10 \\ & +Mockup+Squared & 36.00 & 66.92 \\ \hline \multirow{3}{*}{Our Prompts} & No augmentation & 37.68 & 70.95 \\ & +Mockup & 39.67 & 74.28 \\ \cline{1-1} & +Mockup+Squared & **40.49** & **74.65** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Top-1 and top-5 zero-shot classification accuracies on the Mobbin dataset (19 classes) with our augmentations and prompts.
25 similar images were obtained for each query screenshot. We removed duplicates from the retrieved screenshots, leaving fewer than 25 for some queries and resulting in a total of 999 pairs of screens (query and retrieved screen), an average of 19.98 per query screen.
Screen pair similarity was measured using the criteria from Li et al. (2021), including app similarity (likeness between the two apps), screen type similarity (parity in the roles of the two screens), and content similarity (congruity of the displayed contents). For each task, crowd workers were asked to rate, on a five-point Likert scale, the similarity between five query screenshots and their corresponding retrieved screenshots, and five different workers rated each pair of screenshots. We paid two dollars for each batch of five source screens, and a batch took an average of 28 minutes.
Table 2 reports the mean similarity value measured by the crowd worker for each model. The study revealed a worker preference for screenshots retrieved via CLIP image features, outperforming the three comparative models (Screen2Vec, TextOnly, LayoutOnly). Moreover, the utilization of mockup templates and square-resize augmentation enhanced the assessment of CLIP. Notably, the Mann-Whitney U test underscored the superior performance of the two CLIP models with statistical significance (\(p<0.0001\)).
### App-to-App Retrieval
To extend data-driven UI inspiration beyond simple single-screenshot retrieval, we introduce optimal transport to implement app-to-app retrieval. Figure 5 illustrates the detailed process of app-to-app retrieval using optimal transport by example. First, we extract the semantic representation of the augmented screenshots in each application using the CLIP encoder used throughout the paper. Using these features, we obtain the pairwise cosine distance matrix between the UI semantic vector sets of the two applications. We assign uniform mass over the screenshots, which sets the initial \(\mathbf{n}\) and \(\mathbf{m}\) from Section 3.3 to uniform distributions. Then, using the pairwise distance matrix and the distributions, we solve the optimal transport problem with POT (Flamary et al., 2021) as described in Section 3.3 to obtain an optimal transportation plan. Finally, we calculate the 1-Wasserstein distance (_i.e._, \(\mathcal{D}_{ot}\)) between the application pair by element-wise multiplying the distance matrix with the transportation plan matrix and summing the elements, following Equation (3). To assess app-to-app retrieval based on \(\mathcal{D}_{ot}\), we
\begin{table}
\begin{tabular}{l l l} \hline \hline & & Score (\(\uparrow\)) \\ \hline Li et al. (2021) & Screen2Vec & 2.92\(\pm\)1.36 \\ Li et al. (2020) & TextOnly & 3.22\(\pm\)1.30 \\ Deka et al. (2017) & LayoutOnly & 3.00\(\pm\)1.33 \\ \hline \multirow{2}{*}{Ours} & No augmentation & 3.50\(\pm\)1.16 \\ & +Mockup+Squared & **3.57\(\pm\)1.08** \\ \hline \hline \end{tabular}
\end{table}
Table 2: The mean similarity score and its standard deviation (\(N=1250\) each) were measured by Mechanical Turk workers. A higher score means more preferred retrieval results. The Mann-Whitney U test shows our models are statistically significantly better than Screen2Vec and other models (\(p<0.0001\)).
Figure 4: Distribution of pairwise distance of applications on three criteria: (left) app name, (middle) app category, and (right) platform (operating system).
Figure 5: (Left) The list of retrieved apps for given query apps across various categories; retrieved apps are sorted with their \(\mathcal{D}_{ot}\) to their query. (Right) The detailed process of getting \(\mathcal{D}_{ot}\), as described in Section 3.3.
perform a quantitative analysis of \(\mathcal{D}_{ot}\) as a distance between applications and conducted a case study on the possible applications of the transportation matrix.
\(\mathcal{D}_{ot}\) as an app-to-app distanceWe calculated \(\mathcal{D}_{ot}\) for 306,153 pairs of apps for a total of 783 apps of 319 unique app names5 in the Mobbin dataset. The calculation took about 32 minutes to process all pairs, which is about 158 app pairs per second using a machine with a single NVIDIA Titan RTX GPU. Using \(\mathcal{D}_{ot}\) of the pairs, we analyze the statistics of the Mobbin dataset in three criteria: each app's name, category, and platform (iOS or Android). Figure 4 shows distance distribution by the app's name, category, and platform, respectively. There are 944 pairs sharing an app name but differing in version (platform or release date), 21,141 pairs with identical categories, and 180,603 pairs on the same platform Apps with the same app name, category, and platform are displayed in blue, whereas apps with a different app name, category, and platform are displayed in orange. Across all three cases, the distance was significantly shorter when the app pairs shared the same app name, as the overall composition and semantics of screenshots in the app do not change much for different releases of the app. This outcome aligns with the designers' intent of maintaining app identity through updates or cross-platform deployment, affirming the suitability of \(\mathcal{D}_{ot}\) for modeling app distance While category and platform act as group identifiers and are thus less unique than an app's name, our \(\mathcal{D}_{ot}\) model effectively demonstrated a shorter \(\mathcal{D}_{ot}\) for apps sharing the same category/platform compared to those differing in these criteria. Notably, \(\mathcal{D}_{ot}\) also revealed that platform information, representing a larger group of apps, is more general than category information, evidenced by a smaller \(\mathcal{D}_{ot}\) distribution gap for the platform criterion. We want to note that it is a very significant result since the figure is drawn with more than 300,000 pairs of pairs.
Footnote 5: We use the term _name_ to indicate a universally unique identifier (UUID); thus, different apps with the same name are treated individually in this paper.
Interpretability of an optimal transportation planBesides the dataset-wide quantitative analysis, we highlight a few examples of retrieved apps for given queries in Figure 5. The figure demonstrates the method's efficacy by show-casing retrieved apps, from various categories like Airbnb, Spotify, Nike, and Uber, that bear similar semantics. Impressively, these results were acquired using merely app screenshots, sans any metadata like app category or component hierarchy. The transport plan, or optimal transport matrix, describes how to optimally move masses when there are two distributions. The transport plan, or optimal transport matrix, describes the optimal mass movement between two distributions, enabling the identification of similar screenshots between two apps, as they will exhibit similar vector representations.
### Design Consistency Check
As described in Section 3.2, we use \(L_{u}\) to measure the consistency of an app. We calculated \(L_{u}\) for every app in the dataset, Figure 7 shows statistics of \(L_{u}\) grouped by the app categories. \(L_{u}\) in the dataset ranges from \(-1.41\) to \(-0.66\). Categories such as _entertainment, education, and social networking_ turned out to have a lower \(L_{u}\), indicative of inconsistent and diverse screenshots, owing to their UI screens frequently consisting of various media types. On the other hand, the latter categories generally consist of their UI with icons and symbols rather than media, thus resulting in high \(L_{u}\).
To test whether \(L_{u}\) could serve as a metric for data-driven design consistency check discussed in Section 2.3, we designed two studies on the Mobbin dataset. The first, simulating a hypothetical scenario, involved designers assessing the alignment of new UI screen drafts with existing screens.
Figure 6: Our experiment conditions and sample result (Netflix app) of validating uniformity loss \(\Delta L_{u}\) as design consistency check metric.
Figure 7: Distribution of \(L_{u}\) by app category, sorted by median
This process, labeled as _Random Change_ in Figure 6, consisted of substituting \(N\) images per app with random ones from other apps in the dataset and subsequently calculating \(L_{u}\). Testing different \(N\) values facilitated an examination of the robustness of our uniformity metric in relation to the set size intended for inspection by the designer.
The difference between two \(L_{u}\)s (\(\Delta L_{u}\)) is shown in the first figure of Figure 8. As shown in the figure, \(\Delta L_{u}\) decreases as the number of randomly changed images increases. Through the t-test, the drop of \(L_{u}\) is statistically significant regardless of \(N\) (\(p<0.0001\)).
The second study is to test whether this drop occurs within the same design semantics, as the metric would be meaningless if it drops regardless of the semantics of the changed images. To implement the study, we first set aside five UI screenshot images from each app6 in the dataset. Subsequently, we replaced \(N\) images for each app with the images we had set aside, as opposed to images from random apps as in the previous study. The second figure of Figure 8 indicates no changes in \(\Delta L_{u}\) in this scenario. A t-test confirmed that \(L_{u}\) remained constant irrespective of \(N\) (\(p>0.28\)). These studies collectively underscore \(L_{u}\) as a robust proxy for assessing design consistency.
Footnote 6: For brevity, we also used this setting that set aside five UI screenshots while measuring \(L_{u}\) of the random screenshot change experiment.
## 5 Limitations and Future Work
Although we explored the conceptual applications of app-to-app retrieval and in-app design consistency based on various reports on designers' preferences (Colusso et al., 2017; Wu et al., 2021; Lu et al., 2022), it is still required to prove their efficacy in the real environment. As such studies require conducting formative studies and in-depth user studies, making it out of the scope of our paper proposing novel applications of UI representation models, we leave it for future work.
Optimal transport allows various initial marginal distributions, assigning more _mass_ to one screenshot than another. Since we assumed the condition with no information on which screenshot is more important than another in the app, we used a uniform distribution throughout the paper. A uniform distribution is a good choice to admit the maximum entropy probability distribution (Bernardo and Smith, 2009), but assigning different initial marginal distributions could enable different applications such as focusing more on certain screenshots or neglecting user-selected UI components during \(\mathcal{D}_{ot}\) computation. The manipulation of the initial marginal distribution remains for future work.
Another possible future work could be to extend the dataset to group screenshots with their feature or functionality rather than an app. Although we only tested primitive app groups of screenshots, since an app is made up of at least a few dozen screenshots, grouping screenshots with other criteria would be much easier to make a dataset since it requires fewer screenshots per group and yields different retrieval opportunities.
Regarding design consistency, \(L_{u}\) is innately a relative metric, and hence, cannot evaluate design quality in absolute terms. Although it was not our focus to build absolute guidelines, future work could involve defining a golden standard for specific design principles and utilizing \(\Delta L_{u}\) to quantify the deviation of the query set from this standard.
## 6 Conclusion
In this paper, we envisioned the applications of UI representation models including app-to-app retrieval and design consistency check. As the first stage of this research, we investigated the supremacy of zero-shot adaptation of a foundation model, CLIP, and how its representation appeals to humans by conducting a Mechanical Turk study on a single-screenshot retrieval task. Using the CLIP UI representation, we devised (1) \(\mathcal{D}_{ot}\) that measures the distance between apps by computing the optimal transportation plan between their UI screenshots, and (2) \(L_{u}\) that measures the semantic design consistency of an app by computing the pairwise Gaussian potentials between the UI screenshots of the app. Through multiple proofs of concept and analysis on our newly collected Mobbin dataset, we showed that both \(\mathcal{D}_{ot}\) and \(L_{u}\) are valid metrics for app-to-app retrieval and in-app design consistency check, respectively. We would like to highlight that our proposed methods can be executed on a personal laptop without expensive equipment such as GPU clusters while opening numerous opportunities for computational UI engineering. Going forward, we are excited to continue our endeavors toward building interfaces for designers that are equipped with our computational approaches.
Figure 8: The amount of change of uniformity loss \(\Delta L_{u}\) according to our experimental conditions. (Top) \(\Delta L_{u}\) when we change the images in the app randomly, (Bottom) \(\Delta L_{u}\) when we change the images in the app with the reserved screenshots from the same app |
2309.17447 | A Large Language Model Approach to Educational Survey Feedback Analysis | This paper assesses the potential for the large language models (LLMs) GPT-4
and GPT-3.5 to aid in deriving insight from education feedback surveys.
Exploration of LLM use cases in education has focused on teaching and learning,
with less exploration of capabilities in education feedback analysis. Survey
analysis in education involves goals such as finding gaps in curricula or
evaluating teachers, often requiring time-consuming manual processing of
textual responses. LLMs have the potential to provide a flexible means of
achieving these goals without specialized machine learning models or
fine-tuning. We demonstrate a versatile approach to such goals by treating them
as sequences of natural language processing (NLP) tasks including
classification (multi-label, multi-class, and binary), extraction, thematic
analysis, and sentiment analysis, each performed by LLM. We apply these
workflows to a real-world dataset of 2500 end-of-course survey comments from
biomedical science courses, and evaluate a zero-shot approach (i.e., requiring
no examples or labeled training data) across all tasks, reflecting education
settings, where labeled data is often scarce. By applying effective prompting
practices, we achieve human-level performance on multiple tasks with GPT-4,
enabling workflows necessary to achieve typical goals. We also show the
potential of inspecting LLMs' chain-of-thought (CoT) reasoning for providing
insight that may foster confidence in practice. Moreover, this study features
development of a versatile set of classification categories, suitable for
various course types (online, hybrid, or in-person) and amenable to
customization. Our results suggest that LLMs can be used to derive a range of
insights from survey text. | Michael J. Parker, Caitlin Anderson, Claire Stone, YeaRim Oh | 2023-09-29T17:57:23Z | http://arxiv.org/abs/2309.17447v2 | # A Large Language Model Approach to Educational Survey Feedback Analysis
###### Abstract
This paper assesses the potential for the large language models (LLMs) GPT-4 and GPT-3.5 to aid in deriving insight from education feedback surveys. Exploration of LLM use cases in education has focused on teaching and learning, with less exploration of capabilities in education feedback analysis. Survey analysis in education involves goals such as finding gaps in curricula or evaluating teachers, often requiring time-consuming manual processing of textual responses. LLMs have the potential to provide a flexible means of achieving these goals without specialized machine learning models or fine-tuning. We demonstrate a versatile approach to such goals by treating them as sequences of natural language processing (NLP) tasks including classification (multi-label, multi-class, and binary), extraction, thematic analysis, and sentiment analysis, each performed by LLM. We apply these workflows to a real-world dataset of 2500 end-of-course survey comments from biomedical science courses, and evaluate a zero-shot approach (i.e., requiring no examples or labeled training data) across all tasks, reflecting education settings, where labeled data is often scarce. By applying effective prompting practices, we achieve human-level performance on multiple tasks with GPT-4, enabling workflows necessary to achieve typical goals. We also show the potential of inspecting LLMs' chain-of-thought (CoT) reasoning for providing insight that may foster confidence in practice. Moreover, this study features development of a versatile set of classification categories, suitable for various course types (online, hybrid, or in-person) and amenable to customization. Our results suggest that LLMs can be used to derive a range of insights from survey text.
**Keywords:** Large Language Models (LLMs), survey analysis, GPT-4, GPT-3.5, ChatGPT, qualitative methodology
## 1 Introduction
Education feedback, much of it in the form of unstructured text comments from learners as part of survey responses, is considered an important aspect of course evaluation as well as facilitates course improvement [1]. This holds true regardless of whether a course is online, in-person, or in a blended or hybrid format [2, 3, 4].
During the COVID-19 pandemic, many educators shifted their courses online. This change necessitated updating knowledge of course design and teaching to incorporate rules of learning and engagement that share principles with those of in-person courses, but that also differ to some extent based on changes in the medium, types of course resources, teaching modality, and methods of course delivery. Even with a widespread return to in-person or hybrid learning, many of the tools and media from online teaching have persisted, such that learning about how to best design, teach, and deliver courses is a continual process, with a strong need for understanding how to make courses that have high learning value and are well-received.
In this context, collecting course feedback plays a critical role not only for educators, but also for course designers, educational administrators, and course providers (for example, organizations that create online courses for widespread use). Each of these roles has a set of specific questions and goals against which they seek to evaluate courses using results of feedback tools like surveys. For example, educators and course designers would like to know what content and modalities resonated with students or were received poorly, such that a course can be improved in a more rapid iterative cycle. For those involved in course delivery, understanding how the scheduling, timing, cost, ease of access, and other such factors affected the student experience can provide valuable information for process improvement. At a higher level, educational administrators often seek to evaluate their faculty as teachers, feeding into aspects such as teaching awards, promotion, or determination of the need for faculty development. With an eye toward long-term planning, course providers try to identify gaps in course content and formats to maximize learning value and engagement.
### Types of tasks associated with analysis unstructured survey data
Using survey textual responses to explore these types of high level goals of stakeholders requires chaining together multiple NLP tasks in the form of workflows. Such workflows can be implemented with a small set of natural language processing (NLP) tasks, including classification, extraction, and sentiment analysis, that form composable building blocks for similar workflows.
Classification of comments may be single-label (binary or multi-class, the latter involving classifying into one of a set of tags) or multi-label (classification of each comment with one or more of a set of tags), and the tags (also called labels, classes, or categories) are frequently custom-chosen, reflecting the goals of a particular analysis. Often those doing the analysis have a specific objective or goal focus that they are investigating (e.g. suggestions for improvement), and text extraction is a useful technique for this purpose.
Sentiment analysis can be used to lend nuance and insight to the quantitative ratings that are gathered through Likert scales or "star" ratings.
A high-level breakdown of objectives and NLP tasks is shown in Table 1.
### Previous approaches and challenges in analyzing education feedback
Despite the high motivation to learn from education feedback, significant challenges remain. Prior to recent developments in machine learning, systematic analysis of feedback comments, needed for forming data-backed conclusions, required manual (human) annotation and classification or extraction of key passages, tasks which can be time-consuming, costly, and significantly lengthen the course improvement cycle. This type of manual analysis is still the primary approach in many settings. In more specialized courses or course platforms, those familiar with the use case (domain experts, in the form of course educators or those involved in other ways in course delivery) are needed for annotation or extraction, making the process even more difficult. In courses with many students and hence a large volume of feedback, common for both in-person courses as well as online courses, the time and/or cost of human annotation of feedback can be prohibitive.
Employing crowdworkers, for example via Amazon's Mechanical Turk platform, reduces the cost and time of manual annotation. However, the quality of results may vary, particularly in cases where some degree of domain expertise is needed. Additionally, a recent study [5] provided evidence that a substantial fraction of crowdworkers used generative AI (LLMs) to assist with a summarization task, leading to a mix of results from humans and LLMs and raising doubt that crowdworkers will continue to be a reliable source of human annotations.
For feature extraction from text, techniques like TF-IDF, and Word2Vec have been applied for short text classification and sentiment analysis [6, 7, 8, 9]. Topic modeling using latent semantic analysis or latent Dirichlet allocation has been useful for discovering
\begin{table}
\begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline
**Objective** & **Question** & **NLP Tasks** & **Notes** \\ \hline High-level initial analysis & What did students say (and how did they feel about the course)? & Multi-label classification, inductive thematic analysis, sentiment analysis & Depends on whether analysis is top-down (using pre-determined labels or areas of interest) or bottom-up (deriving themes from comments) \\ Answering a focused question & What did students say about x (particular focus)? & Extraction & Results are amenable to multi-class classification or inductive thematic analysis \\ Quantification of textual survey responses & How many comments were there on each aspect? & Classification (binary, multi-label, or multi-class) & Helps the person performing analysis find themes of greater importance \\ \hline \end{tabular}
\end{table}
Table 1: NLP tasks that may be used for analysis of textual survey responses.
themes and trends in collections of student feedback [10; 11; 12]. For evaluating text, sentiment analysis techniques like CNN and Bi-LSTM models have been used to classify student evaluations [13; 14]. Overall, these techniques have shown utility for gaining insights from student feedback.
With the advent of recent machine learning (ML) techniques, great strides have been made in dealing with unstructured text. BERT (Bidirectional Encoder Representations from Transformers, [7]) and related models allow for transformation of text passages into numerical formats (high dimensional dense vectors called embeddings) that are then amenable to classification via conventional ML methods such as logistic regression. Good results have been achieved in certain contexts using such models [15]. Despite such advances, challenges remain that present obstacles to routine use of such models in practice.
Specialized ML models often require a "fine-tuning" process using labeled data (data that human annotators have classified) to best adapt to a specific use case. Depending on the amount of human labeling needed, this aspect may provide a stumbling block based on the time and effort involved. Although there are many examples of labeled datasets [16; 17; 18; 19], real-world use cases often rely on custom labels for which there is no pre-existing labeled data for fine-tuning. Even supposing such fine-tuning takes place, there are additional barriers to practical use of this technology.
One such barrier is that multiple distinct AI models may be needed, depending on the range of tasks. The model that is suitable for classification may not be the same one that performs text extraction, and each model may need its own fine-tuning or adaptation.
Even for a core task like classification, there are a number of challenges. Difficulty of classification increases in situations where multiple labels may concurrently be assigned to the same survey comment, often leading to a degree of inter-rater disagreement even among highly-skilled human annotators who have high familiarity with the domain. Other challenges include data imbalance, multi-topic comments, and domain-specific terminology [20; 21].
In classifying unstructured textual feedback, data imbalance exists when the labels chosen are not attributable in equal proportions across a dataset; some labels may be comparatively rare. If there are few examples of particular labels, this scarcity can create difficulties in training machine learning models that classify new comments. If human labeling is being used as ground truth, rarity of certain labels may require labeling a larger set of feedback to enable training an ML classifier.
Another challenge is that of multi-topic comments. Depending on how feedback is collected and how open-ended the survey questions are, students may provide feedback that encompasses multiple topics (for example, "I found the quizzes incredibly difficult, but the teacher was great and I felt I got what I paid for. If I had had more time to complete the course, this would have been even better."). Such multi-topic comments present a challenge for ML techniques based on embeddings (dense vector representations) derived from models such as BERT (or BERT related, such as Sentence-BERT, [22]), given that the embedding of a comment is related to the comment's semantic meaning. A comment with multiple topics may have an embedding that doesn't adequately localize to the semantic
"neighborhood" of any of the individual topics associated with that comment, decreasing the performance of downstream classifiers. Use of context-specific, specialized terms in the text data, known as domain-specific language, can also decrease the performance of ML techniques.
Deep learning models like BERT that perform feature selection by creating embeddings have been pre-trained on a large corpus of text, usually publicly accessible and mostly from the internet. Depending on the pre-training, terms specific to a specialized domain such as immunology or biomedical engineering may not have been seen during training, or seen only in very limited quantities. In those cases, the pre-trained model cannot adequately capture the semantics of such terms via its embeddings, again impacting the performance of downstream applications such as classification and clustering that may rely on those embeddings.
In sentiment analysis, pre-trained sentiment analysis models may not adapt well to settings where it is important to take into account the context. For example, in analyzing comments from biomedical science courses that cover cancer as a topic, learners' comments may include the words 'cancer' or 'oncology' or 'tumor', simply as referring to parts of the curriculum. These comments may end up being classified as negative even by a state-of-the-art existing model, given that discussions of cancers and tumors in many training datasets (often from internet settings) may be in the context of negative emotions being expressed.
Finally, a common challenge is that of lack of interpretability of results coming from specialized machine learning models. Although there has been significant work on approaches like visualizing factors that contribute to a neural network-based model's predictions, complex models may still be viewed as "black boxes" by downstream users in areas like education, with this perception potentially inhibiting usage.
### LLM Background and Related Research
Education feedback analysis seeks to extract insights from open-ended written responses, such as student surveys or teacher evaluations, and automated techniques can be seen as a particular application of the broader field of natural language processing (NLP). The introduction of transformer-based neural network architectures in 2017 led to an explosion of new AI models for NLP with increasing capabilities. BERT (mentioned above) was developed shortly thereafter (2018), with multiple related models (e.g., RoBERTa) being further developed over the last five years, with effectiveness at various NLP tasks that often exceeded those of pre-transformer models. Such models have been applied to a wide range of tasks, both with fine-tuning and without.
Large language models are neural networks based on transformer architectures, including not only those in the BERT lineage but also other models such as GPT-2, GPT-3, T5, and many others, with tremendous scale in terms of the number of model parameters (billions and sometimes trillions) and the internet scale volume of text on which they are trained (billions or even trillions of tokens, with tokens being numerical representations of words or parts of words). BERT (the large variant) has approximately 345 million parameters and was trained on about 3.3 billion words; in comparison, GPT-3 has 175 billion
parameters and was trained on approximately 500 billion tokens (approximately 375 billion words). Many of the newer models have generative AI capabilities, with the ability to do tasks like summarization, translation, and generation of high-quality text output. As their scale has grown, the range of tasks of which they have shown to be capable has increased, along with a level of performance that has surprised many. With the recent popularization and wider spread availability of LLMs, in part due to ChatGPT, with its underlying GPT-3.5 and GPT-4 models, as well as other LLMs like Claude (Anthropic), Command (Cohere), Bard (Google), LLaMA (Meta), and a range of open-source models, interest has grown in applying these to use cases like analysis of short text comments such as are seen in Tweets [23], customer feedback, and education survey feedback [24, 25].
Multiple recent studies have examined using ChatGPT for text annotation and classification tasks, with mixed results based on variations in prompts, datasets, parameters, and complexity of tasks. Reiss [26] focused on sensitivity to the prompts and parameters used in classification, in the context of performing classification on a German dataset. Pangakis et al. [27] argues that researchers using LLMs for annotation must validate against human annotation to show that LLMs are effective for particular tasks and types of datasets, given that there is variation in the quality of prompts, the complexity of the data, and the difficulty of the tasks. Other studies ([28, 29]) demonstrate the potential for ChatGPT to perform text annotation or provide natural language explanations at levels approaching or matching those of humans.
### Research Significance and Objectives
Exploration of the use cases for LLMs is in its relative infancy, and education is an important area of focus for LLM applications. A primary focus of recent related research has been on direct use of LLMs in teaching and learning, with less exploration of the capabilities in education feedback analysis. Education feedback surveys are a valuable source of information for evaluation and iterative improvement of course experiences, but remain difficult to process in a data-driven fashion, in part due to the manual labor associated with conventional analysis of the unstructured (text) responses component. Machine-learning approaches have shown promise in aiding analysis, but often require conditions that make their use less feasible to most educators, such as the need for fine-tuning and use of separate models for the natural language processing (NLP) tasks involved.
In this context, we:
* demonstrate a versatile approach that uses an LLM to perform multiple unstructured text analysis tasks on survey responses, including multi-label classification, multi-class classification, binary classification, extraction, inductive thematic analysis, and sentiment analysis.
* evaluate performance in a zero-shot approach across all tasks, a scenario that mimics many real-world practical use cases in the education setting.
* show the potential of LLMs to offer a form of insight into the trajectory ("reasoning") of how they arrive at their answers, providing a degree of transparency that may help foster confidence in real world usage.
As part of the evaluation process, we also developed a set of classification categories that can be applied to a variety of course types (online, hybrid, or in-person), and which are amenable to customization depending on specific requirements.
## 2 Methodology
### Survey data used for evaluation
2500 survey responses were selected at random from a larger set of survey responses received as end-of-course feedback on a range of biomedical science courses, including courses on genetics, immunology, and pharmacology. Additional survey comments were chosen as a development set that could be used for LLM prompt tuning. The courses all use a single, uniform end-of-course survey. In addition to quantitative ratings (e.g., net promoter scores) and optional demographic data, the survey included open-ended text responses to four questions/directives:
* "Please describe the best parts of this course." [Q1]
* "What parts of the experience enhanced your learning of the concepts most?" [Q2]
* "What can we do to improve this course?" [Q3]
* "Please provide any further suggestions, comments, or ideas you have for this series." [Q4]
On average, learners answered approximately two of the four questions. The shortest responses containing content were one word, and the longest responses were several paragraphs. Example survey responses are shown in Table 2.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Q1 responses** & **Q2 responses** & **Q3 responses** & **Q4 responses** \\ \hline βThe teachers they are incredible and their & βthe whole concept, the short videos with the & βImplement more & βI really enjoyed the \\ fascination about this & explanations written & checkpoints that review & course and learned a lot \\ topic make it more & down and then the & previous material & of applicable information \\ interesting.β & interactive modulesβ & throughout the course.β & for my job. I would have \\ & & interactive modulesβ & like a little more time \\ & & & between new releases of \\ & & & information. It would \\ & & & also be nice to have a \\ & & & live question/answer \\ & & & session.β \\ βthe structure of this & βThe quizzes after each & βThe course was & βA visit to meet the \\ course is just great. & module made me think & fantastic and & tutors and a summary \\ however i would love to & about the material I just & informative. However, I & discussion on location \\ have the chance to & learned.β & had to rewatch the & would be fabulous - I am \\ repeat all the modules as & & videos several times to & aware not many people \\ i am from a very & & write down everything & would make it, but a \\ different background.β & & that is said. I learn best & thought nonetheless.β \\ & & by looking at the words. & The videos should come \\ & & with either a transcript & or written words or some \\ & & sort that convey the \\ & & same informationβ & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Example actual survey responses.
Survey responses were collected via Qualtrics, with minor processing with Pandas 2.0.1 for elimination of leading and trailing white space and automated removal of responses with no content (NA or None variants).
Survey responses were inspected manually and via named entity recognition (NER), running locally, to ensure that no private or sensitive information was transmitted to publicly available LLMs.
### Development of course tagging system
The authors spent considerable time developing and testing a set of labels that would work well not only for online courses like those that the survey responses in this paper were a part of, but also other types of educational offerings. The label development process started with a much larger set of labels (71 total), based on the goals of those involved in course production and delivery. Given that each survey response could cover multiple topics, the task was to assign as many labels to each survey response as were applicable (a multi-label classification task). The four authors (all of whom have been involved in either course development or delivery for multiple years and can be considered domain experts in the course resources and processes) each labeled a test set of 2000 survey responses (from the same educational program overall, but distinct from the set of 2500 comments ultimately labeled), with resulting relatively low inter-rater agreement. Based on this experiment, tag categories were combined to arrive at a much smaller set of generalizable tags (see Table 3). In addition, best practices were followed to ensure generalizability [1, 30, 31, 32].
A one to three sentence description of each tag was created to provide guidance so that tags could be applied appropriately in testing rounds. The intent is also that others can adapt these same tags by modifying the description portion for their own purposes. The same descriptions that served as context for the human annotators were also used in the prompts for the LLMs in the multi-label classification task as a form of deductive thematic analysis.
We then iteratively tested the new, much smaller set of tags on several sets of 100 survey responses, with all four authors independently tagging the same entries, followed by examination of inter-rater agreement. This yielded good results. With this set of tags, we then independently labeled 2500 survey responses, and evaluated inter-rater agreement using Jaccard similarity coefficient between pairs of raters and averaged across all pairs of raters.
### LLM processing
All LLM tasks were performed via calls to the OpenAI API endpoints. GPT-3.5 (model: gpt-3.5-turbo-0301) and GPT-4 (model: gpt-4-0314) were used for the multi-label classification task; all other tasks described used GPT-3.5 (model: gpt-3.5-turbo-0613) and GPT-4 (model: gpt-4-0613). All tests used a temperature of 0 with other parameters set to their default values, other than the functions parameter and the function_call parameter, which were set to specify the applicable function schema and the function name where applicable. Tests were run with calls to the models' asynchronous endpoints, in
order to run many model calls in parallel for suitable tasks (e.g., classification of individual survey responses). "Function calling", a capability specific to these models, was used to generate the JSON structured output for all tasks. Comments were run in batches that fit within the rate limits (tokens per minute) of each model. Prompts used (see Appendix A) involve function schemas, which count in the context limits, as well as the system and user messages to the model.
For the LLM approach to the multi-label classification task, the multi-class classification task for extracted excerpts, the binary classification task, and the sentiment analysis task, zero-shot chain-of-thought (CoT) prompting was used (where a model is prompted to reason step-by-step but without examples of such reasoning provided) [33, 34]. In addition to use of CoT enhancing the accuracy of the model output, the reasoning was included in the output to allow for error analysis and prompt tuning, as well as to allow inspection of the model's reasoning, something potentially helpful for those using the results in practice. For sentiment analysis, we had the LLM output a sentiment classification based on the possible categories 'negative','slightly negative', 'neutral','slightly positive', and 'positive', along with its reasoning.
For the LLM approach to inductive thematic analysis of survey responses, a two-step approach was used. The first step involved prompting the LLM to derive themes representing feedback from multiple students and summarize the themes. This step was run in parallel on batches of survey responses that would fit within the model's context window. The second step involved prompting the LLM to coalesce derived themes based
\begin{table}
\begin{tabular}{l l} \hline
**Tag** & **Description** \\ \hline course logistics and fit & course delivery (policy, support), cost, difficulty, time commitment, grading, credit, schedule, user fit, access, background (e.g., prereqs and appropriateness of course level). \\ curriculum & course content, curriculum, specific topics, course structure. This focuses on the content and the pedagogical structure of the content, including flow and organization. This also includes applied material such as clinical cases and case studies. Includes references to pre-recorded discussions between experts or between a doctor and a patient. Includes specific suggestions for additional courses or content. \\ teaching modality & video, visual, interactive, animation, step-by-step, deep dive, background builder (the format rather than the content/topic). \\ teaching & instructors, quality of teaching and explanations \\ assessment & quizzes, exams \\ resources & note taking tools, study guides, notepads, readings. Includes other potential static resources like downloadable video transcripts. \\ peer and teacher interaction & includes chances for the student to interact with another person in the course (teacher or student). This includes discussion forums, teacher-student or student-student interactions. Includes requests for live sessions with teachers or live office hours. \\ other & catch-all for the rarer aspects that weβll encounter and also the βnaβ, βthank youβ, etc. comments that donβt really belong in the above bins. Also for sufficiently general comments like βall the course was terrificβ that canβt be narrowed down to one of the other categories. \\ \hline \end{tabular}
\end{table}
Table 3: Final tags and descriptions.
on similarity to arrive at a final set of themes and descriptions. These steps could be considered analogous to part of the human inductive thematic analysis qualitative analysis workflow [35].
Various prompting techniques were used in this study to improve the results. These include:
1. Zero-shot CoT - This technique involves asking the model to think step-by-step to arrive at a correct result and to provide its detailed reasoning. In the absence of providing examples of CoT reasoning in the prompt, this type of prompting is categorized as zero-shot.
2. Prompt tuning via inspection of CoT reasoning - In testing, error analysis was supplemented with inspection of CoT reasoning to help discern where prompts might need refinement. As prompts were updated, we observed corresponding changes in the output and the stated reasoning, with improvement in the development set metrics.
3. Additional descriptive context for labels - Given that there was no fine-tuning to allow the model to learn the appropriate context and meaning of labels, we added context to prompts in the form of definitions for each label and the types of elements for which each label applied.
4. Additional context through injection of the survey questions into the prompt - Inclusion of additional context, such as the survey question that a given comment is in reply to, may improve the performance of LLMs and was used in this study.
5. Use of function calling for reliable structured output - This technique is specific to the GPT-3.5 and GPT-4 models, for which the June 2023 checkpoint (0613) has been fine-tuned to enable structured output (e.g., JSON) when provided with information about a function schema that could be called with the output. For this study, in which thousands of rows of data were processed into structured output, the function calling capability vastly reduced the need for elaborate prompting to elicit structured output, as well as error-handling and parsing of variations in output formatting. We started this project well before the models had been fine-tuned for structured output and saw the benefits of greater reliability once these capabilities existed.
6. Memetic proxy, also known as the persona pattern [36, 37] - Asking the LLM to act as a certain persona, for example as an expert in survey analysis tasks, has been described as another way to improve results, potentially by helping the model access a portion of its memory that holds higher quality examples of the task at hand. Guiding the model to imitate correct examples is more likely to result in good answers than asking the model simply to produce results.
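To make several of these techniques concrete (the persona pattern, zero-shot CoT, injected survey-question context, and function calling for structured output), the sketch below shows one way they can be combined using the pre-1.0 `openai` Python client. The function schema, label subset, and prompt wording are illustrative stand-ins rather than the exact prompts and schemas used in this study (those are provided in the Appendix and supplementary material).

```python
import json
import openai  # pre-1.0 client; assumes OPENAI_API_KEY is set in the environment

# Hypothetical function schema: forces the model to return JSON containing its
# chain-of-thought reasoning plus the selected labels.
CLASSIFY_FN = {
    "name": "record_labels",
    "description": "Record the reasoning and selected labels for one survey comment.",
    "parameters": {
        "type": "object",
        "properties": {
            "reasoning": {"type": "string", "description": "Step-by-step reasoning."},
            "labels": {
                "type": "array",
                "items": {"type": "string",
                          "enum": ["curriculum", "teaching", "assessment"]},  # illustrative subset
            },
        },
        "required": ["reasoning", "labels"],
    },
}

def classify_comment(comment, survey_question):
    """Zero-shot CoT multi-label classification of one survey comment."""
    messages = [
        # Memetic proxy / persona pattern.
        {"role": "system",
         "content": "You are an expert in qualitative analysis of course survey feedback."},
        {"role": "user",
         "content": (f"Survey question: {survey_question}\n"   # injected survey-question context
                     f"Student comment: {comment}\n\n"
                     "Think step by step about which labels apply, then record "
                     "your reasoning and the selected labels.")},
    ]
    response = openai.ChatCompletion.create(
        model="gpt-4-0613",                       # June 2023 checkpoint with function calling
        messages=messages,
        functions=[CLASSIFY_FN],
        function_call={"name": "record_labels"},  # force structured output
        temperature=0,
    )
    return json.loads(response["choices"][0]["message"]["function_call"]["arguments"])
```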
### Other models
In addition to comparison to human ground truth labels, for multi-label classification, comparison was made to SetFit [38], a SentenceTransformers finetuning approach based on Sentence-BERT and requiring very little labeled data; for sentiment analysis, comparison was made to a publicly available RoBERTa-based model trained on 124M Tweets. These comparisons provide some context for the LLMs' performance relative to recent specialized models.
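For reference, a minimal multi-label SetFit fine-tuning sketch is shown below. It assumes the pre-1.0 `setfit` API (`SetFitModel`/`SetFitTrainer`), a generic sentence-transformers checkpoint, and toy training rows whose `label` column holds multi-hot vectors over the eight tags; the base model and hyperparameters actually used in this study may differ.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

TAGS = ["course logistics and fit", "curriculum", "teaching modality", "teaching",
        "assessment", "resources", "peer and teacher interaction", "other"]

# Toy few-shot training data: text plus a multi-hot vector over the eight tags.
train_ds = Dataset.from_dict({
    "text": ["The lectures were clear but the quizzes were too hard.",
             "Please add more clinical cases and case studies."],
    "label": [[0, 0, 0, 1, 1, 0, 0, 0],
              [0, 1, 0, 0, 0, 0, 0, 0]],
})

# one-vs-rest wraps one binary head per tag on top of the sentence embeddings.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-mpnet-base-v2",
    multi_target_strategy="one-vs-rest",
)

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,
    num_iterations=20,  # number of contrastive pairs generated per example
    num_epochs=1,
)
trainer.train()

print(model.predict(["More live office hours with the teachers would help."]))
```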
### Evaluation metrics
Scikit-learn 1.2.0 was used for statistical tests, along with numpy 1.23.5 and Pandas 2.0.1 for data analysis. Weights & Biases was used for tracking of model evaluation results. For the multi-label classification task, model results were compared to the human ground
truth labels. Two ways were used to aggregate results from multiple annotators into ground truth labels: 1) using consensus rows: only the subset of survey responses (dataset rows) where the four annotators had majority agreement on all selected tags was kept; and 2) using consensus labels: all survey responses were kept, but only labels with majority agreement were marked as selected.
To fine-tune the SetFit model, we used a portion of each ground truth dataset (the first 20 examples for each label). Those examples were omitted from the test set, leaving 2359 rows in the consensus labels test set and 1489 rows consensus rows test set.
For each of the above scenarios, model results for multi-label classification were evaluated against aggregated human annotator results via the following metrics: 1) Jaccard similarity coefficient, comparing the model against aggregated human results for each row (survey response) and then averaged over all rows; 2) average precision per tag; 3) average recall per tag; 4) macro average precision, recall, and F1 score across all tags; 5) micro average precision, recall, and F1 score across all tags; 6) Hamming loss; and 7) subset accuracy.
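A sketch of how these metrics can be computed with scikit-learn is shown below; `y_true` and `y_pred` stand for multi-hot arrays of shape (n_rows, n_tags), and the toy values are illustrative.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, hamming_loss,
                             jaccard_score, precision_score, recall_score)

# Multi-hot label matrices: rows = survey responses, columns = tags.
y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0]])

metrics = {
    # 1) Jaccard per row (survey response), averaged over rows.
    "jaccard_rowwise": jaccard_score(y_true, y_pred, average="samples"),
    # 2) and 3) precision / recall per tag (one value per column).
    "precision_per_tag": precision_score(y_true, y_pred, average=None, zero_division=0),
    "recall_per_tag": recall_score(y_true, y_pred, average=None, zero_division=0),
    # 4) macro average across tags.
    "macro_f1": f1_score(y_true, y_pred, average="macro", zero_division=0),
    # 5) micro average across tags.
    "micro_f1": f1_score(y_true, y_pred, average="micro", zero_division=0),
    # 6) Hamming loss and 7) subset accuracy (exact match of the full label set).
    "hamming_loss": hamming_loss(y_true, y_pred),
    "subset_accuracy": accuracy_score(y_true, y_pred),
}
print(metrics)
```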
For the binary classification task, accuracy, precision, recall, and F1 score were calculated, comparing the model results to one expert human annotator.
For the extraction task, extracted excerpts were evaluated by GPT-4 using a rubric created specifically for this task, examining multiple aspects of performance, including the presence of excerpts that were not exact quotes from the original (part of the original extraction instructions), the completeness of capturing relevant excerpts, the presence of excerpts irrelevant to the initial goal focus, the inclusion of relevant context from the original comment, and several others. The results were also evaluated by human annotation to determine the presence of hallucinations (excerpts that were substantial changes from the original survey responses, rather than just changes in punctuation, spelling, or capitalization), with the percentage of all excerpts representing hallucinations reported.
For the inductive thematic analysis task, there is not an accepted evaluation method given that this is a complex, compound task, and evaluation consisted of inspecting the derived themes and descriptions as well as inspecting the results of the associated multi-label classification step.
The sentiment analysis results of GPT-3.5 and GPT-4 were compared to those of a RoBERTa sentiment classifier trained on 124 million tweets [39, 40], as well as to results from a human annotator, with accuracy, precision, recall, and F1 scores reported for the prediction of sentiment as negative, neutral, or positive. Comparison was made by grouping 'negative' and 'slightly negative' into a single class, keeping 'neutral' as its own class, and grouping 'positive' and 'slightly positive' into a single class to allow for comparison across sentiment analysis methods. The RoBERTa classifier produced a dictionary with negative, neutral, and positive classes, with probabilities summing to 1.0. The class with the maximum probability score was chosen as the label for comparison to the human annotations.
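The grouping and comparison can be expressed roughly as in the sketch below; the Hugging Face checkpoint path for the RoBERTa model and the toy comments and labels are assumptions for illustration.

```python
from transformers import pipeline
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Collapse the five LLM sentiment labels into three classes for comparison.
TO_THREE = {"negative": "negative", "slightly negative": "negative",
            "neutral": "neutral",
            "slightly positive": "positive", "positive": "positive"}

human_labels = ["negative", "positive", "neutral"]          # ground truth (toy)
llm_labels = ["slightly negative", "positive", "neutral"]   # GPT-4 / GPT-3.5 output (toy)
llm_3class = [TO_THREE[x] for x in llm_labels]

# RoBERTa baseline; the top-scoring class is used as the predicted label.
clf = pipeline("sentiment-analysis",
               model="cardiffnlp/twitter-roberta-base-sentiment-latest")
comments = ["Too much detail in the videos.", "Great course!", "It was fine."]
roberta_labels = [clf(c)[0]["label"].lower() for c in comments]

for name, preds in [("LLM", llm_3class), ("RoBERTa", roberta_labels)]:
    p, r, f1, _ = precision_recall_fscore_support(human_labels, preds,
                                                  average="macro", zero_division=0)
    print(name, accuracy_score(human_labels, preds), p, r, f1)
```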
## 3 Results
In this section, we first provide an overview of the rationale for using specific NLP tasks to accomplish different types of survey analysis goals, in order to provide motivation for the workflows that follow. We then demonstrate examples of potential workflows, applied to our real-world dataset, followed by presentation of examples of one model's chain-of-thought reasoning, and finally show evaluations of the individual tasks involved in the workflows. For the examples, we use GPT-4 as the LLM; the evaluations compare GPT-4 and GPT-3.5 as well as the other models used.
### Approach to LLM Workflows
The main types of workflows demonstrated support the goals shown in Table 1 of 1) high-level analysis, in which the desire is to understand the main categories and areas of emphasis across all student feedback, or 2) more focused analysis, e.g., answering specific questions about a particular aspect of a course. In both cases, quantification of results is a consideration, which is supported by classification tasks.
For initial, high-level analysis across the entire set of survey comments, we demonstrate two approaches: 1) inductive thematic analysis, a "bottom-up" approach supporting the use case where no predetermined labels (areas of interest) have been defined, similar to topic modeling, and 2) multi-label classification using predefined labels, a "top-down" approach, also referred to as deductive thematic analysis. When categories of interest are known in advance, multi-label classification is an appropriate first step, binning survey responses into relevant categories that provide a sense of the type of feedback learners are providing. These categories also provide groupings of comments for further focused analysis (e.g., via extraction), as well as allow for quantification based on the number of comments labeled with each category.
For focused analysis, in which there is a specific question or goal for the analysis, not necessarily known in advance, we demonstrate extraction as a key step, followed by either a classification step or thematic analysis. To provide output for further downstream analysis and quantification, multi-class classification can be used as a step, as demonstrated here with the generalizable set of labels used in this study, or with an adapted or fully customized version for one's own use case. This step is shown after extraction, given that short excerpts are more likely to be adequately classified with a single label than multi-sentence comments. The output of other forms of classification (binary or multi-label) also lends itself well to quantification of results.
Sentiment analysis was applied as a final step for workflows where finding positive or negative excerpts was of interest, as demonstrated in the example related to the level of difficulty of the course.
Although the full model responses were in JSON format, only the relevant output text is shown for brevity and clarity.
### Workflow Examples
#### 3.2.1 Example - High-level analysis by inductive thematic analysis ("bottom-up" approach)
A workflow for finding and summarizing the main themes (ideas expressed by multiple students) of survey responses is shown in Figure 1, and consists of three LLM steps: 1) themes are first derived and summarized for batches of comments, each of which is sized to fit within the context window of the model used; 2) comments are classified using the derived themes; and 3) sets of themes from these batches are coalesced to arrive at a final set of themes. Additionally, label counts are aggregated from the themes that were combined. In qualitative research, steps 1 and 3 are called inductive thematic analysis; this is similar to topic modeling, in that themes are inductively derived from comments. In general, depending on the input size (context window) for the model used (8K tokens in this example) and the number of comments being analyzed, dividing into batches and coalescing the themes from each batch may be unnecessary.
Results for running this process on the 625 comments from Q1 ('Please describe the best parts of this course') are shown in Figure 1. The number of comments that the LLM identified as corresponding to each theme is shown, along with the theme titles and descriptions.
Figure 1: Derivation of themes from student comments (results shown using GPT-4).
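A minimal orchestration sketch of this batch-then-coalesce pattern is shown below. The helpers `derive_themes`, `classify_with_themes`, and `coalesce_themes` are hypothetical stand-ins for LLM calls built from the prompts in the Appendix, and the fixed batch size is a simplification of packing comments into the model's context window.

```python
from collections import Counter

def batches(comments, size=150):
    """Illustrative stand-in for fitting comments into the model's context window."""
    for i in range(0, len(comments), size):
        yield comments[i:i + size]

def inductive_thematic_analysis(comments, derive_themes, classify_with_themes, coalesce_themes):
    """Step 1: derive themes per batch; step 2: classify comments; step 3: coalesce themes."""
    batch_themes, counts = [], Counter()
    for chunk in batches(comments):
        themes = derive_themes(chunk)                            # LLM call (hypothetical)
        batch_themes.append(themes)
        for comment in chunk:
            for theme in classify_with_themes(comment, themes):  # LLM call (hypothetical)
                counts[theme] += 1
    # Coalesce batch-level themes into a final set, with a mapping old -> final theme.
    final_themes, theme_map = coalesce_themes(batch_themes)      # LLM call (hypothetical)
    final_counts = Counter()
    for theme, n in counts.items():
        final_counts[theme_map.get(theme, theme)] += n
    return final_themes, final_counts
```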
#### 3.2.2 Example - High-level analysis by categorizing student comments ("top-down" approach)
Multi-label classification of survey responses, using the set of predetermined labels developed for this study (Table 3) was run on the 625 comments from Q1 ('Please describe the best parts of this course') and results are shown in Figure 2. The categorized comments can be used for analysis (for example, comparing the categorization of responses to 'Please describe the best parts of this course' to the categorization of responses to 'What can we do to improve this course?') or as a starting point for further downstream tasks.
#### 3.2.3 Example - Finding suggestions for improvement
A workflow for finding and quantifying suggestions for course improvement is shown in Figure 3, and consists of extraction of relevant excerpts, followed by multi-class classification, based on the labels in Table 3, to facilitate quantification as well as routing of comments to the appropriate stakeholders. Excerpts resulting from the extraction step were assumed to be focused enough that they could each be categorized with a single class from among the pre-existing labels in Table 3. Results for several representative real comments from the larger set of survey comments are shown in Figure 3. The model's CoT reasoning for each step is shown elsewhere, but is omitted here for clarity.
Figure 2: Multi-label classification of student comments (results shown using GPT-4).
Figure 3: Finding suggestions for improvement from student comments (results shown using GPT-4).
#### 3.2.4 Example - What other content or topics were students interested in seeing covered?
A common goal in analyzing student feedback is to better understand the gaps in course content, in order to decide whether to develop additional material or even new courses. To see if this type of information could reliably be derived from survey responses, we focused on responses to relevant survey questions (Q3 and Q4) for immunology courses with the workflow shown in Figure 4. Results for several representative real comments are shown. First, just the portions containing new content or topic area suggestions are extracted from the survey responses. Content suggestion themes are then derived and summarized from the excerpts; this is done in batches if they cannot be fit within a single prompt to the LLM (i.e., if there are too many excerpts to fit in the model's maximum context size). Multi-class classification is performed on the excerpts with the themes from each batch. If thematic analysis is done in batches, sets of themes from these batches are then coalesced to arrive at a final set of content themes. The results suggest that GPT-4 is capable of finding content suggestions despite many being specific to the biomedical domain. This may be due to the volume and diversity of the model's pre-training data (although this training mixture has not been disclosed). Immunology is used as an example, but the workflow is not specific to the type of course.
#### 3.2.5 Example - What feedback did students give about the teaching and explanations?
Feedback about teachers and the quality of teaching and explanations in a course is a frequent objective of academic course surveys. Here, we show a workflow where multi-label classification has already been run as an initial step in high-level analysis, and we use the results of that classification as our initial filter to focus on the identified subset of comments related to teaching (corresponding directly to one of the pre-existing labels), with extraction used to further narrow the output of analysis. The workflow, shown in Figure 5, consisted of multi-label classification, using the pre-existing labels developed (Table 3) followed by extraction of relevant excerpts from the comments that were classified into the 'teaching' category (9% of total comments). If multi-label classification hadn't previously been run, extraction could have been performed on the broader group of comments as the initial step. For our dataset, which includes numerous multi-topic comments, the extraction step was used to further filter the information to only content related to the goal. Results for several representative real comments (de-identified in pre-processing) from the larger set of survey comments are shown in Figure 5, including one where the model improperly filtered out the comment despite it containing a reference to the quality of explanations. An error such as the one shown could be considered somewhat subtle and highlights the need with zero-shot prompting of LLMs for clear specification of the goal of the extraction.
Figure 4: Finding suggestions for new immunology content from student comments (results shown using GPT-4).
Figure 5: Feedback about teaching and explanations (results shown using GPT-4). The red "x" indicates an error by the model.
#### 3.2.6 Example - How did students feel about the level of difficulty of the course?
Feedback about the level of difficulty of a course can help guide decisions on prerequisites and messaging about the intended target audience. Again, we show a workflow where multi-label classification, using the pre-existing labels developed (Table 3), has been run as an initial step in high-level analysis, and we use the results of that classification as an initial filter. In this case, the desired goal (level of difficulty) falls within the 'course logistics and fit' label but is not an exact match. As shown in Figure 6, after filtering to comments that were classified as 'course logistics and fit', a further binary classification step was applied to filter only to comments containing passages about the level of difficulty of the course; the binary classification step was optional, but significantly reduced the number of comments that needed to be processed with the more complex extraction task. Finally, extraction of the comment passages about level of difficulty and classification of sentiment were applied. Results for several representative real comments are shown in Figure 6. The results of evaluating the sentiment analysis portion (see Section 3.4.4) suggest that sentiment analysis can be a challenging zero-shot task in areas such as biomedical online learning where the course context, the feedback context, and the inclusion of multiple topics in a single survey comment may differ significantly from the model's public pre-training data distribution.
### Chain-of-Thought Reasoning
The prompts for binary classification, multi-label classification, multi-class classification, sentiment analysis, and evaluation of extraction results all used zero-shot chain-of-thought (CoT) to enhance the quality of the results while maintaining the zero-shot conditions of this study. The CoT reasoning was included in the structured output, allowing for inspection. Only the reasoning from GPT-4 was consistently reliable, and examples are shown here.
Example results for binary and multi-class classification tasks are shown in Figure 7 and Figure 8, and reasoning for sentiment analysis is also shown in Figure 8. The reasoning, inspected manually over several hundred comments, is consistent with the classification results and appears to provide logical justification that is grounded in the contextual information (e.g., labels and descriptions) included as part of the prompts (see Appendix). This suggests that the CoT reasoning from GPT-4 meets a threshold of consistency and logic that allows for potential downstream use cases such as prompt tuning and insight into reasoning for end-users. Potential benefits and caveats of such uses are explored in the Discussion.
Figures 7 and 8 show the model's CoT reasoning related to Example 3.2.3 (suggestions for improvement) and Example 3.2.6 (level of difficulty of the course) above.
Evaluation of the extraction task used a custom LLM evaluation (see Appendix), developed for this study. In order to refine the evaluation to align results with human preferences, we inspected the CoT reasoning along with the structured eval results for the separate development set of survey responses and made modifications to the evaluation prompts in an iterative fashion. An example of the CoT output for GPT-4 is shown in Figure 9. As prompts were altered based on human review, the eval results changed in a consistent fashion, suggesting that GPT-4-provided CoT reasoning may be useful in refining LLM evaluations.
Figure 6: Finding feedback about level of difficulty (LLM: GPT-4).
Figure 7: Examples of GPT-4 CoT reasoning for binary classification and multi-class classification related to the task of finding suggestions for improvement.
Figure 8: Examples of GPT-4 CoT reasoning for multi-label classification, binary classification, and sentiment analysis related to the task of finding how students felt about the level of difficulty of the course.
**GPT-4:** missed_excerpts: no; has_ambiguous_excerpts: yes; missed_existing_context: yes; has_irrelevant_excerpts: no; has_irrelevant_context: no; has_implied_goal_focus: no; has_non_quotes: no; has_non_contiguous_excerpts: no; has_redundant_excerpts: no; reasoning: "The program did not miss any relevant excerpts as it captured all the suggestions for improvement. However, the excerpt 'if given a certain infection, unsure how the adaptive response will occur' is ambiguous without the context of the course content being about immune responses. This context is present in the original comment but was not captured in the excerpts. All the excerpts are relevant to the goal focus and there is no irrelevant context included. The suggestions for improvement are explicit in all the excerpts. All the excerpts are exact quotes from the original comment and are contiguous. There are no redundant excerpts as each one presents a unique suggestion for improvement."
**Student comment:**
"the course structure may be more systematic currently at the end of the course, things are still somewhat sparse, if given a certain infection, unsure how the adaptive response will occur e.g. can add an example of bacteria and the following all possible adaptive immune response it may be the nature of the current knowledge."
### Individual NLP Task Evaluations
To better assess the reliability of workflows such as those shown in the examples, we evaluated the individual tasks, including multi-label classification, binary classification, extraction, and sentiment analysis.
#### 3.4.1 Multi-label classification metrics
The difficulty of multi-label classification tasks varies widely [15], depending on the content to which the labels are being applied, the design of the labels (for example, the clarity of their specification and the potential for overlap), and the number of labels. To put the LLM results in context, we show the inter-rater agreement for application of the eight-label set (Table 3) to our dataset and also compare the LLM results to SetFit, another classification technique.
_Inter-rater agreement_: 1413 (57%) of 2500 rows had all 4 human raters in agreement across all selected labels and 1572 (63%) had majority (3 of 4) agreement on all selected labels. The average Jaccard similarity coefficient including all 2500 rows (averaged across the six unique pairings of four human raters for all rows) was 81.24% (Table 4), suggesting that this was a challenging task even for expert human annotators who developed the custom label set in close collaboration. GPT-4 agreement with human annotators is shown; the average across all pairings including GPT-4 was 80.60%.
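The pairwise figures in Table 4 follow from a computation along the lines of the sketch below, assuming each rater's annotations form a multi-hot matrix over the eight tags (toy data shown).

```python
from itertools import combinations
import numpy as np
from sklearn.metrics import jaccard_score

# rater name -> (n_rows, n_tags) multi-hot annotation matrix (toy values).
raters = {
    "annotator 1": np.array([[1, 0, 1], [0, 1, 0]]),
    "annotator 2": np.array([[1, 0, 0], [0, 1, 0]]),
    "GPT-4":       np.array([[1, 0, 1], [0, 1, 1]]),
}

pairwise = {(a, b): jaccard_score(raters[a], raters[b], average="samples")
            for a, b in combinations(raters, 2)}
print(pairwise)
print("average over all pairs:", np.mean(list(pairwise.values())))
```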
_LLM and SetFit evaluation_: In addition to evaluating the GPT models, we also performed multi-label classification using SetFit (Tables 5 and 6).
For the consensus rows evaluation, the zero-shot results for GPT-4 are similar to what might be expected of fine-tuned classifiers [15]. The other models have strengths and
| | annotator 1 | annotator 2 | GPT-4 | annotator 3 | annotator 4 |
| --- | --- | --- | --- | --- | --- |
| annotator 1 | - | 81.27 | 80.18 | 83.37 | 82.35 |
| annotator 2 | 81.27 | - | 79.40 | 80.84 | 78.42 |
| GPT-4 | 80.18 | 79.40 | - | 80.74 | 78.22 |
| annotator 3 | 83.37 | 80.84 | 80.74 | - | 81.18 |
| annotator 4 | 82.35 | 78.42 | 78.22 | 81.18 | - |

Table 4: Inter-rater Jaccard similarity coefficients, including human annotators and GPT-4 as another rater/annotator (human pairs average = 81.24%; all pairs average = 80.60%).
| Model | Jaccard | Average precision | Precision (macro) | Recall (macro) | F1 (macro) | Precision (micro) | Recall (micro) | F1 (micro) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | 92.97 | 93.91 | 89.88 | 90.59 | 89.78 | 93.66 | 93.26 | 93.46 |
| GPT-3.5 | 72.61 | 74.79 | 69.34 | 82.18 | 72.63 | 72.36 | 84.48 | 77.96 |
| SetFit | 73.86 | 78.01 | 84.37 | 57.59 | 66.85 | 91.92 | 71.43 | 80.39 |

Table 5: Evaluation on consensus rows, with majority agreement on all tags (1572 rows for LLMs, 1489 rows for SetFit).
weaknesses, with SetFit having relatively high precision and lower recall, and GPT-3.5 following the converse pattern. The overall results for SetFit and GPT-3.5, focusing on Jaccard coefficient and F1 scores, are similar. The results emphasize 1) the fact that fine-tuning is desirable when feasible, approaching the performance of powerful LLMs like GPT-3.5 even with a few-shot fine-tuning approach; and 2) the quality of the zero-shot performance of GPT-4.
#### 3.4.2 Binary classification metrics
1250 comments were classified as to whether or not they contained 'suggestions for improvement', and results were compared against one expert human annotator. Binary classification could be considered the simplest of the evaluated NLP tasks, and both LLMs exhibited good performance (Table 7).
#### 3.4.3 Extraction evaluation
Using 'suggestions for improvement' as an example target of extraction, comments were first classified via GPT-4 as containing the target or not (see binary classification task above). Of the 1250 comments, 716 were labeled as containing suggestions for improvement. These comments were then run through extraction to find the individual excerpts. The excerpts for each comment were scored by applying a custom evaluation rubric with nine questions (Table 24) via GPT-4. Only GPT-4 was capable of applying the evaluation reliably. The extracted excerpts were also examined by a human annotator to determine the percentage of the 716 rows that contained hallucinations in the excerpts, as defined by substantial edits or complete fabrication of additional language not present in the original comment. Table 8 shows, for each model's extraction results, the error rate across all target-containing comments for the rubric categories where error rates were not close to zero, along with the percentage of hallucinations as determined by human annotation.
The GPT-4 model included some ambiguous excerpts; however, those were most commonly due to lack of context in the comment itself, rather than the model failing to
| Model | Jaccard | Average precision | Precision (macro) | Recall (macro) | F1 (macro) | Precision (micro) | Recall (micro) | F1 (micro) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | 80.17 | 81.53 | 73.91 | 88.38 | 79.69 | 78.32 | 89.70 | 83.63 |
| GPT-3.5 | 63.00 | 65.18 | 60.42 | 79.79 | 65.75 | 60.31 | 83.45 | 70.02 |
| SetFit | 62.72 | 67.52 | 73.22 | 53.08 | 59.61 | 79.40 | 65.14 | 71.57 |

Table 6: Evaluation on all rows using consensus labels (2500 rows for LLMs, 2359 rows for SetFit).
| Model | Accuracy | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| GPT-4 | 95.20 | 96.20 | 95.39 | 95.79 |
| GPT-3.5 | 90.16 | 89.01 | 93.35 | 91.14 |

Table 7: Binary classification task performance.
extract that context. GPT-4 followed directions very closely, and its results did not contain hallucinations. In contrast, the output of GPT-3.5 contained hallucinations at a rate of about 4% and edits to comments at a rate of about 6%. GPT-3.5 also missed relevant excerpts significantly more frequently than GPT-4. Additional prompt tuning may reduce the rate of these errors; nonetheless, the results suggest that a degree of caution should be applied in using GPT-3.5 for extraction.
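A simple, automatable check related to the hallucination annotation is whether each extracted excerpt survives as a verbatim substring of the original comment once case and punctuation are normalized; the sketch below illustrates that idea. The normalization rule is an assumption and is stricter than the human criteria in this study, which also tolerated spelling changes.

```python
import re
import string

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

def flag_possible_hallucinations(comment, excerpts):
    """Return excerpts that are not substrings of the normalized original comment."""
    norm_comment = normalize(comment)
    return [e for e in excerpts if normalize(e) not in norm_comment]

comment = "The videos were great, but please add more clinical cases!"
excerpts = ["please add more clinical cases", "add more practice exams"]
print(flag_possible_hallucinations(comment, excerpts))  # -> ['add more practice exams']
```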
#### 3.4.4 Sentiment analysis metrics
Using GPT-4 and GPT-3.5, comments related to course suggestions and improvement (Q3 and Q4) were classified as 'negative', 'slightly negative', 'neutral', 'slightly positive', or 'positive'. Table 9 shows accuracy and macro-averaged precision, recall, and F1 scores for the three models; comparison was made by grouping 'negative' and 'slightly negative' into a single negative class, keeping 'neutral' as its own class, and grouping 'positive' and 'slightly positive' into a single positive class.
GPT-4 is substantially better on each metric than the other models; however, the results are lower than what has been seen for fine-tuned models on in-domain datasets, indicating that the sentiment expressed in student course feedback may differ from the range of sentiment expressed in the internet training data of these models. The negative class was the most challenging for all models, suggesting that negative course feedback may differ significantly from negative internet feedback.
### LLM Cost and Time
The cost of using the OpenAI APIs for GPT-4 and GPT-3.5 depends on the number of prompt tokens and number of completion tokens. For the final prompts and tasks used in this study, the average price of running 100 comments is shown in Table 10 for each model for different tasks (cost as of June 2023). These provide an approximate gauge given
| Model | Missed excerpts | Ambiguous excerpts | Missed existing context | Irrelevant excerpts | Implied goal focus | Non-quotes | Redundant excerpts | Hallucinations |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | 2.37 | 4.61 | 0.28 | 0.14 | 3.07 | 0.00 | 0.28 | 0.00 |
| GPT-3.5 | 7.82 | 4.75 | 0.84 | 0.84 | 2.79 | 6.01 | 2.79 | 3.91 |

Table 8: Error rate (%) of extraction for 'suggestions for improvement' from comments classified as containing 'suggestions for improvement' (worst-performing metrics from rubric and human annotation for hallucinations).
| Model | Accuracy | Precision (macro) | Recall (macro) | F1 (macro) |
| --- | --- | --- | --- | --- |
| GPT-4 | 80.86 | 82.65 | 80.28 | 80.78 |
| GPT-3.5 | 65.17 | 73.68 | 66.44 | 64.88 |
| twitter-roberta-base-sentiment-latest | 66.69 | 71.38 | 64.86 | 61.10 |

Table 9: Classification of comments as negative, positive, or neutral relative to human annotator.
that comments vary in length. Total API cost for this study including prompt tuning was approximately $300.
The time for model calls for GPT-4, the slower of the OpenAI models, was approximately 10 seconds for running 100 comments in parallel for most tasks listed. For the extraction evaluation, it took approximately 1 minute to run 100 comments in parallel. For batches, sleep intervals were also incorporated to stay conservatively within maximum token rates. A small percentage of API calls received errors and automatic retries were used after wait intervals.
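The per-task figures in Table 10 follow from a simple token-based calculation, sketched below with placeholder token counts and per-1K-token rates (the actual June 2023 rates and the measured token counts for each task should be substituted).

```python
def batch_cost(n_comments, prompt_tokens_each, completion_tokens_each,
               prompt_rate_per_1k, completion_rate_per_1k):
    """Estimated API cost in dollars for a batch of comments."""
    prompt_cost = n_comments * prompt_tokens_each / 1000 * prompt_rate_per_1k
    completion_cost = n_comments * completion_tokens_each / 1000 * completion_rate_per_1k
    return prompt_cost + completion_cost

# Placeholder values for illustration only -- not the study's measured numbers.
print(batch_cost(100, prompt_tokens_each=600, completion_tokens_each=150,
                 prompt_rate_per_1k=0.03, completion_rate_per_1k=0.06))
```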
## 4 Discussion
Analysis of education feedback, in the form of unstructured data from survey responses, is a staple for improvement of courses. However, this task can be time-consuming, costly, and imprecise, hampering the ability for educators and other stakeholders to make decisions based on insights from the data. The objective of this research was to demonstrate the capability of recent LLMs to perform a range of relevant natural language processing tasks that aid in this process, using a zero-shot approach that would be feasible for many educators, and to determine through evaluation whether the quality is acceptable for practical use in educational settings. Additionally, we proposed that chain-of-thought reasoning (CoT), used to improve the accuracy of results, can also potentially be useful for providing a degree of insight into the model's stated logic for those using the results. Being able to peer into the "thinking" of the LLM may provide confidence, increase adoption, and reduce the perception of the models as "black box" algorithms.
This study's evaluation of specific tasks with real-world data is not meant to be a benchmark or to show that performance exceeds fine-tuned or few-shot prompted models; rather its purpose was threefold: 1) demonstrate that the model results for the most capable models are viable for these types of use cases for education feedback surveys; 2) determine if there were steps that were particularly difficult for LLMs (weak links in the chain, as it were) that might benefit from specialized models or be amenable only to the best-performing LLMs; and 3) better understand the overall strengths and limitations of this approach.
Some of the aspects from our work that we discuss below are:
| Task | GPT-4 | GPT-3.5 |
| --- | --- | --- |
| binary classification | $0.93 | $0.04 |
| multi-label classification | $2.63 | $0.12 |
| multi-class classification | $2.13 | $0.10 |
| text extraction | $1.10 | $0.05 |
| text extraction evaluation | $3.01 | $0.13 |
| sentiment analysis | $1.17 | $0.05 |
| inductive thematic analysis | $0.13 | $0.006 |

Table 10: Cost per 100 comments for GPT-4 and GPT-3.5.
* the quality of the results compared to expert human annotators
* the need for prompting techniques and prompt tuning
* the tasks where few-shot prompting might have the highest impact
* the possibility of using CoT reasoning for purposes beyond enhancing model results
* the viability of composing LLM workflows from NLP tasks for the purpose of survey feedback analysis
### Individual LLMs' results varied significantly, and GPT-4 performed on par with expert humans even with zero-shot prompting
Our tasks and dataset were drawn from real-world data and actual use cases, motivated by common goals of those evaluating unstructured educational feedback. Some of the tasks could be considered challenging, with ambiguity involved, even for expert annotators. For example, as we gathered human ground truth data through annotating the sample dataset for multi-label classification, we found that inter-rater agreement metrics, shown previously, reflected the difficulty of the task. There were significant differences in the performance of models, with only the most capable model tested, GPT-4, reaching a level that was indistinguishable from any single expert human annotator in its multi-label classification results, based on Jaccard similarity coefficient (see Table 4). The human-like level of GPT-4 extended to other tasks and can be seen in examples of the reasoning results as well (Figures 7 and 9). In addition, it outperformed label-efficient fine-tuned classifiers like SetFit. For workflows that chain together two or more NLP tasks, like those examined in this study, it is important that the performance on each task is reliable enough such that errors do not accrue in the process of obtaining a final result.
### LLM results were highly prompt-dependent
Even within the most capable models, we observed that prompting techniques and prompt tuning made a significant difference. There is considerable literature on effective methods of prompting. There is an interplay of prompting techniques with the behavior of instruction-tuned models in a way that may or may not fully elicit the capabilities of each model, with prompts being seen as a form of hyperparameter to the model and with responses changing depending on updates to model training [41].
Although this study focused on zero-shot prompting to reflect realistic use cases in educational settings, the results suggest that there are tasks that likely could benefit from few-shot prompting to reach a level necessary for inclusion in workflows. For sentiment analysis, the models' zero-shot results differed from human annotation, particularly for negative comments. What constitutes negative feedback for an online course is subjective; for comments that are critical of certain course aspects but still adopt a civil tone, an educator may still choose to count those as negative. Therefore, providing examples in the prompt to help the model calibrate may be helpful, given potential differences between the data and the model's internet pre-training. For the example of finding what other content or topics students were interested in seeing covered, even GPT-4 failed to distinguish suggestions for changes focused on existing course content from those for new content or topics. In cases like this, where it might be difficult
to fully specify the objective sufficiently in abstract terms, providing few-shot examples could also be beneficial.
### Inspecting a model's chain-of-thought reasoning may have multiple uses
We have shown examples of GPT-4's CoT results that provide apparent insight into the model's reasoning or trajectory. We use the word "apparent" because it is possible that the LLM is imitating plausible reasoning rather than providing insight into how it actually arrived at its answer; however, this distinction may be immaterial given that 1) GPT-4's reasoning was logical and highly consistent with the chosen label, excerpt, or extraction evaluation results, displaying elements of causal reasoning; and 2) when prompts were changed, reasoning results changed accordingly. This has been discussed in recent work; GPT-4 has been shown to score highly on evaluations of causal reasoning [42]. In Peng et al. [43], GPT-4 was used for evaluation of other LLMs and was able to provide consistent scores as well as detailed explanations for those scores. Whether or not the stated reasoning is actually how GPT-4 arrived at the answer, we observed that it was useful as an additional signal for prompt tuning in conjunction with metrics and for providing a logical justification for each response. While specialized non-LLM models can provide signals like confidence scores in individual classes, they lack more detailed explanations of results; we believe that seeing a version of logical reasoning behind complex output can foster confidence and reduce the perception of these models as black boxes. Furthermore, it is important to consider that having human annotators reliably provide consistent, logical justification for each decision is prohibitive for datasets of any appreciable scale.
### LLMs form the backbone of a versatile and composable approach
We demonstrate good performance on individual tasks on a real-world dataset, allowing the composition of those tasks into useful, entirely LLM-based workflows. At the time of writing, only GPT-4 shows sufficient levels of performance on all tasks to provide confidence in the results of a multi-step workflow. However, given the rapid pace of improvement of other models, it is to be expected that these multi-step processes will be more broadly feasible in the near future.
The real impact of this new paradigm is scalability in comparison to specialized models or annotation by human domain experts; with minor modification of categories and prompts, rather than time-consuming expert annotation or fine-tuning models, the same workflows can be used for other types of courses and potentially for other types of surveys.
## 5 Limitations and Future Research
The data used in this study was from a specific domain (online biomedical science courses) and was in English. Greater performance could potentially have been seen with additional prompting techniques, for example through the use of self-consistency [44], reflection
(iterative self-refinement) [45], and few-shot learning; these were out of scope for the zero-shot premise of this article but are worth exploring.
Other than the comparison to SetFit and to the RoBERTa sentiment analysis model, we limited our exploration to recent OpenAI models; future work may expand this to include other models such as Claude v2, Command, and Llama 2. Use of open source models may have certain advantages, including greater stability, for example, based on having control over any changes that may impact the model's responses, e.g., instruction fine-tuning or reinforcement learning from human feedback (RLHF).
The ability to compose survey analysis workflows is also amenable to the use of agents [46, 47, 48]. An educator or other stakeholder analyzing survey feedback should be able to state a goal or intent to an LLM-powered agent, with the agent picking and running tasks as a chain to get the desired analysis. Such an agent could also incorporate non-LLM tools, for example if a fine-tuned model is available that excels on a given task and is well-matched to the dataset at hand. Ideally, users of such tools should be able to operate by stating intent rather than tuning prompts or fine-tuning specialized models. Future work may incorporate these concepts.
Acknowledgments. We wish to thank members of the HMX team for their contributions to creating and administering the surveys used in this study.
## Declarations
### Competing interests
The authors have no relevant financial or non-financial interests to disclose.
### Ethics and conflict of interest statements
This study was determined not to be human subjects research by the Harvard Medical School Office of Human Research Administration.
### Author contributions
Conceptualization, methodology, software, analysis, and drafting of the manuscript were performed by Michael J. Parker. Development of labels, annotation/labeling for multi-class classification, and refinement of the manuscript were performed by all authors (equal contributions).
### Availability of data and materials
The prompts used in this study are shared in the appendix. Function schemas are shared in supplementary material. To help preserve the anonymity of students and of feedback about teachers, the survey responses are not included in an open-access repository. The data may be provided upon request to the authors and approval of the university research ethics board.
|
2305.19967 | Dark Matter Detection with Strongly Correlated Topological Materials:
Flatband Effect | Dirac materials have been proposed as a new class of electron-based detectors
for light dark-matter (DM) scattering or absorption, with predicted
sensitivities far exceeding superconductors and superfluid helium. The
superiority of Dirac materials originates from a significantly reduced
in-medium dielectric response winning over the suppression of DM scattering
owing to the limited phase space at the point-like Fermi surface. Here we
propose a new route to enhance significantly the DM detection efficiency via
strongly correlated topological semimetals. Specifically, by considering a
strongly correlated Weyl semimetal model system, we demonstrate that the strong
correlation-induced flatband effects can amplify the coupling and detection
sensitivity to light DM particles by expanding the scattering phase space,
while maintaining a weak dielectric in-medium response. | Zhao Huang, Christopher Lane, Sarah E. Grefe, Snehasish Nandy, Benedikt Fauseweh, Silke Paschen, Qimiao Si, Jian-Xin Zhu | 2023-05-31T15:54:30Z | http://arxiv.org/abs/2305.19967v1 | # Dark Matter Detection with Strongly Correlated Topological Materials: Flatband Effect
###### Abstract
Dirac materials have been proposed as a new class of electron-based detectors for light dark-matter (DM) scattering or absorption, with predicted sensitivities far exceeding superconductors and superfluid helium. The superiority of Dirac materials originates from a significantly reduced in-medium dielectric response winning over the suppression of DM scattering owing to the limited phase space at the point-like Fermi surface. Here we propose a new route to enhance significantly the DM detection efficiency via strongly correlated topological semimetals. Specifically, by considering a strongly correlated Weyl semimetal model system, we demonstrate that the strong correlation-induced flatband effects can amplify the coupling and detection sensitivity to light DM particles by expanding the scattering phase space, while maintaining a weak dielectric in-medium response.
_Introduction.--_ Dark matter (DM) makes up the majority of the total matter in the universe, and yet has been difficult to detect. In recent years there has been growing interest in identifying powerful targets for detection. Direct identification of DM particles relies on observing the response of a target to its interaction with the DM through the scattering or absorption process. This in turn depends on the DM energy deposited on the target. Current experimental techniques that can probe the DM in the mass range of 10 GeV to 10 TeV [1; 2; 3] are based on collision-induced nuclear recoils. Because the electron mass is much smaller than its nuclear counterpart, electron-based targets enable the detection of much lighter DM particles. Electron materials with excitation energies on the order of 1 eV are optimal for the detection of DM in the 1 MeV to sub-GeV mass range via direct scattering processes [4; 5; 6; 7; 8; 9; 10; 11], or bosonic DM with mass greater than 1 eV via absorption processes [12; 13]. For direct detection of even lighter DM (e.g., particles with masses below MeV through scattering and below 1 eV through absorption), a material with an even narrower excitation energy gap of a few meV to a few tens of meV is desirable. In this regime, superconductors [14; 15; 16] and superfluid helium [17; 18] have been proposed as possible targets, but each lacks optimal sensitivity due to its larger in-medium response.
More recently, Dirac materials [19] have attracted enormous attention [20; 21; 22] for being a new class of electron-based targets for light DM detection, due to their superior sensitivity compared to superconductors and superfluid helium. Here, a significantly reduced in-medium dielectric response wins over the suppression of DM scattering owing to the limited phase space at the point-like Fermi surface in these materials. However, while the community has been focusing on weakly interacting Dirac semimetals, a key barrier to progress is to identify new mechanisms that promote enhanced sensitivity to the dark matter. In this Letter, we propose to utilize topological quantum materials with strong correlations to fill this void. Our work takes advantage of substantial recent advances in Weyl semimetals driven by strong correlations, especially Weyl-Kondo semimetals [23; 24; 25]. Starting with a strongly correlated Weyl semimetal model system, we reveal that the correlation effects can significantly magnify the coupling or detection sensitivity to light DM particles. This enhancement is associated with the strong correlation-induced flatband effect due to band renormalization, which not only enhances the scattering phase space but also retains the reduced in-medium effect.
_Correlation-Driven Band Renormalization and Topological Phase Transition in Weyl Semimetal Model.--_ As a proof of principle, we consider a minimal model of a topological Weyl-Hubbard semimetal defined on a three-dimensional (3D) cubic lattice. Within the Gutzwiller projected wavefunction method [26; 27; 28; 29; 30; 31; 32], the Weyl-Hubbard model Hamiltonian reduces to:
\[H = \alpha\sum_{j,ss^{\prime}}\Bigl[-t\,\sigma_{x,ss^{\prime}}\bigl(c^{\dagger}_{js}c_{j+\hat{x},s^{\prime}}+c^{\dagger}_{js}c_{j+\hat{y},s^{\prime}}+c^{\dagger}_{js}c_{j+\hat{z},s^{\prime}}\bigr)-it^{\prime}\bigl(\sigma_{y,ss^{\prime}}c^{\dagger}_{js}c_{j+\hat{y},s^{\prime}}+\sigma_{z,ss^{\prime}}c^{\dagger}_{js}c_{j+\hat{z},s^{\prime}}\bigr)+\text{H.c.}\Bigr]+m\sum_{j,ss^{\prime}}\sigma_{x,ss^{\prime}}c^{\dagger}_{js}c_{js^{\prime}}+UdN_{L}\;. \tag{1}\]
Here \(t\), \(t^{\prime}\) are the hopping integrals, \(m\) is the strength of an on-site effective in-plane spin Zeeman energy, and \(U\) is the Hubbard interaction between two electrons of
opposite spin directions on the same site. The parameters \(\alpha\) and \(d\) are the renormalization factor and double occupancy subject to self-consistency conditions, respectively, and \(N_{L}\) is the number of 3D lattice sites, see the Supplemental Material (SM) [33] for more details. In the following discussions, energy is measured in units of \(t\), and length in units of cubic lattice constant \(a\). Both \(t\) and \(a\) are assumed to be one unless specified otherwise. We further fix \(t^{\prime}=0.2\), and \(m=0.125\).
Figure 1 shows the variation of \(\alpha\) and \(d\) as a function of the Hubbard-\(U\) strength. For \(U=0\), the system is non-interacting and yields the well-known values \(\alpha=1\) and \(d=1/4\). Both \(\alpha\) and \(d\) decrease with increasing value of \(U\) and vanish at the Brinkman-Rice (BR) [34] transition point for \(U_{c}=16.67\). We caution that while the BR picture based on the Gutzwiller approximation gives a physically transparent description of the Mott metal-insulator transition for a half-filled particle-hole symmetric single-band Hubbard model, the BR transition itself is an artifact of the infinite-dimension limit [35; 36]. However, since the present work focuses on capturing correlation-driven band structure renormalization effects and associated topological phase transitions, which occur before the BR transition, we anticipate the consequence of flat electron bands on dark matter detection to be robust, especially for lighter dark matter, for which the quasiparticle excitations are long-lived.
By obtaining the self-consistent Gutzwiller variational parameters, the quasiparticle band dispersion is expressed as
\[E_{\pm,\mathbf{k}}=\pm\sqrt{\xi_{\mathbf{k}}^{2}+|\Delta_{\mathbf{k}}|^{2}}= \pm E_{\mathbf{k}}^{(0)}\;, \tag{2}\]
where \(\xi_{\mathbf{k}}=2\alpha t^{\prime}\sin k_{z}\), \(\Delta_{\mathbf{k}}=Z_{\mathbf{k}}-2i\alpha t^{\prime}\sin k_{y}\) with \(Z_{\mathbf{k}}=m-2\alpha t(\cos k_{x}+\cos k_{y}+\cos k_{z})\). Figure 2 displays the electronic band structure along paths parallel to the \(k_{x}\)-axis (\([k_{x},\pi,0]\) or \([k_{x},0,0]\)) in the \(k_{z}=0\) plane for decreasing values of \(\alpha\). Since the model described in Eq. (1) breaks both time-reversal symmetry (through the effective Zeeman term) and inversion symmetry (by the additional \(\frac{\pi}{2}\)-shifted hopping along the \(y\)- and \(z\)-directions), it produces a Weyl semimetallic phase that hosts Weyl nodes with locally linear band dispersions. In particular, as \(U\) increases (\(\alpha\) decreases) in strength, we find the system to undergo two topological phase transitions. In WSM phase-I (\(\alpha>m/2t\)), time-reversal symmetry is weakly broken and four Weyl nodes appear at momentum points \((\pm\cos^{-1}(m/2\alpha t),\pi,0)\) and \((\pm\cos^{-1}(m/2\alpha t),0,\pi)\). In WSM phase-II (\(m/6t<\alpha<m/2t\)), time-reversal symmetry is strongly broken, with only two Weyl nodes located at \((\pm\cos^{-1}(m/2\alpha t-2),0,0)\) in the Brillouin zone. For \(\alpha<m/6t\), all Weyl nodes are gapped out, leaving the system in a topologically trivial insulating narrow-band phase. Interestingly, the latter transition has recently been observed in experiments [37] and appears in a Kondo lattice model [38]. The Weyl nature of the semimetallic phases and the topologically trivial insulating phase were determined by analyzing the Berry curvature and \(\mathbb{Z}_{2}\) topological index, as detailed in SM [33]. Note that the electron correlations not only drive topological phase transitions, but also flatten the band dispersion, as also found in experiments [23].
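As a numerical check on the renormalized dispersion, Eq. (2) together with the definitions above can be evaluated directly; the short Python sketch below (energies in units of t, with t' = 0.2 and m = 0.125 as in the text) computes the bands along the kx cut at (ky, kz) = (pi, 0) used in Fig. 2 and the expected Weyl-node position in WSM phase-I.

```python
import numpy as np

def weyl_bands(kx, ky, kz, alpha, t=1.0, tp=0.2, m=0.125):
    """Quasiparticle bands E_{+/-,k} of Eq. (2) for the Gutzwiller-renormalized model."""
    xi = 2 * alpha * tp * np.sin(kz)
    Z = m - 2 * alpha * t * (np.cos(kx) + np.cos(ky) + np.cos(kz))
    delta = Z - 2j * alpha * tp * np.sin(ky)
    E0 = np.sqrt(xi**2 + np.abs(delta)**2)
    return E0, -E0

# Cut parallel to the kx-axis at (ky, kz) = (pi, 0), as in Fig. 2(a,c).
kx = np.linspace(-np.pi, np.pi, 2001)
E_plus, E_minus = weyl_bands(kx, np.pi, 0.0, alpha=0.5)

# In WSM phase-I, Weyl nodes are expected at kx = +/- arccos(m / (2 alpha t)).
print("predicted node |kx|:", np.arccos(0.125 / (2 * 0.5 * 1.0)))
print("minimum gap along the cut:", (E_plus - E_minus).min())
```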
_Correlation-Driven Flatband Effects on Dark Matter Detection Rates.--_ To study the flatband effect on the
Figure 1: Hubbard-\(U\) dependence of the hopping renormalization parameter \(\alpha\) (a) and double occupancy parameter \(d\) (b).
Figure 2: 2D and 3D plots for band dispersion for various values of \(\alpha=1\) [\(U=0\)] (a-b), \(0.5\) [\(U=11.7869\)] (c-d), \(0.025\) [\(U=16.4595\)] (e-f), and \(0.01\) [\(U=16.5857\)] (g-h). For the 2D plots, the wavevector path is taken to be parallel to \(k_{x}\)-axis at \(k_{y}=\pi\) for (a) and (c), and \(k_{y}=0\) for (e) and (g). \(k_{z}\) is fixed to be \(0\).
dark matter detection, we consider both the scattering and absorption mechanisms of dark matter [6; 20; 39; 40]. The detailed formalism is given in the SM [33]. The central quantity entering into both the scattering and absorption rates of DM is the dynamical, momentum-dependent dielectric function \(\overleftarrow{\mathbf{\epsilon}}^{\prime}(\mathbf{q},\omega)\). In the existing literature [20; 21; 22], this key quantity is calculated through the density-density correlator, which is valid for isotropic and weakly anisotropic media. In addition, for the calculation of scattering rates in non-interacting Dirac materials like ZrTe\({}_{5}\) [20], the material parameters were derived from density functional theory. This procedure is valid only near the Dirac nodes, which limits a realistic treatment of the dielectric function in real materials. In the present work, we propose to evaluate this quantity via the conductivity tensor according to the relation \(\epsilon_{\alpha\beta}(\mathbf{q},\omega)=\delta_{\alpha\beta}+4\pi i\sigma_{\alpha\beta}/\omega\), where \(\omega\) is the circular frequency and \(\sigma_{\alpha\beta}\) is the matrix element of the conductivity tensor. For the Hamiltonian given in Eq. (1), the dynamical conductivity tensor is calculated using the current-current correlator through the Kubo formula [41] and evaluated by integrating over the entire Brillouin zone; see SM [33] for details. This general formalism not only enables a full description of anisotropic effects in real materials but also allows for the inclusion of correlation-driven band renormalization effects in a transparent way. To discuss the correlation-driven flatband effect from a realistic materials perspective, we choose a nearest-neighbor hopping integral of \(t=0.1\) eV and \(a=4\) Å.
Figure 3(a) shows the sensitivity reach projection for DM scattering in a correlated Weyl semimetal through a light kinetically mixed dark photon for various values of the Hubbard interaction. As is plainly seen, the behavior of the fiducial cross section as a function of DM energy sits between a fully gapped standard semiconductor and a perfect Dirac system, since our Weyl semimetal system contains both features in the entire Brillouin zone. Furthermore, our results show that the optimally detectable DM, corresponding to the region of the minimal threshold of the fiducial cross section, is in the range of 10 to 100 keV. The correlation effects do not notably change the detection depth of the DM energy. Instead, they significantly reduce the threshold of the fiducial cross section from \(10^{-41}\) cm\({}^{2}\) for smaller \(U\)-values down to \(10^{-44}\) cm\({}^{2}\) for large \(U\)-values. To further quantify this sensitivity enhancement by correlation effects, Fig. 3(b) shows the DM scattering rate for various Hubbard-\(U\) values at a fixed fiducial cross section of \(10^{-42}\). It demonstrates that the optimal scattering rate increases from an order of 1 for small \(U\)-values up to \(10^{2}\) for large \(U\)-values. This result suggests that correlation-driven flatband Weyl semimetals or semiconductors have significant advantages not only for the maximal phase space availability of conventional metals and the reduced optical response of semiconductors, but also for a very narrow "effective" band gap, which can be used to optimize the DM scattering sensitivity by reducing the in-medium effect.
For the detection of kinetically mixed dark photons via the absorption mechanism, where the momentum transfer due to the deposited DM particles is much smaller than their energy (or equivalently their mass), a vertical
Figure 3: Correlation-driven flatband effect on dark matter scattering. (a) Projected reach of dark matter scattering in Weyl semimetals, for varying band renormalization parameter \(\alpha=1.0\), 0.5, 0.025, and 0.01 through a light kinetically mixed dark photon mediator with in-medium effects included. By following Ref. [20], we show the expected background-free 95% confidence level sensitivity (3 events) that can be obtained with 1 kg-yr exposure. (b) Scattering rate as a function of light dark matter mass for varying \(\alpha\) at a fiducial cross section of \(10^{-42}\).
transition between the valence band and conduction band can occur. However, because the effective in-medium mixing angle between dark and regular photons involves both real and imaginary parts of the polarization tensor (related to the dielectric tensor), the absorption probability is proportional to the ratio, \(\frac{\text{Im}\epsilon(m_{A}^{\prime})}{|\epsilon(m_{A}^{\prime})|^{2}}\), where \(m_{A}^{\prime}\) is the mass of the kinetically mixed dark photon. This indicates the DM absorption process is more complicated than the regular optical absorption process in a material.
Figure 4(a-d) shows the correlation effects on the projected sensitivity reach of the Weyl semimetal for the direct absorption of kinetically mixed dark photons. By comparing the projected depth for different values of the band renormalization \(\alpha\) (controlled by the Hubbard-\(U\) interaction), it is clear that correlation effects do not impact significantly the depth of the kinetic mixing parameter \(\varepsilon\) in contrast to DM scattering processes. Instead they reduce the upper bound of the detectable DM photons from the order of 1 eV for weak interactions down to 100 meV for strong interaction strengths. Additionally, the lower bound of DM photon masses is pushed down to 1 meV in the Weyl semimetal phases due to the presence of Weyl nodes irrespective of interaction strength. In the insulating phase (\(\alpha=0.01\)), the appearance of a narrow semiconducting gap produces a finite lower bound threshold of DM photons around 10 meV. To elucidate this behavior, we evaluate the absorption efficiency as defined by the ratio \(\mathscr{M}=\sum_{n}\frac{\text{Im}\epsilon_{n}(m_{A}^{\prime})}{|\epsilon_{n }(m_{A}^{\prime})|^{2}}\) with \(n\) being the eigen index of the dielectric tensor for the Weyl-Hubbard system. Here, we find the maximal efficiency to be governed by a delicate balance between the imaginary part and the norm squared of the dielectric function. Figure 4(e) shows the energy dependence of \(\mathscr{M}\) for various values of renormalization parameter \(\alpha\). As shown, the efficiency \(\mathscr{M}\) is bounded by the effective band width of the material. In the Weyl semimetal phases, it exhibits a tail behavior as \(\omega\to 0\); while in the insulating phase, it exhibits a gap-like nature in the low frequency region. Noticeably, due to the flattened band dispersion, the efficiency intensity is dramatically increased in WSM phase-II and the insulating phase, which significantly enhances the sensitivity.
Figure 4: Correlation-driven flatband effect on dark matter absorption. (a-d) Projected reach of kinetically mixed dark photon absorption in Weyl semimetals, for varying band renormalization parameter \(\alpha=1.0\), 0.5, 0.025, and 0.01. It is given in terms of the parameter \(\varepsilon\) for the kinetic mixing between the photon and the dark photon [42]. By following Ref. [20], we show the expected background-free 95% confidence level sensitivity (3 events) that can be obtained with 1 kg-yr exposure. (e) Dark-photon-independent, target-specific absorption efficiency as a function of dark photon energy for varying band renormalization parameter \(\alpha\).

_Concluding remarks.--_ In conclusion, we have used a model Weyl-Hubbard system as an example of strongly correlated topological quantum matter to investigate the flatband effects on DM detection. We have found that, starting from a non-interacting electron bandwidth of order 1 eV, strong correlation effects can push the fiducial cross-section threshold for the optimally detectable DM in the range of 10 to 100 keV deeper through the scattering process, while they can tune the detection regime of the dark photons (1 meV to 100 meV) through the absorption process. More importantly, we have discovered that the correlation-driven flatband behavior can significantly enhance the DM detection sensitivity in both scattering and absorption mechanisms. In addition, our results suggest that, for direct absorption of kinetically mixed dark photons, one can use flatband features as a design principle for constructing highly sensitive DM detectors with selective dark photon energy regimes. In real materials, it is required that the low-energy correlated bands are well separated from high-energy bands. Recently, the nontrivial band topology and strong correlation-driven flatbands were observed in Ce\({}_{3}\)Pd\({}_{3}\)Bi\({}_{4}\)[24; 25; 37; 43], which could be a promising candidate material platform for direct experimental verification of our dark matter detection predictions. We comment that, although we have focused on the flatband effect driven by strong correlation in a Weyl system and its impact on DM detection, the flatband behavior can also be realized in other settings, for example, quantum Hall systems [44], Kagome [45] and twisted bilayer graphene [46] systems, and we believe that the findings in the present work might be general. This expanded class of materials may provide a new paradigm for the direct detection of fundamental particles.
_Acknowledgments.--_ We thank Yonit Hochberg, Felix Kahlhoefer, Yonatan Kahn, Filip Ronning, and Kathryn Zurek for stimulating discussions. This work was carried out under the auspices of the U.S. Department of Energy (DOE) National Nuclear Security Administration under Contract No. 89233218CNA000001. It was supported by the LANL LDRD Program (C.L., S.N., B.F., & J.-X.Z.), the Center for the Advancement of Topological Semimetals, a DOE BES EFRC (Z.W. & S.G.). S.P. acknowledges funding by the European Union (ERC, CorMeTop, project 101055088). Q.S. acknowledges the primary support of the U.S. DOE BES under Award No. DE-SC0018197, and by the Robert A. Welch Foundation Grant No. C-1411. S.P and Q.S. acknowledge the hospitality of the KITP, UCSB, where support was provided by the National Science Foundation under Grant No. NSF PHY-1748958. It was also supported in part by the Center for Integrated Nanotechnologies, a DOE BES user facility, in partnership with the LANL Institutional Computing Program for computational resources.
|
2309.09679 | Dynamical Chiral Symmetry Breaking in Quantum Chromo Dynamics: Delicate
and Intricate | Dynamical Chiral Symmetry Breaking (DCSB) in Quantum Chromo Dynamics (QCD)
for the light quarks is an indispensable concept for understanding hadron
physics, i.e., the spectrum and the structure of hadrons. In Functional
Approaches to QCD the respective role of the quark propagator has been evident
since the seminal work of Nambu and Jona-Lasinio has been recast in QCD's
terms. It not only highlights one of the most important aspects of DCSB, the
dynamical generation of constituent quark masses, but also makes plausible that
DCSB is a robustly occurring phenomenon in QCD. The latter impression, however,
changes when higher $n$-point functions are taken into account. In particular,
the quark-gluon vertex, i.e., the most elementary $n$-point function describing
the full, non-perturbative quark-gluon interaction, plays a dichotomous role:
It is subject to DCSB as signalled by its scalar and tensor components but it
is also a driver of DCSB due to the infrared enhancement of most of its
components. Herein, the relevant self-consistent mechanism is elucidated. It is
pointed out that recently obtained results imply that, at least in the
covariant gauge, DCSB in QCD is located close to the critical point and is thus
a delicate effect. And, requiring a precise determination of QCD's three-point
functions, DCSB is established, in particular in view of earlier studies, by an
intricate interplay of the self-consistently determined magnitude and momentum
dependence of various tensorial components of the gluon-gluon and the
quark-gluon interactions. | Reinhard Alkofer | 2023-09-18T11:36:24Z | http://arxiv.org/abs/2309.09679v1 | # Dynamical Chiral Symmetry Breaking
###### Abstract
Dynamical Chiral Symmetry Breaking (D\(\chi\)SB) in Quantum Chromo Dynamics (QCD) for the light quarks is an indispensable concept for understanding hadron physics, i.e., the spectrum and the structure of hadrons. In Functional Approaches to QCD the respective role of the quark propagator has been evident since the seminal work of Nambu and Jona-Lasinio has been recast in QCD's terms. It not only highlights one of the most important aspects of D\(\chi\)SB, the dynamical generation of constituent quark masses, but also makes plausible that D\(\chi\)SB is a robustly occurring phenomenon in QCD. The latter impression, however, changes when higher \(n\)-point functions are taken into account. In particular, the quark-gluon vertex, i.e., the most elementary \(n\)-point function describing the full, non-perturbative quark-gluon interaction, plays a dichotomous role: It is subject to D\(\chi\)SB as signalled by its scalar and tensor components but it is also a driver of D\(\chi\)SB due to the infrared enhancement of most of its components. Herein, the relevant self-consistent mechanism is elucidated. It is pointed out that recently obtained results imply that, at least in the covariant gauge, D\(\chi\)SB in QCD is located close to the critical point and is thus a delicate effect. And, requiring a precise determination of QCD's three-point functions, D\(\chi\)SB is established, in particular in view of earlier studies, by an intricate interplay of the self-consistently determined magnitude and momentum dependence of various tensorial components of the gluon-gluon and the quark-gluon interactions.
## 1 Introduction
Investigations of QCD with the aim of gaining an understanding of hadron physics have been undertaken since QCD was formulated almost 50 years ago [1]. The recent review [2] summarises on more than 700 pages quite a number of highlights arising from these studies. With its almost 5000 references it makes clear how much this area of research has matured. Nevertheless, it is agreed upon by the community that the understanding of several essential features of QCD and their implications for hadron physics is far from being satisfactory.
In the following short notes I focus on a very specific property of QCD, namely the approximate chiral symmetry of the light quarks and how it is dynamically broken. Despite the importance of D\(\chi\)SB for the phenomenological consequences with respect to the spectrum and structure of hadrons I am concentrating herein on the underlying mechanisms for D\(\chi\)SB, or more precisely, on a detailed analysis within the picture that a super-critically strong attraction in between massless fermions triggers D\(\chi\)SB, see, e.g., [3; 4; 5]. To this end one may note that more than 60 years ago Nambu and Jona-Lasinio realised that in four spacetime dimensions a certain coupling strength has to be exceeded for D\(\chi\)SB to occur [6; 7].
At this point a disclaimer is in order: Herein, I will summarise and briefly review some investigations of D\(\chi\)SB, the choice of which is based on my own attempts within this field of research. By no means is it intended to disregard different approaches to the topic which are based on complementary techniques and/or pictures (as, e.g., an explanation of Chiral Symmetry Breaking by considering ensembles of QCD vacua containing lumps of gluon fields with non-vanishing topological winding number densities). And given the wealth of literature on D\(\chi\)SB, even if one restricts oneself (i) to the picture of a super-critically strong attraction as underlying mechanism and (ii) to functional methods, it is impossible within such a short synopsis as the one presented here to discuss or even mention all relevant
research on this topic. Such omissions are also in accord with the intention of the presented discussion: To provide evidence that D\(\chi\)SB in QCD is quite delicate, its manifestations in the properties of quarks and quark-gluon interactions exhibit many facets, and the interplay in between those features makes D\(\chi\)SB an intricate idiosyncrasy of QCD.
## 2 How robust is D\(\chi\)SB in QCD?
### The Nambu-Jona-Lasinio picture
The seminal papers by Nambu and Jona-Lasinio [6; 7] introduced the notion of D\(\chi\)SB in analogy to the BCS model of superconductivity [8; 9] formulated shortly before. The generic idea of Nambu and Jona-Lasinio was that massless (light) nucleons interact via a four-fermion interaction which in turn leads to massive nucleons and (almost) massless pions as (pseudo-) Goldstone bosons. As their starting point was to describe nucleons as massless Dirac fermions interacting via a SU\({}_{L}\)(2) \(\times\) SU\({}_{R}\)(2) chirally invariant interaction, the dynamics of their model respects chiral symmetry, but the ground state symmetry was broken down to a vector SU\({}_{L+R}\)(2) symmetry. Therefore, given the three-dimensional coset space, three pseudo-scalar massless, resp., light excitations arise as (would-be) Goldstone bosons, the pions.
From this one can take away three important lessons:
* In contradistinction to spontaneous symmetry breaking, the mechanism of dynamical symmetry breaking introduces a dichotomous nature for the (would-be) Goldstone bosons: they are not only Goldstone bosons but at the same time bound states of a highly collective nature. This is true for the original picture based on nucleons but, of course, also if one starts with light quarks interacting at the tree-level in a chirally symmetric way, see, e.g., the discussion in [10].
* D\(\chi\)SB implies the generation of dynamical masses for originally massless and/or light fermions. This solves the puzzle why in the quark model for the light quarks the so-called constituent quark masses at the order of \(\gtrsim 350\) MeV are required instead of the much smaller current quark masses.
* In contradistinction to non-relativistic superconductivity where Cooper pairs are formed at arbitrarily small couplings1[9] D\(\chi\)SB in four spacetime dimensions only takes place if the coupling exceeds a critical value. Footnote 1: As a matter of fact, this statement is only true in the mean-field approximation. When fluctuations are taken into account, a certain minimal coupling is also required to form Cooper pairs.
Although these three statements are correct, they alone provide an incomplete picture. Before explicating in which sense the second statement has to be augmented it is instructive to have a closer look at the third one. In the chiral limit any order parameter will show as a function of the coupling a non-analyticity at the critical value of the coupling. For light quarks, i.e., in the case of approximate chiral symmetry, one has a cross-over characterised by a rapid change of the would-be order parameter. This is illustrated in Fig. 1 in which for a calculation within a Nambu-Jona-Lasinio model the constituent quark mass is shown as a function of the four-fermion coupling; for the details see Ref. [10]. This calculation seems to imply that the physical point is such that the corresponding coupling is much larger than the critical one, and correspondingly all order parameters would depend only mildly on the precise value of the coupling. In the following I will argue that this behaviour seen in a Nambu-Jona-Lasinio model (and certain truncations to QCD) is not correct for QCD. The most important effect of this is the resulting sensitivity of all chiral order parameters to the precise value of the quark-quark interaction strength.
### On the Dyson-Schwinger / Bethe-Salpeter approach in Rainbow-Ladder truncation
As D\(\chi\)SB is a non-perturbative phenomenon, methods beyond perturbation theory are needed to investigate it. When it comes to the study of dynamical symmetry breaking, an approach based on Dyson-Schwinger and Bethe-Salpeter equations has been widely employed, see, e.g., the textbook [5] for an introduction. In particular, this approach has
been used widely in the context of QCD and hadron physics as documented by a number of reviews [11; 12; 13; 14; 15; 16; 17].
An essential element in this approach is the choice of a symmetry-preserving truncation of the infinite set of equations for the \(n\)-point correlation functions. Within a Poincare-covariant setting (implying, at least implicitly, the choice of a covariant gauge, cf. the discussion below in sect. 2.4) the simplest non-trivial such approximation is the rainbow-ladder truncation. It owes its name to the fact that the infinitely many re-summed diagrams look like rainbows for the quark propagator's Dyson-Schwinger equation and like ladders for the mesons' bound state equations, the Bethe-Salpeter equations.
Since Ref. [18] several hundred solutions of the quark propagator's Dyson-Schwinger equation in rainbow approximation have been published, and since Ref. [19; 20] a similar number of solutions for the pion Bethe-Salpeter equation in ladder approximation have been described in the literature. For many but not all hadrons such an approximation works astonishingly well, see, e.g., Ref. [16] for a detailed discussion and Refs. [21; 22; 23; 24; 25] for some examples of beyond-rainbow-ladder calculations.2
Footnote 2: For a study of the functional renormalisation group taking a dynamical quark-gluon vertex into account, see, e.g., [26].
For the purpose of these notes the use of the rainbow-ladder truncation will not allow us to resolve the issues raised in the preceding section. The reason is quite simple: In this truncation the quark-gluon vertex is given by a model, and for technical reasons the employed models are incomplete. Important aspects of the effect of D\(\chi\)SB on the quark-gluon interaction are thereby excluded by assumption. Phrased otherwise, one cannot find what one excludes by approximation.
A further "twist" of the rainbow-ladder truncation lies in the reduction to the tree-level tensor component in the quark-gluon vertex followed by a fitting of the overall interaction strength to phenomenological data. This then leads, as argued in the next section, to an overestimate of the coupling strength between quarks and gluons.
Figure 1: An example of the generated constituent quark mass as a function of the coupling within a Nambu–Jona-Lasinio model calculation. (Adapted from Ref. [10].)
### On the onset of the Conformal Window
It is evident from hadron phenomenology that D\(\chi\)SB takes place in QCD. For a Gedankenexperiment let us consider a gauge theory with \(N_{f}\) massless (or light) fermions in the fundamental representation of the gauge group. If \(N_{f}\) is small the anti-screening caused by the gauge bosons dominates, and consequently the running coupling will increase when tuning the scale from larger to smaller scales. Eventually, it will exceed the critical coupling, and D\(\chi\)SB will take place. At very large \(N_{f}\) the screening caused by the fermions will dominate, and asymptotic freedom will be lost.
However, in between these two extremes there will exist an interval for \(N_{f}\) where the balance between the anti-screening due to the gauge bosons and the screening due to the fermions is such that the anti-screening effect wins only slightly against the screening. Correspondingly, the coupling will increase when lowering the scale but only so weakly that the critical coupling is never exceeded. Thus, D\(\chi\)SB will not take place. In the chiral limit, such a theory possesses an infrared fixed point: it will be effectively scale-invariant in the deep infrared. For that reason the corresponding interval for \(N_{f}\) is called the conformal window.
Although the generic picture described above has been verified by studies based on coupled Dyson-Schwinger equations [27; 28], the critical value for the number of flavours at which the conformal window sets in, \(N_{f}^{crit}\), is severely underestimated when compared to studies employing other methods, see, e.g., [29; 30; 31; 32]. The decisive hint why Dyson-Schwinger studies in rainbow-ladder truncation show such a deficiency comes from the sensitivity of \(N_{f}^{crit}\) to the quark-gluon vertex if one goes (slightly) beyond the rainbow-ladder truncation. This behaviour makes plain that the distribution of the overall quark-gluon interaction strength in the sub-GeV region over several of the quark-gluon vertex tensor structures, as it happens without any doubt in QCD, is essential for understanding how the increased screening by an increasing number of massless, resp., light, quark flavours drives the system into a chirally symmetric phase with an IR fixed point.
### A note on gauge dependence
Since the seminal work by Curtis and Pennington [33] it has become evident how important the fermion-gauge-boson vertex is in achieving gauge independence in the Dyson-Schwinger approach. Although for QED substantial progress has been achieved in the studies of the role of the fermion-photon vertex for gauge independence, see, e.g., [34; 35; 36] and references therein, the corresponding question in QCD, namely the impact of the quark-gluon vertex on the gauge (in-)dependence of hadron observables, has proven to be extremely hard. Even the much more humble question how the different tensors of the quark-gluon vertex may depend on the gauge parameter within the class of linear covariant gauges and how this will affect the underlying mechanism for D\(\chi\)SB in this class of gauges seems beyond reach given the status of Dyson-Schwinger studies of the Yang-Mills sector in the linear covariant gauge, see, e.g., [37; 38; 39].
Therefore, although the question whether D\(\chi\)SB in QCD is delicate and intricate only in the Landau gauge and might be a robust phenomenon in other gauges is highly interesting, it will likely remain unanswered in the coming years. Nevertheless, in view of the insights which may be gained in studying the role of the quark-gluon vertex and its impact on D\(\chi\)SB in different gauges, an extension of the approach based on Nielsen identities (as performed in [39]) to the quark sector is certainly desirable. One might also apply the technique of interpolating gauges [40; 41; 42; 43] to relate the existing Landau and Coulomb gauge results on D\(\chi\)SB. Until such studies succeed, the analysis described herein will only be applicable to QCD in the Landau gauge.
## 3 Correlation functions in the Yang-Mills sector
In order to investigate the interplay between the quark propagator and the quark-gluon vertex within functional methods one needs to be able to determine the propagators and the three-point functions in the Yang-Mills sector accurately. In the last two decades there has been enormous progress in this direction, see, e.g., the reviews [44; 45; 46], and it is fair to say that in the Landau gauge the gluon and the ghost propagators as well as the three-gluon and the ghost-gluon vertex are well understood.
Hereby, two features are important.
First, the gluon propagator's renormalisation function as a function of the gluon virtuality \(p^{2}\) displays on the space-like side a maximum slightly below one GeV, and then decreases towards the infrared. This unusual behaviour not only signals a strongly reduced spectral dimension [47] and relates the gluon long-range properties to non-vanishing \(p^{2}\)[47; 48] but it also leads to the fact that the gluon propagator alone, i.e., without quark-gluon vertex dressings, is much too small in the sub-GeV region to trigger D\(\chi\)SB, see, e.g., the discussion in [49].
Second, the three-gluon vertex gets suppressed towards the infrared, and the corresponding form factors display, in the most accurate available calculations, even a zero at small values of \(p^{2}\). Since, in the Dyson-Schwinger equation for the quark-gluon vertex, the three-gluon vertex turns out to be decisive in determining the infrared enhancement of the quark-gluon vertex form factors, which in turn determines the size and the proximity to criticality of D\(\chi\)SB, these two observations together explain why D\(\chi\)SB in QCD in the Landau gauge is so delicate, in distinction from models ignoring these two facts.
## 4 Quark propagator and quark-gluon vertex
### Structure of the quark-gluon vertex
The arguments provided above elucidate the special role of the quark-gluon vertex in the description of D\(\chi\)SB in the Landau gauge. Unfortunately, this vertex possesses a rich structure, and it is exactly the interplay between parts of this structure which turns out to be relevant for the physics of D\(\chi\)SB.
There is one straightforward property of the fully dressed quark-gluon vertex: To the best of our knowledge it possesses the same colour structure as its tree-level counterpart.
When it comes to flavour, and in particular to the dependence on the current quark mass, a careful assessment of the properties of the substructures is in order. To this end one notes first that in the Landau gauge only those parts of the vertex are relevant which are strictly transverse to the gluon momentum. As the quark-gluon vertex transforms as a four-vector under Lorentz transformations and as a Dirac matrix under spin rotations, this leaves one with eight possible tensor structures, each tensor structure being multiplied by a form factor depending on three Lorentz-invariant variables which in turn are built from the three involved momenta.
Instead of immediately choosing a definite basis for these eight tensors it is worthwhile to discuss some generic aspects first. The Feynman integrals for the form factor multiplying the tree-level tensor are ultraviolet divergent, and thus this one form factor needs renormalisation. Choosing the other seven tensors orthogonal to the tree-level one, the corresponding form factors are determined from ultraviolet-finite expressions, and correspondingly they decrease power-like for large momenta. This leads to the expectation, later on confirmed by calculations, that these form factors are only sizeable if at least one of the involved momenta is small, i.e., in the sub-GeV region.
The eight tensors of the transverse part of the quark-gluon vertex can be grouped according to their behaviour under chiral transformations: Four of them are chirally symmetric, and thus they will be generically non-vanishing even in the chiral limit and the symmetric Wigner-Weyl phase of chiral symmetry. In that latter case the form factors of the other four chirally non-symmetric tensor structures vanish. In the Nambu-Goldstone phase they will be dynamically generated, and phrased otherwise this exactly means that D\(\chi\)SB also includes the generation of chirality-violating scalar and tensor quark-gluon interactions. As can be seen below they are quite sizeable.
When it comes to the dependence of the quark-gluon vertex on the current quark mass, i.e., on the explicit breaking of chiral symmetry, this distinction between the chirally symmetric and non-symmetric parts leads to a quite astonishing behaviour of the latter components. The Feynman diagrams for the quark-gluon vertex contain at least one quark propagator within a loop. Of course, this quark propagator goes to zero as the quark mass goes to infinity. Therefore, naively one might conclude that the fully dressed quark-gluon vertex will approach the tree-level one for larger and larger current quark masses. However, one has to take into account that the chirally non-symmetric form factors, by the mere virtue of their transformation properties, also will have a factor of at least one current quark mass and/or dynamically generated constituent quark mass in the numerator. Therefore the suppression by the current quark mass in the denominator of the integrand introduced via the quark propagator can and generically will be cancelled.
As a matter of fact, this mechanism is already at work in QED w.r.t. the Pauli term and the resulting anomalous magnetic moments (g-2): There is a cancellation of factors of the fermion mass in the QED contributions to, e.g., the (g-2) of the electron and the muon.
### Dynamical generation of scalar and tensorial quark-gluon interactions
In the following the above statements will be quantified on the basis of the results obtained in [50], see also [51; 52; 53].3 I want to emphasise here that the corresponding results of other groups would have been equally valid; the choice is only based on the availability of the data. To be concise, within this short note only results in the chiral limit will be discussed.
Footnote 3: The interested reader will find figures of the quark-gluon vertex's form factors in these references.
The following kinematics is chosen:
Gluon momentum: \(k^{\mu}=p^{\mu}-q^{\mu}\) with \(p^{\mu}\) outgoing and \(q^{\mu}\) incoming quark momentum.
Define furthermore:
(i) Normalised gluon momentum:
\[\hat{k}^{\mu}:=k^{\mu}/\sqrt{k^{2}}.\]
(ii) Averaged quark momentum, \(\frac{1}{2}(p^{\mu}+q^{\mu})\), project it transverse to gluon momentum and normalise it
\[s^{\mu}:=(\delta^{\mu\nu}-\hat{k}^{\mu}\hat{k}^{\nu})\frac{1}{2}(p^{\nu}+q^{ \nu})\,,\quad\hat{s}^{\mu}=s^{\mu}/\sqrt{s^{2}}\,.\]
As a three-point function the quark-gluon vertex, or more precisely the factors multiplying the tensors in a decomposition, depends on three Lorentz invariants, and we choose them to be \(p^{2}\), \(q^{2}\) and \(p\cdot q\). The transverse part of the quark-gluon vertex is then expanded in the form
\[\Gamma^{\mu}_{trans}(p,q;k)=\sum_{i=1}^{8}g_{i}(p^{2},q^{2};p\cdot q)\rho_{i}^{\mu}\,, \tag{1}\]
and in the following we will approximate the transverse part of the quark-gluon vertex. First, as the angular dependence turns out to be weak we will neglect it. The functions \(g_{i}(p^{2},q^{2};p\cdot q)\) are symmetric in \(p\) and \(q\), therefore we will substitute them by functions \(g_{i}(\hat{p}^{2})\) of only the averaged momentum-squared, i.e., \(\hat{p}^{2}=\frac{1}{2}(p^{2}+q^{2})\). The model functions \(g_{i}(\hat{p}^{2})\) are fitted to the numerical results at symmetric momenta, \(g_{i}(p^{2},p^{2};p\cdot q=0)\), obtained from a coupled set of quark propagator and quark-gluon vertex Dyson-Schwinger equations in the chiral limit with a model for the three-gluon vertex, see [50; 51; 52; 53] for more details.
Hereby it turns out that \(g_{1}\), \(g_{2}\), \(g_{3}\propto g_{2}\) and \(g_{4}=g_{7}\) are important, whereas, based on the underlying results for \(g_{5}\) and \(g_{8}\), it is safe to neglect these two functions.
Employing that to numerical accuracy \(g_{4}=g_{7}\), and that one observes \(g_{3}\propto g_{2}\) in the sense that \(1.45\,g_{2}(p^{2},p^{2},0)+g_{3}(p^{2},p^{2};0)\) is for all momenta smaller than 0.08, one is left **with effectively three tensor structures**.
1. Tree-level tensor structure (with \(x=\hat{p}^{2}/\ 1\ \text{GeV}^{2}\)): \(\rho_{1}^{\mu}=\gamma_{T}^{\mu}=(\delta^{\mu\nu}-\hat{k}^{\mu}\hat{k}^{\nu})\gamma^{\nu}\), with \(g_{1}(\hat{p}^{2})=1+(1.6673+0.2042x)/\left(1+0.6831x+0.0008509x^{2}\right)\) Of course, the tree-level tensor structure is allowed in the chirally symmetric phase.
2. The further sizeable chirally symmetric tensor structure is given by: \(\rho_{4}^{\mu}+\rho_{7}^{\mu}=\hat{k}\ \hat{s}^{\mu}+\hat{k}\ \hat{\chi}\ \gamma_{T}^{\mu}\), with \(g_{4}(\hat{p}^{2})=g_{7}(\hat{p}^{2})=2.589x/(0.8587+3.267x+x^{2})\) 3. The one important tensor structure due to dynamical (or explicit) chiral symmetry breaking is a combination of \(\rho_{2}^{\mu}=i\hat{s}^{\mu}\) and \(\rho_{3}^{\mu}=i\hat{k}\gamma_{T}^{\mu}\).
The corresponding form factors are \(g_{3}(\hat{p}^{2})=0.3645x/\left(0.01867+0.3530x+x^{2}\right)\), \(g_{2}(\hat{p}^{2})=-g_{3}(\hat{p}^{2})/1.45\), and the latter relation also fixes the relative weight of the 2nd and the 3rd component in the expansion (1).
Hereby, \(\rho_{2}^{\mu}\) is a Dirac scalar (i.e., proportional to the unit matrix), and \(\rho_{3}^{\mu}\) a rank-2 tensor.
Therefore, the one main conclusion of this section is that in QCD in Landau gauge a **scalar and a tensorial quark-gluon interaction** is dynamically generated. Phrased otherwise, non-perturbatively fully dressed gluons interact with quarks as if they had a spin-0 and spin-2 component.
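For orientation, the fitted parametrisations listed above can be tabulated directly. The short sketch below evaluates \(g_{1}\), \(g_{4}=g_{7}\), \(g_{3}\) and \(g_{2}=-g_{3}/1.45\) as functions of \(x=\hat{p}^{2}/1\,\text{GeV}^{2}\); nothing beyond the quoted fit formulae is assumed, and the chosen grid of \(x\) values is arbitrary.

```python
import numpy as np

def g1(x):   # tree-level form factor (chirally symmetric)
    return 1.0 + (1.6673 + 0.2042 * x) / (1.0 + 0.6831 * x + 0.0008509 * x**2)

def g4(x):   # second chirally symmetric structure, g4 = g7
    return 2.589 * x / (0.8587 + 3.267 * x + x**2)

def g3(x):   # chirality-violating (scalar/tensor) structure
    return 0.3645 * x / (0.01867 + 0.3530 * x + x**2)

def g2(x):   # fixed relative to g3 by the quoted proportionality
    return -g3(x) / 1.45

for x in np.array([0.01, 0.1, 1.0, 10.0]):   # hat{p}^2 in GeV^2, arbitrary grid
    print(f"x={x:5.2f}  g1={g1(x):6.3f}  g4=g7={g4(x):6.3f}  "
          f"g3={g3(x):6.3f}  g2={g2(x):6.3f}")
```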
### The coupled system and its lessons for D\(\chi\)SB
Putting all the above pieces together one realises that a description of D\(\chi\)SB in QCD in Landau gauge based on the fully dressed quark, gluon and ghost propagators as well as the fully dressed three-point functions displays quite an elaborate web of self-consistent interdependencies. Contrary to what has been assumed in the early days of QCD, namely that the gluon propagator is the main driver of a robust version of D\(\chi\)SB, it turns out that the intricate interplay between all the involved functions puts the whole system close to criticality. Although amongst these functions the quark-gluon vertex is the richest in structure, it is the one quantity which allows us to improve our understanding of the complicated way the fully dressed gluon interacts with fully dressed quarks in the strongly interacting domain.
From a bird's eye perspective this should not come as a surprise. It is obvious from the experimental results in hadron physics that thresholds which are apparent in scattering cross sections stem from intermediate hadron resonances. Despite its rich structure the quark-gluon vertex is still the simplest among all the QCD correlation functions which could seed such dependencies. Together with an understanding how the kinetic terms for hadrons might emerge from the QCD degrees of freedom (for a corresponding discussion, see, e.g., [54]) this opens up the possibility to map out the wealth of hadron physics with less than a dozen functions derived from QCD. Therefore, the richness of these functions and of the equations determining them should not come as a surprise.
## 5 Conclusions and Outlook
In this short note I argued that the view on D\(\chi\)SB in QCD needs to take into account the results obtained over the last two decades for the correlation functions of gluons and quarks. While some older results have seduced us to believe that D\(\chi\)SB in Strong Interactions is a robust phenomenon (simply because the interactions are strong), the more recent results urge us to re-think this point of view: It looks much more as if D\(\chi\)SB is delicate and intricate.
At this point one might argue that this distinction between robust & straightforward vs. delicate & intricate might only be an interpretational one. In my opinion there are at least three reasons to pay attention to the view advocated here. The first one is within hadron physics itself. Being aware of the sensitivity in the description of D\(\chi\)SB provides
some guidance in understanding which hadron observables will inherit this sensitivity to the details of the underlying quark and glue dynamics. In this respect the question of the formation of a hadron provides quite likely one of the main examples of an intricate process. Second, quite a number of models beyond the Standard Model, as, e.g., technicolor, exploit a potential proximity to the lower end of the conformal window to generate a "walking" coupling and correspondingly a vast separation of scales. Needless to say, an understanding of the transition to the conformal window and the physics therein (as well as close to it) will build on the details of the fate of chiral symmetry in this parameter domain. Third (but not least), I'd like to remind the reader that the Standard Model possesses another chiral transition triggered by the Higgs-Yukawa couplings and happening at the electroweak scale. (Some insight into how intricate these two chiral transitions intertwine can be inferred from the recent investigation reported in ref. [55]). Therefore, a deepened insight into the chiral properties of the Standard Model fermions will always need to include the very nature of D\(\chi\)SB within QCD.
###### Acknowledgements.
It is a pleasure to cordially thank all colleagues who collaborated with me on the topics presented here. I am in particular grateful for the insights gained in my many respective discussions with Per Amund Amundsen, William Detmold, Gernot Eichmann, Christian Fischer, Markus Hopfer, Markus Huber, Felipe Llanes-Estrada, Axel Maas, Pieter Maris, Angel Miramontes, Mario Mitter, Jan Pawlowski, Hugo Reinhardt, Alexandre Salas-Bernardez, Helios Sanchis-Alepuz, Lorenz von Smekal, Milan Vujinovic, Herbert Weigel, Richard Williams, Andreas Windisch, Fabian Zierler and Daniel Zwanziger.
|
2305.00588 | Bayesian Finite Mixtures of Ising Models | We introduce finite mixtures of Ising models as a novel approach to study
multivariate patterns of associations of binary variables. Our proposed models
combine the strengths of Ising models and multivariate Bernoulli mixture
models. We examine conditions required for the identifiability of Ising mixture
models, and develop a Bayesian framework for fitting them. Through simulation
experiments and real data examples, we show that Ising mixture models lead to
meaningful results for sparse binary contingency tables. | Zhen Miao, Yen-Chi Chen, Adrian Dobra | 2023-04-30T21:55:57Z | http://arxiv.org/abs/2305.00588v1 | # Bayesian Finite Mixtures of Ising Models
###### Abstract
We introduce finite mixtures of Ising models as a novel approach to study multivariate patterns of associations of binary variables. Our proposed models combine the strengths of Ising models and multivariate Bernoulli mixture models. We examine conditions required for the identifiability of Ising mixture models, and develop a Bayesian framework for fitting them. Through simulation experiments and real data examples, we show that Ising mixture models lead to meaningful results for sparse binary contingency tables.
## 1 Introduction
Loglinear models have been widely used in the analysis of multivariate categorical data in many scientific fields, such as biological sciences, natural language processing, data mining (Bishop et al., 1975; Christensen, 1997) due to their ability to capture first, second and higher order interactions among the observed variables. They originated from testing for the absence of interactions in \(2\times 2\times 2\) contingency tables (Bartlett, 1935), and were later generalized to multidimensional contingency tables (Roy and Kastenbaum, 1956; Darroch, 1962; Good, 1963; Goodman, 1963). In this paper we focus on Ising models (Kindermann and Snell, 1980) which can be viewed as graphical loglinear models for binary variables that include only first order interaction terms (Lauritzen, 1996). Ising models have particular relevance in network analysis of binary data (van Borkulo et al., 2014).
Various frequentist approaches for estimation of loglinear models have been proposed in the literature. For example, the existence and uniqueness of maximum likelihood estimators (MLEs) have been studied for different types of tables, e.g., three-way contingency tables (Birch, 1963), general contingency tables (Haberman, 1974; Aickin, 1979; Verbeek, 1992), and sparse contingency tables (Fienberg and Rinaldo, 2012). The computation of the MLEs can be performed using closed-form expressions (Bishop et al., 1975, Chapter 3.4), via iterative proportional fitting based on matrix inversion techniques (Goodman, 1964) or Newton-Raphson techniques (Haberman, 1974). Fienberg (2000) provides a comprehensive review. Bayesian approaches for loglinear modelling have also received a lot of interest, with a particular focus on the development of suitable prior distributions.
Key examples include the multivariate normal prior (Knuiman and Speed, 1988; Dellaportas and Forster, 1999; Brooks and King, 2001; Dobra et al., 2006), the spike-and-slab prior (Rockova, 2018), the hyper Dirichlet conjugate prior (Dawid and Lauritzen, 1993) and its generalization, as well as the Diaconis-Ylvisaker (DY) conjugate prior (Massam et al., 2009). The use of the DY prior for model selection has been studied in Dobra and Massam (2010); Letac and Massam (2012).
Sparse contingency tables raise key issues related to estimation and fit for Ising models as well as for the larger family of loglinear models. Scalability of Ising models to discrete datasets with many variables has been addressed in various ways; see, among others, Ravikumar et al. (2010). Sparse contingency tables are also characterized by the imbalance of their cell counts (Dobra and Lenkoski, 2011). Ising models, together with the richer class of loglinear models, tend to oversmooth the fitted cell probabilities, which makes them unable to capture the magnitude of the larger cell counts. To this end, several classes of mixture models have been proposed as alternatives. Multivariate Bernoulli mixture models (Carreira-Perpinan and Renals, 2000; Allman et al., 2009) have shown promising results in applications (Juan and Vidal, 2002, 2004). Other classes of mixture models for categorical data include parallel factor analysis (PARAFAC) (Bro, 1997), simplex factor models (Bhattacharya and Dunson, 2012), sparse PARAFAC (Zhou et al., 2015), the Tucker decomposition (De Lathauwer et al., 2000), and the collapsed Tucker decomposition (Johndrow et al., 2017). While these mixture models perform well with respect to fit for sparse contingency tables, they are not easily interpretable, especially when it comes to inferring relevant patterns of multivariate associations among discrete variables. We note two studies exploring the connections between the two modeling paradigms. Papathomas and Richardson (2016) leverages mixture models to reduce the number of parameters in a loglinear model, while Johndrow et al. (2017) delves into the relationship between dimension reduction in mixture models and loglinear models.
A common issue in finite mixture models is their identifiability (Teicher, 1967; Yakowitz and Spragins, 1968; Titterington et al., 1985). Identifiability has been studied for various types of mixture models such as uniform mixtures and binomial mixtures (Teicher, 1961), normal mixtures, exponential mixtures, Gamma mixtures (Teicher, 1963), Poisson mixtures (Teicher, 1960), and negative binomial mixtures (Titterington et al., 1985). For general mixture models, Teicher (1963) suggests using moment generating functions to prove identifiability, and Teicher (1967) presents sufficient conditions for the mixture of product densities. The most recent identifiability results are for multivariate Bernoulli mixture models through the conditional independence assumption (Allman et al., 2009; Xu, 2017).
The novel contributions of our work are as follows. In Section 2, we propose a novel Bayesian method for fitting finite mixtures of Ising models with the modeling goal of inferring associations between binary variables. In Sections 3 and 4, our novel framework is illustrated through simulation experiments and real data applications. We provide sufficient and necessary conditions for identifiability of the model in Section 5 with proofs in Section 7. Finally, in Section 6, we discuss our results together with several potential extensions.
## 2 Ising Mixture Models
### Notation
Let \(\mathbf{X}:=(X_{1},\ldots,X_{d})^{T}\) be a vector of \(d\in\mathbb{N}^{+}\) binary random variables, each taking values of \(0\) or \(1\). The set of possible values for \(\mathbf{X}\) is denoted as \(I:=\{0,1\}^{d}\) with elements \(\mathbf{i}=(i_{1},\ldots,i_{d})\in I\) assumed to be ordered lexicographically. The vector of cell probabilities of \(\mathbf{X}\) is \((P(\mathbf{X}=\mathbf{i}):\mathbf{i}\in I)^{T}\).
Let the main effect of \(X_{v}\) be denoted by \(\theta_{v}\in\mathbb{R}\), where \(v\in[d]=\{1,2,\ldots,d\}\). The interaction effect between \(X_{v^{\prime}}\) and \(X_{v}\) is denoted by \(\theta_{v^{\prime}v}\in\mathbb{R}\), where \(v^{\prime}<v\) and both \(v^{\prime},v\in[d]\). We say that \(\mathbf{X}\) follows an Ising model if the logarithm of the probability associated with cell \(\mathbf{i}\in I\) is proportional to a linear combination of main effects \((\theta_{v}:v\in[d])^{T}\) and interaction effects \((\theta_{v^{\prime}v}:v^{\prime}<v)^{T}\):
\[\log p_{\mathbf{i}}(\mathbf{\theta})=\sum_{v=1}^{d}\theta_{v}i_{v}+\sum_{v^{\prime}=1} ^{d-1}\sum_{v=v^{\prime}+1}^{d}\theta_{v^{\prime}v}i_{v^{\prime}}i_{v}+C(\mathbf{ \theta}),\]
where \(C(\mathbf{\theta})\) is the logarithm of the normalizing constant, \(\mathbf{\theta}\) represents the union of main effects and interaction effects, i.e., \(\mathbf{\theta}=(\theta_{1},\ldots,\theta_{d},\theta_{12},\ldots,\theta_{(d-1)d})\) and \(p_{\mathbf{i}}(\mathbf{\theta})=P(\mathbf{X}=\mathbf{i}\mid\mathbf{\theta})\). If \(\theta_{v^{\prime}v}=0\), variables \(X_{v^{\prime}}\) and \(X_{v}\) are conditionally independent given the rest. The main effects and the interaction terms can be interpreted as conditional log odds and log odds ratios (Agresti, 2002).
The Ising model can be expressed as
\[\mathbf{p}(\mathbf{\theta})=(p_{\mathbf{i}}(\mathbf{\theta}):\mathbf{i}\in I)^{T}=\exp(A^{T}\mathbf{ \theta})/[\mathbf{1}_{|I|}^{T}\exp(A^{T}\mathbf{\theta})], \tag{2.1}\]
where \(\mathbf{1}_{|I|}\) is a \(|I|\)-dimensional constant vector of all ones, \(A\in\mathbb{R}^{[d(d+1)/2]\times|I|}\) is a conventionally defined constant design matrix (Wang et al., 2019). Here applying the exponential function \(\exp(\cdot)\) to a vector means applying it element-wise to obtain a vector. We illustrate the definition of \(A\) through an example.
**Example 2.1**.: Suppose we have two binary random variables \(\mathbf{X}=(X_{1},X_{2})^{T}\) that follow an Ising model with main effects \(\theta_{1},\theta_{2}\) and interaction effect \(\theta_{12}\). The following linear combination
\[\log p_{\mathbf{i}}=\log p_{(i_{1},i_{2})}=\theta_{1}i_{1}+\theta_{2}i_{2}+ \theta_{12}i_{1}i_{2}+C(\mathbf{\theta}),\]
for \(\mathbf{i}\in I=\{(0,0),(1,0),(0,1),(1,1)\}\), is equivalent to
\[\begin{pmatrix}\log p_{(0,0)}\\ \log p_{(1,0)}\\ \log p_{(0,1)}\\ \log p_{(1,1)}\end{pmatrix}-C(\mathbf{\theta})=\begin{pmatrix}\theta_{1}\cdot 0+ \theta_{2}\cdot 0+\theta_{12}\cdot 0\cdot 0\\ \theta_{1}\cdot 1+\theta_{2}\cdot 0+\theta_{12}\cdot 1\cdot 0\\ \theta_{1}\cdot 0+\theta_{2}\cdot 1+\theta_{12}\cdot 0\cdot 1\\ \theta_{1}\cdot 1+\theta_{2}\cdot 1+\theta_{12}\cdot 1\cdot 1\end{pmatrix}= \begin{pmatrix}0\\ \theta_{1}\\ \theta_{2}\\ \theta_{1}+\theta_{2}+\theta_{12}\end{pmatrix}=A^{T}\begin{pmatrix}\theta_{1} \\ \theta_{2}\\ \theta_{12}\end{pmatrix},\]
where \(A^{T}=[[0,0,0],[1,0,0],[0,1,0],[1,1,1]]\).
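As a small illustration of these definitions, the sketch below constructs the design matrix \(A\) for a general number of variables \(d\) and evaluates the cell probabilities (2.1). The cell ordering mirrors Example 2.1, and the function names and parameter values are illustrative only.

```python
import itertools
import numpy as np

def design_matrix(d):
    """Design matrix A of eq. (2.1): rows are the d main effects followed by
    the d(d-1)/2 pairwise interactions; columns are the 2^d cells of I,
    ordered as in Example 2.1 (first index varying fastest)."""
    cells = [tuple(reversed(t)) for t in itertools.product([0, 1], repeat=d)]
    rows = [[i[v] for i in cells] for v in range(d)]
    rows += [[i[a] * i[b] for i in cells]
             for a, b in itertools.combinations(range(d), 2)]
    return np.array(rows, dtype=float), cells

def ising_probs(theta, A):
    """Cell probabilities p(theta) = exp(A^T theta) / [1^T exp(A^T theta)]."""
    unnorm = np.exp(A.T @ theta)
    return unnorm / unnorm.sum()

A, cells = design_matrix(2)
theta = np.array([0.5, -0.3, 1.0])   # (theta_1, theta_2, theta_12), illustrative values
print(cells)                         # [(0, 0), (1, 0), (0, 1), (1, 1)]
print(A.T)                           # matches A^T of Example 2.1
print(ising_probs(theta, A))         # nonnegative and sums to 1
```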
We say \(X\) follows an Ising mixture model if its vector of cell probabilities is expressed as a finite mixture of Ising models, i.e.
\[(P(\mathbf{X}=\mathbf{i}):\mathbf{i}\in I)^{T}=\mathbf{p}_{\text{mix}}(\mathbf{w},\mathbf{\Theta}):=(p_ {\text{mix},\mathbf{i}}(\mathbf{w},\mathbf{\Theta}),\mathbf{i}\in I)^{T}:=\sum_{k=1}^{K}w^{(k)} \mathbf{p}(\mathbf{\theta}^{(k)}), \tag{2.2}\]
with
\[\mathbf{p}(\mathbf{\theta}^{(k)})=\exp(A^{T}\mathbf{\theta}^{(k)})/[\mathbf{1}_{|I|}^{T}\exp(A^{T} \mathbf{\theta}^{(k)})].\]
Here \(K\in\mathbb{N}^{+}\) is the number of components, \(\mathbf{w}=(w_{k}:k\in[K])^{T}\in(0,1)^{K}\) represents the weights of the \(K\) components with \(\sum_{k\in[K]}w_{k}=1\), \(\mathbf{\theta}^{(k)}:=(\theta_{1}^{(k)},\ldots,\theta_{d}^{(k)},\theta_{12}^{(k) },\ldots,\theta_{(d-1)d}^{(k)})\in\mathbb{R}^{d(d+1)/2}\) is the vector of main effects and interaction effects for component \(k\). We define the vector of parameters of the Ising mixture model as \(\mathbf{\Theta}:=(\mathbf{\theta}^{(k)},k\in[K])\in\mathbb{R}^{Kd(d+1)/2}\).
The Ising mixture model says that the binary random vector \(\mathbf{X}\) is drawn from \(K\) subpopulations with probabilities \(\mathbf{w}\). Given the \(k\)-th subpopulation, \(k\in[K]\), \(\mathbf{X}\) follows an Ising model with parameters \(\mathbf{\theta}^{(k)}\).
We assume that the observed data consist of \(N\) i.i.d. observations of \(\mathbf{X}\) under the simple multinomial sampling scheme (Cochran, 1952). It then follows that the resulting cell counts \(\mathbf{n}:=(n_{\mathbf{i}}:\mathbf{i}\in I)^{T}\) follow a Multinomial\((N,\mathbf{p})\) distribution, where \(N=\sum_{\mathbf{i}\in I}n_{\mathbf{i}}\). Let \(\|\cdot\|_{2}\) be the Euclidean norm of a vector.
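To make this sampling scheme concrete, the following sketch forms the mixture cell probabilities (2.2) for a toy two-component model with \(d=3\) and draws the cell counts \(\mathbf{n}\) from the corresponding multinomial distribution. All numerical values (weights, effects, sample size) are illustrative and not taken from the paper's experiments.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def ising_probs(theta, d):
    """Cell probabilities of one Ising component, eq. (2.1), with parameter
    ordering (theta_1, ..., theta_d, theta_12, ..., theta_(d-1)d)."""
    cells = [tuple(reversed(t)) for t in itertools.product([0, 1], repeat=d)]
    pairs = list(itertools.combinations(range(d), 2))
    logp = np.array([
        sum(theta[v] * i[v] for v in range(d))
        + sum(theta[d + j] * i[a] * i[b] for j, (a, b) in enumerate(pairs))
        for i in cells])
    p = np.exp(logp - logp.max())
    return p / p.sum(), cells

# Illustrative two-component mixture (2.2) with d = 3 binary variables.
d, K, N = 3, 2, 500
w = np.array([0.6, 0.4])
Theta = [np.array([0.5, -0.5, 0.2, 1.5, 0.0, 0.0]),    # component 1: nonzero theta_12
         np.array([-0.5, 0.5, 0.2, 0.0, 0.0, -1.5])]   # component 2: nonzero theta_23

p_mix = sum(w[k] * ising_probs(Theta[k], d)[0] for k in range(K))
n = rng.multinomial(N, p_mix)        # cell counts under multinomial sampling
print(dict(zip(ising_probs(Theta[0], d)[1], n.tolist())))
```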
### Prior specification
In our proposed Bayesian framework, we assume that the mixture weights \(\mathbf{w}\) follow a Dirichlet distribution \(\text{Dirichlet}(\mathbf{\alpha})\) with parameters \(\mathbf{\alpha}:=(\alpha^{(k)},k\in[K])\) with \(\alpha^{(k)}>0\). This is a common choice for nonnegative parameters that sum to 1 (Olkin and Rubin, 1964). For each component \(k\), we assume that the main effects \((\theta_{v}^{(k)},v\in[d])^{T}\) independently and identically follow a normal distribution with mean \(0\) and variance \(\sigma_{1}^{2}>0\), denoted by \(N(0,\sigma_{1}^{2})\). We also assume that the interaction effects \((\theta_{v^{\prime}v}^{(k)},v^{\prime}<v)^{T}\) independently and identically follow a continuous spike-and-slab prior with spike variance \(\sigma_{0}^{2}\) and slab variance \(\sigma_{1}^{2}\) where \(0<\sigma_{0}<\sigma_{1}\). Specifically,
\[\theta_{v^{\prime}v}^{(k)}|\gamma_{v^{\prime}v}^{(k)}\sim(1-\gamma_{v^{\prime }v}^{(k)})N(0,\sigma_{0}^{2})+\gamma_{v^{\prime}v}^{(k)}N(0,\sigma_{1}^{2}), \tag{2.3}\]
where \(\gamma_{v^{\prime}v}^{(k)}\) is the indicator of the association between variables \(X_{v^{\prime}}\) and \(X_{v}\) in the \(k\)-th component, and it is assumed to follow a Bernoulli distribution with known parameter \(\beta\in(0,1)\). More precisely, \(\gamma_{v^{\prime}v}^{(k)}=0\) indicates that the interaction effect between variables \(v^{\prime}\) and \(v\) in the \(k\)th component is more likely to be close to \(0\), while a value of \(\gamma_{v^{\prime}v}^{(k)}=1\) implies that this interaction effect is more likely to be non-zero. This is particularly clear when the spike variance, \(\sigma_{0}^{2}\), is set to zero, resulting in a point-mass mixture, i.e., a mixture of a point mass at \(0\) and a normal distribution, for the interaction effects. Denote \(\mathbf{\gamma}^{(k)}:=(\gamma_{v^{\prime}v}^{(k)},v^{\prime}<v)^{T}\) and let \(\mathbf{\Gamma}:=(\mathbf{\gamma}^{(k)},k\in[K])\in\{0,1\}^{Kd(d-1)/2}\). The collection of binary random variables \(\mathbf{\Gamma}\) is of key interest since it represents the presence of non-zero interaction effects between variables in each component. Moreover, we use a directed acyclic graph to illustrate the associations between parameters in our model specification, see Figure 1.
The Normal distributions in the spike-and-slab prior can be changed to other distributions such as the Laplace distribution. The continuous spike-and-slab prior, which serves as the predecessor to the point-mass mixture, has been gaining renewed attention in recent years (Rockova and George, 2014, 2018). The continuity of this prior allows for a more fluid exploration of posteriors using both MCMC and optimization techniques due to its ability to decrease the spike variance to zero and explore the entire path of posteriors as it approaches the point mass mixture. In addition to
computational benefits, continuous mixture priors can also exhibit optimal posterior behavior, such as the oracle property of the posterior mean (Ishwaran and Rao, 2005; Rockova, 2018).
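The prior of this subsection can be summarised in a short sampling sketch. The hyperparameter values used below for \(\mathbf{\alpha}\), \(\sigma_{0}\), \(\sigma_{1}\) and \(\beta\) are illustrative choices, not values recommended here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_prior(d, K, alpha, sigma0, sigma1, beta):
    """Draw one set (w, Theta, Gamma) from the prior of Section 2.2."""
    w = rng.dirichlet(alpha)                              # mixture weights ~ Dirichlet(alpha)
    n_inter = d * (d - 1) // 2
    Theta, Gamma = [], []
    for _ in range(K):
        main = rng.normal(0.0, sigma1, size=d)            # main effects ~ N(0, sigma1^2)
        gamma = rng.binomial(1, beta, size=n_inter)       # association indicators ~ Bernoulli(beta)
        sd = np.where(gamma == 1, sigma1, sigma0)         # spike-and-slab, eq. (2.3)
        inter = rng.normal(0.0, sd)
        Theta.append(np.concatenate([main, inter]))
        Gamma.append(gamma)
    return w, Theta, Gamma

# Illustrative hyperparameters (not values recommended by the paper).
w, Theta, Gamma = sample_prior(d=4, K=2, alpha=np.ones(2),
                               sigma0=0.05, sigma1=1.0, beta=0.2)
print(w, Gamma[0], Theta[0].round(2))
```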
### Posterior distribution
To obtain the joint posterior distribution of \(\mathbf{w}\), \(\mathbf{\Theta}\) and \(\mathbf{\Gamma}\), we begin by discussing the Ising model (2.1), and then consider the Ising mixture model (2.2) with \(K\geq 2\) components.
#### 2.3.1 The Ising model
Under Multinomial sampling, we have \(\mathbf{n}\mid\mathbf{\theta}\sim\text{Multinomial}(N,\mathbf{p}(\mathbf{\theta}))\). It then follows that the probability mass function of \(\mathbf{n}\) given \(\mathbf{\theta}\) is
\[\pi(\mathbf{n}\mid\mathbf{\theta})=\frac{N!}{\prod_{\mathbf{i}\in I}n_{\mathbf{i} }!}[\mathbf{1}_{|I|}\cdot\exp(A^{T}\mathbf{\theta})]^{-N}\exp(\mathbf{n}^{T}A^{T}\mathbf{ \theta})=\frac{N!}{\prod_{\mathbf{i}\in I}n_{\mathbf{i}}!}\exp[N\ell(\mathbf{\theta}|\mathbf{ n})],\]
where \(\ell(\mathbf{\theta}\mid\mathbf{n}):=-\log[\mathbf{1}_{|I|}\cdot\exp(A^{T}\mathbf{\theta})]+ \mathbf{n}^{T}A^{T}\mathbf{\theta}/N\) is the log-likelihood function of the Ising model. Thus, the joint distribution \(\pi(\mathbf{n},\mathbf{\theta},\mathbf{\gamma})=\pi(\mathbf{n}|\mathbf{\theta})\pi(\mathbf{\theta}|\bm {\gamma})\pi(\mathbf{\gamma})\) can be further written as
\[\pi(\mathbf{n},\mathbf{\theta},\mathbf{\gamma}) =\frac{N!}{\prod_{\mathbf{i}\in I}n_{\mathbf{i}}!}\exp[N\ell(\mathbf{\theta}) ]\cdot\prod_{v\in[d]}\frac{1}{\sigma_{1}\sqrt{2\pi}}\exp\Big{(}-\frac{\theta_{ v}^{2}}{2\sigma_{1}^{2}}\Big{)}\] \[\quad\cdot\prod_{v^{\prime}<v}\Big{(}\frac{1}{\sigma_{0}\sqrt{2 \pi}}\exp\Big{(}-\frac{\theta_{v^{\prime}v}^{2}}{2\sigma_{0}^{2}}\Big{)} \Big{)}^{1-\gamma_{v^{\prime}v}}\Big{(}\frac{1}{\sigma_{1}\sqrt{2\pi}}\exp \Big{(}-\frac{\theta_{v^{\prime}v}^{2}}{2\sigma_{1}^{2}}\Big{)}\Big{)}^{ \gamma_{v^{\prime}v}}\] \[\quad\cdot\prod_{v^{\prime}<v}\beta^{\gamma_{v^{\prime}v}}(1- \beta)^{1-\gamma_{v^{\prime}v}}.\]
After some algebra, we obtain
\[\pi(\mathbf{\gamma}\mid\mathbf{n})\propto\int\exp\left[N\ell(\mathbf{\theta} \mid\mathbf{n})-\frac{\sum_{v\in[d]}\theta_{v}^{2}}{2\sigma_{1}^{2}}\right]\cdot \prod_{v^{\prime}<v}\left[\exp\left(\frac{-\theta_{v^{\prime}v}^{2}}{2\sigma_{ 0}^{2}}\right)\right]^{1-\gamma_{v^{\prime}v}}\left[\frac{\beta\sigma_{0}}{(1- \beta)\sigma_{1}}\exp\left(\frac{-\theta_{v^{\prime}v}^{2}}{2\sigma_{1}^{2}} \right)\right]^{\gamma_{v^{\prime}v}}\mathsf{d}\mathbf{\theta}. \tag{2.4}\]
Figure 1: A directed acyclic graph to illustrate the associations between parameters in our model specification.
We note that the posterior distribution of \(\mathbf{\gamma}\) is a mixture of Bernoulli distributions. Specifically, given \(\mathbf{\theta}\), the posterior distribution of \(\mathbf{\gamma}\) is
\[\gamma_{v^{\prime}v}|\mathbf{\theta}\sim\text{Bernoulli}\left(\frac{1}{1+(1-\beta)\sigma_{1}/(\beta\sigma_{0})\cdot e^{\theta^{2}_{v^{\prime}v}(1/\sigma_{1}^{2}-1/\sigma_{0}^{2})/2}}\right),v^{\prime}<v,\text{ independently}.\]
As a consequence, the posterior mean of \(\mathbf{\gamma}\) can be written as \(E_{\theta_{v^{\prime}v}\sim\pi(\mathbf{\theta}|\mathbf{n})}[r(\theta_{v^{\prime}v})], v^{\prime}<v\), where
\[\theta\mapsto r(\theta):=\frac{1}{1+(1-\beta)\sigma_{1}/(\beta\sigma_{0})\cdot e^{\theta^{2}(1/\sigma_{1}^{2}-1/\sigma_{0}^{2})/2}}, \tag{2.5}\]
For any fixed \(\sigma_{1}>0\), we have \(r(\theta)\to I(\theta\neq 0)\) as \(\sigma_{0}\to 0\) for any \(\theta\in\mathbb{R}\). This means that \(r(\theta)\) can be interpreted as a smoothed measurement of whether the interaction effect \(\theta\) is zero. For a small \(\sigma_{0}\), \(r(\theta)\) will still be close to \(1\) even if \(|\theta|\) is small. This can be interpreted as the sensitivity of measuring \(I(\theta\neq 0)\) increasing as \(\sigma_{0}\) decreases. It is worth noting that the function \(r(\theta)\) has a lower bound that is always positive due to the continuous spike-and-slab prior. Specifically, the lower bound is given by \(r(\theta)\geq r(0)=1/[1+(1-\beta)\sigma_{1}/(\beta\sigma_{0})]\), where \(r(0)\) is a function that monotonically increases as \(\sigma_{0}\) increases. The posterior density \(\pi(\mathbf{\theta}\mid\mathbf{n})\) is proportional to
\[h_{1}(\mathbf{\theta}):=\exp\left(-N\log[\mathbf{1}_{|I|}^{T}\exp(A^{T}\mathbf{\theta})]+ \mathbf{n}^{T}\mathbf{A}^{T}\mathbf{\theta}-\frac{\mathbf{\theta}^{T}\mathbf{\theta}}{2\sigma_{1} ^{2}}\right)\cdot\prod_{v^{\prime}<v}\left[\frac{(1-\beta)\sigma_{1}}{\beta \sigma_{0}}\exp\left(\frac{\theta_{v^{\prime}v}^{2}}{2\sigma_{1}^{2}}-\frac{ \theta_{v^{\prime}v}^{2}}{2\sigma_{0}^{2}}\right)+1\right].\]
In other words, \(E(\mathbf{\gamma}\mid\mathbf{n})=E_{\mathbf{\theta}\sim\pi(\mathbf{\theta}|\mathbf{n})}[r(\mathbf{\theta})]\), where applying \(r(\cdot)\) to a vector means applying it element-wise, i.e., \(r(\mathbf{\theta})=(r(\theta_{v^{\prime}v}):v^{\prime}<v)^{T}\).
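The limiting behaviour of \(r(\theta)\) described above is easy to verify numerically; the sketch below evaluates (2.5) for a few values of \(\theta\) while shrinking \(\sigma_{0}\), with illustrative values of \(\beta\) and \(\sigma_{1}\).

```python
import numpy as np

def r(theta, sigma0, sigma1, beta):
    """Posterior inclusion probability r(theta) of eq. (2.5):
    P(gamma = 1 | theta) under the continuous spike-and-slab prior."""
    ratio = (1.0 - beta) * sigma1 / (beta * sigma0)
    return 1.0 / (1.0 + ratio * np.exp(0.5 * theta**2 * (1.0 / sigma1**2 - 1.0 / sigma0**2)))

beta, sigma1 = 0.5, 1.0                     # illustrative hyperparameters
thetas = np.array([0.0, 0.05, 0.5, 2.0])
for sigma0 in [0.1, 0.01, 0.001]:           # shrinking the spike standard deviation
    vals = r(thetas, sigma0, sigma1, beta)
    print(f"sigma0={sigma0}: " + "  ".join(f"r({t})={v:.4f}" for t, v in zip(thetas, vals)))
# As sigma0 -> 0: r(0) -> 0 while r(theta) -> 1 for theta != 0, i.e. r -> 1{theta != 0}.
```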
#### 2.3.2 The Ising mixture model
The posterior density of \(\mathbf{w}\), \(\mathbf{\Theta}\) and \(\mathbf{\Gamma}\) can be written as
\[\pi(\mathbf{w},\mathbf{\Theta},\mathbf{\Gamma}\mid\mathbf{n})=\prod_{k,v^{\prime}<v}\pi(\gamma _{v^{\prime}v}^{(k)}\mid\theta_{v^{\prime}v}^{(k)})\cdot\pi(\mathbf{w},\mathbf{\Theta} \mid\mathbf{n}),\]
where the posterior distribution of \(\gamma_{v^{\prime}v}^{(k)}\) given \(\theta_{v^{\prime}v}^{(k)}\) follows a Bernoulli \(\left(r(\theta_{v^{\prime}v}^{(k)})\right)\) distribution and \(\pi(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})\) is the posterior density of \(\mathbf{w},\mathbf{\Theta}\) which is proportional to
\[h_{4}(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n}):=\exp[N\tilde{\ell}(\mathbf{w},\mathbf{\Theta} \mid\mathbf{n})]\cdot\prod_{v^{\prime}<v,k}\Big{[}\frac{(1-\beta)\sigma_{1}}{\beta \sigma_{0}}\exp\Big{(}\frac{[\theta_{v^{\prime}v}^{(k)}]^{2}}{2\sigma_{1}^{2}} -\frac{[\theta_{v^{\prime}v}^{(k)}]^{2}}{2\sigma_{0}^{2}}\Big{)}+1\Big{]},\]
where
\[\tilde{\ell}(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})=\ell(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n} )+\sum_{k\in[K]}(\alpha_{k}-1)\log(w_{k})/N-\|\mathbf{\Theta}\|_{2}^{2}/(2N\sigma_ {1}^{2}),\]
and \(\ell(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})\) is the log-likelihood function of the Ising mixture model
\[\ell(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})=\frac{\mathbf{n}^{T}}{N}\log\Big{(}\sum_{k\in[ K]}w^{(k)}\exp[A^{T}\mathbf{\theta}^{(k)}-\log(\mathbf{1}_{|I|}^{T}\exp(A^{T}\mathbf{ \theta}^{(k)}))]\Big{)}.\]
Given \(\mathbf{\Theta}\), the posterior distribution of \(\mathbf{\gamma}\) does not depend on the data \(\mathbf{n}\). The posterior distribution of \(\gamma_{v^{\prime}v}^{(k)}\) given \(\mathbf{n}\) is a mixture of Bernoulli distributions with \(E(\gamma_{v^{\prime}v}^{(k)}\mid\mathbf{n})=E_{\theta_{v^{\prime}v}^{(k)}\sim\pi( \mathbf{w},\mathbf{\Theta}|\mathbf{n})}[r(\theta_{v^{\prime}v}^{(k)})]\).
The posterior mean of \(\mathbf{w}\) is \(E(\mathbf{w}\mid\mathbf{n})=E_{\mathbf{w}\sim\pi(\mathbf{w},\mathbf{\Theta}|\mathbf{n})}\mathbf{w}.\) We are specifically interested in the posterior mean of \(\gamma^{(k)}_{v^{\prime}v}\) because its value between \(0\) and \(1\) reflects the magnitude of \(|\theta^{(k)}_{v^{\prime}v}|\) through the function \(|\theta|\mapsto r(|\theta|)\). As the true value of \(|\theta|\) increases, the posterior mean of \(\mathbf{\Gamma}\) also approaches \(1\), allowing us to identify the most significant non-zero interaction effects.
### Computing posterior means
We present importance sampling algorithms for computing the posterior means of \(\mathbf{\Gamma}\) in the Ising mixture model. In the case of a single component \(K=1\), we use the normal approximation as the sampling distribution. In the case of multiple components \(K\geq 2\), we use the normal mixture approximation instead.
#### 2.4.1 The Ising model
Recall that \(E(\mathbf{\gamma}\mid\mathbf{n})=E_{\mathbf{\theta}\sim\pi(\mathbf{\theta}|\mathbf{n})}[r(\mathbf{ \theta})]\), where \(r(\cdot)\) is defined in (2.5) and
\[\pi(\mathbf{\theta}\mid\mathbf{n})\propto h_{1}(\mathbf{\theta})=\exp(N\tilde{\ell}(\mathbf{ \theta}))\cdot\prod_{v^{\prime}<v}\left[\frac{(1-\beta)\sigma_{1}}{\beta\sigma _{0}}\exp\left(\frac{\theta^{2}_{v^{\prime}v}}{2\sigma_{1}^{2}}-\frac{\theta^{ 2}_{v^{\prime}v}}{2\sigma_{0}^{2}}\right)+1\right]\]
with \(\tilde{\ell}(\mathbf{\theta}):=\ell(\mathbf{\theta})-\mathbf{\theta}^{T}\mathbf{\theta}/[2N \sigma_{1}^{2}]\) and \(\ell(\mathbf{\theta}):=-\log[\mathbf{1}_{|I|}^{T}\exp(A^{T}\mathbf{\theta})]+\mathbf{n}^ {T}\mathbf{A}^{T}\mathbf{\theta}/N\). Note that \(\ell(\mathbf{\theta})\) is the log-likelihood function of Ising models and \(\tilde{\ell}(\mathbf{\theta})\) is its regularized version. Because the function
\[(y_{1},\ldots,y_{n})\mapsto\log[\exp(y_{1})+\ldots+\exp(y_{n})],\]
is convex, the function \(\mathbf{\theta}\mapsto\tilde{\ell}(\mathbf{\theta})\) is strictly concave, thus it has a unique maximum at the point \(\tilde{\mathbf{\theta}}:=\operatorname*{argmax}_{\mathbf{\theta}}\tilde{\ell}(\mathbf{ \theta})\). It follows from the Taylor series of the regularized log-likelihood \(\tilde{\ell}\) that
\[\tilde{\ell}(\mathbf{\theta})\approx\tilde{\ell}(\tilde{\mathbf{\theta}})-\frac{1}{2 }(\mathbf{\theta}-\tilde{\mathbf{\theta}})^{T}\widetilde{\Sigma}^{-1}(\mathbf{\theta}- \tilde{\mathbf{\theta}}),\]
where \(\widetilde{\Sigma}\) is the inverse of the Hessian matrix of \(\mathbf{\theta}\mapsto-\tilde{\ell}(\mathbf{\theta})\) at \(\tilde{\mathbf{\theta}}\). Thus \(\exp(N\tilde{\ell}(\mathbf{\theta}))\) can be approximated, up to a multiplicative constant, by a Normal density with mean \(\tilde{\mathbf{\theta}}\) and covariance \(\widetilde{\Sigma}/N\). After some algebra it follows that
\[\widetilde{\Sigma}=\left[A\left\{\frac{\operatorname*{diag}(\exp(A^{T}\mathbf{ \theta}))}{\mathbf{1}_{|I|}^{T}\exp(A^{T}\mathbf{\theta})}-\frac{\exp(A^{T}\mathbf{ \theta})\exp(A^{T}\mathbf{\theta})^{T}}{[\mathbf{1}_{|I|}^{T}\exp(A^{T}\mathbf{\theta}) ]^{2}}\right\}A^{T}+\frac{\mathbf{I}_{d(d+1)/2}}{N\sigma_{1}^{2}}\right]^{-1},\]
where \(\operatorname*{diag}(\exp(\mathrm{A}^{T}\mathbf{\theta}))\) represents a diagonal matrix with diagonal \(\exp(A^{T}\mathbf{\theta})\). Let \(h_{2}(\mathbf{\theta})\) be the density function of \(N(\tilde{\mathbf{\theta}},\widetilde{\Sigma}/N)\). We obtain that the ratio \(h_{1}(\mathbf{\theta})/h_{2}(\mathbf{\theta})\) is proportional to
\[h_{3}(\mathbf{\theta}):=\exp\left(N\tilde{\ell}(\mathbf{\theta})-N\tilde{\ell}(\tilde{ \mathbf{\theta}})+\frac{N}{2}(\mathbf{\theta}-\tilde{\mathbf{\theta}})^{T}\widetilde{ \Sigma}^{-1}(\mathbf{\theta}-\tilde{\mathbf{\theta}})\right)\cdot\prod_{v^{\prime}<v} \left[\frac{(1-\beta)\sigma_{1}}{\beta\sigma_{0}}\exp\left(\frac{\theta^{2}_{v ^{\prime}v}}{2\sigma_{1}^{2}}-\frac{\theta^{2}_{v^{\prime}v}}{2\sigma_{0}^{2}} \right)+1\right].\]
The importance sampling estimate of the posterior mean of \(\mathbf{\gamma}\) is
\[E(\mathbf{\gamma}\mid\mathbf{n})=\frac{E_{\mathbf{\theta}\sim h_{2}}r(\mathbf{\theta})h_{3}( \mathbf{\theta})}{E_{\mathbf{\theta}\sim h_{2}}h_{3}(\mathbf{\theta})}\approx\frac{\frac{1 }{M}\sum_{m\in[M]}r(\mathbf{\theta}_{m})h_{3}(\mathbf{\theta}_{m})}{\frac{1}{M}\sum_{m \in[M]}h_{3}(\mathbf{\theta}_{m})}=\sum_{m\in[M]}r(\mathbf{\theta}_{m})\frac{h_{3}( \mathbf{\theta}_{m})}{\sum_{m\in[M]}h_{3}(\mathbf{\theta}_{m})},\]
where \((\mathbf{\theta}_{m},m\in[M])^{T}\) are i.i.d. sampled from \(N(\tilde{\mathbf{\theta}},\widetilde{\Sigma}/N)\).
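As an illustration, the self-contained Python sketch below runs this procedure end to end on a toy \(d=4\) Ising model with a single non-zero interaction: it maximizes \(\tilde{\ell}\) numerically, forms \(\widetilde{\Sigma}\) from the closed-form expression above, draws from \(N(\tilde{\mathbf{\theta}},\widetilde{\Sigma}/N)\), and returns the self-normalized estimate of \(E(\mathbf{\gamma}\mid\mathbf{n})\). The construction of the design matrix \(A\) (columns stacking the indicators \(i_{v}\) and the products \(i_{v^{\prime}}i_{v}\)), the synthetic data, and the helper \(r\) are illustrative assumptions rather than code taken from the paper.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

d, N = 4, 5000
sigma0, sigma1, beta = 0.1, 1.0, 0.5
rng = np.random.default_rng(0)

# Assumed parameterization: the column of A for cell i stacks the main-effect
# indicators i_v and the pairwise products i_{v'} i_v (v' < v, 0-based below),
# so that p(theta) is proportional to exp(A^T theta).
cells = np.array(list(itertools.product([0, 1], repeat=d)))           # |I| x d
pairs = [(a, b) for a in range(d) for b in range(a + 1, d)]
A = np.vstack([cells.T] + [(cells[:, a] * cells[:, b])[None, :]
                           for a, b in pairs])                        # p x |I|
p_dim = A.shape[0]

def cell_probs(theta):
    logits = A.T @ theta
    q = np.exp(logits - logits.max())
    return q / q.sum()

# Fixed data n = N * p(theta_true): a single non-zero interaction theta_12 = 1.
theta_true = np.zeros(p_dim)
theta_true[d] = 1.0
n = N * cell_probs(theta_true)

def neg_reg_loglik(theta):                       # -tilde{l}(theta)
    logits = A.T @ theta
    lse = np.log(np.exp(logits - logits.max()).sum()) + logits.max()
    return -(n @ logits / N - lse - theta @ theta / (2 * N * sigma1**2))

theta_hat = minimize(neg_reg_loglik, np.zeros(p_dim), method="BFGS").x
p_hat = cell_probs(theta_hat)
Sigma_hat = np.linalg.inv(A @ (np.diag(p_hat) - np.outer(p_hat, p_hat)) @ A.T
                          + np.eye(p_dim) / (N * sigma1**2))

def r(theta):                                    # spike-and-slab inclusion prob
    ratio = (1 - beta) * sigma1 / (beta * sigma0)
    return 1 / (1 + ratio * np.exp(theta**2 / (2 * sigma1**2)
                                   - theta**2 / (2 * sigma0**2)))

def log_h3(theta):                               # log of the weight function h3
    diff = theta - theta_hat
    quad = 0.5 * N * diff @ np.linalg.solve(Sigma_hat, diff)
    slab = np.log((1 - beta) * sigma1 / (beta * sigma0)
                  * np.exp(theta[d:]**2 / (2 * sigma1**2)
                           - theta[d:]**2 / (2 * sigma0**2)) + 1).sum()
    return (-N * neg_reg_loglik(theta) + N * neg_reg_loglik(theta_hat)
            + quad + slab)

M = 2000
draws = rng.multivariate_normal(theta_hat, Sigma_hat / N, size=M)
log_w = np.array([log_h3(t) for t in draws])
w = np.exp(log_w - log_w.max())
w /= w.sum()
post_gamma = (w[:, None] * r(draws[:, d:])).sum(axis=0)   # E(gamma | n)
# pairs are 0-based: (0, 1) corresponds to theta_12, etc.
print(dict(zip(pairs, np.round(post_gamma, 2))))
```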
#### 2.4.2 The Ising mixture model
Computing the posterior mean of \(\gamma\) is already nontrivial in the Ising model, and it is considerably harder under the Ising mixture model. Because the log-likelihood function \(\mathbf{w},\mathbf{\Theta}\mapsto\ell(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})\) may have multiple modes, we use the normal mixture approximation (Gamerman and Lopes, 2006, pages 85-86) instead of the normal approximation. The normal mixture sampling distribution, denoted by \(h_{5}(\mathbf{w},\mathbf{\Theta})\), can be constructed as follows.
1. Initialize a random starting value of \(\mathbf{w},\mathbf{\Theta}\) and, from this starting value, find a local maximizer \(\tilde{\mathbf{w}},\tilde{\mathbf{\Theta}}\) of \(\mathbf{w},\mathbf{\Theta}\mapsto\tilde{\ell}(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})\).
2. Repeat the previous step \(J\) times. The resulting local maximizers are denoted by \(\{\tilde{\mathbf{w}}_{j},\tilde{\mathbf{\Theta}}_{j}\}_{j=1}^{J}\), with corresponding optimal values \(\tilde{\ell}_{j}:=\tilde{\ell}(\tilde{\mathbf{w}}_{j},\tilde{\mathbf{\Theta}}_{j}\mid\mathbf{n})\). Let \(\tilde{\Sigma}_{j}\) be the inverse of the Hessian matrix of \(\mathbf{\Theta}\mapsto-\tilde{\ell}(\tilde{\mathbf{w}}_{j},\mathbf{\Theta})\) at \(\tilde{\mathbf{\Theta}}_{j}\).
3. Let \(f(\mathbf{\Theta}\mid\tilde{\mathbf{\Theta}}_{j},\tilde{\Sigma}_{j}/N)\) be the density function of \(N(\tilde{\mathbf{\Theta}}_{j},\tilde{\Sigma}_{j}/N)\) and let \(f(\mathbf{w}\mid N\tilde{\mathbf{w}}_{j}+\mathbf{1}_{K})\) be the density function of Dirichlet distribution with parameters \(N\tilde{\mathbf{w}}_{j}+\mathbf{1}_{K}\).
4. Then let \(h_{5}(\mathbf{w},\mathbf{\Theta})\) be \(\sum_{j\in[J]}\frac{\exp{(\tilde{\ell}_{j})}}{\sum_{j\in[J]}\exp{(\tilde{\ell }_{j})}}f(\mathbf{w}\mid N\tilde{\mathbf{w}}_{j}+\mathbf{1}_{K})f(\mathbf{\Theta}\mid\tilde{ \mathbf{\Theta}}_{j},\tilde{\Sigma}_{j}/N)\).
The number of components, \(J\), in the normal mixture sampling distribution can be chosen arbitrarily, but using \(J=5\) or \(10\) is typically sufficient to capture the majority of important modes.
The first two steps identify the main local maxima and prepare a normal approximation around each of them. The sampling distributions are constructed in the third step. We use the normal approximation for the main and interaction effects \(\mathbf{\Theta}\). The mode of the Dirichlet sampling distribution of the weights \(\mathbf{w}\in(0,1)^{K}\) corresponds to the local maximizer \(\tilde{\mathbf{w}}_{j}\). Its parameters are scaled by \(N\) to account for the sample size, which can be justified by equating the second derivative of the objective function with that of the sampling density function in the specific scenario where the number of components \(K\) is \(2\). In the fourth step these sampling distributions are combined into the full sampling distribution, with weights given by the corresponding values of the likelihood, so that components with higher likelihood receive higher weights.
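A schematic Python sketch of this construction is given below. It assumes a user-supplied routine `reg_loglik(w, Theta)` returning \(\tilde{\ell}(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})\); the softmax re-parameterization of the weights and the use of the BFGS inverse Hessian as a stand-in for \(\tilde{\Sigma}_{j}/N\) are implementation conveniences, not part of the method as stated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import dirichlet, multivariate_normal

def build_mixture_proposal(reg_loglik, dim_theta, K, N, n_starts=5, rng=None):
    """Construct the normal-mixture sampling distribution h5 by multi-start
    optimization of the regularized log-likelihood (steps 1-4 above)."""
    rng = rng or np.random.default_rng()
    modes = []
    for _ in range(n_starts):
        x0 = rng.normal(size=(K - 1) + K * dim_theta)

        def objective(x):
            z = np.concatenate([x[:K - 1], [0.0]])
            w = np.exp(z - z.max()); w /= w.sum()          # softmax weights
            Theta = x[K - 1:].reshape(K, dim_theta)
            return -reg_loglik(w, Theta)

        res = minimize(objective, x0, method="BFGS")
        z = np.concatenate([res.x[:K - 1], [0.0]])
        w_j = np.exp(z - z.max()); w_j /= w_j.sum()
        # BFGS inverse Hessian: numerical stand-in for Sigma_j / N.
        cov_j = res.hess_inv[K - 1:, K - 1:]
        modes.append((-res.fun, w_j, res.x[K - 1:].copy(), cov_j))

    lvals = np.array([m[0] for m in modes])
    mix_w = np.exp(lvals - lvals.max()); mix_w /= mix_w.sum()

    def sample():
        j = rng.choice(len(modes), p=mix_w)                # pick a component
        _, w_j, theta_j, cov_j = modes[j]
        w = dirichlet.rvs(N * w_j + 1, random_state=rng)[0]
        Theta = multivariate_normal.rvs(theta_j, cov_j, random_state=rng)
        return w, np.asarray(Theta).reshape(K, dim_theta)

    return sample, modes, mix_w
```

The returned `sample` closure draws \((\mathbf{w},\mathbf{\Theta})\) pairs from \(h_{5}\), which can then be reweighted by \(h_{4}/h_{5}\) exactly as in the importance sampling estimate below.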
The posterior mean of \(\mathbf{\Gamma}\) is then given by
\[E(\mathbf{\Gamma}\mid\mathbf{n}) =E_{\mathbf{\Theta}\sim\pi(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})}r(\mathbf{ \Theta})=\int\pi(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})r(\mathbf{\Theta})\mathrm{d}\mathbf{ \Theta}\mathrm{d}\mathbf{w}=\int\frac{\pi(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})}{h_{5}(\bm {w},\mathbf{\Theta})}h_{5}(\mathbf{w},\mathbf{\Theta})r(\mathbf{\Theta})\mathrm{d}\mathbf{\Theta} \mathrm{d}\mathbf{w}\] \[=E_{\mathbf{w},\mathbf{\Theta}\sim h_{5}(\mathbf{w},\mathbf{\Theta})}\frac{\pi( \mathbf{w},\mathbf{\Theta}\mid\mathbf{n})}{h_{5}(\mathbf{w},\mathbf{\Theta})}r(\mathbf{\Theta})=\frac {E_{\mathbf{w},\mathbf{\Theta}\sim h_{5}(\mathbf{w},\mathbf{\Theta})}\frac{h_{4}(\mathbf{w},\mathbf{ \Theta}\mid\mathbf{n})}{h_{5}(\mathbf{w},\mathbf{\Theta})}}{E_{\mathbf{\Theta},\mathbf{w}\sim h_{5 }(\mathbf{w},\mathbf{\Theta})}\frac{h_{4}(\mathbf{w},\mathbf{\Theta}\mid\mathbf{n})}{h_{5}(\mathbf{w },\mathbf{\Theta})}},\]
where \(\pi(\mathbf{\Theta},\mathbf{w}\mid\mathbf{n})\propto h_{4}(\mathbf{\Theta},\mathbf{w}\mid\mathbf{n})\).
As the sample size \(N\) increases, the regularized log-likelihood \(\tilde{\ell}\) gets closer to the log-likelihood function \(\ell\). Its maximizer \(\tilde{\mathbf{\theta}}=\operatorname*{argmax}_{\mathbf{\theta}}\tilde{\ell}(\mathbf{\theta})\) gets closer to that of the log-likelihood function, \(\operatorname*{argmax}_{\mathbf{\theta}}\ell(\mathbf{\theta})\). If the Ising mixture model is identifiable, the mean of this sampling distribution converges to the true values of \(\mathbf{\Theta}\) as \(N\) goes to infinity. Additionally, as \(N\) increases, the covariance matrix of the sampling distribution, \(\tilde{\Sigma}/N\), converges to the zero matrix, indicating that the sampling distribution becomes increasingly concentrated at the true values of the interaction effects. It is worth noting that the classical MLE usually plays an important role in sampling algorithms for Bayesian analysis, as demonstrated in various studies (Dobra and Massam, 2010; Fienberg and Rinaldo, 2012). For Ising mixture models the regularized log-likelihood function \(\tilde{\ell}\) can also be replaced by the log-likelihood function \(\ell\) in the sampling algorithm.
Since the weights \(\mathbf{w}\) are between \(0\) and \(1\), it is not appropriate to sample them from a Normal distribution. Instead, we sample them from a Dirichlet distribution with mode \(\hat{\mathbf{w}}\). The parameter of this Dirichlet distribution is set to \(N\hat{\mathbf{w}}+1\) in order to reflect the increased concentration around the mode as the sample size \(N\) increases.
The posterior mean of \(\mathbf{\Gamma}\) remains a meaningful quantity for inferring associations between variables, even if the density function \(\pi(\mathbf{n}\mid\mathbf{\Gamma})\) is non-identifiable. This is due to the fact that the posterior distribution of \(\mathbf{\Gamma}\) is proportional to the product of the likelihood function \(\pi(\mathbf{n}\mid\mathbf{\Gamma})\) and the prior distribution of \(\mathbf{\Gamma}\), \(\pi(\mathbf{\Gamma})\). If \(\pi(\mathbf{n}\mid\mathbf{\Gamma})\) is non-identifiable, meaning that multiple values of \(\mathbf{\Gamma}\) produce the same likelihood, the posterior mean of \(\mathbf{\Gamma}\) is the weighted average of all such values as the sample size \(N\to\infty\). In particular, if the prior parameter for each element \(\gamma^{(k)}_{v^{\prime}v}\) is set to \(\beta=0.5\), the posterior mean can be interpreted as a majority vote, with a value greater than \(0.5\) indicating that the majority of values of \(\gamma^{(k)}_{v^{\prime}v}\) that produce the same likelihood are \(1\). Additionally, decreasing the prior parameter, e.g., \(\beta<0.5\), can favor sparser association structures.
The identifiability of the density function \(\pi(\mathbf{n}\mid\mathbf{\Gamma})\) is implied by the identifiability of \(\pi(\mathbf{n}\mid\mathbf{\Theta},\mathbf{w})\). We discuss necessary and sufficient identifiability conditions for \(\pi(\mathbf{n}\mid\mathbf{\Theta},\mathbf{w})\) in Section 5. On the other hand, the identifiability of \(\pi(\mathbf{n}\mid\mathbf{\Gamma})\) can be partially solved by considering the rank of the observed information matrix evaluated at the MLEs for the mixture parameters, as discussed in Fruhwirth-Schnatter (2006, Chapter 9.5.2). This is a consequence of the equivalence between local identifiability and the rank of the information matrix, as established in (Rothenberg, 1971; Catchpole and Morgan, 1997).
We determine the Fisher information matrix of an Ising mixture model with parameters \(\mathbf{w}\), \(\mathbf{\Theta}\) from the log-likelihood function \(\ell\left(\mathbf{w},\mathbf{\Theta}\mid\mathbf{X}\right)=\log p_{\operatorname*{mix},\mathbf{ X}}(\mathbf{w},\mathbf{\Theta}):\)
\[\mathcal{I}(\mathbf{w},\mathbf{\Theta}):=-E\left[\frac{\partial^{2}\log p_{ \operatorname*{mix},\mathbf{X}}(\mathbf{w},\mathbf{\Theta})}{\partial(\mathbf{w},\mathbf{\Theta})^ {2}}\right]=-\sum_{\mathbf{i}\in I}p_{\operatorname*{mix},\mathbf{i}}(\mathbf{w},\mathbf{ \Theta})\frac{\partial^{2}\log p_{\operatorname*{mix},\mathbf{i}}(\mathbf{w},\mathbf{ \Theta})}{\partial(\mathbf{w},\mathbf{\Theta})^{2}}.\]
The Fisher information matrix provides a justification for the local identifiability of an Ising mixture model - see Section 5.
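As a toy illustration of this rank diagnostic, the sketch below approximates the information matrix by finite differences for a two-component mixture on \(d=2\) binary variables in which the second component is fixed at independence; the specific model and parameter values are illustrative assumptions. A (near-)zero eigenvalue indicates a flat direction of the likelihood and hence a failure of local identifiability at that parameter point.

```python
import numpy as np

# Toy two-component mixture on d = 2 binary variables with zero main effects:
# component 1 has a single free interaction theta (cell weights 1,1,1,e^theta),
# component 2 is fixed at independence (all cell probabilities 1/4).
def log_pmix(x):
    w, theta = x
    z = np.array([1.0, 1.0, 1.0, np.exp(theta)])
    return np.log(w * z / z.sum() + (1 - w) * 0.25)

def fisher_information(log_pcell, x0, eps=1e-5):
    """-sum_i p_i(x0) * Hessian_x log p_i(x0), by central finite differences,
    i.e. the displayed expectation written as a sum over cells."""
    dim = x0.size
    p0 = np.exp(log_pcell(x0))
    info = np.zeros((dim, dim))
    for a in range(dim):
        for b in range(dim):
            ea = np.eye(dim)[a] * eps
            eb = np.eye(dim)[b] * eps
            hess_ab = (log_pcell(x0 + ea + eb) - log_pcell(x0 + ea - eb)
                       - log_pcell(x0 - ea + eb) + log_pcell(x0 - ea - eb)
                       ) / (4 * eps**2)
            info[a, b] = -(p0 * hess_ab).sum()
    return info

info = fisher_information(log_pmix, np.array([0.5, np.log(2.0)]))
# One eigenvalue is (near) zero: the likelihood is flat along a curve in
# (w, theta), so this toy mixture is not locally identifiable.
print(np.linalg.eigvalsh(info))
```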
## 3 Simulation experiments
We evaluate the empirical performance of our proposed Bayesian framework for assessing the strength of association in Ising mixture models. The number of binary variables is fixed at \(d=6\).
### The Ising model
We set the number of variables to \(d=6\), the sample size to \(N=10000\), and the main effects to
\[(\theta_{1},\theta_{2},\theta_{3},\theta_{4},\theta_{5},\theta_{6})=(1,-1,1,-1,1,-1).\]
For the interaction effects (\(\theta_{v^{\prime}v}:v^{\prime}<v\)) we used two designs. In Design A, \((\theta_{12},\theta_{13},\theta_{14},\theta_{23})=(1,-1,1,-1)\) and others are \(0\). In Design B, \((\theta_{12},\theta_{13},\theta_{14},\theta_{23})=(1,-0.5,0.2,-0.1)\) and others are \(0\).
We chose the following combinations for the hyperparameters of the prior distributions. In Setting 1, \(\sigma_{0}=0.1\), \(\sigma_{1}=1\), \(\beta=0.5\). In Setting 2, \(\sigma_{0}=0.01\), \(\sigma_{1}=1\), \(\beta=0.5\). The two settings illustrate the sensitivity of the results with respect to the ratio of the two variances in the spike-and-slab prior (2.3). The sampling size \(M\) in the importance sampling algorithm is \(10^{5}\).
The data consist of the six-way contingency table with counts given by \(N\cdot\mathbf{p}(\mathbf{\theta})\), where \(\mathbf{p}(\mathbf{\theta})\) is determined as in Equation (2.1). Keeping the data fixed, as opposed to sampling it from \(\text{Multinomial}(N,\mathbf{p}(\mathbf{\theta}))\), allows us to isolate the sampling error caused by the importance sampling procedure and to evaluate the performance of the proposed Bayesian method as the sample size \(N\) approaches infinity. The posterior mean of the association indicators, \(\mathbf{\gamma}\), is reported for each combination of designs and settings in Table 1. These results are an average of \(100\) independent replicates of the importance sampling algorithm with \(M=10^{5}\).
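For concreteness, a short Python sketch that builds the fixed table \(N\cdot\mathbf{p}(\mathbf{\theta})\) for Design A, assuming the canonical \(\{0,1\}\) parameterization \(p_{\mathbf{i}}(\mathbf{\theta})\propto\exp(\sum_{v}\theta_{v}i_{v}+\sum_{v^{\prime}<v}\theta_{v^{\prime}v}i_{v^{\prime}}i_{v})\) from (2.1); variable indices are 0-based in the code.

```python
import itertools
import numpy as np

d, N = 6, 10_000
cells = np.array(list(itertools.product([0, 1], repeat=d)))           # 64 x 6
pairs = [(a, b) for a in range(d) for b in range(a + 1, d)]

main = np.array([1, -1, 1, -1, 1, -1], dtype=float)
inter = dict.fromkeys(pairs, 0.0)
inter.update({(0, 1): 1.0, (0, 2): -1.0, (0, 3): 1.0, (1, 2): -1.0})   # Design A

# Log unnormalized probability of each cell i.
logits = cells @ main + sum(inter[(a, b)] * cells[:, a] * cells[:, b]
                            for a, b in pairs)
p = np.exp(logits - logits.max())
p /= p.sum()
n = N * p                                  # fixed six-way table, kept exact
print(n.round(2)[:8])
```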
The results in Table 1 provide evidence for the effectiveness of the proposed Bayesian framework in inferring associations between variables. Under Design A, all four non-zero interaction terms have an absolute value of \(1\), which makes them clearly distinguishable from the other interaction terms that are set to zero. In both prior settings, the estimated posterior means of the indicators corresponding to non-zero interaction effects are \(1\), while the estimated posterior means for the zero interaction effects are \(0.1\) or less.
Under Design B, three of the four non-zero interaction effects have an absolute value of \(0.5\) or less. Their smaller size makes them less distinguishable from the remaining interaction terms that are set to zero. In Setting 1 (\(\sigma_{0}=0.1\)), the estimated posterior means of the association indicators for the larger interaction effects are close to \(1\). The estimated posterior means for the smaller interaction effects are less than \(0.5\), suggesting that these associations may be harder to identify. In Setting 2 (\(\sigma_{0}=0.01\)), the estimated posterior means of the association indicators for all four non-zero interaction effects are greater than \(0.5\), demonstrating the increased effectiveness of a smaller value of \(\sigma_{0}\) in detecting small interaction effects.
The estimated posterior mean of \(\mathbf{\gamma}\) for the interaction effects that are set to zero is smaller in Setting 2 than in Setting 1. This should not be surprising, since the expectation \(E(\gamma\mid\mathbf{n})=E_{\theta\sim\pi(\mathbf{\theta}\mid\mathbf{n})}[r(\theta)]\), where \(r(\theta)\) is defined in (2.5), is monotonically increasing with respect to \(\sigma_{0}\) for small values of \(|\theta|\). As a result, when \(\sigma_{0}\) approaches \(0\), the lower bound of the posterior mean of \(\gamma\), which is \(r(0)=1/(1+(1-\beta)\sigma_{1}/\beta\sigma_{0})\), becomes smaller. On the other hand, the sampling error is larger in Setting 2 (\(\sigma_{0}=0.01\)) compared to Setting 1 (\(\sigma_{0}=0.1\)). Under both designs the importance sampling standard errors for the posterior mean estimates of association indicators are approximately \(0.0001\) in Setting 1 and \(0.1\) in Setting 2. The reason is that the sampling density function \(h_{2}(\mathbf{\theta})\) is closer to the objective density function \(\pi(\mathbf{\theta}\mid\mathbf{n})\) in Setting 1, resulting in a lower variance of the importance sampling method. As such, selecting the value of \(\sigma_{0}\) involves
balancing the stability of the importance sampling algorithm against the ability to distinguish non-zero interaction effects from zero ones.
### The Ising mixture model with two components
We set the number of variables to \(d=6\), the sample size to \(N=10000\), the weight for the first component to \(w^{(1)}=0.4\), and the main effects to \(\mathbf{\theta}^{(1)}=\mathbf{\theta}^{(2)}=(1,-1,1,-1,1,-1)\). The spike-and-slab prior parameters are set to \(\sigma_{0}=0.1\), \(\sigma_{1}=1\), \(\beta=0.5\). This is Setting 1 in the previous simulation experiment. The sampling size \(M\) in the importance sampling algorithm is \(10^{5}\). The number of components \(J\) in the normal mixture sampling distribution for Bayesian Ising mixture models is 5.
The data consist of the six-way contingency table with counts given by \(\mathbf{n}=N\cdot\mathbf{p}_{\text{mix}}(\mathbf{w},\mathbf{\Theta})\), where \(\mathbf{p}_{\text{mix}}(\mathbf{w},\mathbf{\Theta})\) is defined in Equation (2.2). We used two designs for the interaction effects. In Design C, \((\theta_{12}^{(1)},\theta_{13}^{(1)},\theta_{46}^{(2)},\theta_{56}^{(2)})=(1,-1,1,-1)\) and others are 0. In Design D, \((\theta_{12}^{(1)},\theta_{13}^{(1)},\theta_{23}^{(1)},\theta_{14}^{(2)},\theta_{15}^{(2)})=(1,-1,1,1,-1)\) and others are 0. Under both designs, we fit an Ising model as well as an Ising mixture model with two components. Table 2 presents the estimated posterior means of the association indicators for both models. These results are an average of 100 independent replicates of the importance sampling algorithm with \(M=10^{5}\). The importance sampling standard error is about 0.05 for both designs, which indicates the stability of the importance sampling algorithm. As an illustration of average computation time, the Bayesian Ising model required only 0.37 seconds, while the Bayesian two-component Ising mixture model took 5.14 minutes to complete the importance sampling algorithm under Design D and Setting 1. Both experiments were conducted on a laptop with a 1.8 GHz Intel Core i5 processor and 8 GB of memory.
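The fixed mixture table for Design C can be generated analogously; the sketch below assumes the same \(\{0,1\}\) Ising parameterization as before and simply mixes the two component probability vectors with weight \(w^{(1)}=0.4\), as in (2.2).

```python
import itertools
import numpy as np

d, N, w1 = 6, 10_000, 0.4
cells = np.array(list(itertools.product([0, 1], repeat=d)))
main = np.array([1, -1, 1, -1, 1, -1], dtype=float)

def ising_probs(inter):                    # inter maps (v', v) -> theta (0-based)
    logits = cells @ main
    for (a, b), t in inter.items():
        logits = logits + t * cells[:, a] * cells[:, b]
    q = np.exp(logits - logits.max())
    return q / q.sum()

# Design C: component 1 couples variables (1,2) and (1,3),
#           component 2 couples variables (4,6) and (5,6).
p1 = ising_probs({(0, 1): 1.0, (0, 2): -1.0})
p2 = ising_probs({(3, 5): 1.0, (4, 5): -1.0})
n = N * (w1 * p1 + (1 - w1) * p2)          # fixed table fed to both model fits
```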
Under both designs, the Ising model identifies the non-zero interaction effects from both components of the mixture. However, under Design D, it incorrectly identifies two additional interaction effects that are actually zero in both mixture components. On the other hand, the Ising mixture model with two components identifies all non-zero and all zero interaction effects in both components based on a cutoff of 0.5 under both designs. The estimated posterior mean of the weight
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \(\gamma_{12}\) & \(\gamma_{13}\) & \(\gamma_{14}\) & \(\gamma_{15}\) & \(\gamma_{16}\) & \(\gamma_{23}\) & \(\gamma_{24}\) & \(\gamma_{25}\) & \(\gamma_{26}\) & \(\gamma_{34}\) & \(\gamma_{35}\) & \(\gamma_{36}\) & \(\gamma_{45}\) & \(\gamma_{46}\) & \(\gamma_{56}\) \\ & \multicolumn{8}{c}{Posterior mean under Design A and Setting 1} \\
**1.0** & **1.0** & **1.0** &.10 &.10 & **1.0** &.10 &.10 &.10 &.10 &.10 &.10 &.10 &.10 &.10 \\ & \multicolumn{8}{c}{Posterior mean under Design B and Setting 1} \\
**1.0** & **1.0** &.34 &.10 &.10 &.14 &.10 &.10 &.10 &.10 &.10 &.10 &.10 &.10 &.10 \\ & \multicolumn{8}{c}{Posterior mean under Design A and Setting 2} \\
**1.0** & **1.0** & **1.0** &.08 &.10 & **1.0** &.08 &.07 &.08 &.06 &.09 &.07 &.07 &.06 &.09 \\ & \multicolumn{8}{c}{Posterior mean under Design B and Setting 2} \\
**1.0** & **1.0** & **1.0** &.10 &.10 & **.68** &.07 &.08 &.07 &.08 &.08 &.09 &.08 &.10 &.08 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Estimated posterior mean of \(\mathbf{\gamma}\) in Ising models under Design A, B, and Setting 1 and 2. Under both designs, the importance sampling standard errors are approximately 0.0001 for Setting 1 and 0.1 for Setting 2.
is 0.40 under Design C and 0.41 under Design D; both estimates are very close to the true value of 0.4.
This simulation setting shows that, if a one-component Ising model is fitted when the data correspond to a mixture model with two components, the inferred association structure might include pairwise effects that do not exist. This result is in a sense not surprising: under Design D, the first component has non-zero interaction effects between variables 1 and 2, variables 1 and 3, and variables 2 and 3, while the second component has non-zero interaction effects between variables 1 and 4, and variables 1 and 5. Due to this configuration, the Ising model might show non-zero interaction effects between variables 2 and 4, and variables 2 and 5. Nevertheless, the estimated posterior means of the association indicators for these pairs are lower than those of the truly non-zero associations.
## 4 Real data applications
We examine the fit of the Ising model and of the Ising mixture model with two components for two eight-way binary contingency tables. The first example focuses on the Rochdale data - a dataset that has been analyzed numerous times in the existing literature. The pairwise interactions of the Rochdale data are considered to be well understood. The second dataset is extracted from a larger survey, and has not been previously analyzed in this form. We chose it because its much larger sample size leads to significantly larger counts in some of the cells compared to the largest counts in the Rochdale data. The presence of the larger counts will test the ability of the Ising mixture models to adequately capture the imbalance between the magnitude of the largest and the smallest counts in sparse contingency tables.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline Model & \(\gamma_{12}\) & \(\gamma_{13}\) & \(\gamma_{14}\) & \(\gamma_{15}\) & \(\gamma_{16}\) & \(\gamma_{23}\) & \(\gamma_{24}\) & \(\gamma_{25}\) & \(\gamma_{26}\) & \(\gamma_{34}\) & \(\gamma_{35}\) & \(\gamma_{36}\) & \(\gamma_{45}\) & \(\gamma_{46}\) & \(\gamma_{56}\) \\ \hline \multicolumn{11}{c}{Posterior mean under Design C} \\ Ising model & **.99** & **.98** &.10 &.10 &.11 &.30 &.11 &.14 &.11 &.11 &.14 &.11 & **1.0** & **1.0** \\ \hline Ising mixture, Component 1 & **1.0** & **1.0** &.17 &.16 &.20 &.19 &.16 &.15 &.17 &.18 &.17 &.18 &.16 &.19 &.19 \\ \hline Ising mixture, Component 2 &.17 &.14 &.14 &.13 &.13 &.16 &.17 &.16 &.15 &.14 &.13 &.13 &.15 & **1.0** & **1.0** \\ \hline \multicolumn{11}{c}{Posterior mean under Design D} \\ Ising model & **.98** & **.94** & **1.0** & **1.0** &.10 & **.88** & **.69** & **.67** &.10 &.13 &.12 &.13 &.22 &.22 &.22 \\ \hline Ising mixture, Component 1 & **.99** & **.99** &.19 &.17 &.14 & **.99** &.17 &.15 &.15 &.17 &.16 &.14 &.17 &.18 &.15 \\ \hline Ising mixture, Component 2 &.19 &.14 & **.99** & **.99** &.12 &.17 &.15 &.17 &.15 &.12 &.12 &.12 &.12 &.12 &.12 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Estimated posterior mean of \(\boldsymbol{\gamma}\) in the Ising model and of \(\boldsymbol{\gamma}^{(1)},\boldsymbol{\gamma}^{(2)}\) in the Ising mixture model with two components under Design C and Design D.
### The Rochdale data
The Rochdale data (Whittaker, 1990) was collected to determine the relationships among factors affecting women's economic activity. It includes eight binary variables: 1) wife's economic activity (no, yes), 2) age of wife \(>38\) (no, yes), 3) husband's employment status (no, yes), 4) presence of children \(\leq\) 4 years old (no, yes), 5) wife's education level, high-school+ (no, yes), 6) husband's education level, high-school+ (no, yes), 7) Asian origin (no, yes), and 8) presence of other working household members (no, yes). With a sample size of 665, the resulting \(2^{8}\) contingency table, shown in Table 3, is sparse, with 165 cells having 0 counts, 217 cells having small positive counts less than 3, and several cells with counts larger than 30 or even 50.
The estimated posterior means of the association indicators \(\boldsymbol{\gamma}\) are presented in Table 5 based on an Ising model with a spike-and-slab prior with \(\sigma_{0}=0.1\), \(\sigma_{1}=1\) and \(\beta=0.5\). In what follows we consider an interaction effect between variables \(v^{\prime}\) and \(v\) to be significant if \(E(\gamma_{v^{\prime}v}\mid\boldsymbol{n})>0.5\). In Figure 2 we compare the set of non-zero interaction effects we identified with those of Whittaker (1990). We find that all the 14 significant pairwise associations found by Whittaker (1990) are also found by our Ising model. However, the Ising model determined two additional interactions: one interaction between variables 5 (wife's education level) and 7 (Asian origin) with a posterior mean of the corresponding association indicator of 0.86, and the interaction between variables 2 (age of wife \(>38\)) and 7 (Asian origin) with a posterior mean of 0.65. The posterior means of these two extra interactions are much smaller than the posterior means of the 14 interactions that were also determined by Whittaker (1990). These two extra associations seem to be reasonable, and can be attributed to the wave of Asian immigration, particularly of Asian women, in the last century (Kim,
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline
5 & 0 & 2 & 1 & 5 & 1 & 0 & 0 & 4 & 1 & 0 & 0 & 6 & 0 & 2 & 0 \\
8 & 0 & 11 & 0 & 13 & 0 & 1 & 0 & 3 & 0 & 1 & 0 & 26 & 0 & 1 & 0 \\
5 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
4 & 0 & 8 & 2 & 6 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\
17 & 10 & 1 & 1 & 16 & 7 & 0 & 0 & 0 & 2 & 0 & 0 & 10 & 6 & 0 & 0 \\
1 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
4 & 7 & 3 & 1 & 1 & 1 & 2 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
18 & 3 & 2 & 0 & 23 & 4 & 0 & 0 & 22 & 2 & 0 & 0 & 57 & 3 & 0 & 0 \\
5 & 1 & 0 & 0 & 11 & 0 & 1 & 0 & 11 & 0 & 0 & 0 & 29 & 2 & 1 & 1 \\
3 & 0 & 0 & 0 & 4 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
41 & 25 & 0 & 1 & 37 & 26 & 0 & 0 & 15 & 10 & 0 & 0 & 43 & 22 & 0 & 0 \\
0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 \\
2 & 4 & 0 & 0 & 2 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 2 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 3: Rochdale data from Whittaker (1990). The cells counts appear row by row in lexicographical order with the levels of variable 8 varying fastest and the levels of variable 1 varying slowest.
1977; Piper and Roces, 2004).
We also applied our Bayesian Ising mixture model with two components to the Rochdale data - see Table 5 and Figure 3. We fitted the mixture model with invariant main effects across components (Assumption 5.1). The number of components in the normal mixture sampling distribution is \(J=5\). The estimated posterior mean of the weight of the first component is \(E(w^{(1)}\mid\mathbf{n})=0.14\). The 16 significant associations found by the Bayesian Ising model are also identified in both components of the Ising mixture model. However, each of the two components involves additional significant associations. We note that the estimated posterior means of all the association indicators are 1. Goodness-of-fit tests for the maximum likelihood estimators show that both the Ising model (2.1) and the Ising mixture model (2.2) fit the data well with p-values of 0.42 and 1, respectively. The likelihood ratio test shows that the two-component Ising mixture model fits the data significantly better than the Ising model with a p-value \(<0.001\).
The left panel of Table 4 shows the cells containing the 10 largest observed counts in the Rochdale data together with their expected cell counts in the Ising model and the Ising mixture model. We see that both models are able to capture the largest counts reasonably well.
### The NLTCS data
We analyze a dataset extracted from the National Long Term Care Survey (NLTCS) created by the Center of Demographic Studies at Duke University (Manton et al., 1993). It includes eight binary variables that measure functional disability in daily living activities: 1) eating, 2) getting around inside, 3) dressing, 4) cooking, 5) grocery shopping, 6) getting about outside, 7) traveling, and 8) managing money. Each measure classifies study participants as healthy or disabled. The data comprise observations of elderly individuals aged 65 and above, pooled across four survey waves from 1982, 1984, 1989, and 1994. With a sample size of 21574, the resulting \(2^{8}\) contingency table,
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Cell & \begin{tabular}{c} Ob- \\ served \\ \end{tabular} & \begin{tabular}{c} Ising \\ models \\ \end{tabular} & \begin{tabular}{c} Ising \\ mixtures \\ \end{tabular} & Cell & \begin{tabular}{c} Ob- \\ served \\ \end{tabular} & \begin{tabular}{c} Ising \\ models \\ \end{tabular} &
\begin{tabular}{c} Ising \\ mixtures \\ \end{tabular} \\ \hline
10001100 & 57 & 56.78 & 58.63 & 00000000 & 4419 & 4181.60 & 4320.54 \\
11001100 & 43 & 44.61 & 42.89 & 00010000 & 2063 & 2087.60 & 2134.16 \\
1100000 & 41 & 36.40 & 36.99 & 00110000 & 1189 & 1324.38 & 1175.31 \\
11000100 & 37 & 38.81 & 37.65 & 1111111 & 1056 & 1035.05 & 1055.71 \\
10011100 & 29 & 33.29 & 29.02 & 00111111 & 764 & 702.14 & 752.47 \\
00011100 & 26 & 20.37 & 21.84 & 00110100 & 667 & 607.96 & 658.85 \\
11000101 & 26 & 23.69 & 23.33 & 00110101 & 654 & 702.29 & 657.78 \\
11000001 & 25 & 28.13 & 29.30 & 00110001 & 601 & 571.91 & 597.43 \\
10000100 & 23 & 22.70 & 23.25 & 00111101 & 549 & 577.10 & 561.45 \\
11001101 & 22 & 22.85 & 21.43 & 00010100 & 529 & 565.21 & 518.92 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Expected cell counts for the top 10 largest counts cells for the Rochdale data (left panel) and the NLTCS data (right panel). Cells are identified through their sequence of level indicators with 0 as no and 1 as yes.
shown in Table 6, is sparse, with 17.1% of cells having 0 counts, 46.9% of cells having small counts no larger than 5, and the largest 1.6% of the cell counts accounting for 40.5% of the observations.
We employ our Bayesian framework to fit an Ising model and an Ising mixture model with two components. The prior specification, the assumptions related to the invariance of main effects, and the number of components in the mixture sampling distributions were the same as the ones used for the Rochdale data. The pairwise associations that have an estimated posterior mean of their indicators above 0.5 are shown as graphs in the right panel of Figure 4. There are 21 significant interaction effects identified in the Ising model. The forward stepwise function from the R package gRim (Hojsgaard et al., 2012) identifies 20 of these 21 pairwise interactions - see the left panel of Figure 4. The additional interaction identified by our Bayesian framework involves variables 1 and 4, and has the smallest estimated posterior mean of 0.85 among the 21 associations. In the Ising mixture model, there are 17 significant interaction effects in the first component and 20 significant interaction effects in the second component - see Figure 5. The estimated posterior mean of the weight of the first component is 0.4. The patterns of significant interaction effects in both components of the Ising mixture model are sparser than the pattern inferred in the Ising model. One example of a key difference between the inferred association patterns relates to variables 1 and 4. The estimated posterior mean of the association indicator is 0.85 in the Ising model, while it is less than 0.5 in the first component of the Ising mixture model and it is equal to 1 in the second component. As such, this pairwise association is a combination of a weaker effect in one component and a very strong effect in the second component. Similar patterns with varying strength between components involve variables 1 and 3 and variables 4 and 5.
Goodness-of-fit tests for the maximum likelihood estimators show that the Ising model (2.1) does
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline Model & \(\gamma_{12}\) & \(\gamma_{13}\) & \(\gamma_{14}\) & \(\gamma_{15}\) & \(\gamma_{16}\) & \(\gamma_{17}\) & \(\gamma_{18}\) & \(\gamma_{23}\) & \(\gamma_{24}\) & \(\gamma_{25}\) & \(\gamma_{26}\) & \(\gamma_{27}\) & \(\gamma_{28}\) & \(\gamma_{34}\) \\ \hline Ising model &.23 & **1.0** & **1.0** & **.96** &.22 & **1.0** &.21 &.29 & **1.0** & **1.0** &.18 & **.65** & **1.0** &.25 \\ \hline Ising mixture, & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** \\ \hline Ising mixture, & **1.1** & **1.0** & **1.0** & **1.0** & **.92** & **1.0** & **1.0** & **1.0** & **1.0** &.29 & **1.0** & **1.0** & **1.0** \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Model & \(\gamma_{35}\) & \(\gamma_{36}\) & \(\gamma_{37}\) & \(\gamma_{38}\) & \(\gamma_{45}\) & \(\gamma_{46}\) & \(\gamma_{47}\) & \(\gamma_{48}\) & \(\gamma_{56}\) & \(\gamma_{57}\) & \(\gamma_{58}\) & \(\gamma_{67}\) & \(\gamma_{68}\) & \(\gamma_{78}\) \\ \hline Ising model & **1.0** & **.95** & **.98** &.28 &.30 &.46 & **.99** & **.99** & **1.0** & **.86** &.37 & **1.0** &.44 &.37 \\ \hline Ising mixture, & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** \\ \hline Ising mixture, & **1.0** & **1.0** & **1.0** & **1.0** &.24 & **1.0** &.34 & **1.0** & **1.0** &.29 & **1.0** & **1.0** & **1.0** &.39 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Estimated posterior means of the association indicators in the Ising model and the Ising mixture model with two components for the Rochdale data.
not fit the data well (p-value \(<0.00001\)), while the Ising mixture model (2.2) with two components fits the data well (p-value 0.21). The right panel of Table 4 shows the cells containing the 10 largest observed counts observed in the NLTCS data together with their expected cell counts in the Ising model and the Ising mixture model. We see that the Ising mixture model seems to capture the size of the largest cell counts more faithfully than the Ising model.
## 5 Identifiability of Ising mixture models
In this section we focus on exploring the identifiability of Ising mixture models from a theoretical perspective. The non-identifiability of the probability mass function given the association indicators, i.e., \(\pi(\mathbf{n}\mid\mathbf{\Gamma})\), arises from the non-identifiability of the probability mass function of Ising mixture models, i.e., \(\pi(\mathbf{n}\mid\mathbf{\Theta},\mathbf{w})\). We start with a thorough review of closely related existing results and methods in the literature, and explain why their application to Ising mixture models is challenging. Then we propose some specific sufficient conditions and necessary conditions for the identifiability of Ising mixture models. We also discuss several examples that illustrate specific cases of key interest. Proofs of all the theoretical results are given in the Appendix.
Manole and Khalili (2021, Corollary 1) provide a sufficient condition for the identifiability of finite mixtures of multinomial distributions. Ising mixture models assume that each cell count follows a multinomial distribution determined by cell probabilities, rather than a mixture of multinomial distributions - the situation studied in (Manole and Khalili, 2021). Thus their results are not directly applicable in our setting. Other related results in the literature are based on the conditional independence assumption, such as Allman et al. (2009) and Xu (2017). In their work, the conditional
\begin{table}
\begin{tabular}{r r r r r r r r r r r r r r} \hline
4419 & 97 & 67 & 472 & 2063 & 55 & 335 & 44 & 313 & 18 & 33 & 76 & 1 & 5 & 2 & 6 \\
119 & 115 & 1 & 16 & 0 & 4 & 1189 & 17 & 112 & 6 & 130 & 64 & 529 & 52 & 453 & 56 \\
2 & 22 & 13 & 116 & 10 & 67 & 47 & 0 & 2 & 0 & 1 & 92 & 0 & 4 & 0 & 4 \\
1 & 12 & 5 & 19 & 1 & 0 & 0 & 3 & 1 & 0 & 354 & 2 & 27 & 4 & 16 & 5 \\
55 & 3 & 24 & 1 & 0 & 0 & 1 & 7 & 1 & 60 & 667 & 29 & 601 & 14 & 1 & 16 \\
3 & 55 & 8 & 85 & 7 & 65 & 69 & 400 & 24 & 5 & 62 & 2 & 10 & 164 & 0 & 8 \\
2 & 6 & 3 & 15 & 3 & 5 & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 32 & 4 & 41 \\
1 & 0 & 1 & 0 & 1 & 0 & 4 & 1 & 3 & 3 & 9 & 0 & 0 & 0 & 0 & 0 \\
14 & 226 & 11 & 140 & 5 & 0 & 2 & 2 & 10 & 0 & 7 & 3 & 3 & 11 & 31 & 3 \\
0 & 4 & 0 & 2 & 125 & 8 & 134 & 81 & 654 & 34 & 0 & 34 & 1 & 5 & 25 & 215 \\
8 & 80 & 30 & 5 & 105 & 19 & 50 & 1 & 1 & 0 & 2 & 3 & 0 & 3 & 0 & 3 \\
13 & 9 & 4 & 1 & 0 & 0 & 4 & 7 & 1 & 6 & 6 & 54 & 3 & 0 & 1 & 0 \\
0 & 1 & 6 & 0 & 6 & 1 & 42 & 3 & 28 & 48 & 207 & 12 & 0 & 5 & 0 & 2 \\
4 & 34 & 1 & 13 & 6 & 38 & 549 & 19 & 180 & 21 & 196 & 27 & 2 & 14 & 72 & 88 \\
8 & 3 & 0 & 0 & 2 & 8 & 11 & 3 & 15 & 9 & 5 & 19 & 3 & 26 & 0 & 28 \\
29 & 158 & 10 & 89 & 5 & 66 & 764 & 66 & 86 & 8 & 175 & 7 & 151 & 131 & 516 & 1056 \\ \hline \end{tabular}
\end{table}
Table 6: The NLTCS data. This 16 by 16 tables shows all the possible combination of the 8 binary variables. The cells counts appear row by row in lexicographical order with Variable 8 varying fastest and Variable 1 varying slowest.
independence assumption allows for the transfer of the identifiability question to an equivalent one with fewer variables and more levels with the help of the row-wise tensor product. In the case of three categorical variables, the conditional independence assumption further enables the identifiability question to be transformed into an equivalent one by considering the rank of matrices using the triple product. However, these methods are not directly applicable to Ising mixture models since the joint probability conditional on a mixture component cannot be written as a product of marginal probabilities.
### Examples and main results related to identifiability
**Definition 5.1**.: _An Ising mixture model parameterized by weights \(\mathbf{w}\) as well as main and interaction terms \(\mathbf{\Theta}\) is identifiable if and only if different \(\mathbf{w}\), \(\mathbf{\Theta}\) imply different cell probabilities \(\mathbf{p}_{\mathrm{mix}}\)._
**Definition 5.2** (Definition 3 in Rothenberg (1971)).: _An Ising mixture model is locally identifiable at a parameter point \(\underline{\mathbf{w}},\underline{\mathbf{\Theta}}\) if and only if there exists an open neighborhood of \(\underline{\mathbf{w}},\underline{\mathbf{\Theta}}\) containing no other parameter \(\mathbf{w},\mathbf{\Theta}\) implying the same cell probabilities \(\mathbf{p}_{\mathrm{mix}}\) as \(\underline{\mathbf{w}},\underline{\mathbf{\Theta}}\)._
In the sequel the identifiability is studied based on the following assumption.
**Assumption 5.1**.: _The main effects vectors \((\theta_{v}^{(k)}:v\in[d])^{T}\) are identical for all \(k\in[K]\)._
This assumption arises from the similarity among subpopulations. Although it is assumed that each cell probability is a mixture of different components, we do not want to assume that these
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Model & \(\gamma_{12}\) & \(\gamma_{13}\) & \(\gamma_{14}\) & \(\gamma_{15}\) & \(\gamma_{16}\) & \(\gamma_{17}\) & \(\gamma_{18}\) & \(\gamma_{23}\) & \(\gamma_{24}\) & \(\gamma_{25}\) & \(\gamma_{26}\) & \(\gamma_{27}\) & \(\gamma_{28}\) & \(\gamma_{34}\) \\ \hline Ising model & \(\mathbf{1.0}\) & \(\mathbf{.90}\) & \(\mathbf{.85}\) &.32 &.12 & \(\mathbf{1.0}\) &.11 & \(\mathbf{1.0}\) & **1.0** &.29 & \(\mathbf{.98}\) & \(\mathbf{1.0}\) &.28 & \(\mathbf{1.0}\) \\ \hline Ising mixture, & \multirow{2}{*}{\(\mathbf{1.0}\)} & \(\mathbf{1.0}\) & \(\mathbf{.16}\) & \(\mathbf{.70}\) &.35 &.27 &.46 & \(\mathbf{1.0}\) & \(\mathbf{.97}\) &.33 & \(\mathbf{.72}\) &.28 & \(\mathbf{1.0}\) & \(\mathbf{1.0}\) \\ \cline{1-1} Component 2 & & & & & & & & & & & & & & \\ \hline \hline \end{tabular}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline Model & \(\gamma_{35}\) & \(\gamma_{36}\) & \(\gamma_{37}\) & \(\gamma_{38}\) & \(\gamma_{45}\) & \(\gamma_{46}\) & \(\gamma_{47}\) & \(\gamma_{48}\) & \(\gamma_{56}\) & \(\gamma_{57}\) & \(\gamma_{58}\) & \(\gamma_{67}\) & \(\gamma_{68}\) & \(\gamma_{78}\) \\ \hline Ising model &.46 & \(\mathbf{1.0}\) & 0.1 & \(\mathbf{1.0}\) & \(\mathbf{.93}\) & \(\mathbf{1.0}\) & \(\mathbf{1.0}\) & \(\mathbf{1.0}\) & \(\mathbf{1.0}\) & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** & **1.0** \\ \hline Ising mixture, & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{.26}\)} & \multirow{2}{*}{\(.18\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} \\ Component 1 & & & & & & & & & & & & & \\ \hline Ising mixture, & \multirow{2}{*}{\(\mathbf{.11}\)} & \multirow{2}{*}{\(\mathbf{.99}\)} & \multirow{2}{*}{\(.15\)} & \multirow{2}{*}{\(.30\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{.99}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} & \multirow{2}{*}{\(\mathbf{1.0}\)} \\ Component 2 & & & & & & & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 7: The posterior means of \(\mathbf{\gamma}^{(1)}\) and \(\mathbf{\gamma}^{(2)}\) inferred by two-component Ising mixture models for the NLTCS data. The number of component in the normal mixture sampling distribution is \(J=5\). The posterior mean of the weight of the first component, i.e. \(E(w^{(1)}\mid\mathbf{n})\), is 0.4.
components are entirely different from each other. By assuming identical main effects across all components, we can account for the similarity among components and also allow for heterogeneity of interaction effects in different components. Lemma 5.1 shows that this assumption simplifies the study of sufficient conditions and necessary conditions, allowing us to bypass main effects and focus on the identifiability of interaction effects.
**Lemma 5.1**.: _Under Assumption 5.1, the identifiability of an Ising mixture model remains the same when all main effects are assumed to be \(0\)._
In what follows the parameter vector \(\mathbf{\theta}\) includes only interaction effects for each component, i.e., \(\mathbf{\theta}^{(k)}=(\theta^{(k)}_{v^{\prime}v}:v^{\prime}<v)^{T}\) for each \(k\in[K]\).
The second assumption arises when interaction effects in only one component are unknown.
**Assumption 5.2**.: _All interaction effects in every component except the first component are known. In other words, \(\mathbf{\theta}^{(k)}\) are fixed and known for all \(k\geq 2\)._
However, the following example shows that this assumption is not sufficient for the local identifiability of the Ising mixture model.
**Example 5.1**.: Suppose \(d=2\) and \(\theta^{(2)}_{12}=0\). Then this mixture model is not locally identifiable for any \(\theta^{(1)}_{12}\in\mathbb{R}\) and \(w^{(1)}\in(0,1)\).
The proofs for this example and the following examples are deferred to the appendix.
Therefore, we need another assumption on the weights of components \(\mathbf{w}\).
**Assumption 5.3**.: _The weights of components \(w^{(k)}\in(0,1),k\in[K]\) are fixed and known._
The following proposition states our first sufficient conditions for the local identifiability of Ising mixture models, which is particularly useful when some unknown subpopulation is mixed with other well-known populations.
**Proposition 5.1**.: _Assumptions 5.1, 5.2 and 5.3 are sufficient for local identifiability of Ising mixture models._
Next we connect Ising mixture models with mixtures of graphical structures. We represent an Ising model with parameters \(\mathbf{\theta}\) by an undirected graph \(G(\mathbf{\theta}):=G(V,\mathbb{E}(\mathbf{\theta}))\). In this representation, vertices \(V=\{1,2,\ldots,d\}\) are associated with each variable, and edges \(\mathbb{E}(\mathbf{\theta}):=\{(v^{\prime},v):v^{\prime}<v,\theta_{v^{\prime}v} \neq 0\}\) are associated with each non-zero pairwise interaction. Each missing edge corresponds with a pairwise interaction effect that is zero. The edges in \(\mathbb{E}(\mathbf{\theta})\) are called activation edges. We also define \(\mathbb{V}(\mathbf{\theta})\) as the set of vertices with degree at least \(1\). The vertices in \(\mathbb{V}(\mathbf{\theta})\) are called activation vertices or activation variables. We define the projection of the graph \(G(\mathbf{\theta})\) onto its activation variables \(\mathbb{V}(\mathbf{\theta})\) as the subgraph of \(G(\mathbf{\theta})\) determined by \(\mathbb{V}(\mathbf{\theta})\). This projection is denoted by \(G(\mathbf{\theta}\mid\mathbb{V}(\mathbf{\theta}))\).
The graphical representation of Ising mixture models are constructed at the level of their mixture components. For component \(k\in[K]\) with parameters \(\mathbf{\theta}^{(k)}\), we construct its undirected graph \(G(\mathbf{\theta}^{(k)})\). The set of activation variables and activation edges for component \(k\) are denoted by \(\mathbb{V}(\mathbf{\theta}^{(k)})\) and \(\mathbb{E}(\mathbf{\theta}^{(k)})\), respectively. To illustrate these definitions, consider the following example.
**Example 5.2**.: Suppose \(d=4\), \(K=2\), \(\theta^{(1)}_{v^{\prime}v}=0\) for all \((v^{\prime},v)\neq(1,2)\) and \(\theta^{(2)}_{v^{\prime}v}=0\) for all \((v^{\prime},v)\neq(3,4)\). Then \(G(\mathbf{\theta}^{(1)})=(\{1,2,3,4\},\{(1,2)\})\) and \(G(\mathbf{\theta}^{(2)})=(\{1,2,3,4\},\{(3,4)\})\). The activation variables in component 1 are \(\mathbb{V}(\mathbf{\theta}^{(1)})=\{1,2\}\). The activation variables in component 2 are \(\mathbb{V}(\mathbf{\theta}^{(2)})=\{3,4\}\). The activation edges in component 1 are \(\mathbb{E}(\mathbf{\theta}^{(1)})=\{(1,2)\}\). The activation edges in component 2 are \(\mathbb{E}(\mathbf{\theta}^{(2)})=\{(3,4)\}\). \(G(\mathbf{\theta}^{(1)}\mid\mathbb{V}(\mathbf{\theta}^{(1)}))=(\{1,2\},\{(1,2)\})\) and \(G(\mathbf{\theta}^{(2)}\mid\mathbb{V}(\mathbf{\theta}^{(2)}))=(\{3,4\},\{(3,4)\})\). Please see Figure 6.
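A small Python sketch of these graph-theoretic notions is given below; the specific interaction values are arbitrary non-zero placeholders, since only the zero/non-zero pattern matters for \(\mathbb{V}(\mathbf{\theta})\) and \(\mathbb{E}(\mathbf{\theta})\).

```python
from itertools import combinations

def activation_structure(theta, d, tol=0.0):
    """Activation edges E(theta) = {(v', v): theta_{v'v} != 0} and activation
    vertices V(theta) = vertices of degree >= 1, for one mixture component.
    `theta` maps pairs (v', v) with v' < v (1-based) to interaction values."""
    edges = {e for e in combinations(range(1, d + 1), 2)
             if abs(theta.get(e, 0.0)) > tol}
    vertices = {v for e in edges for v in e}
    return vertices, edges

# Example 5.2: d = 4, component 1 couples (1, 2), component 2 couples (3, 4).
print(activation_structure({(1, 2): 1.3}, d=4))    # ({1, 2}, {(1, 2)})
print(activation_structure({(3, 4): -0.7}, d=4))   # ({3, 4}, {(3, 4)})
```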
We are now prepared to formulate an essential assumption for identifiability based on graphical representations and activation variables.
**Assumption 5.4**.: _The activation variables in different components of an Ising mixture model are mutually exclusive, i.e., \(\mathbb{V}(\mathbf{\theta}^{(k)})\bigcap\mathbb{V}(\mathbf{\theta}^{(k^{\prime})})=\emptyset\) for all \(k\neq k^{\prime}\)._
The next result provides sufficient conditions for identifiability. This result is particularly useful when different components have different activation variables.
**Proposition 5.2**.: _Assumptions 5.1, 5.3 and 5.4 are jointly sufficient for local identifiability of Ising mixture models._
The following example illustrates that Assumption 5.4 alone does not guarantee local identifiability, and therefore Assumption 5.3 is required for the validity of Proposition 5.2.
**Example 5.3** (Assumption 5.3 violated).: Suppose \(K=2\), \(d=4\). Only \(\theta^{(1)}_{12}\), \(\theta^{(2)}_{34}\) and \(w^{(1)}\) are the nonzero unknown parameters. All other interaction effects are fixed at zero. The resulting mixture model is not locally identifiable for any \(\theta^{(1)}_{12},\theta^{(2)}_{34}\in\mathbb{R}\) and \(w^{(1)}\in(0,1)\).
Based on this example, it can be immediately inferred that any two-component Ising mixture model with at least one non-zero interaction effect in each component is not locally identifiable in general.
The following example shows that Assumption 5.4 is required for the validity of Proposition 5.2.
**Example 5.4** (Assumption 5.4 violated).: Let \(d=4\), \(w^{(1)}=w^{(2)}=.5\) and \(\theta^{(k)}_{13}=\theta^{(k)}_{14}=\theta^{(k)}_{23}=\theta^{(k)}_{24}=0\) for \(k=1,2\). The unknown parameters are \(\theta^{(k)}_{12},\theta^{(k)}_{34}\) for \(k=1,2\) only. Then this mixture model is not locally identifiable.
## 6 Discussion
In this paper we developed finite mixtures of Ising models within a Bayesian framework as an effective alternative to infer associations between binary variables. By combining Ising models with multivariate Bernoulli mixture models, our contribution addresses the current gap in the literature between log-linear models and various types of mixture models for categorical data. There are several key reasons why addressing this gap was a worthwhile effort. First, Ising mixture models not only effectively fit sparse data, but also offer interpretable results. If data are generated from an Ising mixture model with sparse interaction effects, using Ising models can result in denser and less confident interaction
effects. Although Ising mixture models have more parameters than Ising models, they often infer fewer but more significant non-zero interaction effects. This feature of Ising mixture models was illustrated in the simulation experiments.
Second, Ising mixture models can be viewed as an extension of multivariate Bernoulli mixture models, breaking the conditional independence assumption by introducing interaction effects for each component. Inferring interaction effects from Ising mixture models can lead to the identification of mixtures of graphical loglinear models, providing insight into multivariate patterns of associations of subpopulations. As graphical models become increasingly popular in fields such as social networks, it is likely that Ising mixture models will also gain attention in these areas. Furthermore, the development of Ising mixture models can potentially contribute to the development of more general finite mixtures of graphical loglinear models.
Ising mixture models are a powerful tool to handle multi-modal spike-and-slab posteriors in data. Research has shown that mixture models can be used to approximate these multi-modal posteriors in linear regressions (Rockova, 2018). Reporting a single model is a misleading reflection of overall model uncertainty. We studied Ising mixture models with spike-and-slab prior distributions to infer associations between binary variables. We have shown that our framework is not only effective in fitting sparse contingency tables, but also leads to interpretable results. We established sufficient and necessary conditions for the identifiability of Ising mixture models without relying on the assumption of conditional independence. More work is certainly needed to propose general conditions for identifiability of Ising mixture models, that will lead to improving the interpretation of the inferred associations and reduce the risk of overfitting.
Our proposed framework can be extended in at least two key directions. It can be generalized to handle categorical random variables with more than two levels starting from the Potts model (Wu, 1982), although this should be done with care given the increase in the number of parameters. Furthermore, the inclusion of higher-order interaction terms can also be beneficial to allow the study of more complex interaction patterns of associations.
## 7 Appendix
Proof of Lemma 5.1.: Suppose \(\underline{\mathbf{w}},\underline{\mathbf{\Theta}},\underline{\mathbf{\Gamma}}\) are true values of parameters in the Ising mixture model. It follows from the likelihood equation of \(\mathbf{X}=\mathbf{0}\) as well as \(X_{1}=1,X_{2}=\ldots=X_{d}=0\) that
\[\sum_{k\in[K]}\frac{w_{k}}{Z_{k}}=\sum_{k\in[K]}\frac{\underline{w}_{k}}{ \underline{Z}_{k}}\text{ as well as }\sum_{k\in[K]}\frac{w_{k}\exp(\theta_{1})}{Z_{k}}= \sum_{k\in[K]}\frac{\underline{w}_{k}\exp(\underline{\theta}_{1})}{\underline{ Z}_{k}},\]
where \(Z_{k},\underline{Z}_{k},k\in[K]\), are normalization constants. Dividing the second equation by the first equation, it follows that \(\theta_{1}=\underline{\theta}_{1}\). It then follows from analogous arguments that \(\theta_{v}=\underline{\theta}_{v}\) for all \(v\in[d]\). This lemma then follows immediately from the claim that all likelihood equations with the true values of the main effects are equivalent to likelihood equations with main effects \(0\).
Proof of Example 5.1.: Let \(\underline{\theta}_{12}^{(1)}\) and \(\underline{w}^{(1)}\) be the true value of parameters. Then we only need to
prove that the solutions to the following equations are not unique:
\[\frac{w^{(1)}}{3+\eta^{(1)}_{12}}+\frac{1-w^{(1)}}{4}=p_{00}=\frac{\underline{w}^ {(1)}}{3+\underline{\eta}^{(1)}_{12}}+\frac{1-\underline{w}^{(1)}}{4}\text{ and }\frac{w^{(1)}\eta^{(1)}_{12}}{3+\eta^{(1)}_{12}}+\frac{1-w^{(1)}}{4}=p_{11}= \frac{\underline{w}^{(1)}\underline{\eta}^{(1)}_{12}}{3+\underline{\eta}^{(1) }_{12}}+\frac{1-\underline{w}^{(1)}}{4},\]
where \(\eta^{(1)}_{12}=\exp(\theta^{(1)}_{12})\) and \(\underline{\eta}^{(1)}_{12}=\exp(\underline{\theta}^{(1)}_{12})\). It follows from the second equation that
\[\eta^{(1)}_{12}=\underline{\eta}^{(1)}_{12}+\frac{(1-\underline{\eta}^{(1)}_{12})(w^{(1)}-\underline{w}^{(1)})/4}{\frac{\underline{w}^{(1)}}{3+\underline{\eta}^{(1)}_{12}}+\frac{w^{(1)}-\underline{w}^{(1)}}{4}}.\]
Substituting this expression for \(\eta^{(1)}_{12}\) into the first equation shows that the first equation is automatically satisfied for every \(w^{(1)}\in(0,1)\), that is,
\[\frac{w^{(1)}}{3+\eta^{(1)}_{12}}+\frac{1-w^{(1)}}{4}=\frac{\underline{w}^{(1 )}}{3+\underline{\eta}^{(1)}_{12}}+\frac{1-\underline{w}^{(1)}}{4}.\]
Therefore, this model is not locally identifiable.
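This non-identifiability is easy to verify numerically. The sketch below takes \(\underline{w}^{(1)}=0.5\) and \(\underline{\theta}^{(1)}_{12}=\log 2\), picks a different weight \(w^{(1)}=0.25\), solves the second (the \(p_{11}\)) equation for \(\eta^{(1)}_{12}\), and checks that the two distinct parameter points produce identical cell probabilities; the chosen numbers are illustrative.

```python
import numpy as np

def cell_probs(w, theta):
    """d = 2, zero main effects: cells (0,0), (0,1), (1,0), (1,1) with
    component-1 weights 1, 1, 1, e^theta and an independent component 2."""
    eta = np.exp(theta)
    comp1 = np.array([1, 1, 1, eta]) / (3 + eta)
    return w * comp1 + (1 - w) * 0.25

w0, theta0 = 0.5, np.log(2.0)                  # one parameter point ...
w1 = 0.25                                      # ... and a different weight
E = w0 * 2 / 5 + (w1 - w0) / 4                 # right-hand side of the p_11 equation
eta1 = 3 * E / (w1 - E)                        # solve for the new interaction
print(cell_probs(w0, theta0))                  # identical cell probabilities
print(cell_probs(w1, np.log(eta1)))            # for two distinct parameters
```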
Proof of Proposition 5.1.: Let \(\underline{\boldsymbol{w}}\) denote the true value of the weights \(\boldsymbol{w}\), which is known by assumption. Analogously, let \(\underline{\boldsymbol{\theta}}^{(k)}\) denote the true value of the interaction effects \(\boldsymbol{\theta}^{(k)}\) in the \(k\)-th component. The interaction effects in the first component, \(\boldsymbol{\theta}^{(1)}\), are unknown and their true value is denoted by \(\underline{\boldsymbol{\theta}}^{(1)}\).
It then follows from the identity of the cell probabilities, i.e.,
\[\boldsymbol{p}_{\text{mix}}(\boldsymbol{\theta}^{(1)},\underline{\boldsymbol{ w}},\underline{\boldsymbol{\theta}}^{(2)},\ldots,\underline{\boldsymbol{\theta}}^{(K)})= \boldsymbol{p}_{\text{mix}}(\underline{\boldsymbol{\theta}}^{(1)}, \underline{\boldsymbol{w}},\underline{\boldsymbol{\theta}}^{(2)},\ldots, \underline{\boldsymbol{\theta}}^{(K)})\]
that
\[\underline{w}^{(1)}\boldsymbol{p}(\boldsymbol{\theta}^{(1)})+\sum_{2\leq k \leq K}\underline{w}^{(k)}\boldsymbol{p}(\underline{\boldsymbol{\theta}}^{( k)})=\underline{w}^{(1)}\boldsymbol{p}(\underline{\boldsymbol{\theta}}^{(1)})+ \sum_{2\leq k\leq K}\underline{w}^{(k)}\boldsymbol{p}(\underline{ \boldsymbol{\theta}}^{(k)}),\]
and hence
\[\boldsymbol{p}(\boldsymbol{\theta}^{(1)})=\boldsymbol{p}(\underline{ \boldsymbol{\theta}}^{(1)})\]
given \(\underline{w}^{(1)}>0\). It then follows from the identifiability of the Ising model that \(\boldsymbol{\theta}^{(1)}=\underline{\boldsymbol{\theta}}^{(1)}\).
Proof of Proposition 5.2.: Without loss of generality we assume all main effects are zero by Lemma 5.1. Let \(\underline{\boldsymbol{\theta}}^{(k)}\) denote the true value of \(\boldsymbol{\theta}^{(k)}\) for \(k\in[K]\). The key step is to show that
\[p_{\boldsymbol{i}}(\boldsymbol{\theta}^{(k)})-p_{\boldsymbol{0}}(\boldsymbol{ \theta}^{(k)})=p_{\boldsymbol{i}}(\underline{\boldsymbol{\theta}}^{(k)})-p_{ \boldsymbol{0}}(\underline{\boldsymbol{\theta}}^{(k)}) \tag{7.1}\]
for any \(\boldsymbol{i}\in I\) and \(k\in[K]\). If these equations hold, we then sum them up over all \(\boldsymbol{i}\in I\) and we immediately have \(1-2^{d}p_{\boldsymbol{0}}(\boldsymbol{\theta}^{(k)})=1-2^{d}p_{\boldsymbol{0}}(\underline{\boldsymbol{\theta}}^{(k)})\) or \(p_{\boldsymbol{0}}(\boldsymbol{\theta}^{(k)})=p_{\boldsymbol{0}}(\underline{\boldsymbol{\theta}}^{(k)})\). As a result, we have \(p_{\boldsymbol{i}}(\boldsymbol{\theta}^{(k)})=p_{\boldsymbol{i}}(\underline{\boldsymbol{\theta}}^{(k)})\) and hence \(\boldsymbol{\theta}^{(k)}=\underline{\boldsymbol{\theta}}^{(k)}\) by the identifiability of Ising models.
It suffices to prove (7.1) for \(k=1\). Arguments for remaining cases are analogous and hence omitted. For each \(\boldsymbol{i}\in I\), define \(\mathbb{V}(\boldsymbol{i}):=\{v:v\in[d],i_{v}=1\}\) as activated variables for cell \(\boldsymbol{i}\). Then \(G(\boldsymbol{\theta}\mid\mathbb{V}(\boldsymbol{i}))\) is the projection of graph \(G(\boldsymbol{\theta})\) on \(\mathbb{V}(\boldsymbol{i})\) and \(G(\boldsymbol{\theta}\mid\mathbb{V}(\boldsymbol{i})\bigcap\mathbb{V}( \boldsymbol{\theta}))\) is the projection on activated variables \(\mathbb{V}(\boldsymbol{i})\) and activation variables \(\mathbb{V}(\boldsymbol{\theta})\). The probability of cell \(\boldsymbol{i}\) can be written as
\[p_{\boldsymbol{i}}(\boldsymbol{\theta})=\exp(\sum_{(v^{\prime},v)\in\mathcal{ E}(G(\boldsymbol{\theta}\mid\mathbb{V}(\boldsymbol{i})))}\theta_{v^{\prime}v})/Z=\exp( \sum_{(v^{\prime},v)\in\mathcal{E}(G(\boldsymbol{\theta}\mid\mathbb{V}( \boldsymbol{i})\bigcap\mathbb{V}(\boldsymbol{\theta})))}\theta_{v^{\prime}v})/Z,\]
where \(Z=Z(\boldsymbol{\theta})\) is a normalization constant depending on \(\boldsymbol{\theta}\).
As a consequence, for any fixed \(\mathbf{i}=(i_{1},\ldots,i_{d})^{T}\), we have
\[p_{\mathbf{i}}(\mathbf{\theta}^{(1)})=\exp(\sum_{(v^{\prime},v)\in\mathcal{E}(G(\mathbf{ \theta}^{(1)}|\mathbb{V}(\mathbf{i})\bigcap\mathbb{V}(\mathbf{\theta}^{(1)})))}\theta^{ (1)}_{v^{\prime}v})/Z^{(1)},\]
where \(G(\mathbf{\theta}^{(1)})\) is the undirected graph associated with an Ising model with parameters \(\mathbf{\theta}^{(1)}\), \(\mathbb{V}(\mathbf{i})\) are activated variables for cell \(i\), \(\mathbb{V}(\mathbf{\theta}^{(1)})\) are activation variables for parameters \(\mathbf{\theta}^{(1)}\), and \(Z^{(1)}=Z(\mathbf{\theta}^{(1)})\) is the normalization constant.
Let \(\mathbf{i}^{(1)}\) be the cell such that \(\mathbb{V}(\mathbf{i}^{(1)})=\mathbb{V}(\mathbf{i})\bigcap\mathbb{V}(\mathbf{\theta}^{(1)})\) and we immediately have
\[G(\mathbf{\theta}^{(1)}\mid\mathbb{V}(\mathbf{i}^{(1)})\bigcap\mathbb{V}(\mathbf{\theta}^ {(1)}))=G(\mathbf{\theta}^{(1)}\mid\mathbb{V}(\mathbf{i})\bigcap\mathbb{V}(\mathbf{ \theta}^{(1)})\bigcap\mathbb{V}(\mathbf{\theta}^{(1)}))=G(\mathbf{\theta}^{(1)}\mid \mathbb{V}(\mathbf{i})\bigcap\mathbb{V}(\mathbf{\theta}^{(1)})).\]
Therefore, we have
\[p_{\mathbf{i}^{(1)}}(\mathbf{\theta}^{(1)}) =\exp\Big{(}\sum_{(v^{\prime},v)\in\mathcal{E}(G(\mathbf{\theta}^{(1) }|\mathbb{V}(\mathbf{i}^{(1)})\bigcap\mathbb{V}(\mathbf{\theta}^{(1)})))}\theta^{(1)} _{v^{\prime}v}\Big{)}/Z^{(1)}\] \[=\exp(\sum_{(v^{\prime},v)\in\mathcal{E}(G(\mathbf{\theta}^{(1)}| \mathbb{V}(\mathbf{i})\bigcap\mathbb{V}(\mathbf{\theta}^{(1)})))}\theta^{(1)}_{v^{ \prime}v})/Z^{(1)}\] \[=p_{\mathbf{i}}(\mathbf{\theta}^{(1)}).\]
Now let's consider \(p_{\mathbf{i}^{(1)}}(\mathbf{\theta}^{(k)})\) for any \(k\neq 1\). Note that \(\mathbb{V}(\mathbf{i}^{(1)})=\mathbb{V}(\mathbf{i})\bigcap\mathbb{V}(\mathbf{\theta}^{(1) })\subset\mathbb{V}(\mathbf{\theta}^{(1)})\) and \(\mathbb{V}(\mathbf{\theta}^{(1)})\bigcap\mathbb{V}(\mathbf{\theta}^{(k)})=\emptyset\) by Assumption 5.4. We then have \(\mathbb{V}(\mathbf{i}^{(1)})\bigcap\mathbb{V}(\mathbf{\theta}^{(k)})=\emptyset\). Therefore, for \(k\neq 1\) we have
\[p_{\mathbf{i}^{(1)}}(\mathbf{\theta}^{(k)})=\exp\Big{(}\sum_{(v^{\prime},v)\in \mathcal{E}(G(\mathbf{\theta}^{(k)}|\mathbb{V}(\mathbf{i}^{(1)})\bigcap\mathbb{V}( \mathbf{\theta}^{(k)})))}\theta^{(k)}_{v^{\prime}v}\Big{)}/Z^{(k)}=\exp(0)/Z^{(k)} =1/Z^{(k)}.\]
Then the probability of \(\mathbf{X}=\mathbf{i}^{(1)}\) is
\[\underline{w}^{(1)}p_{\mathbf{i}^{(1)}}(\mathbf{\theta}^{(1)})+\sum_{k\geq 2}\underline {w}^{(k)}p_{\mathbf{i}^{(1)}}(\mathbf{\theta}^{(k)})=\underline{w}^{(1)}p_{\mathbf{i}}( \mathbf{\theta}^{(1)})+\sum_{k\geq 2}\underline{w}^{(k)}/Z^{(k)}.\]
This is also true with replacing \(\mathbf{\theta}^{(k)}\) by its true value \(\underline{\mathbf{\theta}}^{(k)}\). Therefore, the cell probability equation in the case of \(\mathbf{X}=\mathbf{i}^{(1)}\), i.e.,
\[\underline{w}^{(1)}p_{\mathbf{i}^{(1)}}(\mathbf{\theta}^{(1)})+\sum_{k\geq 2} \underline{w}^{(k)}p_{\mathbf{i}^{(1)}}(\mathbf{\theta}^{(k)})=\underline{w}^{(1)}p_{ \mathbf{i}^{(1)}}(\underline{\mathbf{\theta}}^{(1)})+\sum_{k\geq 2}\underline{w}^{(k)}p_{ \mathbf{i}^{(1)}}(\underline{\mathbf{\theta}}^{(k)})\]
can be written as
\[\underline{w}^{(1)}p_{\mathbf{i}}(\mathbf{\theta}^{(1)})+\sum_{k\geq 2}\underline{w}^{(k)} /Z^{(k)}=\underline{w}^{(1)}p_{\mathbf{i}}(\underline{\mathbf{\theta}}^{(1)})+\sum_{k \geq 2}\underline{w}^{(k)}/\underline{Z}^{(k)}, \tag{7.2}\]
where \(\underline{Z}^{(k)}\) is the normalization constant corresponding with \(\underline{\mathbf{\theta}}^{(k)}\). Considering the cell probability equation in the case of \(\mathbf{X}=\mathbf{0}:=(0,\cdots,0)\), it follows from \(p_{\mathbf{0}}(\mathbf{\theta}^{(k)})=1/Z^{(k)}\) that
\[\underline{w}^{(1)}p_{\mathbf{0}}(\mathbf{\theta}^{(1)})+\sum_{k\geq 2}\underline{w}^{(k)} /Z^{(k)}=\underline{w}^{(1)}p_{\mathbf{0}}(\underline{\mathbf{\theta}}^{(1)})+\sum_{k \geq 2}\underline{w}^{(k)}/\underline{Z}^{(k)}.\]
Then the difference between (7.2) and the identity above implies
\[\underline{w}^{(1)}p_{\mathbf{i}}(\mathbf{\theta}^{(1)})-\underline{w}^{(1)}p_{\mathbf{0}}( \mathbf{\theta}^{(1)})=\underline{w}^{(1)}p_{\mathbf{i}}(\underline{\mathbf{\theta}}^{(1)})- \underline{w}^{(1)}p_{\mathbf{0}}(\underline{\mathbf{\theta}}^{(1)}),\]
which gives (7.1).
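As a quick sanity check of the cell-probability formula used above, the following Python sketch builds a small Ising model (with \(d=4\), zero main effects, and example interaction values chosen purely for illustration, not taken from the text), enumerates all \(2^{d}\) cells, and verifies that every cell whose activated variables touch no edge of \(G(\boldsymbol{\theta})\) receives probability exactly \(1/Z\), which is the fact exploited for \(p_{\boldsymbol{i}^{(1)}}(\boldsymbol{\theta}^{(k)})\) in the proof.

```python
import itertools
import math

# Hypothetical example: d = 4 binary variables with zero main effects and two
# pairwise interactions; the edge set and weights are our own illustrative choices.
d = 4
theta = {(0, 1): 0.8, (2, 3): -0.5}   # edges of G(theta) with weights theta_{v'v}

def unnormalized_weight(cell, theta):
    """exp of the sum of theta_{v'v} over edges whose two endpoints are both activated."""
    active = {v for v, x in enumerate(cell) if x == 1}
    return math.exp(sum(t for (u, v), t in theta.items() if u in active and v in active))

cells = list(itertools.product([0, 1], repeat=d))
Z = sum(unnormalized_weight(c, theta) for c in cells)        # normalization constant Z(theta)
p = {c: unnormalized_weight(c, theta) / Z for c in cells}    # cell probabilities p_i(theta)

assert abs(sum(p.values()) - 1.0) < 1e-12                    # probabilities sum to one
# A cell that activates no edge of G(theta) receives probability exactly 1/Z,
# the fact used above for p_{i^(1)}(theta^(k)) when V(i^(1)) and V(theta^(k)) are disjoint.
assert abs(p[(0, 0, 0, 0)] - 1.0 / Z) < 1e-12
assert abs(p[(1, 0, 1, 0)] - 1.0 / Z) < 1e-12
print(p[(1, 1, 0, 0)], p[(0, 0, 1, 1)], 1.0 / Z)
```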
Proof of Example 5.3.: To simplify notations, let \(\mathbf{\eta}^{(k)}:=\exp(\mathbf{\theta}^{(k)})\), \(k=1,2\). Suppose \(\underline{\eta}_{12}^{(1)},\underline{\eta}_{34}^{(2)}\) and \(\underline{w}^{(1)}\) are true parameters. It then follows from the likelihood equations that
\[\frac{w^{(1)}}{3+\eta_{12}^{(1)}}+\frac{1-w^{(1)}}{3+\eta_{34}^{(2 )}} =\frac{\underline{w}^{(1)}}{3+\underline{\eta}_{12}^{(1)}}+\frac{1- \underline{w}^{(1)}}{3+\underline{\eta}_{34}^{(2)}},\] \[\frac{w^{(1)}\eta_{12}^{(1)}}{3+\eta_{12}^{(1)}}+\frac{1-w^{(1)} }{3+\eta_{34}^{(2)}} =\frac{\underline{w}^{(1)}\underline{\eta}_{12}^{(1)}}{3+ \underline{\eta}_{12}^{(1)}}+\frac{1-\underline{w}^{(1)}}{3+\underline{\eta} _{34}^{(2)}},\] \[\frac{w^{(1)}}{3+\eta_{12}^{(1)}}+\frac{(1-w^{(1)})\eta_{34}^{(2 )}}{3+\eta_{34}^{(2)}} =\frac{\underline{w}^{(1)}}{3+\underline{\eta}_{12}^{(1)}}+ \frac{(1-\underline{w}^{(1)})\underline{\eta}_{34}^{(2)}}{3+\underline{\eta} _{34}^{(2)}}.\]
From the first two equations it follows that
\[\eta_{12}^{(1)}=\underline{\eta}_{12}^{(1)}+\frac{(w^{(1)}- \underline{w}^{(1)})(1-\underline{\eta}_{12}^{(1)})(\underline{\eta}_{12}^{( 1)}+3)}{w^{(1)}(\underline{\eta}_{12}^{(1)}+3)+\underline{w}^{(1)}(1- \underline{\eta}_{12}^{(1)})},\]
where \(\eta_{12}^{(1)}\) is represented as a function of \(w^{(1)}\). Analogously, it follows from the first and the third equations that
\[\eta_{34}^{(2)}=\underline{\eta}_{34}^{(2)}-\frac{(w^{(1)}- \underline{w}^{(1)})(1-\underline{\eta}_{34}^{(2)})(\underline{\eta}_{34}^{( 2)}+3)}{(1-w^{(1)})(\underline{\eta}_{34}^{(2)}+3)+(1-\underline{w}^{(1)})(1- \underline{\eta}_{34}^{(2)})},\]
where \(\eta_{34}^{(2)}\) is represented as a function of \(w^{(1)}\). After some algebra one can check that these \(\eta_{12}^{(1)}\) and \(\eta_{34}^{(2)}\) automatically satisfy the first equation, with no additional constraints. In other words, for every \(w^{(1)}\in(0,1)\) we can construct distinct parameter values that yield the same cell probabilities, so the parameters are not identifiable.
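The nonidentifiability can also be checked numerically. The short Python sketch below uses example true values \(\underline{w}^{(1)}=0.4\), \(\underline{\eta}_{12}^{(1)}=2\), \(\underline{\eta}_{34}^{(2)}=0.5\) (our own illustrative choices), constructs \(\eta_{12}^{(1)}(w^{(1)})\) and \(\eta_{34}^{(2)}(w^{(1)})\) from the two displayed formulas, and confirms that all three likelihood equations hold for several alternative values of \(w^{(1)}\).

```python
# True parameter values for the example (our own illustrative choices).
w_true, e12_true, e34_true = 0.4, 2.0, 0.5

def eta12(w):
    """eta_12^(1) as a function of w^(1), from the first displayed formula."""
    return e12_true + (w - w_true) * (1 - e12_true) * (e12_true + 3) / (
        w * (e12_true + 3) + w_true * (1 - e12_true))

def eta34(w):
    """eta_34^(2) as a function of w^(1), from the second displayed formula."""
    return e34_true - (w - w_true) * (1 - e34_true) * (e34_true + 3) / (
        (1 - w) * (e34_true + 3) + (1 - w_true) * (1 - e34_true))

def lhs(w, e12, e34):
    """Left-hand sides of the three likelihood equations."""
    return (w / (3 + e12) + (1 - w) / (3 + e34),
            w * e12 / (3 + e12) + (1 - w) / (3 + e34),
            w / (3 + e12) + (1 - w) * e34 / (3 + e34))

target = lhs(w_true, e12_true, e34_true)
# w is kept in a range where the constructed eta's remain positive (valid exp(theta) values).
for w in (0.1, 0.25, 0.5, 0.7):
    values = lhs(w, eta12(w), eta34(w))
    assert all(abs(a - b) < 1e-12 for a, b in zip(values, target)), (w, values, target)
print("Distinct (w, eta12, eta34) triples yield identical cell probabilities.")
```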
Proof of Example 5.4.: In this example, the nonidentifiability is established via the singularity of the information matrix, using the equivalence between local identifiability and the rank of the information matrix shown in Rothenberg (1971) and Catchpole and Morgan (1997). Some examples of Ising mixture models with singular information matrices are given in Table 8.
|
2309.17321 | STARS for Integrated Sensing and Communications: Challenges, Solutions,
and Future Directions | This article discusses the employment of simultaneously transmitting and
reflecting surface (STARS) for integrated sensing and communication (ISAC)
networks. First, two fundamental configurations of STARS-enabled ISAC systems
are introduced, namely integrated full-space configuration and separated
half-space configuration, as well as their respective advantages and common
challenges are identified. To address the aforementioned challenges, a novel
sensing-at-STARS design is proposed, where the sensing functionality is
achieved at the STARS instead of at the base station. Such a design
significantly improves the echo signal energy by eliminating undesired echo
energy attenuation/leakage, in addition to establishing favorable echo
propagation paths to facilitate sensing information extraction. We also present
three practical implementations for sensing-at-STARS, including separated
elements, mode-selection elements, and power-splitting elements. Each
implementation enables flexible sensing-communication tradeoffs. Numerical
results are provided to demonstrate the superiority of the proposed
STARS-enabled ISAC design. Finally, we discuss several future research
directions. | Zheng Zhang, Zhaolin Wang, Xidong Mu, Jian Chen, Yuanwei Liu | 2023-09-29T15:29:42Z | http://arxiv.org/abs/2309.17321v1 | # STARS for Integrated Sensing and Communications: Challenges, Solutions, and Future Directions
###### Abstract
This article discusses the employment of simultaneously transmitting and reflecting surface (STARS) for integrated sensing and communication (ISAC) networks. First, two fundamental configurations of STARS-enabled ISAC systems are introduced, namely _integrated full-space configuration_ and _separated half-space configuration_, as well as their respective advantages and common challenges are identified. To address the aforementioned challenges, a novel sensing-at-STARS design is proposed, where the sensing functionality is achieved at the STARS instead of at the base station. Such a design significantly improves the echo signal energy by eliminating undesired echo energy attenuation/leakage, in addition to establishing favorable echo propagation paths to facilitate sensing information extraction. We also present three practical implementations for sensing-at-STARS, including separated elements, mode-selection elements, and power-splitting elements. Each implementation enables flexible sensing-communication tradeoffs. Numerical results are provided to demonstrate the superiority of the proposed STARS-enabled ISAC design. Finally, we discuss several future research directions.
## I Introduction
Driven by a variety of emerging applications, such as autonomous driving, digital twins, extended reality (XR), and the Metaverse, the next-generation wireless networks towards 2030 call for a unified and versatile network paradigm [1]. Fortunately, the concept of integrated sensing and communications (ISAC) provides a new perspective on the fusion of sensing and communications [2, 3]. It encourages sharing the hardware platform and signal processing modules among multiple functionalities, which is regarded as a fundamental shift in the wireless network paradigm. On the one hand, by designing specialized inter-functional cooperation mechanisms, ISAC can dramatically raise the utilization of network resources, e.g., spectrum efficiency and spatial degrees of freedom (DoFs), whilst significantly reducing hardware costs. On the other hand, ISAC provides easier access to real-time channel information for communication users, which contributes to precise spatial beamforming, power allocation, and interference management. In view of the above benefits, ISAC has been deemed a key enabler for future wireless networks.
Benefiting from advances in the materials discipline, another promising technique, namely the reconfigurable intelligent surface (RIS), has been proposed to overcome adverse signal propagation issues (e.g., obstacle blockage, shadow fading, and the multipath effect) in wireless networks [4]. Technically, a RIS is a digitally programmable metamaterial consisting of a large number of passive reflecting elements. Each element is able to manipulate the amplitudes and/or phases of the impinging signals, thereby enabling a proactive reconfiguration of the wireless channels. More recently, it has been claimed that proper integration of RIS into ISAC networks can concurrently boost communication and sensing quality, as it establishes additional line-of-sight (LoS) links for blind-zone users/targets and also mitigates the inter-functionality interference in ISAC systems [5]. However, the _half-space_ coverage characteristics of the reflecting-only RIS restrict the sensing range and communication connectivity of ISAC systems. As a remedy, the novel concept of the simultaneously transmitting and reflecting surface (STARS) has been proposed [6]. By adjusting the magnetic and electric surface reactance of the STARS, a _full-space_ controllable propagation environment can be provided for ISAC systems. Although STARS offers new benefits for the design of ISAC systems (such as a large sensing-communication (S&C) range, extra spatial DoFs, and an intuitive S&C tradeoff), establishing efficient STARS-enabled ISAC systems still faces significant challenges. In particular, STARS brings fundamentally different signal propagation, i.e., simultaneous signal transmission and reflection, which inevitably leads to echo signal energy leakage at the STARS and introduces inter-functionality interference between the transmission and reflection regions. Hence, the STARS architecture needs to be redesigned to unleash its potential in ISAC transmission.
Against this background, we integrate STARS into ISAC systems in this article. We commence by concisely introducing two basic configurations of STARS-enabled ISAC systems, as well as their key design challenges. To address these challenges, a new concept of sensing-at-STARS is proposed, and both its operating principles and implementation methods are presented. Numerical results are provided to demonstrate the effectiveness of the sensing-at-STARS structure. Finally, we conclude the article and discuss several future research directions.
## II STARS-Enabled ISAC
In comparison to conventional RISs, STARSs exhibit two distinctive attributes that can be harnessed for the design of ISAC systems. On the one hand, STARS enables full-space coverage. Therefore, it can facilitate seamless communication
and extensive sensing across the entire space, which is referred to as _integrated full-space configuration_. On the other hand, STARS also partitions the entire space into two separate spaces [7], namely the transmission and reflection space. This partition enables STARS to potentially accommodate communication and sensing functionalities within two separate half-spaces, which is referred to as _separated half-space configuration_. In the following, we will introduce the principles, advantages, and challenges of these two configurations of STARS-enabled ISAC.
### _Integrated Full-Space Configuration_
As depicted in Fig. 1(a), communication users (CUs) and sensing targets (STs) are located on both sides of the STARS in the integrated full-space configuration. The key advantages of this configuration can be summarized as follows:
* **Ubiquitous S&C Coverage:** In contrast to conventional RISs, STARS exhibits enhanced adaptability in establishing reliable LoS communication and sensing links. On the one hand, in the popular application scenarios of RISs, STARS can be more flexibly placed for establishing LoS links for randomly distributed CUs and STs, without the need for real-time orientation adjustments required by conventional RISs. On the other hand, in more stringent scenarios, such as outdoor-to-indoor and indoor, STARS can achieve S&C coverage extension between two physically disconnected spaces by strategically deploying it on the windows and walls.
* **Enhanced S&C DoFs:** With the simultaneous transmission and reflection beamforming, STARS introduces additional DoFs to favor the ISAC performance by not only directionally strengthening signals intended for CUs and STs, but also mitigating potential inter-functionality interference.
### _Separated Half-Space Configuration_
As shown in Fig. 1(b), in the separated half-space configuration, CUs and STs are confined to distinct half-spaces on their respective sides of the STARS, thus achieving a clear demarcation between communication and sensing spaces. Although the S&C DoFs are reduced due to the half-space configuration, it exhibits the following unique advantages:
* **Independent S&C Beamforming:** In practice, sensing and communications typically require distinct beam configurations. For instance, the utilization of an isotropic beam proves advantageous for target detection, while directional beams are favorable for facilitating unicast communication. Fortunately, the separated half-space configuration can exploit independent S&C beamforming to pursue the S&C tradeoff. In particular, the transmission and reflection signals are independently responsible for communication and sensing in the separated half-space configuration. Therefore, individualized designs for communication and sensing beamforming at the STARS become feasible, leading to a notable reduction in beamforming complexity.
* **Scalable S&C Tradeoff:** In the integrated full-space configuration, the S&C tradeoff adjustment requires the redesign of the joint beamforming. On the contrary, in the separated half-space configuration, since the S&C beamforming can be designed independently, achieving a scalable tradeoff between S&C becomes straightforward by adjusting transmission-reflection energy ratios, element numbers, and time allocations in energy-splitting, mode-switching, and time-sharing modes, respectively.
### _Key Design Challenges_
Fig. 1: Illustration of STARS-enabled ISAC systems. (a) Integrated full-space configuration. (b) Separated half-space configuration.

Although both the integrated full-space configuration and the separated half-space configuration of STARSs have demonstrated enormous potential for S&C performance improvement, directly employing them in conventional ISAC systems also brings the following practical challenges (see Fig. 2):

* **(C1) Multi-Hop Pathloss:** Although the STARS can help to create virtual LoS links between the ISAC base station (BS) and the target, such links are subject to severe attenuation due to the multi-hop pathloss through the BS\(\rightarrow\)STARS\(\rightarrow\)target\(\rightarrow\)STARS\(\rightarrow\)BS cascaded channel (a simple free-space scaling example is sketched after this list). This cumulative path loss yields a considerable reduction in the power levels of the echo signals received at the BS, particularly in scenarios where direct links are obstructed. As a result, the accuracy of target detection is substantially limited.
* **(C2) Echo Power Leakage:** This issue is caused by the unique dual-sided incidence property of STARS. In particular, the echo signals from STs exhibit inevitable leakage towards the side of the STARS that is situated opposite the ISAC BS. Such an effect further reduces the power of echo signals received at the BS. To solve this issue, an additional ISAC BS can be deployed on the side of the STARS where leakage occurs to capture the leaked echo signals. Then, the target sensing can be carried out through cooperation between the distributed ISAC BSs.
* **(C3) Multi-Path Effect:** STARS introduces a multi-path effect on the echo signals when a direct link between the BS and the target exists. Conventionally, this multi-path effect can lead to the emergence of ghost targets, deceiving the ISAC BS. Although the ghost targets caused by STARS reflection and transmission can be effectively eliminated using prior knowledge of the STARS location, harnessing the potential benefits of this multi-path effect for target sensing remains challenging, as it requires a high-complexity sensing receiver design [8]. Moreover, in the integrated full-space configuration, the beamforming at the STARS must also be designed for communication, which aggravates the above challenge. With communication-prioritized beamforming, the indirect path introduced by the STARS may combine non-constructively with the direct path at the BS, thus resulting in reduced echo power.
* **(C4) Multi-Echo Aliasing:** In multi-target scenarios, the ISAC BS receives multiple echo signals over a single STARS\(\rightarrow\)BS link, resulting in an aliasing effect. Such signal aliasing makes it difficult to directly extract and process the echo of each target and imposes high complexity on the sensing algorithms.
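To give a rough feeling for the severity of (C1), the following Python sketch compares the free-space scaling of a direct monostatic echo with that of the cascaded BS\(\rightarrow\)STARS\(\rightarrow\)target\(\rightarrow\)STARS\(\rightarrow\)BS echo. It is a deliberately simplified toy calculation under our own assumptions (example distances, a 30 GHz carrier, and no antenna or STARS aperture/beamforming gains, which in practice partially compensate the loss); none of these numbers come from the article.

```python
import math

# Purely illustrative free-space scaling (all values are assumptions for this sketch).
d1, d2, d0 = 30.0, 10.0, 40.0      # BS-STARS, STARS-target, and direct BS-target distances (m)
wavelength = 0.01                  # ~30 GHz carrier, for illustration only

def one_way_gain(d):
    """One-way free-space power gain (Friis equation with unit antenna gains)."""
    return (wavelength / (4 * math.pi * d)) ** 2

# Monostatic direct echo, BS -> target -> BS: power scales roughly as 1/d0^4
# (target reflectivity and aperture constants omitted).
direct_echo = one_way_gain(d0) ** 2

# Cascaded echo, BS -> STARS -> target -> STARS -> BS: power scales roughly as
# 1/(d1^4 * d2^4), before any compensation from STARS passive beamforming gain.
cascaded_echo = (one_way_gain(d1) * one_way_gain(d2)) ** 2

print(f"direct echo   : {10 * math.log10(direct_echo):8.1f} dB")
print(f"cascaded echo : {10 * math.log10(cascaded_echo):8.1f} dB")
print(f"extra loss of the cascaded path: {10 * math.log10(direct_echo / cascaded_echo):.1f} dB")
```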
Given the aforementioned challenges, a fundamental question arises, i.e., _Is it possible to devise a unified solution that can simultaneously tackle these challenges?_ The following section introduces a sensing-at-STARS approach as a response to this question, aiming to realize the desired objective.
## III Sensing-at-Stars for ISAC
In this section, we present the concept of sensing-at-STARS to address the above issues suffered by existing STARS-enabled ISAC networks, where three implementation methods, namely separated elements (SE), mode-selection elements (MSE), and power-splitting elements (PSE), are proposed to provide flexible full-space communication and sensing services.
### _Principles and Advantages_
Fig. 2: Key design challenges for exploiting STARS in ISAC systems.

The core idea of sensing-at-STARS is to integrate the sensing functionality into the STARS, where the target sensing is carried out at the STARS instead of at the BS, as illustrated in Fig. 3. Taking the downlink STARS-enabled ISAC transmission as an example, the BS first broadcasts the joint S&C signals. On receiving these S&C signals, the STARS reflects and/or transmits these signals to the STs for illumination. Afterward, the S&C signals reflected from the targets will be received by the sensing module at the STARS, which experience a double-reflection cascaded link, i.e., BS\(\rightarrow\)STARS\(\rightarrow\)targets\(\rightarrow\)STARS. Finally, the subspace-based or deep learning-based sensing algorithms can be employed to extract the target information based on the received echo signals. Notably, the proposed sensing-at-STARS can efficiently address the technical challenges presented previously, which are summarized as follows:
* **Low echo energy leakage:** The sensing-at-STARS architecture makes it possible to carry out the sensing functionality at the STARS, which reduces the path loss suffered by the echo signals. Meanwhile, the energy leakage of the echo signals reflected/transmitted to the side opposite the BS is fully avoided. Both effects raise the received echo energy at the sensing module, which efficiently addresses challenges (C1) and (C2) and results in enhanced sensing performance.
* **Favorable echo propagation path:** In STARS-enabled ISAC systems with the sensing-at-STARS architecture, the echo signals reflected from a single target go through the same propagation path, i.e., target\(\rightarrow\)sensing elements, which mitigates the multipath effect in challenge (C3). Moreover, the echo signals reflected from different targets experience completely non-overlapping propagation paths, which resolves the echo overlapping issue of challenge (C4) and improves the detection performance for multiple targets.
Besides addressing the above challenges, exploiting sensing-at-STARS in ISAC systems also yields the following benefits:
* **Higher sensing accuracy:** When the sensing-at-STARS architecture is applied to existing ISAC systems, the reflected echo waves can additionally be received at the sensing module, which further increases the sampling resolution of the echo signals and improves the target detection accuracy.
* **Better adaptability:** Conventional ISAC networks require the dedicated co-design of the communication and sensing hardware or signal processing architectures. However, the sensing-at-STARS structure can be directly integrated into the pure communication networks for achieving simultaneous communication and sensing functionalities.
* **Low-complexity passive beamforming design:** In STARS-enabled ISAC systems without sensing-at-STARS, the sensing signals go through two signal propagation reconfigurations at the STARS, which results in a strong self-coupling of the STARS coefficients in the cascaded sensing channels and thus requires complex decoupling algorithms for the passive beamforming optimization. In contrast, the sensing-at-STARS structure eliminates the secondary reflection/transmission of the echo signals, so no additional decoupling operation is needed.
### _Implementations for sensing-at-STARS_
Toward achieving sensing-at-STARS functionality, we propose three practical implementation methods. We also identify the respective advantages and disadvantages of each implementation method.
#### Iii-B1 Separated Elements (SE)
As shown in Fig. 4(a), the STARS is equipped with two categories of basic elements, namely passive elements and sensing elements, where the functionality of each basic element is pre-determined and cannot be changed. By adjusting the electric/magnetic surface reactance or the geometrical characteristics (such as the relative distance between the substrate and the dielectric element), each passive element is capable of achieving simultaneous signal transmission and reflection, full signal reflection, and full signal transmission. For sensing elements, the active sensing module is installed inside the transparent substrates to receive and analyze the echo signals reflected from the targets. By optimizing reflection/transmission coefficients at the passive elements, the passive beam gains targeting CUs and STs can be adjusted to enhance communication and sensing performance.
From a hardware design perspective, the SE structure can be achieved by directly integrating dedicated low-cost sensors into the STARS element. To elaborate, each sensing element encapsulates a micro-sized sensor inside the transparent substrate, which is located between two reconfigurable elements. By controlling the operation mode of the outer reconfigurable elements (e.g., full signal transmission or full signal reflection), the sensing elements can switch between bilateral sensing coverage and unilateral sensing coverage. Since the SE structure is based on existing STARS elements and encapsulated sensors, it is easy to implement at low hardware cost. However, the predetermined hardware design restricts the adaptability and flexibility of the SE structure. Meanwhile, the limited number of sensing elements also limits the range and accuracy of target sensing.
Fig. 3: Illustration of the proposed sensing-at-STARS structure.
#### Iii-A2 Mode-Selection Elements (MSE)
In the MSE implementation (see Fig. 4(b)), each element can switch between two different modes, i.e., passive mode and sensing mode. Specifically, the element receives the echo signals and carries out the target detection in the sensing mode while reflecting and/or transmitting incident signals in the passive mode. By jointly optimizing element modes and passive beamforming, the MSE implementation provides a more flexible ISAC design than the SE implementation.
To realize the controllable sensing functionality, an additional nano-controller network is integrated into the reconfigurable element [9]. By monitoring the dissipated power variation of the element, the electric/magnetic surface impedance can be adjusted to a specific value corresponding to the full absorption of the echo signals, where the incidence characteristics of the echo signals can be efficiently detected. When operating in the passive mode, we switch off the nano-controller network and recalibrate the surface impedance, which turns the reconfigurable element into a passive element for signal reflection and/or transmission. Compared to the SE implementation, the MSE implementation additionally considers the optimization of the element mode selection, which results in higher ISAC performance. Nevertheless, it complicates the passive beamforming design of the STARS.
#### Iii-A3 Power-Splitting Elements (PSE)
In the PSE implementation (shown in Fig. 4(c)), the reflection/transmission and sensing functionalities coexist in a single element. Particularly, each element can simultaneously reflect/transmit a part of incident signals while leaking the remaining part to the sensing module for parameter estimation. Obeying the law of conservation of energy, the sum energy of the sensed and reflected/transmitted signals is equal to that of the incident signals. Besides taking into account the optimization of passive beamforming, PSE implementation also requires the power splitting ratio allocation, which can further boost the S&C performance of ISAC systems.
The PSE implementation relies on the positive-intrinsic-negative (PIN) diode-based reconfigurable element, where an annular slot is designed to couple the incident signals to the adjacent waveguide [10]. Based on this hardware implementation, a fraction of the incident signals can be reflected/transmitted by the reconfigurable element, while the other fraction is captured by the waveguide connected to an RF chain. By altering the geometrical and/or spatial characteristics of the waveguide, the coupling level of the waveguide and reconfigurable element can be manipulated, so as to modify the power splitting ratio between the reflected/transmitted and sensed signals [11]. With this implementation, it becomes possible to sense the impinging signals at all elements, which increases the sampling resolution of the echo signals and thus favors target sensing. However, since each element can only sense a fraction of the incident signals, the received sensing signal-to-noise ratio (SNR) at each element is degraded.
The comparison of the three implementations is summarized in Table I. In the SE implementation, the S&C tradeoff can be achieved by the passive beamforming design at the STARS. Particularly, by adjusting the spatial directivity of the passive beamforming, one can control the received power levels of the communication and sensing signals at the receiving ends, thereby realizing a dynamic tuning between communication and sensing performance. The key tradeoff in the MSE implementation is to determine the number of passive elements and the number of sensing elements. Specifically, under a fixed total number of STARS elements, increasing the number of sensing elements can enhance the received energy and the sampling resolution of the echo signals, while deploying more passive elements can boost both the communication and sensing performance. In the PSE implementation, a careful choice of the power splitting ratio of each element is required to strike the S&C tradeoff. Intuitively, allocating more power to the sensing module increases the received sensing SNR and is conducive to target sensing, while assigning more power to the reflected/transmitted signals can compensate for the communication signal degradation caused by the double path loss.

Fig. 4: Different implementations of sensing-at-STARS. (a) Separated elements. (b) Mode-selection elements. (c) Power-splitting elements.
## IV Numerical Case Studies
This section provides numerical results for a case study of a STARS-enabled uplink ISAC system with the sensing-at-STARS structure, where the SE implementation is considered. We consider an integrated full-space ISAC scenario, where the STARS serves to establish reliable LoS channels between a 20-antenna BS and multiple users and two targets. All the users are randomly distributed on a circle with a 20-meter (m) radius. Target 1 and target 2 are on a circle with a radius of 10 m in the directions of \((342^{\circ},30^{\circ})\) and \((18^{\circ},30^{\circ})\). To avoid the energy leakage of communication signals in uplink transmission, a STARS-enabled two-phase framework is considered. In the first phase, a 20-element STARS, with its passive elements in full-reflection mode, is exploited to serve the users in the reflection region while sensing target 1. In the second phase, the passive elements switch to the full-transmission mode, and the STARS aims to support the users in the transmission region while sensing target 2.
For this system setup, the joint optimization of the sensing waveform at the BS and the passive beamforming at the STARS is studied to minimize the Cramer-Rao bound (CRB) of sensing targets under the quality of service (QoS) requirements. A conventional-RIS-enabled baseline scheme is considered for comparison. To elaborate, we deploy a reflecting-only RIS and a transmitting-only RIS at the same location of STARS, each of which is equipped with \(\frac{N}{2}\) elements. For fairness, we consider the sensing-at-RIS structure for the conventional RIS [13].
Regarding the sensing performance, it is observed from Fig. 5(a) that STARS achieves higher sensing accuracy than the conventional RIS. It validates the superiority of the STARS: 1) the deployment of the STARS increases the spatial DoFs for facilitating passive beamforming design; 2) the sensing-at-STARS structure yields more sensing elements than the sensing-at-RIS structure, which is conducive to sensing accuracy enhancement. We also observe that the sensing-at-STARS structure performs better during the reflection phase. This is expected since a portion of communication signals emitted from users are reflected back to the target, which leads to enhanced echo signal energy and a high sensing performance.
With the total number of STARS elements fixed, Fig. 5(b) reveals the tradeoff between the number of passive elements and the number of sensing elements. On the one hand, increasing the number of sensing elements improves the echo sampling resolution, but degrades the passive beamforming gain at the STARS. On the other hand, increasing the number of passive elements introduces more DoFs to reconfigure the communication and sensing signal propagation, which however directly deteriorates the echo reception at the STARS. However, the latter plays the dominant role in STARS-enabled ISAC transmission. Moreover, it can be observed that increasing the number of users reduces the sensing performance of the considered system. This is because the configuration of the passive elements has to be aligned with the communication users to accommodate their QoS demands, which degrades the received echo signal energy at the sensing elements.

Fig. 5: Degree-based root CRB versus the transmit power at the BS and the number of sensing elements. The QoS rates for all the users are set to 0.5 bps/Hz, and other detailed simulation parameters can be found in [12]: (a) performance comparison with the baseline scheme; (b) tradeoff between the number of passive elements and the number of sensing elements.

## V Conclusions and Future Directions

In this article, the integration of STARS into ISAC systems was investigated. Two basic STARS-enabled ISAC configurations, namely the integrated full-space configuration and the separated half-space configuration, were presented. Both their exclusive benefits and design challenges were discussed. To deal with the above challenges, a novel concept of sensing-at-STARS was introduced for ISAC systems, which aims to migrate the sensing functionality from the BS to the STARS. Furthermore, three distinctive implementation methods for sensing-at-STARS were proposed to strike the tradeoff between communications and sensing. As an emerging solution for ISAC systems, STARS also inspires some promising research directions, which are summarized as follows:
* **Near-field STARS-ISAC:** To provide reliable passive beamforming gain, STARS usually needs to consist of a large-scale number of elements in practice. Such a hardware setup inevitably leads to a large STARS aperture and extends its near-field region to the ten- or even hundred-meter scale [14]. Near-field signal propagation relies on the unique spherical-wave channel model. It not only introduces the extra spatial DoFs for multiplexing enhancement but also renders it possible to simultaneously estimate the distance and angle information of targets. To reap these advantages, the low-complexity passive beamforming design scheme and the tailored near-field sensing algorithms are required.
* **Fluid antenna for STARS-ISAC:** To overcome the severe signal attenuation caused by obstacle blockage and fading effects, a novel concept, namely the fluid antenna, has been proposed recently [15]. It can strike a good balance between multiplexing performance and spatial diversity by adaptively adjusting the physical positions of antennas. Inspired by this, it is natural to employ the fluid antenna in the sensing-at-STARS structure. To elaborate, all the elements of STARS become separated over the whole free space, each of which can vary its own position for communication and/or sensing performance improvement. For instance, a sensing element with blocked echo links can be switched to a position with LoS echo links to guarantee the sensing performance. It brings extra spatial DoFs to enhance the robustness of ISAC transmission. However, this design brings new deployment challenges to the STARS, which calls for a sophisticated element position optimization design.
* **NOMA for STARS ISAC:** As a promising multiple access technique, non-orthogonal multiple access (NOMA) permits multiple communication users to share the same spectral resource with low intra-user interference. Hence, NOMA is expected to be adopted in ISAC networks to achieve flexible resource allocation and precise interference management. In detail, by regarding the sensing waveforms as virtual communication signals superimposed on the real communication signals, NOMA enables receivers to employ successive interference cancellation (SIC) to mitigate the sensing-to-communication interference in ISAC transmission. Nevertheless, the cascaded S&C channels are highly coupled with the response coefficients of the STARS, which brings uncertainty to the SIC decoding order. As such, a dedicated passive beamforming design scheme accommodating a flexible SIC decoding order is required for STARS-enabled NOMA ISAC networks.
* **PLS in STARS-ISAC:** The STARS-enabled ISAC system faces serious physical layer security (PLS) challenges. To elaborate, the full-space signal propagation characteristics of the STARS similarly result in a full-space eavesdropping risk. Meanwhile, integrating sensing into STARS will exacerbate this vulnerability, as the information-embedded signals would be reflected/transmitted to illuminate the targets. However, these targets may possess signal decoding capability for the purpose of maliciously eavesdropping on the communication users. Therefore, the joint S\(\&\)C waveform and passive beamforming design is required to secure the ISAC transmission while suppressing the information leakage to the targets.
|
2309.12259 | Soft Merging: A Flexible and Robust Soft Model Merging Approach for
Enhanced Neural Network Performance | Stochastic Gradient Descent (SGD), a widely used optimization algorithm in
deep learning, is often limited to converging to local optima due to the
non-convex nature of the problem. Leveraging these local optima to improve
model performance remains a challenging task. Given the inherent complexity of
neural networks, the simple arithmetic averaging of the obtained local optima
models in undesirable results. This paper proposes a {\em soft merging} method
that facilitates rapid merging of multiple models, simplifies the merging of
specific parts of neural networks, and enhances robustness against malicious
models with extreme values. This is achieved by learning gate parameters
through a surrogate of the $l_0$ norm using hard concrete distribution without
modifying the model weights of the given local optima models. This merging
process not only enhances the model performance by converging to a better local
optimum, but also minimizes computational costs, offering an efficient and
explicit learning process integrated with stochastic gradient descent. Thorough
experiments underscore the effectiveness and superior performance of the merged
neural networks. | Hao Chen, Yusen Wu, Phuong Nguyen, Chao Liu, Yelena Yesha | 2023-09-21T17:07:31Z | http://arxiv.org/abs/2309.12259v1 | Soft Merging: A Flexible and Robust Soft Model Merging Approach for Enhanced Neural Network Performance
###### Abstract
Stochastic Gradient Descent (SGD), a widely used optimization algorithm in deep learning, is often limited to converging to local optima due to the non-convex nature of the problem. Leveraging these local optima to improve model performance remains a challenging task. Given the inherent complexity of neural networks, the simple arithmetic averaging of the obtained local optima models yields undesirable results. This paper proposes a _soft merging_ method that facilitates rapid merging of multiple models, simplifies the merging of specific parts of neural networks, and enhances robustness against malicious models with extreme values. This is achieved by learning gate parameters through a surrogate of the \(l_{0}\) norm using the hard concrete distribution without modifying the model weights of the given local optima models. This merging process not only enhances the model performance by converging to a better local optimum, but also minimizes computational costs, offering an efficient and explicit learning process integrated with stochastic gradient descent. Thorough experiments underscore the effectiveness and superior performance of the merged neural networks.
Hao Chen\({}^{\dagger}\), Yusen Wu\({}^{*}\), Phuong Nguyen\({}^{*}\), Chao Liu\({}^{\dagger}\), Yelena Yesha\({}^{*}\)\({}^{*}\)Dept. of Computer Science, University of Miami, Miami, FL, USA
\({}^{\dagger}\)Dept. of Computer Science, University of Maryland, Baltimore County, MD, USA Model Merging, Model Optimization
## 1 Introduction
Over the past decade, deep learning has been flourishing in various domains. However, the inherent complexity of neural networks, with their intricate non-linearity and non-convexity, poses formidable challenges. The stochastic gradient descent (SGD) algorithm, despite using identical training data and network architectures, converges to distinct local optima due to different initializations. This leads to a fundamental question: _Can the diverse local optima be leveraged to merge models, enhancing performance and moving closer to a more favorable global optimum?_
Convolutional neural networks exhibit various architectural paradigms like ShuffleNet [1], ResNet [2], UNet [3] and DenseNet [4], each with unique features. Our primary challenge lies in devising an algorithm that accommodates these disparate designs. The secondary challenge involves ensuring the robustness of the merging algorithm across models with vastly varying parameter values. Additionally, we face the third challenge of selectively merging specific components rather than all parameters, aiming for efficiency.
Model merging, as a novel and challenging research direction, has seen limited exploration in the existing literature. Unlike model combination and aggregation [5, 6, 7], which fuse different architectures, model merging improves model performance by integrating trained models (local optima) that share congruent architectures. Simple techniques like arithmetic averaging fall short due to the intricate nature of neural networks, as addressed in [8], which instead proposes to merge models by solving a permutation problem: assuming that the local optima exhibit similar performance, it matches the neurons of the two models. The work in [9] proposed a general framework for merging models based on "teacher" and "student" concepts. Existing methods primarily focus on neuron-level merging, i.e., they directly target the weights of the neural network. However, relying solely on this approach has limitations in applicability, flexibility, and robustness, particularly with irregular models.
To address these issues, we introduce a novel paradigm called _soft merging_, known for efficiency, adaptability, and robustness. Our method draws from model merging and channel pruning research [10, 9, 11, 12]. It involves concurrent training of gate parameters for multiple models, using a differentiable surrogate of \(l_{0}\) regularization to identify crucial parts. Instead of updating the weights, it only picks the best ones from the provided set of weights. This enables selective merging across various layers and architectures, with enhanced adaptability. In summary, our contributions include:
* Our proposal outlines a general procedure for selectively soft merging multiple models simultaneously with diverse neural network architectures.
* We present an algorithm that achieves linear complexity for efficient soft merging.
* Extending neural network model merging to accommodate a wide range of deep learning designs ensures robustness, even in the presence of anomalies.
## 2 Proposed Methods
### Problem statement
Suppose there are \(J\) given models denoted as \(\{\mathcal{M}_{j}\}_{j=1}^{J}\) with the same neural network architecture. Given training data \(\mathbf{X}\) and labels \(\mathbf{Y}\), our goal to find the optimal model \(\mathcal{M}^{*}\) from the object function
\[\min_{\{g_{j}\}_{j}}\sum_{j}\mathcal{L}(g_{j}\mathcal{M}_{j}(\mathbf{X}), \mathbf{Y};\mathbf{\theta}_{j}),s.t.\sum_{j}g_{j}=1, \tag{1}\]
where \(g_{j}\in\{0,1\}\) is a gate parameter, \(\mathcal{L}\) is the loss function, and \(\mathbf{\theta}_{j}\) represents the neural network parameters of the \(j\)-th model. Labels \(\mathbf{Y}\) may not be necessary in some learning tasks, and they can be ignored in specific learning objective functions. The loss function \(\mathcal{L}\) is a general function that can be utilized in various machine learning methods, including supervised, unsupervised, and semi-supervised approaches. With \(\mathbf{\theta}_{j}\) fixed, Eq.(1) is called model-level merging, which is equivalent to picking the best model among the \(J\) models. Jointly learning \(\mathbf{\theta}_{j}\) and \(g_{j}\) belongs to wide-sense _hard_ merging, because \(\mathbf{\theta}_{j}\) may change during the merging process. If only the gate parameters \(g_{j}\) are learned, with \(\mathbf{\theta}_{j}\) held fixed, the procedure is referred to as _soft merging_. In particular, Eq.(1) performs soft merging at the model level, namely picking the best granule from all the granules, with each model as a granule, which is a high-level merging. In the following section, we will introduce soft merging at different levels.
### Model Merging at Various Levels of Granularity
In addition to merging the full models of the neural networks, we can also apply the merging process at a lower level by merging individual modules or layers. Suppose the model \(\mathcal{M}_{j}\) consists of \(L\) layers; we can disassemble \(\mathcal{M}_{j}\) into individual layers, \(\mathcal{M}_{j}:=\mathcal{F}(\{\Lambda_{l,j}\}_{l=1}^{L})\), where \(\mathcal{F}(\cdot)\) is a structural function connecting the layers that bears the same design for all \(J\) models, and \(\Lambda_{l,j}\) is the \(l\)-th layer of the model \(\mathcal{M}_{j}\). Some of the layers in the \(j\)-th model can be aggregated into a module, and the \(m\)-th module is defined as \(\Phi_{m,j}:=\mathcal{F}_{m}(\{\Lambda_{l,j}\}_{l=m}^{m^{\prime}})\). Here, whether we are referring to the model \(\mathcal{M}\), the module \(\Phi\), or the layer \(\Lambda\), they all share the fundamental characteristic of being composed of linear or non-linear functions. So the model \(\mathcal{M}_{j}\) can be written as
\[\mathcal{M}_{j}=\mathcal{F}_{M}(\{\Phi_{m,j}\}_{m=1}^{M})=\mathcal{F}(\{ \Lambda_{l,j}\}_{l=1}^{L}), \tag{2}\]
where \(\mathcal{F}_{M}(\cdot)\) is the structural function to connect all the modules in the model \(\mathcal{M}_{j}\). The objective of module-level merging is to address the following problem
\[\min_{\{g_{m,j}\}_{m,j}}\mathcal{L}(\mathcal{M}(\mathbf{X}), \mathbf{Y};\mathbf{\theta})\ \ s.t.\sum_{j}g_{m,j}=1\] \[\Phi_{m}=\sum_{j}g_{m,j}\Phi_{m,j},\ \mathcal{M}=\mathcal{F}_{M}(\{ \Phi_{m}\}_{m=1}^{M}) \tag{3}\]
where \(g_{m,j}\) are the module-level gates. The gates are applied after the data through the module \(\Phi_{m}\) and the whole flow is managed by \(\mathcal{F}_{M}\). Similarly, the layer-level problem can be formulated as
\[\min_{\{g_{l,j}\}_{l,j}}\ \mathcal{L}(\mathcal{M}(\mathbf{X}), \mathbf{Y};\mathbf{\theta}),\ \ s.t.\sum_{j}g_{l,j}=1,\] \[\text{with}\ \Lambda_{l}=\sum_{j}g_{l,j}\Lambda_{l,j},\ \mathcal{M}= \mathcal{F}(\{\Lambda_{l}\}_{l=1}^{L}) \tag{4}\]
### Merging Optimization Algorithms
We consider \(g_{j}\) as a random variable following a Bernoulli distribution; however, sampling it is not differentiable. To address this, the constraint can be reformulated in terms of the set of variables \(\{g_{j}\}_{j}=\mathbf{g}\in\{0,1\}^{J}\), with \(\|\mathbf{g}\|_{0}=1\) indicating that only one element of \(\mathbf{g}\) is allowed to be non-zero. Similarly, in Eq.(3) and (4), the constraint can be restated in terms of the \(L_{0}\) norm. However, the \(L_{0}\) norm is neither differentiable nor convex. A typical surrogate of the \(L_{0}\) norm is the \(L_{1}\) norm, which is convex, but imposing sparsity in this way would introduce another constraint. In the paper by Louizos et al. [10], a surrogate approach was introduced. This approach utilizes a random variable governed by a hard concrete distribution to address the \(L_{0}\) norm constraint. Notably, this surrogate method retains differentiability through the reparameterization trick.
**Hard Concrete Distribution**. The probability density function (PDF) of concrete distribution is written as,
\[p(s;\beta,\alpha)=\frac{\alpha\beta s^{\beta-1}(1-s)^{\beta-1}}{(s^{\beta}+ \alpha(1-s)^{\beta})^{2}},\ \ 0<s<1. \tag{5}\]
with the cumulative distribution function (CDF) as
\[F(s;\beta,\alpha)=\frac{1}{1+e^{\log\alpha+\beta(\log(1-s)-\log s)}} \tag{6}\]
where \(\alpha>0\) is a location parameter and \(0<\beta<1\) is a temperature parameter. This binary-like concrete distribution is a smooth approximation of the Bernoulli distribution [13], because it can be reparameterized with a uniform random variable \(u\sim\mathcal{U}(0,1)\) as \(s=\text{Sigmoid}((\log(u)-\log(1-u)+\log\alpha)/\beta)\), where \(\text{Sigmoid}(x)=\frac{1}{1+e^{-x}}\). However, the concrete distribution does not include \(0\) and \(1\). To tackle this problem, [10] proposed stretching \(s\) to \((\gamma,\zeta)\) by \(\bar{s}=s\zeta+(1-s)\gamma\), with \(\gamma<0\) and \(\zeta>1\). Then, by folding \(\bar{s}\) into \([0,1]\) via \(g=\min(1,\max(\bar{s},0))\), the hard concrete distribution has the CDF simply as
\[Q(s;\beta,\alpha)=F(\frac{s-\gamma}{\zeta-\gamma}),\ \ 0\leq s\leq 1 \tag{7}\]
and the PDF as
\[q(s;\beta,\alpha)=F(\frac{\gamma}{\gamma-\zeta})\delta(s)+\left(1-F( \frac{1-\gamma}{\zeta-\gamma})\right)\delta(s-1)\] \[+\left(F(\frac{1-\gamma}{\zeta-\gamma})-F(\frac{\gamma}{\gamma- \zeta})\right)p(\frac{s-\gamma}{\zeta-\gamma}),\ \ 0\leq s\leq 1. \tag{8}\]
The comparisons among examples of concrete, stretched concrete, and hard concrete distribution are shown in Fig.1.
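The stretch-and-fold construction above is straightforward to implement with the reparameterization trick. The following PyTorch sketch is a minimal illustration (the values \(\beta=2/3\), \(\gamma=-0.1\), \(\zeta=1.1\) are common choices in the \(L_{0}\)-regularization literature and are not prescribed by this paper):

```python
import torch

# Reparameterized sampling from the hard concrete distribution (stretch-and-clamp).
def hard_concrete_sample(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)             # u ~ Uniform(0, 1)
    s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / beta)  # concrete sample in (0, 1)
    s_bar = s * (zeta - gamma) + gamma                               # stretch to (gamma, zeta)
    return s_bar.clamp(0.0, 1.0)                                     # fold into [0, 1]

def prob_gate_nonzero(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    # q(s > 0; alpha, beta): probability that a gate is not exactly zero after clamping
    return torch.sigmoid(log_alpha - beta * torch.log(torch.tensor(-gamma / zeta)))

log_alpha = torch.zeros(5, requires_grad=True)   # five gates with neutral initialization
g = hard_concrete_sample(log_alpha)
g.sum().backward()                               # gradients reach log_alpha via reparameterization
print(g, prob_gate_nonzero(log_alpha.detach()))
```

Clamped samples are exactly 0 or 1, while unclamped samples remain differentiable in \(\log\alpha\), which is what allows the gates to be trained with stochastic gradient descent.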
**Surrogate Loss Functions** All the gates are replaced by the surrogate probability random variables following the hard concrete distribution. With the reparameterization trick [14], reformulated with the Lagrangian multiplier, the loss function for model-level merging is
\[\min_{\{\alpha_{j},\beta_{j}\}}\sum_{j}\mathcal{L}(\hat{s}_{j} \mathcal{M}_{j}(\mathbf{X}),\mathbf{Y};\boldsymbol{\theta}_{j})+\lambda(\hat{ s}_{j}-\frac{1}{J})\] \[\text{with }\hat{s}_{j}\sim q(s_{j}>0;\alpha_{j},\beta_{j}) \tag{9}\]
Similarly, we can get the module- and layer-level merging loss functions respectively as
\[\min_{\{\alpha_{m,j},\beta_{m,j}\}_{m,j}} \mathcal{L}(\mathcal{M}(\mathbf{X}),\mathbf{Y};\boldsymbol{ \theta})+\lambda\sum_{m,j}(\hat{s}_{m,j}-\frac{M}{J})\] \[\text{with }\hat{s}_{m,j}\sim q(s_{m,j}>0;\alpha_{m,j},\beta_{m,j}) \tag{10}\]
\[\min_{\{\alpha_{l,j},\beta_{l,j}\}_{l,j}} \mathcal{L}(\mathcal{M}(\mathbf{X}),\mathbf{Y};\boldsymbol{\theta })+\lambda\sum_{l,j}(\hat{s}_{l,j}-\frac{L}{J})\] \[\text{with }\hat{s}_{l,j}\sim q(s_{l,j}>0;\alpha_{l,j},\beta_{l,j}) \tag{11}\]
### General training algorithm
Eqs.(9)-(11) give the full-model soft merging at different levels. If one selects only a few layers or modules for soft merging, the loss function should be changed accordingly. For example, suppose we merge only the 1st and the 5th layers of the models, which is a layer-level merging; then only \(\alpha_{1,j},\alpha_{5,j},\beta_{1,j}\) and \(\beta_{5,j}\) are treated as trainable parameters of the gate random variables, while the other gates are fixed to \(1\), using formulation (11). Here we propose the general problem formulation for full-model and selective soft merging at different levels as
\[\min_{\boldsymbol{\alpha},\boldsymbol{\beta}}\mathcal{L}_{1}(\mathbf{X}, \mathbf{Y})+\lambda\mathcal{L}_{2}(\boldsymbol{\alpha},\boldsymbol{\beta}) \tag{12}\]
where \(\mathcal{L}_{1}\) is related to model performance and \(\mathcal{L}_{2}\) is the term controlling the merging, including the sampling process for the reparameterization. In the formulation (12), the parameters \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) represent two sets of parameters selected for the process of selective soft merging. Notably, the hyper-parameter \(\lambda\) remains fixed as a user-defined tuning parameter and is not learned during training. The training methodology is relatively straightforward, involving the application of SGD in the mini-batch fashion as outlined in Table 1.
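To make the procedure in Table 1 concrete, the following PyTorch sketch performs layer-level soft merging of \(J\) frozen toy models by learning only the gate parameters. It is a minimal illustration under our own assumptions (placeholder data, toy two-layer MLPs, and a simplified penalty \(\lambda\sum\hat{s}\) in place of the exact Lagrangian term, whose constant offset does not affect the gradients); it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hard concrete gate sampler (same construction as in the previous sketch).
def hard_concrete(log_alpha, beta=2/3, gamma=-0.1, zeta=1.1):
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
    s = torch.sigmoid((u.log() - (1 - u).log() + log_alpha) / beta)
    return (s * (zeta - gamma) + gamma).clamp(0.0, 1.0)

# J frozen candidate models sharing an identical L-layer architecture (toy 2-layer MLPs).
J, L, dim, n_cls = 3, 2, 16, 4
models = [nn.ModuleList([nn.Linear(dim, dim), nn.Linear(dim, n_cls)]) for _ in range(J)]
for m in models:
    for p in m.parameters():
        p.requires_grad_(False)            # soft merging never updates the model weights

log_alpha = nn.Parameter(torch.zeros(L, J))  # only the gate parameters are learned
opt = torch.optim.Adam([log_alpha], lr=0.05)
lam = 1e-2                                   # Lagrangian weight (lambda in Eq. (12))

def merged_forward(x):
    g = hard_concrete(log_alpha)             # (L, J) sampled layer-level gates
    h = x
    for l in range(L):
        outs = torch.stack([models[j][l](h) for j in range(J)])   # (J, batch, d_out)
        h = (g[l].view(J, 1, 1) * outs).sum(dim=0)  # Lambda_l = sum_j g_{l,j} Lambda_{l,j}
        if l < L - 1:
            h = torch.relu(h)
    return h, g

for step in range(200):                      # mini-batch updates on the gates only
    x = torch.randn(32, dim)                 # placeholder mini-batch (stand-in for X^(b))
    y = torch.randint(0, n_cls, (32,))       # placeholder labels (stand-in for Y^(b))
    logits, g = merged_forward(x)
    loss = F.cross_entropy(logits, y) + lam * g.sum()  # L_1 + lambda * L_2 (constants dropped)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(log_alpha)   # larger entries indicate the layers favoured by the learned gates
```

After training, the largest entry of \(\log\boldsymbol{\alpha}\) in each row indicates which candidate model supplies the corresponding layer, mirroring step 6 of Table 1.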
## 3 Experiments
We conducted multiple experiments at various levels of merging to demonstrate the performance of multi-model soft merging, assess the robustness of the merging process, and explore selective merging across diverse neural networks. The experimental tasks include supervised classification and unsupervised source separation. We used ESC-\(50\)[15] and MNIST (with mixtures) as the data sets. The neural networks are the Audio Spectrogram Transformer (AST) [16], ResNet18 [2], and a variational auto-encoder (VAE) [17]. We apply soft merging at various levels to evaluate the performance of our proposed algorithm. By experimenting with different settings, we aim to demonstrate the versatility of our soft merging approach across a broad spectrum of tasks.
**Model-Level: 10 Models Merging**. The proposed algorithm involves model-level soft merging of 10 vision transformer (ViT) models for audio classification [16] using the ESC-50 dataset. This approach employs parallel selection of the best model post-training, in contrast to the sequential comparison of neural network models. The dataset comprises 50 environmental sound classes, each containing 40 examples, which are divided into 1600 training and 400 validation samples. These models, initially pre-trained on ImageNet, process audio spectrograms using non-overlapping patches. Within the pool of 10 models, ranging from notably underperforming to highly competent ones, the soft-merging technique demonstrates its effectiveness even with limited training data. Furthermore, learning from validation data is accomplished within a mere 5 epochs, thereby reducing computational complexity compared to traditional sequential inference methods. The performance of the merged model, as illustrated in Fig. 2, showcases its capabilities, while the learned gate parameters \(\log\alpha\) in Table 2 provide insights into model quality. Remarkably, even with unfavorable initializations, the training process successfully identifies the correct gradient directions. Model 10 emerges as the top-performing choice, demonstrating the algorithm's effectiveness in model selection without the need for extensive hyperparameter tuning.

\begin{table}
\begin{tabular}{l} \hline \hline Input: \(\mathbf{X}\), \(\mathbf{Y}\), \(\{\mathcal{M}_{j}\}\), \(\lambda\), the learning rate \(\eta\) \\ Output: \(\mathcal{M}^{*}\) \\ \hline
1: Initialize \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) randomly \\
2: For \(b=0,1,\dots\) /* \(b\)-th mini-batch */ \\
3: \(\mathcal{L}_{l}=\mathcal{L}_{1}(\mathcal{M}(\mathbf{X}^{(b)}),\mathbf{Y}^{(b)})+\lambda\mathcal{L}_{2}(\boldsymbol{\alpha},\boldsymbol{\beta})\) /* \(\mathbf{X}^{(b)},\mathbf{Y}^{(b)}\) are the data and labels of the current mini-batch; \(\mathcal{L}_{l}\) is the loss for full-model or selective merging */ \\
4: \(\boldsymbol{\alpha}=\boldsymbol{\alpha}-\eta\frac{\partial\mathcal{L}_{l}}{\partial\boldsymbol{\alpha}},\ \boldsymbol{\beta}=\boldsymbol{\beta}-\eta\frac{\partial\mathcal{L}_{l}}{\partial\boldsymbol{\beta}}\) \\
5: Next \(b\) \\
6: \(\mathcal{M}^{*}\) contains the gate parameters \(\hat{s}^{*}\sim q(\mathbf{s};\boldsymbol{\alpha}^{*},\boldsymbol{\beta}^{*})\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: General training algorithm.
**Module-Level: Three Models Merging** This experiment aims to determine whether our algorithm can effectively identify the correct modules within a collection of both trained and untrained modules. Specifically, we took one ResNet18 model trained on MNIST and two untrained ResNet18 models, splitting each into two halves to create a total of six modules. Among these, two modules held functional values while the other four contained random values. Our objective was to discern the functional modules using the learned gate values. Despite the initial poor performance of the three individual models due to the untrained modules, applying soft merging yielded promising outcomes, indicating successful learning of the correct gates. In this experiment, we set \(\lambda=5\) with a learning rate of 0.001 across 150 epochs. The initial \(\log\mathbf{\alpha}\) values followed a Gaussian distribution \(\mathcal{N}(0,0.01)\). The learning curve in Fig. 3(a) depicts the merged model's progression, demonstrating that while the initial performance was subpar due to random gate initialization (Fig. 3(b)), both training and validation accuracy improved significantly and quickly converged after around 80 epochs. Notably, the \(\log\mathbf{\alpha}\) values in Fig. 3(b) did not converge within 150 epochs, indicating that this parameter has no inherent bounds due to the formulation in Eqs. (5) and (6).
**Selective Layer-Level Merging** In our unsupervised source separation experiment, we adapted Variational Autoencoders (VAEs) for blind source separation, showcasing the capabilities of our algorithm in such settings. We applied this concept to image data, similar to audio and RF blind source separation problems [18]. By manually creating MNIST mixtures without labels, we mirrored the approach in [17]. We used two trained models with similar signal-to-interference ratio (SIR) performance and chose one layer in the encoder and one layer in the decoder to conduct the soft merging, which requires choosing a primary and a secondary model. The VAE KL penalty \(\beta_{KL}\) increased up to 0.5 per epoch for 10 epochs, with \(\mathbf{\beta}=0\) and \(\lambda=1\). The gate values in the last batch are depicted in Fig. 4, where different primary-model selections led to different \(\log\mathbf{\alpha}\) values, while the SIR remained around \(29\) and slightly exceeded its value before merging.
## 4 Conclusions
Our research introduces the innovative concept of soft merging, a paradigm that addresses adaptability, efficiency, and robustness challenges in enhancing deep learning models. Our approach provides a versatile method for selectively integrating diverse neural network architectures, ultimately leading to improved model performance and a better local optimum.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline
**Model \#** & \(1\) & \(2\) & \(3\) & \(4\) & \(5\) \\ \hline
**Init.** & \(0.0089\) & \(-0.0185\) & \(-0.0075\) & \(0.0276\) & **0.0059** \\
**Final** & \(-0.7604\) & \(-0.7877\) & \(0.6155\) & \(0.6509\) & \(0.6722\) \\ \hline \hline
**Model \#** & \(6\) & \(7\) & \(8\) & \(9\) & **10** \\ \hline
**Init.** & \(0.0023\) & \(0.0120\) & \(-0.0098\) & \(0.0115\) & \(-0.0003\) \\
**Final** & \(0.7943\) & \(0.7943\) & \(0.7878\) & \(0.8740\) & **0.8870** \\ \hline \end{tabular}
\end{table}
Table 2: Model-level merging \(\log\alpha\) values
Figure 4: The gate values of the last mini-batch training data.
Figure 3: Module-level merging result, using three ResNet18 models and manually split into \(6\) modules, with only two correct ones
Figure 2: Accuracy Comparison of Models and Merged Model |
2309.05725 | Greybody Factors Imprinted on Black Hole Ringdowns: an alternative to
superposed quasi-normal modes | It is shown that the spectral amplitude of gravitational-wave (GW) ringdown
of a Kerr black hole sourced by an extreme mass ratio merger can be modeled by
the $\textit{greybody factor}$, which quantifies the scattering nature of the
black hole geometry. The estimation of the mass and spin of the remnant is
demonstrated by fitting the greybody factor to GW data without using black hole
quasi-normal modes. We propose that the ringdown modeling with the greybody
factor may strengthen the test of gravity as one can avoid the possible
overfitting issue and the start time problem in the ringdown modeling with
superposed quasi-normal modes. | Naritaka Oshita | 2023-09-11T18:00:43Z | http://arxiv.org/abs/2309.05725v2 | # Greybody Factors Imprinted on Black Hole Ringdowns:
###### Abstract
It is shown that the spectral amplitude of gravitational-wave (GW) ringdown of a Kerr black hole sourced by an extreme mass ratio merger can be modeled by the _greybody factor_, which quantifies the scattering nature of the black hole geometry. The estimation of the mass and spin of the remnant is demonstrated by fitting the greybody factor to GW data without using black hole quasi-normal modes. We propose that the ringdown modeling with the greybody factor may strengthen the test of gravity as one can avoid the possible overfitting issue and the start time problem in the ringdown modeling with superposed quasi-normal modes.
Footnote †: preprint: RIKEN-iTHEMS-Report-23
## I Introduction
The Kerr solution, describing a spinning black hole, is one of the simplest solutions to the Einstein equation. Based on the black hole no-hair theorem [1; 2; 3], the spacetime structure near an astrophysical Kerr black hole is characterized by two parameters only, i.e., the mass \(M\) and angular momentum \(J\) of the black hole. Therefore, a black hole is a suitable site to test gravity in strong gravity regimes. In the context of the test of the no-hair theorem, the black hole spectroscopy [4; 5; 6] has been actively studied so far. The black hole spectroscopy is an extraction of each black hole quasi-normal (QN) mode [7; 8; 9; 10; 11; 12; 13; 14] from gravitational wave (GW) ringdown, which is a superposition of multiple QN modes. There are an infinite number of QN modes and each mode has a complex frequency \(\omega=\omega_{lmn}\in\mathbb{C}\) labeled by the overtone number \(n\) for each angular and azimuthal mode \((l,m)\). The real and imaginary parts of \(\omega_{lmn}\) represent the frequency and damping rate of the mode, respectively.
GW ringdown appears after the inspiral phase of a binary black hole system. If the ringdown starts around the strain peak, it would be possible to measure several QN modes included in a ringdown signal by truncating GW data before the assumed start time of ringdown and by fitting several QN modes to the truncated data [15]. However, some issues in the black hole spectroscopy with (superposed) QN modes have been pointed out, such as the start time problem [13; 8; 14; 16] and the overfitting problem [17]. Then, it would be natural to ask if there is another quantity, other than QN modes, that is suitable for testing gravity.
In this paper, we propose that the black hole greybody factor, \(\Gamma_{lm}(\omega)\), would be an important quantity in the test of the no-hair theorem and the estimation of the remnant parameters from GW ringdown. We here consider a particle plunging into a massive black hole as a source of GW. Then we show that for \((l,m)=(2,2)\), \(\Gamma_{lm}\) can be imprinted on the GW spectral amplitude \(|\tilde{h}_{lm}(\omega)|\) in \(\omega\gtrsim f_{lm}\equiv\text{Re}(\omega_{lm0})\) with the form of
\[|\tilde{h}_{lm}(\omega)|\simeq c_{lm}\times\gamma_{lm}(\omega)\equiv c_{lm} \times\sqrt{1-\Gamma_{lm}(\omega)}/\omega^{3}\text{ for }\omega\gtrsim f_{lm}, \tag{1}\]
where \(\omega\) is the GW frequency and \(c_{lm}\) is a constant corresponding to the GW amplitude. The frequency dependence of the greybody factor is determined by the two remnant parameters only, i.e., the mass and spin of the black hole. This means that if (1) holds, one can detect the greybody factor from the ringdown to test the no-hair theorem as the spectral amplitude in \(\omega\gtrsim f_{lm}\) corresponds to the ringdown signal. The reflectivity \(\mathcal{R}_{lm}\equiv 1-\Gamma_{lm}\) has an exponential damping at high frequencies (\(\omega\gtrsim f_{lm}\)) and the strength of the damping in the frequency domain is unique for the remnant mass and spin like a complex QN mode frequency. As the damping in \(\mathcal{R}_{lm}\) is strong for rapid spins, the reflectivity largely governs the frequency dependence of \(|\tilde{h}_{lm}|\), and our model works well especially for rapidly spinning remnant black holes. One important difference between the ringdown modeling with the greybody factor and that with QN modes is that given the remnant mass \(M\) and spin \(j(\equiv J/M^{2})\), we know where the universal damping of \(\mathcal{R}_{lm}=1-\Gamma_{lm}\) appears in the frequency space, i.e., \(\omega\gtrsim f_{lm}(M,j)\), but it is unknown when the excitation of superposed QN modes appears in the time domain, which is recognized as the start time problem or the time-shift problem.
The original idea of the modeling of ringdown with the greybody factor was introduced in the previous paper by the author [18]. In this paper, we investigate the importance of the greybody factors in the ringdown sourced by an extreme mass ratio merger in more detail. In Sec. II.1, we explain our methodology to compute GW waveform in the linear perturbation regime. The definition and the property of the greybody factor is provided in Sec. II.2. In Sec. III.1, we study why the greybody factor can be imprinted on the GW ringdown by carefully considering the
effect of the source term. In Sec. III.2, we investigate the exponential damping in the greybody factor and how it is consistent with the exponential damping in the spectral amplitude of the GW ringdown. In Sec. III.3, we perform the measurement of the remnant mass and spin only with the fit of the greybody factor. In Sec. IV, our conclusion is provided and we discuss the pros and cons of using the greybody factor and QN modes in the test of the no-hair theorem, measurement of the remnant quantities, and the modeling of GW ringdown. Throughout the manuscript, we use the natural unit of \(c=\hbar=1\) and \(G=1\).
## II Formalism
In this section, we describe how we compute GW spectral amplitude for an extreme mass ratio merger and the greybody factor of a spinning black hole. Here we concentrate on a particle plunging into the hole and its trajectory is restricted on the equatorial plane.
### extreme mass ratio merger and gravitational waveform
The background geometry is approximated by the Kerr spacetime when we consider an extreme mass ratio merger with a massive black hole. Therefore, the background geometry can be covered by the Boyer-Lindquist coordinates \((t,r,\theta,\phi)\) and one can compute the GW spectrum \(\tilde{h}_{lm}(\omega)\) sourced by the merger event in a linear manner. Let us begin with solving the Sasaki-Nakamura equation [19]:
\[\left[\frac{d^{2}}{dr^{*2}}-F_{lm}\frac{d}{dr^{*}}-U_{lm}\right]X_{lm}=\rho_{ lm}, \tag{2}\]
where the explicit forms of \(F_{lm}\) and \(U_{lm}\) are given in the original paper by Sasaki and Nakamura [19], and the spectrum \(\tilde{h}_{lm}\) is obtained from the perturbation variable \(X_{lm}\) as is shown later explicitly. The source term \(\rho_{lm}\) depends on the plunging orbit of a particle with mass \(\mu\). The form of \(\rho_{lm}\) for the plunging particle on the equatorial plane (\(\theta=\pi/2\)) is [20]
\[\rho_{lm}=\frac{\gamma_{0}\Delta}{(r^{2}+a^{2})^{3/2}r^{2}}W\exp{\left(-i \int^{r}\frac{K(r^{\prime})}{\Delta(r^{\prime})}dr^{\prime}\right)}, \tag{3}\]
where \(\Delta(r)\equiv r^{2}-2Mr+a^{2}\), \(K(r)\equiv(r^{2}+a^{2})\omega-am\), and \(a\equiv J/M\). The functions \(\gamma_{0}\) and \(W\) are shown in Appendices A and B of Ref. [20], respectively. The trajectory of a particle is determined by the following differential equations
Figure 1: Spectral amplitude \(|\tilde{h}_{22}|\) for \(j=0.7\) (black dash-dotted), \(0.9\) (red solid), and \(0.99\) (blue dashed). The source term is obtained with \(L_{z}=0.5\) and the observation angle is \(\theta=\pi/2\). The black vertical lines show the real part of the fundamental QN mode frequency \(f_{22}(j)\). The exponential damping of the spectral amplitudes appear in \(\omega\gtrsim f_{22}\).
[20; 21]:
\[r^{2}\frac{dt}{d\tau} =-a(a-L_{z})+\frac{r^{2}+a^{2}}{\Delta}P, \tag{4}\] \[r^{2}\frac{d\phi}{d\tau} =-(a-L_{z})+\frac{a}{\Delta}P, \tag{5}\] \[r^{2}\frac{dr}{d\tau} =-\sqrt{R}, \tag{6}\] \[\theta =\pi/2, \tag{7}\]
where \(P\equiv r^{2}+a^{2}-L_{z}a\), \(R\equiv 2Mr^{3}-L_{z}^{2}r^{2}+2Mr(L_{z}-a)^{2}\), \(\mu L_{z}\) is the orbital angular momentum, and \(\tau\) is the proper time of the particle. We obtain the source term \(\rho_{lm}\) by substituting the trajectory of the particle, \((t(\tau),r(\tau),\theta=\pi/2,\phi(\tau))\), into (3). We then numerically solve the Sasaki-Nakamura equation with the source term \(\rho_{lm}\) for a plunging orbit on the equatorial plane. The GW spectral amplitude obtained for \(l=m=2\) is shown in Figure 1.
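As an illustration of how the plunging trajectory entering the source term (3) can be tabulated, the following sketch integrates Eqs. (4)-(7) with SciPy. It is not the code used for the results of this paper; it assumes a particle at rest at infinity (\(E=1\), as implied by the quoted forms of \(P\) and \(R\)), geometric units \(G=c=1\), an illustrative starting radius, and it stops just outside the outer horizon where \(\Delta\to 0\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def plunge_trajectory(M=1.0, j=0.9, Lz=0.5, r0=50.0, tau_max=5000.0):
    """Integrate Eqs. (4)-(7) for an equatorial plunge into a Kerr black hole."""
    a = j * M
    r_plus = M + np.sqrt(M**2 - a**2)          # outer horizon radius

    def rhs(tau, y):
        t, r, phi = y
        Delta = r**2 - 2.0 * M * r + a**2
        P = r**2 + a**2 - Lz * a
        R = 2.0 * M * r**3 - Lz**2 * r**2 + 2.0 * M * r * (Lz - a)**2
        dt = (-a * (a - Lz) + (r**2 + a**2) * P / Delta) / r**2
        dr = -np.sqrt(max(R, 0.0)) / r**2      # inward radial motion
        dphi = (-(a - Lz) + a * P / Delta) / r**2
        return [dt, dr, dphi]

    def hit_horizon(tau, y):                   # stop slightly outside r_+
        return y[1] - (r_plus + 1e-3 * M)
    hit_horizon.terminal = True

    return solve_ivp(rhs, (0.0, tau_max), [0.0, r0, 0.0],
                     events=hit_horizon, rtol=1e-10, atol=1e-12,
                     dense_output=True)        # sol.y rows are (t, r, phi) vs tau

traj = plunge_trajectory()
```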
### greybody factors
The greybody factor quantifies the absorptive nature of a black hole geometry and is independent of the source term. It is determined only by the no-hair parameters of a black hole, i.e., the mass and spin of a black hole, like the black hole quasi-normal modes. We obtain the greybody factor by computing a homogeneous solution to the Sasaki-Nakamura equation \(X_{lm}=X_{lm}^{\rm(hom)}\) with the boundary condition of
\[X_{lm}^{\rm(hom)}=e^{-ik_{\rm H}r^{*}}\text{ for }r^{*}\to-\infty, \tag{8}\]
where \(k_{\rm H}\equiv\omega-m\Omega_{\rm H}\) with \(\Omega_{\rm H}\equiv j/(2r_{+})\). We then read the asymptotic ingoing and outgoing amplitudes at a distant region as
\[X_{lm}^{\rm(hom)}=A_{\rm in}e^{-i\omega r^{*}}+A_{\rm out}e^{i\omega r^{*}} \text{ for }r^{*}\to\infty. \tag{9}\]
The reflectivity of the angular momentum barrier is given by [22; 23]
\[\mathcal{R}_{lm}\equiv\left|\frac{C}{c_{0}}\right|^{2}\left|\frac{A_{\rm out} }{A_{\rm in}}\right|^{2}\equiv 1-\Gamma_{lm}, \tag{10}\]
where \(\mathcal{R}_{lm}\) and \(\Gamma_{lm}\) are the reflectivity and the greybody factor (i.e., transmissivity), respectively. The factors \(|C|^{2}\) and \(c_{0}\) are [24; 25]
\[|C|^{2} \equiv\lambda^{4}+4\lambda^{3}+\lambda^{2}(-40a^{2}\omega^{2}+40 am\omega+4)+48a\lambda\omega(a\omega+m)+144\omega^{2}(a^{4}\omega^{2}-2a^{3}m \omega+a^{2}m^{2}+M^{2}), \tag{11}\] \[c_{0} \equiv\lambda(\lambda+2)-12a\omega(a\omega-m)-i12M\omega, \tag{12}\]
respectively, and \(\lambda\) is the separation constant of the spin-weighted spheroidal harmonics. We numerically compute the greybody factor by solving the Sasaki-Nakamura equation1. Our computation reproduces the exponential decay
\begin{table}
\begin{tabular}{c||c c c c c c c} \hline spin parameter (\(j\)) & 0.001 & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 \\ \hline decay frequency (\(T_{22}\)) & 0.067 & 0.066 & 0.065 & 0.064 & 0.063 & 0.062 & 0.060 \\ \hline \multicolumn{6}{c}{} \\ \hline spin parameter (\(j\)) & 0.7 & 0.8 & 0.9 & 0.95 & 0.99 & 0.995 & 0.998 \\ \hline decay frequency (\(T_{22}\)) & 0.057 & 0.053 & 0.045 & 0.036 & 0.019 & 0.014 & 0.0096 \\ \hline \end{tabular}
\end{table}
Table 1: The value of \(T_{lm}\) (\(l=m=2\)) with respect to the spin parameter \(j\). We read the value of \(T_{22}\) from \(1-\Gamma_{22}(\omega)\) in the range of \(\alpha\times f_{22}\leq\omega\leq 1.99/(2M)\) where we choose a constant \(\alpha\) in the range of \(1\leq\alpha\leq 1.2\). The constant \(\alpha\) should be larger for a lower spin in order for \(1-\Gamma_{22}\) to be well approximated with \(e^{-(\omega-f_{22})/T_{22}}\) in the frequency range. The value of \(\alpha\) we set is shown in (A3) in Appendix A.
of \(\mathcal{R}_{22}\) at high frequencies (\(\omega\gtrsim f_{22}\)) and the superradiant amplification at \(\omega<m\Omega_{\rm H}\) as is shown in Figure 2. The exponential damping of \(1-\Gamma_{lm}\) in the high-frequency region (\(\omega\gtrsim f_{lm}\)) can be approximated as
\[1-\Gamma_{lm}\simeq e^{-(\omega-f_{lm})/T_{lm}}, \tag{13}\]
where \(T_{lm}\) quantifies the strength of the exponential damping of \(\mathcal{R}_{lm}\) in the frequency domain. The factor \(T_{lm}\) is a _no-hair_ quantity which depends only on the mass and spin of the remnant black hole like the QN mode frequency. The values of \(T_{lm}\) extracted from our numerical data are shown in Table 1 and the fitting methodology we used is provided in Appendix A.
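Extracting \(T_{lm}\) from Eq. (13) amounts to a linear fit in log space. A minimal sketch, assuming NumPy arrays of frequencies and of the numerically computed reflectivity and using a plain least-squares fit in place of the NonlinearModelFit procedure described in Appendix A, reads:

```python
import numpy as np

def fit_damping_frequency(omega, reflectivity, f_lm, alpha=1.1, omega_max=None):
    """Fit log(1 - Gamma_lm) ~ B*(omega - omega_i) + const over omega >= alpha*f_lm,
    following Eq. (13); the damping frequency is T_lm = -1/B."""
    omega = np.asarray(omega)
    refl = np.asarray(reflectivity)            # numerical values of 1 - Gamma_lm
    if omega_max is None:
        omega_max = omega.max()
    omega_i = alpha * f_lm
    mask = (omega >= omega_i) & (omega <= omega_max) & (refl > 0)
    B, _ = np.polyfit(omega[mask] - omega_i, np.log(refl[mask]), 1)
    return -1.0 / B
```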
## III Greybody factors in ringdown
In this Section, we study the greybody factor imprinted on GW ringdown. As is shown in Figure 3, we find that the greybody factor can model the spectral amplitude of GW ringdown in \(\omega\gtrsim f_{lm}\) when the GW is sourced by an extreme mass ratio merger. This still holds even for some higher harmonic modes (Figure 4) and for various values of the orbital angular momentum \(\mu L_{z}\) and the spin parameter \(j\) of the massive black hole. We also confirm that the frequency dependence of the GW spectrum is insensitive to the observation angle \(\theta\) as shown in Appendix B. This universal nature of the black hole ringdown is important for testing the no-hair theorem with high precision in combination with the black hole spectroscopy.
We here study how the greybody factor can be imprinted on the ringdown signal. We also demonstrate the estimation of the remnant mass and spin by using the greybody factor only. We then find that it works well, which implies that the greybody factor is important for testing the black hole no-hair theorem. Using not only the QN modes but also the greybody factor would enhance the accuracy of the test of gravity and the measurability of the remnant quantities at least for extreme mass ratio mergers.
### Why are the greybody factors imprinted on ringdown?
We here discuss why the greybody factors can be imprinted on the ringdown for extreme mass ratio mergers. For higher mass ratios, the background geometry is governed only by a massive black hole and the perturbation theory in the Kerr spacetime works to compute GW waveform. The GW strain, \(h\), is given by
\[h=\sum_{l,m}\frac{e^{im\phi}}{\sqrt{2\pi}r}\int d\omega\tilde{h} _{lm}(\omega)e^{-i\omega t} =\sum_{l,m}\frac{e^{im\phi}}{\sqrt{2\pi}r}\int d\omega\frac{-2}{ \omega^{2}}_{-2}S_{lm}(a\omega,\theta)R_{lm}(\omega)e^{-i\omega t}, \tag{14}\] \[=\sum_{l,m}\frac{e^{im\phi}}{\sqrt{2\pi}r}\int d\omega\frac{-2}{ \omega^{2}}_{-2}S_{lm}(a\omega,\theta)\frac{A_{\rm out}(\omega)}{2i\omega A_{ \rm in}(\omega)}\tilde{\rho}_{lm}(\omega)e^{-i\omega t}, \tag{15}\]
Figure 2: Reflectivity \(\mathcal{R}_{lm}=1-\Gamma_{lm}\) for \((l,m)=(2,2)\) with \(j=0.7\) (black dot-dashed), \(0.9\) (red solid), and \(0.99\) (blue dashed). The black vertical lines show the real part of the fundamental QN mode frequency \(\omega=f_{22}\). The exponential damping of the reflectivity appears in \(\omega\gtrsim f_{22}\).
where \({}_{-2}S_{lm}(a\omega,\theta)\) is the spin-weighted spheroidal harmonics, \(R_{lm}\) is the radial Teukolsky variable, and \(\tilde{\rho}_{lm}\) is
\[\tilde{\rho}_{lm}(\omega)=\frac{-4\omega^{2}}{\lambda(\lambda+2)-12iM\omega-12a^ {2}\omega^{2}}\int_{r_{+}}^{\infty}dr^{\prime}\frac{\rho_{lm}(\omega,r^{\prime} )X_{lm}^{\rm(hom)}(\omega,r^{\prime})}{A_{\rm out}(\omega)}, \tag{16}\]
Then we find
\[|\tilde{h}_{lm}|=\frac{\sqrt{1-\Gamma_{lm}}}{\omega^{3}}t_{lm}=\gamma_{lm}t_{lm}, \tag{17}\]
where \(\gamma_{lm}\) is a universal quantity that depends only on the two remnant quantities \((M,j)\) and another factor \(t_{lm}\) includes the source term and the spheroidal harmonics, depending on the external information like the GW source and the observation angle, respectively:
\[t_{lm}\equiv\Big{|}\frac{c_{0}}{C}\Big{|}\,|\tilde{\rho}_{lm}||_{-2}S_{lm}|. \tag{18}\]
The factor \(t_{lm}\) is hereinafter referred to as the _renormalized source term_. The frequency dependence of the GW spectral amplitude, \(|\tilde{h}_{lm}|\), is governed by \(\gamma_{lm}\) at higher frequencies \(\omega\gtrsim f_{lm}\) provided that \(t_{lm}\) has only a weak dependence on \(\omega\) for \(\omega\gtrsim f_{lm}\). The value of the source term \(t_{22}\) is shown in Figure 5. One can see that \(t_{22}(\omega)\) is indeed nearly constant and \(\gamma_{22}\) governs the frequency dependence of the GW spectrum. This might imply that a compact object plunging into a large black hole can be regarded as an _instantaneous_ source of GW ringdown and the associated
source term can be nearly constant in the frequency domain2. We leave a more detailed study of our ringdown model for other harmonic modes, e.g., the sensitivity of our model for \((l,m)\neq(2,2)\) to external parameters like the orbital angular momentum, for a future work.
Footnote 2: Remember that an instantaneous pulse like a delta function or a sharp Gaussian distribution in the time domain has a (nearly) constant distribution in the frequency domain.
### Exponential decay in GW spectral amplitudes and in the greybody factors
We find that the spectral amplitude in the high-frequency region (\(\omega\gtrsim f_{22}\)) can be modeled by the greybody factor as is shown in Figures 3 and 5. This holds for various values of the orbital angular momentum of the plunging particle \(\mu L_{z}\). Fitting the Boltzmann factor \(\exp[-(\omega-f_{22})/T_{22}^{\rm(GW)}]\) to the simulated GW data3, we read the damping exponent of the GW spectral amplitude \(T_{22}^{\rm(GW)}\) in the frequency domain. The result is shown in Figure 6 and the best fit values of \(T_{22}^{\rm(GW)}\) (dots) are consistent with \(T_{22}\) (solid line) especially for \(j\gtrsim 0.8\). The value of \(T_{22}\) is sensitive to the spin parameter \(j\) for rapid spins, but is insensitive to \(j\) for lower spins (see Table 1 as well).
Footnote 3: The detailed methodology of our fitting analysis is provided in Appendix A.
In addition to \(T_{lm}\), another quantity \(f_{lm}\) is also important to model \(|\tilde{h}_{lm}|\) as the exponential damping in \(|\tilde{h}_{lm}|\) appears at \(\omega\gtrsim f_{lm}\) (see Figures 1 and 3). In the next section, we show that the two remnant values, i.e., \(M\) and \(j\), can be extracted from the GW spectral amplitude by fitting \(\gamma_{lm}\) characterized by \(T_{lm}(M,j)\) and \(f_{lm}(M,j)\).
Figure 6: The dots show the best fit values of \(T_{22}^{\rm(GW)}\) extracted from the numerical GW waveform data with \(\theta=\pi/2\). The black solid line is the best fit value of the damping frequency \(T_{22}\) obtained from the numerical computation of the greybody factor.
### estimation of the remnant quantities
The two no-hair quantities \((M,j)\) can be extracted by fitting the function of \(\gamma_{lm}(M,j)\equiv\sqrt{1-\Gamma_{lm}(M,j)}/\omega^{3}\) to the spectral amplitude of GW data \(|\tilde{h}_{lm}|\) as the greybody factor is characterized by the two remnant quantities. The fitting parameters are \(M\), \(j\), and an amplitude \(c_{lm}\). Here we demonstrate the extraction of the two no-hair parameters \((M,j)\) from our clean numerical GW waveform by fitting \(\gamma_{lm}\) with \((l,m)=(2,2)\). We here use the analytic model function that models \(\gamma_{22}\), whose explicit form is shown in Appendix C. The estimation of \((M,j)\) with noise is important to quantify the feasibility for a specific detector, and it will be studied elsewhere.
We estimate the mismatch \(\mathcal{M}\) between the GW spectral amplitude \(|\tilde{h}_{22}|\) and \(c_{22}\times\gamma_{22}\) on the mass-spin space with
\[\mathcal{M}(M,a)=\left|1-\frac{\left\langle|\tilde{h}_{22}||c_{22}\times\gamma _{22}\right\rangle}{\sqrt{\left\langle|\tilde{h}_{22}||\tilde{h}_{22}|\right\rangle \left\langle c_{22}\times\gamma_{22}|c_{22}\times\gamma_{22}\right\rangle}} \right|=\left|1-\frac{\left\langle|\tilde{h}_{22}||\gamma_{22}\right\rangle}{ \sqrt{\left\langle|\tilde{h}_{22}||\tilde{h}_{22}|\right\rangle\left\langle \gamma_{22}|\gamma_{22}\right\rangle}}\right|, \tag{19}\]
where \(\left\langle a(\omega)|b(\omega)\right\rangle\) is
\[\left\langle a(\omega)|b(\omega)\right\rangle=\int_{\omega_{i}}^{\omega_{f}}d \omega a(\omega)b^{*}(\omega). \tag{20}\]
Note that the mismatch \(\mathcal{M}\) is independent of the scale \(c_{22}\) and depends only on the other two fitting parameters \((M,j)\). This makes the fit and extraction of the remnant quantities much simpler than in the multiple QN mode fitting. Also, we could avoid the overfitting issue. For the fit of multiple overtones, on the other hand, there are many fitting parameters, i.e., an amplitude and phase for each QN mode. It was pointed out [17] that the inclusion of many QN modes in the ringdown model may cause overfitting when we use a GW waveform beginning with the strain peak4.
Footnote 4: On the other hand, the previous work of Ref. [15] fit multiple QN modes to the numerical relativity GW waveform beginning from the strain peak. Then they reproduced the injected remnant mass and spin values. This implies that the fit of multiple QN modes may work at least when GW data has no contamination by noise.
We estimate the mismatch \(\mathcal{M}\) by computing the inner product (20) with the range of the integral set by \(2M\omega_{i}=1\) and \(2M\omega_{f}=1.99\). Note that the \(M\) in \(\omega_{i/f}\) is not the true value but the fitting parameter of the black hole mass. The mismatch is computed in the mass-spin domain and the result is shown in Figure 7. We find that the mass-spin estimation works well even though we here use the greybody factor without the fit of multiple QN modes. We also find that the best fit mass and spin are not sensitive to an artificial choice of the range of the data we use \(\omega\in[\omega_{i},\omega_{f}]\) as is shown in Figure 8. On the other hand, in the fit of QN modes, the mass-spin measurement is sensitive to the assumed start time of ringdown [15]. Although the feasibility of the extraction of the greybody factor depends on noise, combining this with the black hole spectroscopy may strengthen not only the measurability of the remnant quantities but also the precision of the test of gravity. We will come back to this in the future.
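For concreteness, the mismatch minimization of Eqs. (19) and (20) can be prototyped as below. This is only a schematic sketch: `gamma_of` is a placeholder for a routine returning \(\gamma_{22}(\omega)\) for given \((M,j)\) (for instance built from the analytic fit of Appendix C), the spectral amplitude is treated as real and positive so that complex conjugation is dropped, and the inner products are plain sums over a uniform frequency grid (the overall \(d\omega\) factor cancels in the normalized ratio).

```python
import numpy as np

def mismatch(h_abs, gamma_model):
    """Eq. (19): amplitude-independent mismatch between |h_22| and gamma_22."""
    inner = lambda a, b: float(np.sum(a * b))
    num = inner(h_abs, gamma_model)
    den = np.sqrt(inner(h_abs, h_abs) * inner(gamma_model, gamma_model))
    return abs(1.0 - num / den)

def estimate_mass_spin(omega, h_abs, gamma_of, masses, spins):
    """Grid search over (M, j), keeping only data with 2*M*omega in [1, 1.99]."""
    best = (None, None, np.inf)
    for M in masses:
        sel = (2.0 * M * omega >= 1.0) & (2.0 * M * omega <= 1.99)
        for j in spins:
            mm = mismatch(h_abs[sel], gamma_of(M, j, omega[sel]))
            if mm < best[2]:
                best = (M, j, mm)
    return best  # (best-fit mass, best-fit spin, minimal mismatch)
```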
## IV Discussions
The superposition of QN modes is one of the most established models of the black hole ringdown. In this paper, we discussed another universal nature of ringdown that is described by the black hole greybody factor \(\Gamma_{lm}\). We considered how GW ringdown can be modeled by the greybody factor, which is another no-hair quantity that depends only on the mass and spin of the remnant black hole like the black hole QN modes. We found that the spectral amplitude of GW ringdown \(|\tilde{h}_{lm}|\) with \((l,m)=(2,2)\) sourced by an extreme mass ratio merger can be modeled by \(|\tilde{h}_{lm}|\sim c\times\gamma_{lm}(\omega)\) for \(\omega\gtrsim f_{lm}\) where \(\gamma_{lm}(\omega)\equiv\sqrt{1-\Gamma_{lm}}/\omega^{3}\) and \(c\) is an amplitude. In order for the greybody factor to be imprinted on the ringdown, the renormalized source term \(t_{lm}(\omega)\), depending on the GW source, should be nearly constant with respect to \(\omega\) for \(\omega\gtrsim f_{lm}\). We confirmed that \(t_{22}(\omega)\) satisfies the condition when the GW is sourced by a compact object plunging into a massive black hole in the extreme mass ratio regime (Figure 5). We may expect that this is the case as long as a particle plunging into a massive black hole can be regarded as an instantaneous source of GW ringdown. We numerically computed GW waveforms with several values of the orbital angular momentum \(\mu L_{z}\). We then confirmed that the GW spectral amplitude is well modeled by \(\gamma_{22}\), determined by the greybody factor, at higher frequencies \(\omega\gtrsim f_{22}\) for various values of \(|L_{z}|\lesssim 0.5\) (see Figures 3 and 6). Also, this model works well especially for a rapidly spinning remnant black hole. Indeed, the measurement of the innermost stable circular orbit of supermassive black
holes (SMBHs) based on X-ray observations puts a lower bound on the spin \(j\) of SMBHs, and some of them take \(j>0.9\) and can even be near-extremal with \(j>0.99\) [27; 28].
As the greybody factor \(\Gamma_{lm}\) is another no-hair quantity, the extraction of not only the QN modes but also the greybody factor from GW ringdown would improve the accuracy of the measurement of the remnant mass and spin and strengthen the test of gravity (Figure 7). The pros and cons in the modeling of GW ringdown with QN modes and greybody factors are summarized below.
1. For the ringdown modeling with QN modes, the relevant data range in the time domain \(t\gtrsim t_{\rm start}\) is difficult
Figure 8: The mismatch \(\mathcal{M}\) between \(|\tilde{h}_{22}|\) and \(\gamma_{22}\) for \(j=0.9\). The injected (true) values of the remnant quantities are indicated with the white solid lines. The source particle has the orbital angular momentum of \(\mu L_{z}=0.5\mu\) and we set \(\theta=\pi/2\). We change the frequency range of the data used in the computation of \(\mathcal{M}\) as \(2M\omega_{i}=1\), \(1.1\), \(1.25\) and \(2M\omega_{f}\) is fixed to \(1.99\).
Figure 7: The mismatch \(\mathcal{M}\) between \(|\tilde{h}_{22}|\) and \(\gamma_{22}\) for (a) \(j=0.7\), (b) \(0.9\), and (c,d) \(0.99\). The injected (true) values of the remnant quantities are indicated with the white solid lines. The source particle has the orbital angular momentum of \(\mu L_{z}=0.5\mu\) and the observation angle is set to \(\theta=\pi/2\). The frequency range used in the estimation of the mismatch is \([2M\omega_{i},2M\omega_{f}]=[1,1.99]\).
to identify, where \(t_{\rm start}\) is the start time of ringdown. On the other hand, for the ringdown modeling with the greybody factor, the relevant data range \(\omega\gtrsim f_{lm}(M,j)\) is uniquely determined once we fix the remnant quantities \(M\) and \(j\).
2. Many fitting parameters are needed to extract QN modes from GW ringdown, especially when several QN modes are excited simultaneously as \(\sum_{n}C_{lmn}\exp[-i\omega_{lmn}t+\varphi_{lmn}]\) for a dominant angular mode of \((l,m)\). For the extraction of the greybody factor, on the other hand, the spectral amplitude of the black hole ringdown at \(\omega\gtrsim f_{lm}\) is modeled by \(c\times\gamma_{lm}(M,j)\). The scale \(c\) is irrelevant for the minimization of the mismatch \(\mathcal{M}\). As such, one can search for the least value of \(\mathcal{M}\) with only the two fitting parameters \((M,j)\) while avoiding the overfitting issue. It is much simpler than the QN mode fitting, which involves many fitting parameters, i.e., amplitude \(C_{lmn}\) and phase \(\varphi_{lmn}\) for each QN mode.
3. GW ringdown can be modeled by the superposition of QN modes regardless of the frequency dependence of the source term. However, the modeling of ringdown with the greybody factor does not always work due to the contamination from the source term. Note that the greybody factor can be extracted only when the renormalized source term \(t_{lm}(\omega)\) is nearly constant in \(\omega\gtrsim f_{lm}\) (Figure 5).
Given the pros and cons in the modeling of ringdown with the greybody factor and in that with QN modes, combining those two models may improve the test of the no-hair theorem and the estimation of the remnant quantities. We could also relate the excitation of overtones with the greybody factors as the residue of \(\gamma_{lm}\) at QN modes can be regarded as the excitation factor, which quantifies the _excitability_ of each QN mode [7; 16; 29; 30]. It would be important to understand the relation between the greybody factor and excitation factor to reveal the universality in the black hole ringdown.
To further confirm the importance of the greybody factor in the modeling of GW ringdown, we have to check the detectability of the greybody factor from GW ringdown with future detectors such as LISA. Also, it would be important to take into account some higher harmonic modes, which would affect the extraction of the greybody factor and increase the number of fitting parameters if higher harmonic modes are significantly excited. We will come back to these points in the future. It is interesting to note that, as another direction, the authors in Ref. [31] studied an inverse problem to read the greybody factor from quantum Hawking radiation. An interesting aspect of the greybody factor is that it can be important in both quantum and classical radiation of black holes, i.e., Hawking radiation and GW ringdown, respectively.
###### Acknowledgements.
The author appreciates Niayesh Afshordi, Kazumasa Okabayashi, and Hidetoshi Omiya for valuable comments on this work. The author thanks Daiki Watarai for carefully reading an earlier version of this paper and for valuable comments. The author also thanks Giulio Bonelli and Sebastian Volkel for sharing their recent works and for helpful comments on an earlier version of this paper. The author is supported by the Grant-in-Aid for Scientific Research (KAKENHI) project for FY2023 (23K13111).
## Appendix A numerical methodology and accuracy
We numerically solve the Sasaki-Nakamura equation (2) with the fourth-order Runge-Kutta method. The source term \(\rho_{lm}\) for the plunging particle is numerically computed in the range of \(r^{*}_{\rm min}\leq r^{*}\leq r^{*}_{\rm max}=400M\). The minimum radius of the range of integration \(r^{*}_{\rm min}\) is set to
\[r^{*}_{\rm min}=\begin{cases}-40M,&\text{for }j\leq 0.9,\\ -50M,&\text{for }0.9<j<0.97,\\ -80M,&\text{for }0.97\leq j<0.99,\\ -100M,&\text{for }j=0.99,\\ -120M,&\text{for }j=0.995,\\ -160M,&\text{for }j=0.998.\end{cases} \tag{10}\]
For the Sasaki-Nakamura equation, the exponential tail of the potential \(U_{lm}\) near the horizon becomes long range as \(j\to 1\). As such, \(r^{*}_{\rm min}\) should be a larger negative value for rapid spins so that one can impose the boundary condition of \(e^{-ik_{\rm H}r^{*}}\) at the end point of \(r^{*}=r^{*}_{\rm min}\). The numerical integration of the Sasaki-Nakamura equation is done in the range of \(r^{*}_{\rm min}\leq r^{*}\leq r^{*}_{\rm SNmax}=300+30/\omega\) for each frequency mode of \(\omega\).
The source term \(\rho_{lm}(\omega,r^{*})\) is obtained in the resolution of \(\Delta r^{*}=(r^{*}_{\rm max}-r^{*}_{\rm min})/N_{\rm source}\) with \(N_{\rm source}=5000\). The Sasaki-Nakamura equation is integrated with the step size of \(\Delta r^{*}=(r^{*}_{\rm SNmax}-r^{*}_{\rm min})/N_{\rm SN}\) with \(N_{\rm SN}=10^{5}\). The greybody factor is computed by reading the asymptotic amplitude at \(r^{*}=r^{*}_{\rm SNmax}\) by using the Wronskian. We checked that our resolution is high enough to obtain high-accuracy GW waveform and greybody factor (see Figure 9).
The damping frequency \(T_{lm}\) in \(1-\Gamma_{lm}(\omega)\) and \(T^{\rm(GW)}_{lm}\) in the GW spectral amplitude \(|\tilde{h}_{lm}(\omega)|\) are extracted at higher frequencies \(\omega\gtrsim f_{lm}\) by using Mathematica's function NonlinearModelFit for the log-scaled data, \(\log(1-\Gamma_{lm})\) and \(\log(|\tilde{h}_{lm}|^{2}\omega^{6})\), with the fitting function of
\[B(\omega-\omega_{i})+\log A, \tag{10}\]
where \(A\) and \(B\) are the fitting parameters and \(B\) is associated with \(T_{lm}\) or \(T^{\rm(GW)}_{lm}\). The results for \((l,m)=(2,2)\) are shown in Table 1 and Figure 6. The extraction of \(T_{22}\) is done by fitting the Boltzmann factor \(e^{-(\omega-f_{22})/T_{22}}\) to the data in the frequency range of \(\omega_{i}\leq\omega\leq\omega_{f}=1.99/(2M)\) with
\[\omega_{i}=\begin{cases}1.20\times f_{22},&\text{for }0.001\leq j\leq 0.75,\\ 1.10\times f_{22},&\text{for }0.8\leq j\leq 0.9,\\ 1.05\times f_{22},&\text{for }0.93\leq j\leq 0.99,\\ 1.02\times f_{22},&\text{for }j=0.995,\\ 1.00\times f_{22},&\text{for }j=0.998.\end{cases} \tag{11}\]
For the extraction of \(T^{\rm(GW)}_{22}\), we fit the Boltzmann factor to the numerical data in the range of \(\omega_{i}\leq\omega\leq\omega_{f}\) with
\[\omega_{i}=\begin{cases}1.20\times f_{22},&\text{for }0.001\leq j\leq 0.75,\\ 1.10\times f_{22},&\text{for }0.8\leq j\leq 0.9,\\ 1.05\times f_{22},&\text{for }0.93\leq j<0.98,\\ 1.00\times f_{22},&\text{for }0.98\leq j\leq 0.998,\end{cases} \tag{12}\]
and \(\omega_{f}\) is set to a value at which \((\omega_{f}^{3}|\tilde{h}_{22}(\omega_{f})|^{2})/(\omega_{i}^{3}|\tilde{h}_{22}(\omega_{i})|^{2})\simeq 0.01\). Also, the best fit value and error of \(T^{\rm(GW)}_{22}\) in Figure 6 were estimated with the BestFitParameters and ParameterErrors properties of Mathematica's NonlinearModelFit.
## Appendix B greybody factor in GW ringdown and observation angle \(\theta\)
Our ringdown modeling for a harmonic mode \((l,m)\) is given by the product of \(\gamma_{lm}\) and the renormalized source term (17). The renormalized source term is determined by a source of GW emission and the observation angle as it includes
the spin-weighted spheroidal harmonics \({}_{-2}S_{lm}(a\omega,\theta)\). We confirmed that the ringdown modeling with the greybody factor works for a wide range of the observation angle \(\theta\). Indeed, the mismatch \(\mathcal{M}\) defined in (19) is less than \(10^{-3}\) at least for \(\pi/6\leq\theta\leq 5\pi/6\) as is shown in Figure 10. The mismatch \(\mathcal{M}\) is evaluated for data in \(f_{22}\leq\omega\leq 1.99/(2M)\).
## Appendix C analytic model of the greybody factor
Our proposal in this paper is that the greybody factor \(\Gamma_{lm}\) is imprinted on the spectral amplitude of GW ringdown with the form of
\[\gamma_{lm}=\sqrt{1-\Gamma_{lm}}/\omega^{3}. \tag{101}\]
As the greybody factor is a universal quantity which depends only on the remnant mass and spin like the black hole QN modes, the extraction of the greybody factor from the signal is applicable to test the no-hair theorem and the measurement of the remnant mass and spin. To demonstrate that in Sec. III.3, we compute the mismatch \(\mathcal{M}\) between GW spectral amplitude and the function \(\gamma_{lm}\). As the computation of the greybody factor involves the numerical integration of the Sasaki-Nakamura equation in our approach, we shorten the computation time of \(\mathcal{M}\) by using an analytic model function \(\tilde{\Gamma}_{22}\) that models the greybody factor \(\Gamma_{22}\) for \(\omega>0\)5:
Footnote 5: Another fitting function of the reflectivity for \(0.6<j<0.8\) is provided in Ref. [23].
\[1-\tilde{\Gamma}_{22}(\omega)=\frac{1+a_{1}Z[-2,2,2,M,j,\omega](1-\tanh\left[( \omega-f_{22})/a_{2}\right])}{(1+\exp[(\omega-f_{22})/T_{22}])}\text{ for }\omega>0, \tag{102}\]
where \(a_{1}=0.325\), \(a_{2}=0.02\), and
\[T_{22} \simeq 0.223\sqrt{1-j}-0.33(1-j)+0.249(1-j)^{3/2}-0.0748(1-j)^{2}, \tag{103}\] \[f_{22} \simeq 2-2.85\sqrt{1-j}+3.01(1-j)-2.01(1-j)^{3/2}+0.597(1-j)^{2},\] (104) \[Z[s,l,m,M,j,\omega] \equiv 4m\Omega_{\text{H}}\frac{r_{+}}{\sqrt{1-j^{2}}}\left(\frac{(l- s)!(l+s)!}{(2l)!(2l+1)!!}\right)^{2}[2M\omega(1-j^{2})]^{2l+1}\prod_{k=1}^{l} \left(1+\frac{4}{k^{2}}\left(m\Omega_{\text{H}}\frac{r_{+}}{\sqrt{1-j^{2}}} \right)^{2}\right), \tag{105}\]
Figure 10: Comparison of the spectral amplitude of GW \(|\tilde{h}_{22}|\) (black solid) and \(\gamma_{22}\) (red dashed). We set \(j=0.9\) and \(L_{z}=0.5\). We also set the observation angle as \(\theta=\pi/6\), \(\pi/4\), \(\pi/2\), \(3\pi/4\), and \(5\pi/6\). The mismatch \(\mathcal{M}\) is evaluated for data in \(f_{22}\leq\omega\leq 1.99/(2M)\).
where \(s\) is the spin of the relevant field, e.g., \(|s|=2\) for gravitational field. This fitting model matches with the exact greybody factor within \(\tilde{\mathcal{M}}\lesssim 0.01\) as is shown in Figure 11. This fitting function is applicable to the broad range of spin parameter \(0.001\leq j\leq 0.998\) as is partially shown in the Figure.
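The fitting function (102) with the coefficients (103)-(105) is straightforward to transcribe. The sketch below, assuming NumPy, the normalization \(2M=1\) inferred from the quoted frequency ranges, and \((s,l,m)=(-2,2,2)\), is given only to illustrate how the analytic model can be evaluated:

```python
import numpy as np
from math import factorial

def _double_factorial(n):
    return int(np.prod(np.arange(n, 0, -2))) if n > 0 else 1

def T22(j):  # Eq. (103)
    x = 1.0 - j
    return 0.223 * np.sqrt(x) - 0.33 * x + 0.249 * x**1.5 - 0.0748 * x**2

def f22(j):  # Eq. (104)
    x = 1.0 - j
    return 2.0 - 2.85 * np.sqrt(x) + 3.01 * x - 2.01 * x**1.5 + 0.597 * x**2

def Z(s, l, m, M, j, omega):  # Eq. (105)
    r_plus = M * (1.0 + np.sqrt(1.0 - j**2))
    Omega_H = j / (2.0 * r_plus)
    w = m * Omega_H * r_plus / np.sqrt(1.0 - j**2)
    pref = (factorial(l - s) * factorial(l + s)
            / (factorial(2 * l) * _double_factorial(2 * l + 1)))**2
    prod = np.prod([1.0 + 4.0 * w**2 / k**2 for k in range(1, l + 1)])
    return (4.0 * m * Omega_H * r_plus / np.sqrt(1.0 - j**2) * pref
            * (2.0 * M * omega * (1.0 - j**2))**(2 * l + 1) * prod)

def reflectivity22_model(omega, j, M=0.5, a1=0.325, a2=0.02):
    """Analytic fit of 1 - Gamma_22, Eq. (102); omega in units with 2M = 1."""
    f, T = f22(j), T22(j)
    return ((1.0 + a1 * Z(-2, 2, 2, M, j, omega)
             * (1.0 - np.tanh((omega - f) / a2)))
            / (1.0 + np.exp((omega - f) / T)))
```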
|
2301.00094 | On Skoda's theorem for Nadel-Lebesgue multiplier ideal sheaves on
singular complex spaces and regularity of weak KΓ€hler-Einstein metrics | In this article, we will characterize regular points respectively by the
local vanishing, positivity of the Ricci curvature and $L^2$-solvability of the
$\overline\partial$-equation together with Skoda's theorem for Nadel-Lebesgue
multiplier ideal sheaves associated to plurisubharmonic (psh) functions on any
(reduced) complex space of pure dimension. As a by-product, we show that any
weak K\"ahler-Einstein metric on \emph{singular}
$\mathbb{Q}$-Fano/Calabi-Yau/general type varieties cannot be smooth, and that
in general there exists no \emph{singular} normal K\"ahler complex space such
that the K\"ahler metric is K\"ahler-Einstein on the regular locus. | Zhenqian Li | 2022-12-31T02:04:29Z | http://arxiv.org/abs/2301.00094v1 | On Skoda's theorem for Nadel-Lebesgue multiplier ideal sheaves on singular complex spaces and regularity of weak Kahler-Einstein metrics
###### Abstract.
In this article, we will characterize regular points respectively by the local vanishing, positivity of the Ricci curvature and \(L^{2}\)-solvability of the \(\overline{\partial}\)-equation together with Skoda's theorem for Nadel-Lebesgue multiplier ideal sheaves associated to plurisubharmonic (psh) functions on any (reduced) complex space of pure dimension. As a by-product, we show that any weak Kahler-Einstein metric on _singular_\(\mathbb{Q}\)-Fano/Calabi-Yau/general type varieties cannot be smooth, and that in general there exists no _singular_ normal Kahler complex space such that the Kahler metric is Kahler-Einstein on the regular locus.
Key words: Multiplier ideal sheaves, plurisubharmonic functions, vanishing theorems, \(\overline{\partial}\)-equations, Skoda's \(L^{2}\) division theorem, Kahler-Einstein metrics. 2010 Mathematics Subject Classification: 14F18, 32L20, 32Q20, 32S05, 32U05. E-mail: [email protected]
## 1. Introduction
Throughout this note, all complex spaces are always assumed to be reduced and paracompact unless otherwise mentioned; we mainly refer to [19, 40] for basic references on the theory of complex spaces.
### Local vanishing for multiplier ideals
The local vanishing theorem for the higher direct images of sheaves computing multiplier ideals plays an important role in complex geometry and algebraic geometry, by which many local/global properties of multiplier ideals can be deduced, e.g., the restriction theorem, the Nadel vanishing theorem, Skoda's theorem for multiplier ideals, and so on (cf. [10, 28], etc.).
Let \((X,\omega)\) be a Hermitian complex space of pure dimension \(n\) and \(\varphi\in\operatorname{QPsh}(X)\) be a quasi-psh function on \(X\). Then we can define the Nadel-Lebesgue multiplier ideal sheaf \(\mathscr{I}_{\operatorname{NL}}(\varphi)\) associated to \(\varphi\) on \(X\) by the integrability with respect to the Lebesgue measure \(dV_{\omega}\) (see Definition 2.2), which coincides with the usual multiplier ideal sheaf \(\mathscr{I}(\varphi)\) introduced by Nadel whenever \(X\) is smooth. Let \(\pi:\widetilde{X}\to X\) be any log resolution of the Jacobian ideal \(\mathcal{J}ac_{X}\) of \(X\); then it follows that the Nadel-Lebesgue multiplier ideal sheaf
\[\mathscr{I}_{\operatorname{NL}}(\varphi)=\pi_{*}\left(O_{\widetilde{X}}( \widetilde{K}_{\widetilde{X}/X})\otimes\mathscr{I}(\varphi\circ\pi)\right).\]
When \(X\) is smooth, the Mather discrepancy divisor \(\widetilde{K}_{\widetilde{X}/X}\) is nothing but the relative canonical divisor \(K_{\widetilde{X}/X}:=K_{\widetilde{X}}-\pi^{*}K_{X}\) of \(\widetilde{X}\) over \(X\). Then, we have the following local vanishing for multiplier ideals (cf. [28, 37])
\[R^{q}\pi_{*}\big{(}O_{\widetilde{X}}(K_{\widetilde{X}/X})\otimes\mathscr{I}( \varphi\circ\pi)\big{)}=0,\ \forall q\geq 1.\]
Therefore, it is natural to ask whether we could establish a similar local vanishing result in the singular setting, i.e.,
\[R^{q}\pi_{*}\big{(}O_{\widetilde{X}}(\widetilde{K}_{\widetilde{X}/X})\otimes \mathscr{I}(\varphi\circ\pi)\big{)}=0,\ \forall q\geq 1.\]
In the present note, one of our goals is to study the local vanishing in the context of Nadel-Lebesgue multiplier ideals. In particular, based on Skoda's division for Nadel-Lebesgue
multiplier ideals, we will prove that such a local vanishing for Nadel-Lebesgue multiplier ideals is in fact equivalent to smoothness of the ambient space in some sense; see Theorem 1.3 for a detailed statement.
### Skoda's ideal generation by \(L^{2}\) estimates for the \(\overline{\partial}\)-equation
In the classical works [44, 45], relying on the \(L^{2}\) methods due to [1, 25, 26] in several complex variables, Skoda established an analytic criterion for the ideal generation by a given collection of holomorphic functions or sections. In the original proof of Skoda's ideal generation, besides the standard techniques in functional analysis for the a priori estimates and for solving the \(\overline{\partial}\)-equation with \(L^{2}\) estimates, he also developed special analytic techniques by restricting the domain of the \(\overline{\partial}\)-operator to an appropriate subspace of the usual \(L^{2}\) space and inducing an \(L^{2}\) estimate on this new operator.
As applications, Skoda's theorem is a crucial ingredient in proving the Briancon-Skoda theorem in commutative algebra [6, 27, 35] and an effective version of the Nullstellensatz in algebraic geometry [13]. Moreover, a special case of Skoda's ideal generation also played key roles in Siu's works on the deformation invariance of plurigenera [42] and finite generation of the canonical ring [43]. The interaction between several complex variables, complex algebraic geometry and partial differential equations has been an attractive area for the researchers. For the sake of reader's convenience, we state a version of Skoda's \(L^{2}\) division theorem as below.
**Theorem 1.1**.: ([45], Theoreme 2). _Let \((X,\omega)\) be an \(n\)-dimensional weakly pseudoconvex Kahler manifold with \(\varphi\in\mathrm{Psh}(X)\), and \(g:E\to Q\) be a surjective morphism of Hermitian holomorphic vector bundles with \(r_{E}=\mathrm{rank}\,E\) and \(r_{Q}=\mathrm{rank}\,Q\). Suppose that \(E\) is Nakano semi-positive on \(X\) and \(L\to X\) is a Hermitian line bundle such that_
\[\sqrt{-1}\Theta(L)-\rho\sqrt{-1}\Theta(\det Q)\geq 0\]
_for \(\rho=\min\{n,r_{E}-r_{Q}\}+\varepsilon\) and some \(\varepsilon>0\)._
_Then, for every \(f\in H^{0}(X,Q\otimes K_{X}\otimes L)\) satisfying_
\[\int_{X}\langle\overline{gg^{*}}f,f\rangle\cdot(\det gg^{*})^{-\rho-1}e^{-2\varphi}dV_{\omega}<+\infty,\]
_there exists \(h\in H^{0}(X,E\otimes K_{X}\otimes L)\) such that \(f=g\cdot h\) and_
\[\int_{X}|h|^{2}\cdot(\det gg^{*})^{-\rho}e^{-2\varphi}dV_{\omega}\leq\frac{ \rho}{\varepsilon}\cdot\int_{X}\langle\overline{gg^{*}}f,f\rangle\cdot(\det gg ^{*})^{-\rho-1}e^{-2\varphi}dV_{\omega}.\]
Due to Theorem 1.1, if we consider the trivial bundles \(E,\ Q\) and \(L\) on a pseudoconvex domain, then by combining with the strong openness of multiplier ideal sheaves established by Guan-Zhou [21], we can reformulate Theorem 1.1 in the language of multiplier ideals as follows (cf. also Remark A.3):
**Theorem 1.2**.: _Let \(X\) be an \(n\)-dimensional complex manifold with \(\varphi\in\mathrm{QPsh}(X)\) a quasi-psh function and \(\mathfrak{a}\subset\mathcal{O}_{X}\) a nonzero ideal sheaf with \(r\) (local) generators. Then, it follows that_
\[\mathscr{I}(\varphi+k\varphi_{\mathfrak{a}})=\mathfrak{a}\cdot\mathscr{I}( \varphi+(k-1)\varphi_{\mathfrak{a}}),\ \forall k\geq\min\{n,r\},\]
_where \(\varphi_{\mathfrak{a}}:=\frac{1}{2}\log(\sum_{i}|g_{i}|^{2})\) and \((g_{i})\) is any local system of generators of \(\mathfrak{a}\)._
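For instance, take \(\varphi=0\) and \(\mathfrak{a}=\mathfrak{m}=(z_{1},\dots,z_{n})\) the maximal ideal at the origin of \(\mathbb{C}^{n}\), so that \(\varphi_{\mathfrak{a}}=\log|z|\). The standard computation \(\mathscr{I}(k\varphi_{\mathfrak{a}})_{\mathbf{0}}=\mathfrak{m}^{\max\{k-n+1,0\}}\) for integers \(k\geq 0\) then gives \(\mathscr{I}(n\varphi_{\mathfrak{a}})=\mathfrak{m}=\mathfrak{a}\cdot\mathscr{I}((n-1)\varphi_{\mathfrak{a}})\), as predicted by Theorem 1.2 with \(k=n\).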
Motivated by the above reformulation of Theorem 1.1, it is interesting for us to explore an analogue to Theorem 1.2 for Nadel-Lebesgue multiplier ideals in the singular setting. In order to achieve such a goal, a natural idea is to generalize Skoda's \(L^{2}\) methods to the singular case, i.e., creating an appropriate \(L^{2}\) theory for the \(\overline{\partial}\)-operator on singular complex spaces. However, as presented in [15, 16], it seems not to be possible to establish a general theory as in the smooth setting to solve the \(\overline{\partial}\)-equation with \(L^{2}\) estimates on complex spaces with singularities; one can refer to [17, 38, 39, 41] for some partial results on the related topics.
On the other hand, we can also consider to apply Theorem 1.1 near the singularities under some reasonable assumptions on the positivity of curvatures. Fortunately, we could show that positivity of the Ricci curvature on the regular locus is in fact equivalent to the desired Skoda's ideal generation and \(L^{2}\)-solvability of the \(\overline{\partial}\)-equation. More precisely, we state our main result in the following:
**Theorem 1.3**.: _Let \(X\) be a (Hermitian) complex space of pure dimension \(n\) with \(x\in X\) a normal point and \(\pi:\widetilde{X}\to X\) a log resolution of the Jacobian ideal \(\mathcal{J}ac_{X}\) of \(X\). Then, the following statements are equivalent:_
1. _For each quasi-psh function_ \(\varphi\) _near the point_ \(x\in X\)_, we have_ \[R^{q}\pi_{*}\big{(}\mathcal{O}_{\widetilde{X}}(\widehat{K}_{\widetilde{X}/X}) \otimes\mathcal{I}(\varphi\circ\pi)\big{)}=0,\ \forall q\geq 1.\]
2. _For each quasi-psh function_ \(\varphi\) _near the point_ \(x\in X\)_, we have_ \[R^{q}\pi_{*}\big{(}\mathcal{O}_{\widetilde{X}}(\widehat{K}_{\widetilde{X}/X}) \otimes\mathcal{I}(\varphi\circ\pi)\big{)}=0,\ \forall 1\leq q<n.\]
3. _For some Stein neighborhood_ \(\Omega\subset\subset X\) _of the point_ \(x\)_, there exists a Kahler metric_ \(\omega\) _on_ \(\Omega\) _such that the Ricci curvature_ \(\operatorname{Ric}(\omega)\geq 0\) _on the regular locus_ \(\Omega_{\mathrm{reg}}\) _of_ \(\Omega\)_._
4. _For some Stein neighborhood_ \(\Omega\subset\subset X\) _of the point_ \(x\)_, there exists a Kahler metric_ \(\omega\) _and a_ \(\mathcal{C}^{\infty}\) _differentiable real function_ \(\psi\) _on_ \(\Omega\) _such that_ \(\operatorname{Ric}(\omega)+\sqrt{-1}\partial\overline{\partial}\psi\geq 0\) _on the regular locus_ \(\Omega_{\mathrm{reg}}\) _of_ \(\Omega\)_._
5. _For some Stein neighborhood_ \(\Omega\subset\subset X\) _of the point_ \(x\)_, there exists a Kahler metric_ \(\omega\) _and a Hermitian line bundle_ \(L\) _on_ \(\Omega\) _such that for any smooth_ \(\varphi\in\operatorname{SPsh}(\Omega)\) _and_ \(v\in L^{2}_{0,q}(\Omega_{\mathrm{reg}},L)\) _satisfying_ \(\overline{\partial}v=0\) _and_ \[\int_{\Omega_{\mathrm{reg}}}\langle A_{\varphi}^{-1}v,v\rangle\,e^{-2\varphi}dV_{\omega}<+\infty\] _with the curvature operator_ \(A_{\varphi}=[\sqrt{-1}\partial\overline{\partial}\varphi,\,\Lambda_{\omega}]\) _on_ \(\Omega_{\mathrm{reg}}\)_, we have_ \(u\in L^{2}_{0,q-1}(\Omega_{\mathrm{reg}},L)\) _such that_ \(\overline{\partial}u=v\) _and_ \[\int_{\Omega_{\mathrm{reg}}}|u|^{2}e^{-2\varphi}dV_{\omega}\leq\int_{\Omega_{\mathrm{reg}}}\langle A_{\varphi}^{-1}v,v\rangle\,e^{-2\varphi}dV_{\omega}.\]
6. _For some Stein neighborhood_ \(\Omega\subset\subset X\) _of the point_ \(x\)_, there exists a Kahler metric_ \(\omega\) _and a Hermitian line bundle_ \(L\) _on_ \(\Omega\) _such that for any smooth_ \(\varphi\in\operatorname{SPsh}(\Omega)\) _and_ \(v\in L^{2}_{0,1}(\Omega_{\mathrm{reg}},L)\) _satisfying_ \(\overline{\partial}v=0\) _and_ \[\int_{\Omega_{\mathrm{reg}}}\langle A_{\varphi}^{-1}v,v\rangle\,e^{-2\varphi}dV_{\omega}<+\infty,\] _we have_ \(u\in L^{2}(\Omega_{\mathrm{reg}},L)\) _such that_ \(\overline{\partial}u=v\) _and_ \[\int_{\Omega_{\mathrm{reg}}}|u|^{2}e^{-2\varphi}dV_{\omega}\leq\int_{\Omega_{\mathrm{reg}}}\langle A_{\varphi}^{-1}v,v\rangle\,e^{-2\varphi}dV_{\omega}.\]
7. _Skoda's theorem holds for Nadel-Lebesgue multiplier ideals, i.e., for any nonzero ideal sheaf_ \(\mathfrak{a}\) _with_ \(r\) _generators and quasi-psh function_ \(\varphi\) _near the point_ \(x\in X\)_, it holds that_ \[\mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{\mathfrak{a}})=\mathfrak{a}\cdot\mathcal{I}_{\mathrm{NL}}(\varphi+(k-1)\varphi_{\mathfrak{a}}),\ \forall k\geq\min\{n,r\}.\]
8. _For any nonzero ideal sheaf_ \(\mathfrak{a}\) _near the point_ \(x\in X\)_, it holds that_ \[\mathcal{I}_{\mathrm{NL}}(n\varphi_{\mathfrak{a}})=\mathfrak{a}\cdot \mathcal{I}_{\mathrm{NL}}((n-1)\varphi_{\mathfrak{a}}).\]
9. _For any nonzero ideal sheaf_ \(\mathfrak{a}\) _near the point_ \(x\in X\)_, it holds that_ \[\mathcal{I}_{\mathrm{NL}}(n\varphi_{\mathfrak{a}})\subset\mathfrak{a}.\]
10. _The point_ \(x\in X\) _is a_ regular _point of_ \(X\)
In the above result, the most interesting and amazing point is that it presents several characterizations of regular points by various statements involving Nadel-Lebesgue multiplier ideals, which look almost unrelated to one another; e.g., (1, 2) in algebraic geometry, (3, 4) in differential geometry and (5, 6) in partial differential equations together with (7--9) in commutative algebra. The core idea of all arguments originates from Skoda's ideal generation via the \(L^{2}\) approaches in several complex variables.
_Remark 1.4_.: Simple examples show that the assumption that \(x\) is a normal point of \(X\) cannot be removed in Theorem 1.3; in particular, none of the statements (1, 2, 7, 8) implies (10) in that case.
As a straightforward consequence of Theorem 1.3, we have
**Corollary 1.5**.: _Any normal Kahler space with nonnegative Ricci curvature on the regular locus must be non-singular._
### Kahler-Einstein metrics on singular varieties
Let \(X\) be a normal \(\mathbb{Q}\)-Gorenstein Kahler space, that is, a normal Kahler space whose canonical class \(K_{X}\) defines a \(\mathbb{Q}\)-line bundle on \(X\). A Kahler current \(\omega\in c_{1}(\pm K_{X})\) is called a _weak (or singular) Kahler-Einstein metric_ on \(X\) if \(\omega\) has bounded local potentials and is a genuine Kahler-Einstein metric on the regular locus \(X_{\mathrm{reg}}\) of \(X\) (cf. [3, 4, 5, 14], etc.). A weak Kahler-Einstein metric \(\omega\) on \(X\) is called a Kahler-Einstein metric if \(\omega\) is a Kahler metric on \(X\), i.e., \(\omega\) has smooth local potentials. For general expositions on the topic of Kahler-Einstein metrics one can refer to [2, 23, 47, 48, 49, 50] and the references therein. In particular, we state some recent results as follows.
**Theorem 1.6**.: ([3, 5, 14, 30, 31, 32, 36], etc.). _Let \(X\) be a normal \(\mathbb{Q}\)-Gorenstein complex projective variety. Then:_
1. _If_ \(X\) _is a_ \(\mathbb{Q}\)_-Calabi-Yau variety with only log terminal singularities, then_ \(X\) _admits a weak Kahler-Einstein metric._
2. _If_ \(K_{X}\) _is ample, then_ \(X\) _admits a weak Kahler-Einstein metric if and only if_ \(X\) _is_ \(K\)_-stable._
3. _If_ \(-K_{X}\) _is ample, then_ \(X\) _admits a weak Kahler-Einstein metric if and only if_ \(X\) _is_ \(K\)_-polystable._
A basic and largely open problem in Kahler geometry/geometric analysis is understanding the geometric asymptotic behavior of the weak Kahler-Einstein metric near the singular locus \(X_{\mathrm{sing}}\) of \(X\). In [24], the authors made a breakthrough with a very precise description for a class of Calabi-Yau varieties with smoothable isolated singularities, which are further required to be isomorphic to a neighborhood of the vertex in a strongly regular Calabi-Yau cone; see also [7, 8, 18] for some recent progress in this direction. In more general situations, by using deep tools in the theory of degenerate complex Monge-Ampere equations on singular complex spaces, the continuity of local potentials of weak Kahler-Einstein metrics is established for all \(\mathbb{Q}\)-Fano/Calabi-Yau varieties in [4, 22], but so far little is known about the higher-order regularity in general, and it is desirable to establish such regularity for weak Kahler-Einstein potentials. However, relying on Theorem 1.3, we will see that too much regularity cannot be expected and in fact any weak Kahler-Einstein potential is at most \(\mathcal{C}^{\alpha}\) (\(\alpha<2\)) differentiable near the singularities. In particular, we obtain the following
**Theorem 1.7**.: _Let \(X\) be a normal \(\mathbb{Q}\)-Gorenstein Kahler space admitting a weak Kahler-Einstein metric \(\omega\). Then, \(\omega\) is smooth on \(X\) if and only if \(X\) is non-singular._
## 2. Preliminaries
Firstly, we introduce the notion of Nadel-Lebesgue multiplier ideal sheaf on any complex space of pure dimension and then present some useful facts used throughout this note.
**Definition 2.1**.: Let \(X\) be a complex space of pure dimension and \(\varphi\in L^{1}_{\mathrm{loc}}(X_{\mathrm{reg}})\) with respect to the Lebesgue measure. Then, the complex space \(X\) is said to be a _Hermitian complex space_ if there is a Hermitian metric \(\omega\) on the regular part (may be disconnected) \(X_{\mathrm{reg}}\) of \(X\) such that \(\omega\) is locally the restriction of a Hermitian metric on some \(\mathbb{C}^{N}\) for a local embedding of \(X\). It follows from the differentiable partition of unity that every complex space is a Hermitian complex space as in the smooth case.
The complex space \(X\) is called to be a _Kahler space_ if there is a Hermitian metric \(\omega\) on \(X\) such that \(\omega\) is locally the restriction of a Kahler metric on some \(\mathbb{C}^{N}\) for a local embedding of \(X\). In particular, it admits smooth strictly psh functions as local potentials.
We say that the function \(\varphi\) is _quasi-plurisubharmonic_ (quasi-psh for short) on \(X\) if it is locally equal to the sum of a psh function and of a smooth function on \(X\). The set of quasi-psh (resp. psh and strictly psh) functions on \(X\) is denoted by \(\mathrm{QPsh}(X)\) (resp. \(\mathrm{Psh}(X)\) and \(\mathrm{SPsh}(X)\)). A quasi-psh function \(\varphi\in\mathrm{QPsh}(X)\) will be said to have _analytic singularities_ on \(X\) if \(\varphi\) can be written locally as
\[\varphi=\frac{c}{2}\log(|f_{1}|^{2}+\cdots+|f_{N_{0}}|^{2})+O(1),\]
where \(c\in\mathbb{R}_{\geq 0}\) and \((f_{i})\) are holomorphic functions.
**Definition 2.2**.: Let \((X,\omega)\) be a Hermitian complex space of pure dimension and \(\varphi\in L^{1}_{\mathrm{loc}}(X_{\mathrm{reg}})\) with respect to the Lebesgue measure.
The _Nadel-Lebesgue multiplier ideal sheaf_ associated to \(\varphi\) on \(X\) is defined to be the \(\mathcal{O}_{X}\)-submodule \(\mathscr{I}_{\mathrm{NL}}(\varphi)\subset\mathscr{M}_{X}\) of germs of meromorphic functions \(f\in\mathscr{M}_{X,x}\) such that \(|f|^{2}e^{-2\varphi}\) is integrable with respect to the Lebesgue measure \(dV_{\omega}\) near the point \(x\in X\). One can check that \(\mathscr{I}_{\mathrm{NL}}(\varphi)\) is independent of the choice of Hermitian metric \(\omega\) on \(X\).
The _log canonical threshold_ (or _complex singularity exponent_) \(\mathrm{LCT}_{x}(\varphi)\) of \(\varphi\) at a point \(x\in X\) is defined to be
\[\mathrm{LCT}_{x}(\varphi):=\sup\left\{c\geq 0\mid\mathcal{O}_{X,x}\subset \mathscr{I}_{\mathrm{NL}}(c\varphi)_{x}\right\}.\]
It is convenient to put \(\mathrm{LCT}_{x}(-\infty)=0\).
It is easy to see that \(\mathscr{I}_{\mathrm{NL}}(\varphi)\subset\mathcal{O}_{X}\) is an ideal sheaf when \(X\) is a normal complex space and \(\varphi\) is locally bounded from above on \(X\). In addition, if \(X\) is smooth and \(\varphi\in\mathrm{QPsh}(X)\), then \(\mathscr{I}_{\mathrm{NL}}(\varphi)\) is nothing but the usual multiplier ideal sheaf \(\mathscr{I}(\varphi)\) introduced by Nadel (see [10]).
_Remark 2.3_.: Since the definition of Nadel-Lebesgue multiplier ideals is local, we can compute the multiplier ideals by choosing a special Hermitian metric \(\omega\) for a local embedding of \(X\). In particular, if \(X\) is an \(n\)-dimensional complex subspace of some domain in \(\mathbb{C}^{N}\), we can take Hermitian metric \(\omega\) on \(X\) to be the inherited standard Kahler metric from \(\mathbb{C}^{N}\). Then, we have \(dV_{\omega}=\frac{1}{n!}\upsilon^{n}|_{X_{\mathrm{reg}}}\), where \(\upsilon=\frac{\sqrt{-1}}{2}\sum\limits_{k=1}^{N}dz_{k}\wedge d\bar{z}_{k}\).
For the reader's convenience, we state a basic estimate related to the local volume of an analytic subset as follows.
**Lemma 2.4**.: ([20], Lemma 2.3). _Let \(X\) be a pure \(n\)-dimensional analytic subset through the origin \(\mathbf{0}\) of some domain in \(\mathbb{C}^{N}\)\((N\geq 2)\). Then, there is a Stein neighborhood \(U\subset\subset\mathbb{C}^{N}\) of the origin \(\mathbf{0}\) such that for any \(0\leq\varepsilon<1\), we have_
\[\int_{U\cap X}\frac{1}{(|z_{1}|^{2}+\cdots+|z_{N}|^{2})^{n-1+\varepsilon}}dV_ {\omega}<+\infty,\]
_where \(dV_{\omega}=\frac{1}{n!}\upsilon^{n}|_{X_{\mathrm{reg}}}\) and \(\upsilon=\frac{\sqrt{-1}}{2}\sum\limits_{k=1}^{N}dz_{k}\wedge d\bar{z}_{k}\)._
Analogous to the Nadel-Ohsawa multiplier ideal sheaves introduced in [33, 34] (see also [9, 12] for the algebro-geometric counterpart), we state some related properties as follows.
**Proposition 2.5**.: (1) _Let \(\pi:\widetilde{X}\to X\) be a log resolution of the Jacobian ideal \(\mathcal{J}ac_{X}\) of \(X\) and \(\widetilde{K}_{\widetilde{X}/X}\) be the Mather discrepancy divisor. Then, we have the image_
\[\operatorname{Im}\left(\pi^{*}\Omega_{X}^{n}\hookrightarrow\Omega_{\widetilde{X}}^{n}\right)=\mathcal{O}_{\widetilde{X}}(-\widetilde{K}_{\widetilde{X}/X})\cdot\Omega_{\widetilde{X}}^{n},\]
_and_
\[\mathcal{I}_{\operatorname{NL}}(\varphi)=\pi_{*}\left(\mathcal{O}_{\widetilde {X}}(\widetilde{K}_{\widetilde{X}/X})\otimes\mathcal{I}(\varphi\circ\pi) \right).\]
_Furthermore, we can deduce that \(\mathcal{I}_{\operatorname{NL}}(\varphi+\log|\mathcal{J}ac_{X}|)=\mathcal{I}_ {\operatorname{NO}}(\varphi)\), the Nadel-Ohsawa multiplier ideal sheaf associated to \(\varphi\) on \(X\)._
(2) _When \(X\) is normal and \(\varphi\) has analytic singularities, \(\mathcal{I}_{\operatorname{NL}}(\varphi)\) coincides with the Mather multiplier ideal sheaf defined in [9]._
(3) _For any \(\varphi\in\operatorname{QPsh}(X)\), it follows that \(\mathcal{I}_{\operatorname{NL}}(\varphi)\subset\mathcal{M}_{X}\) is a coherent fractional ideal sheaf and satisfies the strong openness, i.e., \(\mathcal{I}_{\operatorname{NL}}(\varphi)=\bigcup\limits_{\varepsilon>0}\mathcal{I}_{\operatorname{NL}}((1+\varepsilon)\varphi)\)._
For our proof of Theorem 1.3, we need the following \(L^{2}\) estimates for the \(\overline{\partial}\)-equation and relative version of Grauert-Riemenschneider vanishing theorem for the higher direct images.
**Theorem 2.6**.: (cf. [10], Theorem 5.2). _Let \((X,\omega)\) be an \(n\)-dimensional Kahler manifold, which contains a weakly pseudoconvex Zariski open subset. Let \(L\) be a Hermitian line bundle on \(X\) such that \(\sqrt{-1}\Theta(L)+\operatorname{Ric}(\omega)>0\)._
_Then, for every smooth \(\varphi\in\operatorname{Psh}(X)\) and \(v\in L^{2}_{0,q}(X,L)\) satisfying \(\overline{\partial}v=0\) and_
\[\int_{X}\langle A^{-1}v,v\rangle\,e^{-2\varphi}dV_{\omega}<+\infty\]
_with the curvature operator \(A=[\sqrt{-1}\Theta(L)+\operatorname{Ric}(\omega)+\sqrt{-1}\partial\overline{ \partial}\varphi,\Lambda_{\omega}]\) on \(X\), there exists \(u\in L^{2}_{0,q-1}(X,L)\) such that \(\overline{\partial}u=v\) and_
\[\int_{X}|u|^{2}e^{-2\varphi}dV_{\omega}\leq\int_{X}\langle A^{-1}v,v\rangle\, e^{-2\varphi}dV_{\omega}.\]
**Theorem 2.7**.: ([11], Theorem 1.1). _Let \((X,\omega)\) be an \(n\)-dimensional Kahler manifold which is a Zariski open subset of some Stein space \(X^{*}\), and \(L\) be a Hermitian line bundle on \(X\)._
_If for any smooth \(\varphi\in\operatorname{SPsh}(X^{*})\) and \(v\in L^{2}_{0,1}(X,L)\) satisfying \(\overline{\partial}v=0\) and_
\[\int_{X}\langle A^{-1}_{\varphi}v,v\rangle\,e^{-2\varphi}dV_{\omega}<+\infty\]
_with the curvature operator \(A_{\varphi}=[\sqrt{-1}\partial\overline{\partial}\varphi,\Lambda_{\omega}]\) on \(X\), there exists \(u\in L^{2}(X,L)\) such that \(\overline{\partial}u=v\) and_
\[\int_{X}|u|^{2}e^{-2\varphi}dV_{\omega}\leq\int_{X}\langle A^{-1}_{\varphi}v, v\rangle\,e^{-2\varphi}dV_{\omega},\]
_then it follows that \(L\otimes K^{-1}_{X}\) is Nakano semi-positive on \(X\)._
**Theorem 2.8**.: ([37], Corollary 1.5). _Let \(\pi:X\to Y\) be a surjective proper (locally) Kahler morphism from a complex manifold \(X\) to a complex space \(Y\), and \((L,e^{-\varphi_{L}})\) be a (possibly singular) Hermitian line bundle on \(X\) with semi-positive curvature. Then, the higher direct image sheaf_
\[R^{q}\pi_{*}\big{(}K_{X}\otimes L\otimes\mathcal{I}(\varphi_{L})\big{)}=0,\]
_for every \(q>\dim X-\dim Y\)._
_Remark 2.9_.: Any log resolution \(\pi:\widetilde{X}\to X\) of a coherent ideal sheaf \(\mathcal{I}\) on a complex space \(X\) is a locally Kahler proper modification, which is locally a finite sequence of blow-ups with smooth centers. Besides, any finite holomorphic mapping between complex spaces is (locally) proper Kahler.
In the remainder of this section, we recall some algebraic properties on the integral closure of ideals.
**Definition 2.10**.: ([46]). Let \(R\) be a commutative ring and \(I\) an ideal of \(R\). An element \(f\in R\) is said to be _integrally dependent_ on \(I\) if it satisfies a relation
\[f^{d}+a_{1}f^{d-1}+\cdots+a_{d}=0\quad(a_{k}\in I^{k},1\leq k\leq d).\]
The set \(\overline{I}\) consisting of all elements in \(R\) which are integrally dependent on \(I\) is called the _integral closure_ of \(I\) in \(R\). \(I\) is called _integrally closed_ if \(I=\overline{I}\). One can prove that \(\overline{I}\) is an ideal of \(R\), which is the smallest integrally closed ideal in \(R\) containing \(I\).
**Definition 2.11**.: ([46]). Let \(R\) be a commutative ring with identity and let \(J\subset I\) be ideals in \(R\). \(J\) is said to be a _reduction_ of \(I\) if there exists a nonnegative integer \(n\) such that \(I^{n+1}=JI^{n}\).
A reduction \(J\) of \(I\) is called _minimal_ if no ideal strictly contained in \(J\) is a reduction of \(I\). An ideal that has no reduction other than itself is called _basic_.
One can prove that minimal reductions do exist in Noetherian local rings and an ideal which is a minimal reduction of a given ideal is necessarily basic. Moreover, if \(R\) is a Noetherian ring, \(J\subset I\) is a reduction of \(I\) if and only if \(\overline{J}=\overline{I}\).
In the analytic setting, we have the following characterization on integral closure and reduction of ideals.
**Theorem 2.12**.: (cf. [29], Theoreme 2.1). _Let \(X\) be a complex space and \(Y\subset X\) be a proper closed complex subspace (may be non-reduced) defined by a coherent \(\mathcal{O}_{X}\)-ideal \(\mathcal{I}\) with \(x\in Y\) a point. Let \(\mathcal{J}\subset\mathcal{O}_{X}\) be a coherent \(\mathcal{O}_{X}\)-ideal and \(I\) (resp. \(J\)) be the germ of \(\mathcal{I}\) (resp. \(\mathcal{J}\)) at \(x\). Then, the following conditions are equivalent:_
1. \(J\subset\overline{I}\)_._
2. _For every morphism_ \(\pi:\overline{X}\to X\) _satisfying:_ (i) _\(\pi\) is proper and surjective,_ (ii) _\(\overline{X}\) is a normal complex space and_ (iii) _\(\mathcal{I}\cdot\mathcal{O}_{\overline{X}}\) is an invertible \(\mathcal{O}_{\overline{X}}\)-module, there exists an open neighborhood_ \(U\) _of_ \(x\) _in_ \(X\) _such that_ \[\mathcal{J}\cdot\mathcal{O}_{\overline{X}}|_{\pi^{-1}(U)}\subset\mathcal{I}\cdot\mathcal{O}_{\overline{X}}|_{\pi^{-1}(U)}.\]
3. _If_ \(V\) _is an open neighborhood of_ \(x\) _on which_ \(\mathcal{I}\) _and_ \(\mathcal{J}\) _are generated by their global sections, then for every system of generators_ \(g_{1},...,g_{r}\in\Gamma(V,\mathcal{I})\) _and every_ \(f\in\Gamma(V,\mathcal{J})\)_, one can find an open neighborhood_ \(V^{\prime}\) _of_ \(x\) _and a constant_ \(C>0\) _such that_ \[|f(y)|\leq C\cdot\sup_{k}|g_{k}(y)|,\ \forall y\in V^{\prime}.\]
_Remark 2.13_.: Let \(X\) be a normal complex space and \(\mathcal{I}\subset\mathcal{O}_{X}\) a coherent ideal sheaf. Let \(\pi:\overline{X}\to X\) be any proper modification from a normal complex space \(\overline{X}\) onto \(X\) such that \(\mathcal{I}\cdot\mathcal{O}_{\overline{X}}=\mathcal{O}_{\overline{X}}(-D)\) for some effective Cartier divisor \(D\) on \(\overline{X}\). Then, we have \(\pi_{*}\mathcal{O}_{\overline{X}}(-D)=\overline{\mathcal{I}}\), the integral closure of \(\mathcal{I}\) in \(\mathcal{O}_{X}\).
**Lemma 2.14**.: (cf. Example 9.6.19 in [28]; see also [10], Lemma 11.16). _Let \(X\) be a normal complex space of dimension \(n\) and \(\mathfrak{a}\subset\mathcal{O}_{X}\) a nonzero ideal. Then, there exists an open covering \(\{U_{\alpha}\}_{\alpha\in\mathbb{N}}\) of \(X\) such that \(\mathfrak{a}|_{U_{\alpha}}\) has a reduction \(\mathfrak{b}_{\alpha}\) generated by at most \(n\) elements._
## 3. Proofs of the main results
### Proof of Theorem 1.3
Since all of the statements are local, without loss of generality, we may assume that \(X\) is an \(n\)-dimensional (\(n\geq 2\)) normal (Hermitian) complex subspace of some domain in \(\mathbb{C}^{N}\) with \(\varphi\in\mathrm{QPsh}(X)\) and \(\mathfrak{a}=(g_{1},\ldots,g_{r})\cdot\mathcal{O}_{X}\) an ideal sheaf generated by holomorphic functions \(g_{1},\ldots,g_{r}\) on \(X\). Moreover, we may also assume that \(\varphi\) is (locally) a strictly psh function on \(X\) if necessary, by adding some smooth strictly psh
function. It is easy to see that the implications (1) \(\Longrightarrow\) (2), (3) \(\Longrightarrow\) (4), (5) \(\Longrightarrow\) (6) and (7) \(\Longrightarrow\) (8) \(\Longrightarrow\) (9) are trivial; in particular, we will present a proof in the following order:
\[(1)\Longrightarrow(2)\Longrightarrow(7)\Longrightarrow(8)\Longrightarrow(9)\Longrightarrow(10)\Longrightarrow(1)\quad\text{and}\quad(10)\Longrightarrow(3)\Longrightarrow(5)\Longrightarrow(6)\Longrightarrow(4)\Longrightarrow(7).\]
"(2) (7)". By the definition of Nadel-Lebesgue multiplier ideal sheaf, it follows that
\[\mathfrak{a}\cdot\mathcal{I}_{\mathrm{NL}}(\varphi+(k-1)\varphi_{\mathfrak{a} })\subset\mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{\mathfrak{a}}),\]
and so it is sufficient to show the reverse inclusion.
**Case (i).** When \(r\leq n\).
Let \(\pi:\widetilde{X}\to X\) be a common log resolution of \(\mathcal{J}ac_{X}\) and \(\mathfrak{a}\) such that \(\mathfrak{a}\cdot\mathcal{O}_{\widetilde{X}}=\mathcal{O}_{\widetilde{X}}(-F)\) for some effective divisors \(F\) on \(\widetilde{X}\). Denote by
\[\mathcal{A}_{m}:=\mathcal{O}_{\widetilde{X}}(\widetilde{K}_{\widetilde{X}/X})\otimes\mathcal{I}(\varphi\circ\pi+m\varphi_{\mathfrak{a}}\circ\pi)=\mathcal{O}_{\widetilde{X}}(\widetilde{K}_{\widetilde{X}/X})\otimes\mathcal{I}(\varphi\circ\pi)\otimes\mathcal{O}_{\widetilde{X}}(-mF)\]
for any \(m\in\mathbb{N}\), and consider the Koszul complex determined by \(g_{1},\ldots,g_{r}\):
\[0\to\Lambda^{r}V\otimes\mathcal{O}_{\widetilde{X}}(rF)\to\cdots\to\Lambda^{2 }V\otimes\mathcal{O}_{\widetilde{X}}(2F)\to V\otimes\mathcal{O}_{\widetilde{ X}}(F)\to\mathcal{O}_{\widetilde{X}}\to 0,\]
where \(V\) is the vector space spanned by \(g_{1},\ldots,g_{r}\). Note that the Koszul complex is locally split and its syzygies are locally free, so twisting through by any coherent sheaf will preserve the exactness. Then, by twisting with \(\mathcal{A}_{k}\) (\(k\geq r\)), we obtain the following long exact sequence (\(\star\))
\[0\to\Lambda^{r}V\otimes\mathcal{A}_{k-r}\to\cdots\to\Lambda^{2}V\otimes \mathcal{A}_{k-2}\to V\otimes\mathcal{A}_{k-1}\to\mathcal{A}_{k}\to 0.\]
On the other hand, for any \(m\in\mathbb{N}\), by (2) we have the local vanishing of the higher direct images \(R^{q}\pi_{*}\mathcal{A}_{m}=0\) (\(1\leq q<n\)). Note that
\[\mathcal{I}_{\mathrm{NL}}(\varphi+m\varphi_{\mathfrak{a}})=\pi_{*}\mathcal{A }_{m}\]
by the functoriality property with respect to direct images of sheaves by modifications, and then by taking direct images of (\(\star\)) we will deduce the following so-called exact Skoda complex (cf. [28], p. 228):
\[0\to\Lambda^{r}V\otimes\mathcal{I}_{\mathrm{NL}}(\varphi+(k-r)\varphi_{ \mathfrak{a}})\to\cdots\to V\otimes\mathcal{I}_{\mathrm{NL}}(\varphi+(k-1) \varphi_{\mathfrak{a}})\to\mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{ \mathfrak{a}})\to 0.\]
In particular, the map \(V\otimes\mathcal{I}_{\mathrm{NL}}(\varphi+(k-1)\varphi_{\mathfrak{a}})\to \mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{\mathfrak{a}})\) is surjective, by which we can infer that \(\mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{\mathfrak{a}})\subset\mathfrak{a }\cdot\mathcal{I}_{\mathrm{NL}}(\varphi+(k-1)\varphi_{\mathfrak{a}})\) for any \(k\geq r\).
**Case (ii).** When \(r>n\).
As the statement is local, then by Lemma 2.14 we may assume that \(\mathfrak{b}\) is a reduction of \(\mathfrak{a}\) generated by \(n\) elements \(\widetilde{g}_{1},...,\widetilde{g}_{n}\). Consider a common log resolution \(\pi:\widetilde{X}\to X\) of \(\mathcal{J}ac_{X},\ \mathfrak{a}\) and \(\mathfrak{b}\) such that \(\mathfrak{a}\cdot\mathcal{O}_{\widetilde{X}}=\mathfrak{b}\cdot\mathcal{O}_{ \widetilde{X}}=\mathcal{O}_{\widetilde{X}}(-F)\) for some effective divisors \(F\) on \(\widetilde{X}\). Then, by the same argument as above, we can deduce the following exact Skoda complex:
\[0\to\Lambda^{n}V\otimes\mathcal{I}_{\mathrm{NL}}(\varphi+(k-n)\varphi_{ \mathfrak{a}})\to\cdots\to V\otimes\mathcal{I}_{\mathrm{NL}}(\varphi+(k-1) \varphi_{\mathfrak{a}})\to\mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{ \mathfrak{a}})\to 0.\]
for any \(k\geq n\), where \(V\) is the vector space spanned by \(\widetilde{g}_{1},...,\widetilde{g}_{n}\). Therefore, it follows that
\[\mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{a})\subset\mathfrak{b}\cdot \mathcal{I}_{\mathrm{NL}}(\varphi+(k-1)\varphi_{a})\subset\mathfrak{a}\cdot \mathcal{I}_{\mathrm{NL}}(\varphi+(k-1)\varphi_{a}).\]
"\((3)\Longrightarrow(5)\)". It follows from the assumption that we have a Stein neighborhood \(\Omega\subset\subset X\) of the point \(x\) with a Kahler metric \(\omega\) such that \(\mathrm{Ric}(\omega)\geq 0\) on \(\Omega_{\mathrm{reg}}\). Let \(\varphi\in\mathrm{SPsh}(\Omega)\) be any smooth strictly psh function on \(\Omega\) and \(L=\Omega\times\mathbb{C}\) be a trivial bundle equipped with the trivial Hermitian metric, which implies that
\[\sqrt{-1}\Theta(L)+\mathrm{Ric}(\omega)+\sqrt{-1}\partial\overline{\partial} \varphi\geq\sqrt{-1}\partial\overline{\partial}\varphi>0\]
on \(\Omega_{\mathrm{reg}}\).
Since \(\Omega\) is a Stein space, we are able to choose a complex hypersurface \(Z\subset\Omega\) which contains the singular locus \(\Omega_{\mathrm{sing}}\) of \(\Omega\) such that \(\Omega-Z\subset\Omega_{\mathrm{reg}}\) is a Stein manifold. Then, by Theorem 2.6 we obtain that, for any smooth \(\varphi\in\mathrm{SPsh}(\Omega)\) and \(v\in L^{2}_{0,q}(\Omega_{\mathrm{reg}},L)\) satisfying \(\overline{\partial}v=0\) and
\[\int_{\Omega_{\mathrm{reg}}}\langle A^{-1}_{\varphi}v,v\rangle\,e^{-2\varphi} dV_{\omega}<+\infty,\]
we can find \(u\in L^{2}_{0,q-1}(\Omega_{\mathrm{reg}},L)\) such that \(\overline{\partial}u=v\) and
\[\int_{\Omega_{\mathrm{reg}}}|u|^{2}e^{-2\varphi}dV_{\omega}\leq\int_{\Omega_ {\mathrm{reg}}}\langle A^{-1}_{\varphi}v,v\rangle\,e^{-2\varphi}dV_{\omega}.\]
"\((6)\Longrightarrow(4)\)". As a straightforward application of Theorem 2.7 on \(\Omega_{\mathrm{reg}}\), it yields that \(\sqrt{-1}\Theta(L)+\mathrm{Ric}(\omega)\geq 0\) on \(\Omega_{\mathrm{reg}}\). Let \(\Omega^{\prime}\subset\Omega\) be a small Stein neighborhood of the point \(x\) such that the Hermitian line bundle \(L\) has a smooth potential \(\psi\) on \(\Omega^{\prime}\). Therefore, we deduce that
\[\sqrt{-1}\partial\overline{\partial}\psi+\mathrm{Ric}(\omega)=\sqrt{-1} \Theta(L)+\mathrm{Ric}(\omega)\geq 0\]
on \(\Omega^{\prime}_{\mathrm{reg}}\).
"\((4)\Longrightarrow(7)\)". Due to the definition and Lemma 2.14, it is sufficient to prove
\[\mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{a})\subset\mathfrak{a}\cdot \mathcal{I}_{\mathrm{NL}}(\varphi+(k-1)\varphi_{a})\]
for the case \(r\leq n\) near the point \(x\in X\). Let \(f\in\mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{a})_{x}\) with \(k\geq\min\{n,r\}=r\), then by the strong openness of multiplier ideals there exists small enough \(\varepsilon>0\) such that \(f\in\mathcal{I}_{\mathrm{NL}}(\varphi+(k+\varepsilon)\varphi_{a})_{x}\).
By the assumption of (4), we let \(\Omega\subset\subset X\) be a Stein neighborhood of the point \(x\) with a Kahler metric \(\omega\) and a smooth real function \(\psi\) on \(\Omega\) such that \(\mathrm{Ric}(\omega)+\sqrt{-1}\partial\overline{\partial}\psi\geq 0\) on \(\Omega_{\mathrm{reg}}\). After shrinking \(\Omega\) if necessary, we may assume that the function \(\psi\) is bounded on \(\Omega\) and \(f\) is holomorphic on \(\Omega\) such that
\[\int_{\Omega}|f|^{2}\cdot|g|^{-2(r+\varepsilon)}e^{-2(\varphi+(k-r)\varphi_{ a})}dV_{\omega}<+\infty.\]
In addition, we also choose a complex hypersurface \(Z\subset\Omega\) which contains the singular locus \(\Omega_{\mathrm{sing}}\) of \(\Omega\) and the common zero-set of holomorphic functions \(g_{1},...,g_{r}\) such that \(\Omega^{\prime}:=\Omega-Z\) is a Stein manifold.
Let \(E=\Omega^{\prime}\times\mathbb{C}^{r}\) and \(Q=\Omega^{\prime}\times\mathbb{C}\) be the trivial bundles on \(\Omega^{\prime}\) and \(L=K^{-1}_{\Omega^{\prime}}\) be the anti-canonical line bundle with the induced metric twisted by a weight \(e^{-\psi}\). The morphism \(g:E\to Q\) determined by holomorphic functions \(g_{1},...,g_{r}\) is given by
\[(h_{1},...,h_{r})\mapsto\sum_{m=1}^{r}g_{m}\cdot h_{m}=g\cdot h.\]
Note that \(\widetilde{g}^{\varepsilon}=\mathrm{Id}_{Q}\) when rank \(Q=1\), and on \(\Omega^{\prime}\) we have
\[\sqrt{-1}\Theta(L)-(r-1+\varepsilon)\sqrt{-1}\Theta(\det Q)=\mathrm{Ric}( \omega)+\sqrt{-1}\partial\overline{\partial}\psi\geq 0.\]
Thus, we can apply Theorem 1.1 on \(\Omega^{\prime}\) and then obtain an \(r\)-tuple \((h_{1},...,h_{r})\) of holomorphic functions on \(\Omega^{\prime}\) such that \(f=g\cdot h\) on \(\Omega^{\prime}\) and
\[\int_{\Omega^{\prime}}|h|^{2}\cdot|g|^{-2(r-1+\varepsilon)}e^{-2(\varphi+(k-r) \varphi_{a})}dV_{\omega}=\int_{\Omega^{\prime}}|h|^{2}e^{-2(\varphi+(k-1+ \varepsilon)\varphi_{a})}dV_{\omega}<+\infty.\]
We can now extend every \(h_{m}\) to be a holomorphic function on \(\Omega\) from the \(L^{2}\) estimate above and normality of \(X\), which implies that
\[\mathcal{I}_{\mathrm{NL}}(\varphi+k\varphi_{a})\subset\mathfrak{a}\cdot \mathcal{I}_{\mathrm{NL}}(\varphi+(k-1+\varepsilon)\varphi_{a})\subset \mathfrak{a}\cdot\mathcal{I}_{\mathrm{NL}}(\varphi+(k-1)\varphi_{a})\]
on \(\Omega\); we finish the argument.
"\((9)\Longrightarrow(10)\)". By the assumption, we have \(\mathcal{I}_{\mathrm{NL}}(n\varphi_{a})\subset\mathfrak{a}\). Suppose that \(x\in X\) is a singular point. Then, by the local parametrization for analytic sets, we can find a local coordinate system \((z^{\prime};z^{\prime\prime})=(z_{1},...,z_{n};z_{n+1},...,z_{N})\) near \(x\) such that for some constant \(C>0\), we have \(|z^{\prime\prime}|\leq C\cdot|z^{\prime}|\) for any point \(z\in X\) near \(x\).
Let \(\mathfrak{a}\subset\mathcal{O}_{X}\) be the ideal sheaf generated by holomorphic functions \(\widehat{z_{1}},...,\widehat{z_{n}}\in\mathcal{O}_{X}\) (shrinking \(X\) if necessary), where \(\widehat{z_{k}}\) are the residue classes of \(z_{k}\) in \(\mathcal{O}_{X}\). From the non-smoothness of \(X\) at the point \(x\), we deduce that the embedding dimension of \(X\) at \(x\) is at least \(n+1\), which implies that there exists \(k_{0}\) (\(n+1\leq k_{0}\leq N\)) such that \(\widehat{z_{k_{0}}}\notin\mathfrak{a}\).
On the other hand, after shrinking \(X\) again, it follows that
\[\int_{X}\frac{|z_{k_{0}}|^{2}}{|z^{\prime}|^{2n}}dV_{\omega}\leq C^{2}(1+C^{2 })^{n-1}\cdot\int_{X}|z|^{-2(n-1)}dV_{\omega}<+\infty,\]
where the finiteness of the integration follows from Lemma 2.4. Then, we infer that \(\widehat{z_{k_{0}}}\in\mathcal{I}_{\mathrm{NL}}(n\varphi_{a})\), but \(\widehat{z_{k_{0}}}\notin\mathfrak{a}\), which contradicts the assumption \(\mathcal{I}_{\mathrm{NL}}(n\varphi_{a})\subset\mathfrak{a}\). Thus, we obtain that \(x\in X\) is a regular point.
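For the reader's convenience, the pointwise estimate behind the above inequality is simply an unpacking of \(|z^{\prime\prime}|\leq C\cdot|z^{\prime}|\): since \(|z_{k_{0}}|\leq|z^{\prime\prime}|\leq C|z^{\prime}|\) and \(|z|^{2}=|z^{\prime}|^{2}+|z^{\prime\prime}|^{2}\leq(1+C^{2})|z^{\prime}|^{2}\) for any point \(z\in X\) near \(x\), one has
\[\frac{|z_{k_{0}}|^{2}}{|z^{\prime}|^{2n}}\leq C^{2}\cdot|z^{\prime}|^{-2(n-1)}\leq C^{2}(1+C^{2})^{n-1}\cdot|z|^{-2(n-1)}.\]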
"\((10)\Longrightarrow(1)\)". It is a straightforward consequence of Theorem 2.8.
"\((10)\Longrightarrow(3)\)". Since \(x\) is a regular point of \(X\), after choosing an appropriate coordinate neighborhood of \(x\), we may assume that \(\Omega\ni x\) is a Stein domain in \(\mathbb{C}^{n}\). Therefore, we can take \(\omega=\frac{\sqrt{-1}}{2}\sum\limits_{k=1}^{n}dz_{k}\wedge d\widehat{z}_{k}\) to be the standard Euclidean metric on \(\mathbb{C}^{n}\) and then we have \(\mathrm{Ric}(\omega)=0\) on \(\Omega\); the proof of Theorem 1.3 is concluded.
_Remark 3.1_.: In addition, we can deduce from the proof of Theorem 1.3 that
(i) if (1) or (2) holds for each quasi-psh function \(\varphi\) with analytic singularities, then \(x\in X\) is a regular point;
(ii) both of the statements (3) and (4) could be respectively modified to be \(\mathrm{Ric}(\omega)\geq 0\) and \(\mathrm{Ric}(\omega)+\sqrt{-1}\partial\overline{\partial}\psi\geq 0\) on a Zariski open subset of \(\Omega\) contained in \(\Omega_{\mathrm{reg}}\).
### Proof of Theorem 1.7
It is sufficient to prove the necessity.
Let \(x\in X\) be any point. Since \(\omega\) is a smooth Kahler metric on \(X\), \(\omega\) has smooth local potentials, i.e., there exists a Stein neighborhood \(\Omega\subset X\) of \(x\) and a smooth strictly psh function \(\psi\) on \(\Omega\) such that \(\omega=\sqrt{-1}\partial\overline{\partial}\psi\) on \(\Omega_{\mathrm{reg}}\), which implies that \(\mathrm{Ric}(\omega)+\sqrt{-1}\partial\overline{\partial}\psi\geq 0\) on \(\Omega_{\mathrm{reg}}\) whenever \(\mathrm{Ric}(\omega)=\pm\omega\) or \(0\). Thus, it follows from (4) in Theorem 1.3 that \(x\in X\) is a regular point.
_Remark 3.2_.: The same arguments as in the proof of Theorem 1.3 and 1.7 also imply that each local potential of weak Kahler-Einstein metric \(\omega\) is \(\mathcal{C}^{2}\) differentiable on \(X\) if and only if \(X\) is non-singular, and that there exists no _singular_ normal Kahler space such that the Kahler metric is Kahler-Einstein on the regular locus.
In fact, our method is still applicable when the weak Kahler-Einstein metric is (locally) equivalent to the standard induced Kahler metric by restriction near the singularities; for instance, when the weak Kahler-Einstein metric has locally bounded coefficients.
## Appendix A Uniform bounds of powers associated to an \(L^{2}\) division problem
Ideal membership is an important object of study in commutative algebra, algebraic geometry and several complex variables, e.g., the famous Hilbert Nullstellensatz, the Briancon-Skoda theorem and so on. In this part, we are mainly interested in uniform bounds of powers associated to an \(L^{2}\) division problem, a special kind of ideal membership. Let \(X\) be a Stein manifold of dimension \(n\) and \(\mathfrak{a}=(g_{1},\ldots,g_{r})\cdot\mathcal{O}_{X}\) an ideal sheaf generated by holomorphic functions \(g_{1},\ldots,g_{r}\) on \(X\). In general, the division problem states that, given a positive integer \(k\in\mathbb{N}\) and a holomorphic function \(f\) on \(X\), we wish to determine when \(f\) is generated by the holomorphic functions \(g_{1},\ldots,g_{r}\); more precisely, when we can find holomorphic functions \(h_{1},\ldots,h_{r}\in\mathfrak{a}^{k-1}\) on \(X\) such that
\[f=\sum_{m=1}^{r}g_{m}\cdot h_{m}.\]
Thanks to Oka-Cartan theory on Stein manifolds, the division problem is solvable if and only if \(f\in\mathfrak{a}^{k}\).
Note that the condition \(f\in\mathfrak{a}^{k}\) is purely algebraic, and so it is natural to ask whether we could find an analytic condition to replace the algebraic one. It is easy to see that \(f\in\mathfrak{a}^{k}\) implies that \(|f|e^{-\varphi_{k}}\) is locally bounded on \(X\), or \(L^{2}_{\mathrm{loc}}\) more generally, where \(\varphi_{k}:=k\log|g|\) and \(|g|^{2}:=|g_{1}|^{2}+\cdots+|g_{r}|^{2}\). On the other hand, local boundedness of \(|f|e^{-\varphi_{k}}\) is equivalent to the fact that \(f\in\overline{\mathfrak{a}^{k}}\), the integral closure of \(\mathfrak{a}^{k}\) in \(\mathcal{O}_{X}\) (see Theorem 2.12). Thus, it is an interesting question whether we could establish solvability of an \(L^{2}\) analogue of the division problem.
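To spell out the first of these observations: if \(f\in\mathfrak{a}^{k}\), then locally \(f=\sum_{|\alpha|=k}h_{\alpha}\,g_{1}^{\alpha_{1}}\cdots g_{r}^{\alpha_{r}}\) with holomorphic (hence locally bounded) coefficients \(h_{\alpha}\), and since each \(|g_{m}|\leq|g|\), one has
\[|f|e^{-\varphi_{k}}=\frac{|f|}{|g|^{k}}\leq\sum_{|\alpha|=k}|h_{\alpha}|\cdot\frac{|g_{1}|^{\alpha_{1}}\cdots|g_{r}|^{\alpha_{r}}}{|g|^{k}}\leq\sum_{|\alpha|=k}|h_{\alpha}|,\]
which is locally bounded on \(X\).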
Let \(\varphi\in\mathrm{Psh}(X)\) be a psh function on \(X\) and denote by
\[A^{2}_{\mathrm{loc}}(X,\varphi):=\left\{f\in\mathcal{O}_{X}(X)\,\big{|}\,|f|^ {2}e^{-2\varphi}\text{ is locally integrable on }X\right\}.\]
Then, we raise the following \(L^{2}\) division problem:
**Question A.1**.: _Let \(X\) be an \(n\)-dimensional Stein manifold with a psh function \(\varphi\in\mathrm{Psh}(X)\), and \(\mathfrak{a}=(g_{1},\ldots,g_{r})\cdot\mathcal{O}_{X}\) an ideal sheaf generated by holomorphic functions \(g_{1},\ldots,g_{r}\) on \(X\). Given positive integer \(k\in\mathbb{N}\) and \(f\in A^{2}_{\mathrm{loc}}(X,\varphi+\varphi_{k})\), are there holomorphic functions \(h_{1},\ldots,h_{r}\in A^{2}_{\mathrm{loc}}(X,\varphi+\varphi_{k-1})\) such that_
\[f=\sum_{m=1}^{r}g_{m}\cdot h_{m}?\]
### A solution to Question A.1
Unfortunately, the answer to Question A.1 is negative for general \(k\) (see Example A.2). Motivated by Skoda's \(L^{2}\) division theorem (cf. Theorem 1.1), it seems reasonable to look for a uniform integer \(k_{0}\), depending only on \(n\), such that Question A.1 is solvable for any \(k\geq k_{0}\). The goal of this part is to present an optimal uniform lower bound of powers associated to Question A.1. In particular, we will establish the following
**Theorem A.1**.: _There exists a uniform integer \(k_{0}=\min\{n,r\}\) such that the solution to Question A.1 is positive for any \(k\geq k_{0}\). Furthermore, the uniform lower bound \(k_{0}=\min\{n,r\}\) is optimal._
In fact, the optimality of uniform integer \(k_{0}=\min\{n,r\}\) is straightforward by the following:
**Example A.2**.: _Let \(B_{n}(\mathbf{0})\) be the unit ball centered at the origin \(\mathbf{0}=(\mathbf{0}^{\prime},\mathbf{0}^{\prime\prime})\) in \(\mathbb{C}^{r}\times\mathbb{C}^{n-r}\,(1\leq r\leq n)\) and take \(g_{1}=z_{1},...,g_{r}=z_{r},f\equiv 1,\varphi\equiv 0\) on \(B_{n}(\mathbf{0})\). Then, for every \(k<k_{0}=r\), the answer of Question A.1 is negative._
_Indeed, by the fact that the log canonical threshold \(\mathrm{LCT}_{(\mathbf{0}^{\prime},z^{\prime\prime})}(\varphi_{k})=\frac{r}{k}>1\) of \(\varphi_{k}\) at any point \((\mathbf{0}^{\prime},z^{\prime\prime})\), one can derive that \(f\in A^{2}_{\mathrm{loc}}(B_{n}(\mathbf{0}),\varphi+\varphi_{k})\). Then, we infer from the fact that
\(f\) has no zeros in \(B_{n}(\mathbf{0})\) that there exist no holomorphic functions \(h_{1},\ldots,h_{r}\) on \(B_{n}(\mathbf{0})\) such that \(f=\sum\limits_{m=1}^{r}g_{m}\cdot h_{m}\)._
**Proof of Theorem A.1.** It follows from the local vanishing (Theorem 2.8) and the arguments as in the proof of Theorem 1.3 that for any \(k\geq\min\{n,r\}\), we have
\[\mathcal{I}(\varphi+\varphi_{k})=\mathfrak{a}\cdot\mathcal{I}(\varphi+\varphi _{k-1}).\]
Let
\[\tau:\mathcal{I}(\varphi+\varphi_{k-1})^{\oplus r}\longrightarrow\mathcal{I}( \varphi+\varphi_{k})\]
be the sheaf homomorphism defined by
\[\tau(h_{1,x},\ldots,h_{r,x})=\sum\limits_{m=1}^{r}g_{m}\cdot h_{m,x}\]
for any germs \(h_{m,x}\in\mathcal{I}(\varphi+\varphi_{k-1})_{x}\). Then, we have an exact sequence of sheaves
\[\mathcal{I}(\varphi+\varphi_{k-1})^{\oplus r}\stackrel{{\tau}}{ {\longrightarrow}}\mathcal{I}(\varphi+\varphi_{k})\longrightarrow 0.\]
It follows from the Oka-Cartan theory on Stein manifolds that the induced sequence of sections
\[\Gamma\Big{(}X,\mathcal{I}(\varphi+\varphi_{k-1})^{\oplus r}\Big{)} \stackrel{{\tau^{*}}}{{\longrightarrow}}\Gamma\Big{(}X, \mathcal{I}(\varphi+\varphi_{k})\Big{)}\longrightarrow 0\]
is also exact, which implies that any section \(f\in\Gamma\Big{(}X,\mathcal{I}(\varphi+\varphi_{k})\Big{)}\) can be written as the image \(f=\sum\limits_{m=1}^{r}g_{m}\cdot h_{m}\) for some sections \(h_{m}\in\Gamma\Big{(}X,\mathcal{I}(\varphi+\varphi_{k-1})\Big{)}\).
_Remark A.3_.: _(An alternative argument for Theorem A.1)._ In fact, we could also give another proof of Theorem A.1 relying on the strong openness of multiplier ideals established by Guan-Zhou [21] and Skoda's \(L^{2}\) division theorem for holomorphic functions (see Theorem 1.1).
Since the statement is local, it follows from Lemma 2.14 that it is sufficient to prove \(\mathcal{I}(\varphi+\varphi_{k})\subset\mathfrak{a}\cdot\mathcal{I}(\varphi+ \varphi_{k-1})\) for the case \(r\leq n\). Given \(f\in\Gamma\Big{(}X,\mathcal{I}(\varphi+\varphi_{k})\Big{)}\), after shrinking \(X\), we may assume that \(X\) is the unit ball in \(\mathbb{C}^{n}\) and
\[\int_{X}|f|^{2}e^{-2(\varphi+\varphi_{k})}d\lambda_{n}=\int_{X}|f|^{2}\cdot|g| ^{-2k}e^{-2\varphi}d\lambda_{n}<+\infty.\]
Then, for each \(k\geq r\), by the strong openness of multiplier ideals there exists sufficiently small \(\varepsilon>0\) such that
\[\int_{X}|f|^{2}e^{-2(\varphi+(1+\varepsilon)\varphi_{k})}d\lambda_{n}=\int_{ X}|f|^{2}\cdot|g|^{-2(1+\varepsilon)k}e^{-2\varphi}d\lambda_{n}<+\infty,\]
shrinking \(X\) if necessary. Finally, combining with Theorem 1.1, we deduce the desired result.
### A global \(L^{2}\) version of Question A.1
Let \((X,\omega)\) be an \(n\)-dimensional Stein manifold with a Kahler form \(\omega\). Let \(\varphi\in\mathrm{Psh}(X)\) and \(\mathcal{I}=(g_{1},\ldots,g_{r})\cdot\mathcal{O}_{X}\) an ideal sheaf generated by holomorphic functions \(g_{1},\ldots,g_{r}\) on \(X\). Denote by
\[A^{2}(X,\varphi):=\left\{f\in\mathcal{O}_{X}(X)\;\Big{|}\;\int_{X}|f|^{2}e^{-2 \varphi}dV_{\omega}<+\infty\right\}.\]
Then, we have the following global analogue of Question A.1:
**Question A.2**.: _Can we find a uniform integer \(k_{0}\) such that for each \(k\geq k_{0}\) and \(f\in A^{2}(X,\varphi+\varphi_{k})\), there exist \(h_{1},\ldots,h_{r}\in A^{2}(X,\varphi+\varphi_{k-1})\) satisfying_
\[f=\sum\limits_{m=1}^{r}g_{m}\cdot h_{m}?\]
As an immediate consequence of Theorem 1.1, we obtain the following
**Theorem A.4**.: _Let \(X\) be a pseudoconvex domain in \(\mathbb{C}^{n}\). Then, there exists a uniform integer \(k_{0}=\min\{n+2,r+1\}\) such that the solution to Question A.2 is positive._
_Remark A.5_.: (1) More generally, Theorem A.4 also holds for any complete Kahler domain in \(\mathbb{C}^{n}\) with smooth \(\mathrm{psh}\) function \(\varphi\in\mathrm{Psh}(X)\).
(2) In this case, combining with the Example A.2, it follows that the optimal uniform lower bound \(k_{0}\) is at least \(\min\{n,r\}\), and at most \(\min\{n+2,r+1\}\).
|
2309.11963 | Generating Hierarchical Structures for Improved Time Series
Classification Using Stochastic Splitting Functions | This study introduces a novel hierarchical divisive clustering approach with
stochastic splitting functions (SSFs) to enhance classification performance in
multi-class datasets through hierarchical classification (HC). The method has
the unique capability of generating hierarchy without requiring explicit
information, making it suitable for datasets lacking prior knowledge of
hierarchy. By systematically dividing classes into two subsets based on their
discriminability according to the classifier, the proposed approach constructs
a binary tree representation of hierarchical classes. The approach is evaluated
on 46 multi-class time series datasets using popular classifiers (svm and
rocket) and SSFs (potr, srtr, and lsoo). The results reveal that the approach
significantly improves classification performance in approximately half and a
third of the datasets when using rocket and svm as the classifier,
respectively. The study also explores the relationship between dataset features
and HC performance. While the number of classes and flat classification (FC)
score show consistent significance, variations are observed with different
splitting functions. Overall, the proposed approach presents a promising
strategy for enhancing classification by generating hierarchical structure in
multi-class time series datasets. Future research directions involve exploring
different splitting functions, classifiers, and hierarchy structures, as well
as applying the approach to diverse domains beyond time series data. The source
code is made openly available to facilitate reproducibility and further
exploration of the method. | Celal Alagoz | 2023-09-21T10:34:50Z | http://arxiv.org/abs/2309.11963v1 | Generating Hierarchical Structures for Improved Time Series Classification Using Stochastic Splitting Functions
###### Abstract
This study introduces a novel hierarchical divisive clustering approach with stochastic splitting functions (SSFs) to enhance classification performance in multi-class datasets through hierarchical classification (HC). The method has the unique capability of generating hierarchy without requiring explicit information, making it suitable for datasets lacking prior knowledge of hierarchy. By systematically dividing classes into two subsets based on their discriminability according to the classifier, the proposed approach constructs a binary tree representation of hierarchical classes. The approach is evaluated on 46 multi-class time series datasets using popular classifiers (svm and rocket) and SSFs (potr, srtr, and lsoo). The results reveal that the approach significantly improves classification performance in approximately half and a third of the datasets when using rocket and svm as the classifier, respectively. The study also explores the relationship between dataset features and HC performance. While the number of classes and flat classification (FC) score show consistent significance, variations are observed with different splitting functions. Overall, the proposed approach presents a promising strategy for enhancing classification by generating hierarchical structure in multi-class time series datasets. Future research directions involve exploring different splitting functions, classifiers, and hierarchy structures, as well as applying the approach to diverse domains beyond time series data. The source code is made openly available to facilitate reproducibility and further exploration of the method.
Hierarchical Classification, Automated Hierarchy Generation, Hierarchical Clustering, Time Series Classification
## 1 Introduction
HC is a method of organizing data or objects into a tree-like structure of nested categories or groups, where each category is a subset of a larger category, forming a hierarchy. HC is typically used in fields such as text classification [1, 2], image understanding [3] and annotation [4, 5], and in bioinformatics problems such as protein function prediction [6, 7, 8, 9], where a large set of labels is usually designated with a hierarchical structure in advance. In such cases, algorithms have been developed to take advantage of the structure to improve classification accuracy. However, most multi-class classification problems do not have structured labels, and the potential performance improvements seen in hierarchical problems remain underexplored in those domains of application. One such field is time series classification, where multi-class labels are traditionally flat.
Only a limited number of studies have reported the benefits of inducing a hierarchy from datasets that are typically used with flat labels. In a web content analysis study, an automatic taxonomy of documents was retrieved via the construction of a binary tree when no taxonomy was pre-defined [2]. Tree construction was performed by hierarchical clustering of classes in a top-down approach, where cluster splitting was done using Spherical K-means. Features to distinguish classes were selected using a Fisher index criterion. In a later study [10], they reported the superiority of arbitrary trees over binary trees in classification performance. Assuming the presence of a latent hierarchy in synthetic and various image data has been shown to result in notable improvements in downstream classification tasks [11]. In their investigation, two distinct clustering methods for constructing a hierarchy were explored. The first method involved estimating the conditional means and clustering them using Gaussian Mixture Models. The second method entailed measuring the pairwise task similarity between conditional distributions and utilizing a combination of spectral embedding and Gaussian Mixture Models for clustering.
The aim of this study is to examine the impact of assuming the presence of a hierarchy in time-series data, even when it is not directly defined in the label set. The process of revealing the hierarchical structure is combined with the technique of constructing a hierarchy through hierarchical clustering. The findings reveal that this assumption is advantageous in some of the datasets of the UCR archive [12]. The results suggest that incorporating a hierarchy as a pre-processing step proves beneficial when employing off-the-shelf classifiers for multi-class time series problems. Additionally, there is potential for exploring numerous other hierarchical schemes that can be applied to time series data.
The contributions of this work can be summarized as follows:
_Proposed Hierarchical Divisive Clustering Approach:_ The work introduces a novel hierarchical divisive clustering approach with SSFs to improve classification performance in multi-class datasets. This method systematically constructs a hierarchical tree by dividing classes into subsets based on their similarity, optimizing the hierarchical organization without requiring explicit hierarchy information.
_Enhancement of Classification Performance:_ The proposed approach demonstrates substantial improvements in classification performance, more notably when using the specialized time series classifier rocket. By effectively leveraging the hierarchical structure, the approach achieves enhanced classification performance compared to FC.
_Automatic Hierarchy Generation:_ The method enables automatic hierarchy generation when explicit hierarchical information is not available. By systematically partitioning classes based on their similarity, the approach offers an efficient solution to construct the hierarchical tree representation.
_Balance Factors for Classes and Datapoints:_ The work introduces novel concepts of Balance Factor for Classes (BFC) and Balance Factor for Datapoints (BFD) to characterize the balance within the hierarchical tree structure. These factors provide valuable insights into the distribution of classes and datapoints within the hierarchy.
_Insights into Dataset Features:_ The study explores the relationship between dataset features and hierarchical clustering performance. It identifies the number of classes and FC score as significant factors for specific classifiers, offering valuable insights into the impact of dataset characteristics on clustering outcomes.
_Efficiency and Efficacy Considerations:_ The work acknowledges the stochastic nature of the approach and highlights potential limitations related to efficiency, convergence, solution quality, and sampling bias. The discussion paves the way for future research to explore more efficient optimization techniques and strategies to strike a balance between stochasticity and convergence.
_Potential for Practical Adoption:_ The proposed approach is implemented using standard programming libraries and tools, making it relatively easy to implement. The open-source availability of the source code2 and detailed explanations in the study materials enhance the accessibility and usability of the approach for researchers and practitioners interested in exploring hierarchical clustering methods. The results in this paper are reproducible and the code can be easily adapted to work with different classifiers and different datasets.
Footnote 2: [https://github.com/alagoz/hc4tsc_hdc_ssf](https://github.com/alagoz/hc4tsc_hdc_ssf)
## 2 Background
### Hierarchical Classification
The primary area of focus in machine learning research has been on developing models for typical classification problems, where an object is assigned to a single class from a set of non-overlapping classes. This type of classification in the present work is called FC. However, there is a specialized category of tasks where classes are not non-overlapping, but instead organized into a hierarchy. This is known as HC, where objects are associated with a superclass (or parents) and its corresponding subclasses (or children), and the correspondence may be with all or only some of the subclasses. One distinct feature that HC exhibits compared to regular classification is that the classes are structured in a hierarchical manner, meaning that an example belonging to a particular class automatically belongs to all of its superclasses. This is known as _hierarchical constraint_.
HC is characterized by three attributes [13, 14]: One critical aspect to consider is the representation of the hierarchical classes, which are depicted as nodes in the graph, and their interrelationships represented as edges in the graph. This attribute can take on either a tree (T) or a Directed Acyclic Graph (DAG) form. In this study, the datasets are structured hierarchically in the form of a tree.
Another aspect to consider is whether a data instance is permitted to have class labels associated with a single path or multiple paths in the class hierarchy. In the case of a single path, only one path of labels is allowed within the hierarchy. Conversely, in the case of multiple paths, the problem involves instances that have more than one path of labels in the hierarchy. In particular, a single example can be associated with multiple classes concurrently. This labeling scenario is commonly referred to as a Hierarchical Multi-Label Classification (HMC) problem in the literature. This study deals with HC only, does not consider the multi-label option.
The depth of classification within the hierarchy is another key aspect to consider. It refers to whether the output of the classifier always corresponds to a leaf node, known as Mandatory Leaf Node Prediction, or whether the predicted class can be positioned at any level within the hierarchy, referred to as Non-Mandatory Leaf Node Prediction. This study focuses specifically on addressing the problem of mandatory leaf node prediction.
There are four approaches to HC [14] in terms of how classifiers are deployed. The first is called the global or big-bang approach, where all hierarchical classes are assigned to a single classifier. Local classifier per level (LCL) trains a multi-class classifier for each level of the hierarchy. Local classifier per node (LCN) trains a binary classifier for each hierarchical class, excluding the root node. Local classifier per parent node (LCPN) trains a multi-class (binary, in the case of a binary tree) classifier for each parent node, including the root node. This study adopts the LCPN approach.
### Time Series Classification
Time series classification is a subfield of machine learning that deals with the problem of assigning a label to a sequence of data points over time. In other words, time series classification aims to identify the category or class to which a given time series belongs, based on its temporal behavior. The UCR Time Series Classification Archive is a collection of over 128 time series datasets that have been widely used as benchmarks for evaluating the performance of time series classification algorithms. The datasets cover a wide range of application domains, including biomedicine, finance, and manufacturing, among others.
## 3 Approach
In this study, the terms _category_ or _group_ are used to refer to a set of classes. Similarly, the terms _sub-class_, _sub-group_, _sub-sets_, and _children_ are used interchangeably to denote a subset or subdivision within a category. Likewise, the terms _super-class_, _super-group_, _super-set_, and _parent_ are used interchangeably to refer to a higher-level category or set that encompasses the subclasses or sub-groups. These terms are employed throughout the study to describe the hierarchical relationships and divisions within the class structure.
To provide a foundation for the upcoming sections, several definitions and premises are presented. Consider a dataset denoted by \(X=\{x_{i},y_{i}\}_{i=0}^{|X|-1}\), where \(X\) represents the set of data points and \(Y\) represents the set of corresponding class labels. Each data
point \(x_{i}\) is associated with a class label \(y_{i}\) drawn from a set of \(|C|\) unique class labels, denoted as \(C\).
In the context of this study, certain terms and notations are introduced to describe the hierarchical structure. A subscript used for a dataset \(X\) denotes a subset of the dataset. For example, \(X_{c_{i}}\) refers to the set of data points associated with the class label \(c_{i}\). When a set includes subsets, as is the case for representing parent nodes in this study, the set exhibits a nested structure. In this context, a class in a subset is denoted as \(P_{i,j}\), where the subscript after the comma represents the elements within the nested subset.
To differentiate between the set of flat class labels and the set of hierarchical class labels, the notations \(C\) and \(P\) are used, respectively. Since the hierarchy is represented as a binary tree, the following premises hold: (i) \(P_{0}\) represents the root node of the tree, (ii) The number of parent nodes, denoted as \(|P|\), is one less than the number of flat labels, denoted as \(|C|\), (iii) The total number of nodes, including both parent and leaf nodes, is \(2|C|\)-1, (iv) Each subset of \(P\), denoted as \(P_{i}\), contains exactly two elements, i.e., \(|P_{i}|=2\), where \(i\) ranges from 0 to \(|P|\)-1. This property arises because each parent node has both a left child and a right child.
In the context of classification and evaluation, the following notations are used: \(X_{tr}\) and \(X_{te}\) represent the training and testing subsets of the dataset \(X\), respectively. \(X^{k}\) denotes the kth fold of the dataset in a k-fold division. For prediction using a flat classifier algorithm, the notation \(f(X)\) is used, indicating the prediction made by the flat classifier on the dataset \(X\). For prediction using a hierarchical classifier algorithm, the notation \(h(P,X)\) is used, where \(P\) represents the hierarchy and \(X\) is the dataset. This notation denotes the prediction made by the hierarchical classifier algorithm on the dataset \(X\) considering the given hierarchy \(P\). The evaluation of a classifier is denoted as \(g(Y,f(X))\), where \(Y\) represents the true class labels and \(f(X)\) represents the predicted class labels by the classifier. This notation represents the evaluation of the classifier's performance by comparing the predicted labels with the true labels.
The problem at hand is to generate a hierarchy of classes \(P\) from a set of flat classes \(C\). The process is outlined in Algorithm 1. The algorithm begins by assigning the set of flat classes to the root node. Then, it iteratively partitions each parent node into two clusters, following a top-down clustering approach. An example procedure is depicted in Figure 1. The height of a parent node corresponds to the number of classes present under that node. Parent nodes that already have two elements do not need to be further partitioned. The next parent node to be partitioned is determined based on its height, with the highest one chosen first. This approach ensures the monotonicity of the resulting tree.
The output of the algorithm is a set of parent nodes, each of which has exactly one parent (except for the root node) and two children. As an example, the tree in Figure 1 is represented as
Figure 1: Demonstration of hierarchical divisive clustering with stochastic splitting using the pick-one-then-regroup (potr) algorithm through an example case, the βBeefβ dataset from UCR archive. The process involves two iterations: first, the root node is partitioned, and then node 1. Nodes 2 and 3 did not require further partitioning. Since a LCPN approach is used, classifiers are placed at non-singleton cluster nodes only. The nodes with classifiers are sketched with dashed lines.
\(P=\{[\{c_{1},c_{4}\},\{c_{0},c_{2},c_{3}\}],[\{c_{3}\},\{c_{2},c_{0}\}],[\{c_{1}\},\{c_{4}\}],[\{c_{2}\},\{c_{0}\}]\}\).
The parent-child relationship is implicit in the set \(P\). If a subset includes all the elements of another subset, then the former subset is considered the parent of the latter. This relationship is more explicitly represented in the form of a tree object in the source code.2
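To make the top-down procedure of Algorithm 1 concrete, the following minimal Python sketch builds such a nested representation of \(P\); it is an illustration rather than an excerpt of the released code, and `split` stands for any of the stochastic splitting functions introduced below.

```
import random

def generate_hierarchy(classes, split):
    """Top-down divisive clustering (cf. Algorithm 1): repeatedly partition the
    highest remaining parent node (the one holding the most classes) into two
    children, until all leaves are single classes."""
    parents = []                    # resulting set P of parent nodes
    pending = [list(classes)]       # nodes still to be partitioned; root first
    while pending:
        pending.sort(key=len, reverse=True)    # highest node first
        node = pending.pop(0)
        if len(node) == 2:                     # a two-class node splits trivially
            left, right = [node[0]], [node[1]]
        else:
            left, right = split(node)          # stochastic splitting function
        parents.append((left, right))
        for child in (left, right):
            if len(child) > 1:                 # singleton children are leaves
                pending.append(child)
    return parents

def random_split(node):
    """Toy stand-in for potr/srtr/lsoo: a random non-trivial bipartition."""
    shuffled = random.sample(node, len(node))
    cut = random.randint(1, len(node) - 1)
    return shuffled[:cut], shuffled[cut:]

if __name__ == "__main__":
    random.seed(0)
    print(generate_hierarchy(["c0", "c1", "c2", "c3", "c4"], random_split))
```

With five classes the sketch always produces \(|C|-1=4\) parent nodes, in line with the premises stated above.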
To analyze the resultant hierarchy represented as a binary tree, it is important to characterize the balance of the tree. The term _tree balance_ can refer to the balance between the number of left and right descendant nodes [15] or class labels, or the balance between the number of data instances for the root node. In this study, two metrics are introduced to the literature to assess the tree balance: BFC and BFD which are defined as:
\[BFC(P) =\sum_{\omega\in P}(|\omega_{1}|\!-\!|\omega_{0}|)/\sum_{\omega\in P }(|\omega_{1}|\!+\!|\omega_{0}|\!-\!2) \tag{1}\] \[BFD(P,X) =\sum_{\omega\in P}(|X_{\omega_{1}}|\!-\!|X_{\omega_{0}}|)/\sum_{ \omega\in P}(|X_{\omega_{1}}|\!+\!|X_{\omega_{0}}|\!-\!2) \tag{2}\]
The concept of BFC in this study is similar to the balance factor proposed in [15], but with some modifications. BFC focuses on the balance between the number of class labels in the left and right branches of the tree across all parent nodes, not only at the root node as was considered in [15]. Similarly, BFD assesses the balance in terms of the number of data instances. Additionally, the contribution of each parent node to the balance in both BFC and BFD is weighted by the number of elements in that node.
By comparing the number of class labels or data instances in the left and right branches of each parent node, BFC and BFD provide a measure of the balance or imbalance within the hierarchical tree structure. The normalization of BFC and BFD ensures that the values are within the range of [-1, 1]. A negative value indicates a left-heavy tree or an imbalance towards the left branch, while a positive value indicates a right-heavy tree or an imbalance towards the right branch. A value of 0 indicates a balanced tree with an equal distribution of class labels or data instances between the left and right branches. It is important to note that for a parent node \(P_{i}\), \(P_{i,0}\) and \(P_{i,1}\) represent the set of classes under the left and right child nodes, respectively.
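A direct transcription of Eqs. (1) and (2) into Python could look as follows; here each parent node is taken to be a (left, right) pair of class-label lists and \(y\) is the vector of flat labels, with all identifiers chosen for illustration only.

```
import numpy as np

def balance_factors(parents, y):
    """Compute BFC (balance over class counts) and BFD (balance over datapoint
    counts) for a binary hierarchy given as (left, right) pairs of class lists."""
    y = np.asarray(y)
    bfc_num = bfc_den = bfd_num = bfd_den = 0
    for left, right in parents:
        n_left, n_right = len(left), len(right)        # classes per child
        d_left = int(np.isin(y, left).sum())           # datapoints per child
        d_right = int(np.isin(y, right).sum())
        bfc_num += n_right - n_left
        bfc_den += n_right + n_left - 2
        bfd_num += d_right - d_left
        bfd_den += d_right + d_left - 2
    bfc = bfc_num / bfc_den if bfc_den else 0.0
    bfd = bfd_num / bfd_den if bfd_den else 0.0
    return bfc, bfd

# toy usage: the tree of Figure 1 with 10 datapoints per class
parents = [(["c1", "c4"], ["c0", "c2", "c3"]),
           (["c3"], ["c2", "c0"]),
           (["c1"], ["c4"]),
           (["c2"], ["c0"])]
y = np.repeat(["c0", "c1", "c2", "c3", "c4"], 10)
print(balance_factors(parents, y))
```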
After the generation of the hierarchical tree, the HC process is carried out. The training phase of HC is straightforward: each classifier is trained independently on the classes assigned at its parent node in the tree. This makes the training phase highly efficient, especially when leveraging multiprocessing techniques. This study employs the joblib.Parallel module of the Joblib library with the 'loky' backend for process-based parallelism. More details can be found in the source code.2
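Since the per-parent-node classifiers are independent, their training is embarrassingly parallel. A minimal sketch of process-based training with joblib (using the 'loky' backend, as in the released code) is shown below; `fit_node`, the use of a linear svm, and the data-filtering logic are illustrative choices rather than a quotation of the implementation.

```
import numpy as np
from joblib import Parallel, delayed
from sklearn.svm import SVC

def fit_node(parent, X, y):
    """Train one binary classifier separating the left from the right group of a
    parent node (LCPN: one classifier per non-singleton parent)."""
    left, right = parent
    mask = np.isin(y, left + right)
    X_node = X[mask]
    y_node = np.isin(y[mask], right).astype(int)       # 0 = left child, 1 = right child
    return SVC(kernel="linear").fit(X_node, y_node)

def fit_hierarchy(parents, X, y, n_jobs=-1):
    # one independent fit per parent node, executed in parallel processes
    return Parallel(n_jobs=n_jobs, backend="loky")(
        delayed(fit_node)(p, X, y) for p in parents)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 8))
    y = np.repeat(["c0", "c1", "c2", "c3"], 25)
    parents = [(["c0", "c1"], ["c2", "c3"]), (["c0"], ["c1"]), (["c2"], ["c3"])]
    print(len(fit_hierarchy(parents, X, y)), "node classifiers trained")
```

The `__main__` guard matters here because the 'loky' backend spawns worker processes that re-import the calling module.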
The splitting of a parent node is guided by the discriminability of its classes according to the classifier, which is formalized by the following objective:
\[\operatorname*{argmax}_{i}g\left(\{Y_{c_{i}},Y_{c_{j}}\},f(\{X_{c_{i}},X_{c_{j}}\})\right),\ \forall_{i},i\neq j \tag{3}\]
The aim is to find a partition of the classes that maximizes this objective function. Since there are \(2^{|C|-1}\)-1 possibilities to form two non-empty subsets from a set of classes with \(|C|\) members, exhaustively trying all possibilities becomes computationally prohibitive for larger numbers of classes because the computational cost grows exponentially. Therefore, more efficient approaches are needed. The following algorithms discuss alternative methods that can achieve the desired splitting while reducing computational complexity. These algorithms aim to strike a balance between computational efficiency and the quality of the obtained partition.
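All three splitting functions repeatedly query the score of a candidate bipartition, i.e. the value that the objective above assigns to a given pair of class groups. A possible sketch of such a scoring helper is given below, using a linear svm as \(f\) and the f1 score as \(g\); for brevity the score is computed on the training data itself, whereas a more careful evaluation would use held-out folds.

```
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import f1_score

def score_split(group0, group1, X, y):
    """Score a candidate bipartition: merge each group into one super-class,
    fit a binary classifier and return its f1 score."""
    mask = np.isin(y, list(group0) + list(group1))
    X_sub = X[mask]
    y_bin = np.isin(y[mask], list(group1)).astype(int)   # 0 = group0, 1 = group1
    clf = SVC(kernel="linear").fit(X_sub, y_bin)
    return f1_score(y_bin, clf.predict(X_sub))
```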
Pick-one-then-regroup: The algorithm, described in Algorithm 2, starts by randomly selecting an element from the given set and placing it in one cluster, while the remaining elements are placed in the other cluster. The initial classification performance of this partition is recorded. Next, each remaining element is individually experimented with by translocating it to the other cluster. The algorithm monitors the change in classification performance for each experiment. If the current performance improves, the translocated member stays in its new cluster, and the maximum score is updated. This iterative process continues for all remaining elements, evaluating the potential improvement in classification performance from transferring each of them to the other cluster. Figure 1 provides an example to illustrate the pick-one-then-regroup approach.
```
Data: A dataset with \(X\) and \(Y\) and set of classes \(C\) to be partitioned
Result: Siblings \(C_{0}\) and \(C_{1}\) partitioned from \(C\)
Let \(j\) be a random variable \(\in[0,|C|-1]\cap\mathbb{Z}\)
\(C_{0}\leftarrow\{C_{j}\}\)
\(C_{1}\leftarrow C\setminus C_{j}\)
\(score_{max}\leftarrow g(\{Y_{C_{0}},Y_{C_{1}}\},f(\{X_{C_{0}},X_{C_{1}}\}))\)
for \(c\in C_{1}\) do
    \(C^{\prime}_{0}\leftarrow C_{0}\cup c\)
    \(C^{\prime}_{1}\leftarrow C_{1}\setminus c\)
    \(score\leftarrow g(\{Y_{C^{\prime}_{0}},Y_{C^{\prime}_{1}}\},f(\{X_{C^{\prime}_{0}},X_{C^{\prime}_{1}}\}))\)
    \(C_{0},C_{1},score_{max}\leftarrow\) updateScoreAndGroups()
end for
```
**Algorithm 2** potr: Pick-one-then-regroup
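A compact Python rendering of Algorithm 2 is sketched below; the split scorer is passed in as a callable so that any classifier/metric pair (for instance the helper above) can be plugged in, and all names are illustrative rather than taken from the released implementation. Scores are assumed to be normalized to \([0,1]\), so the stopping criterion of Algorithm 5 becomes a comparison against 1.0.

```
import random

def potr(classes, score, rng=random):
    """Pick-one-then-regroup (Algorithm 2): seed one cluster with a random class,
    then greedily translocate each remaining class whenever this improves the
    binary split score."""
    classes = list(classes)
    j = rng.randrange(len(classes))
    c0 = [classes[j]]
    c1 = [c for i, c in enumerate(classes) if i != j]
    best = score(c0, c1)
    for c in list(c1):                      # iterate over the initial second cluster
        if len(c1) == 1:                    # never leave an empty cluster behind
            break
        cand0, cand1 = c0 + [c], [x for x in c1 if x != c]
        s = score(cand0, cand1)
        if s > best:                        # keep the translocation only if it helps
            c0, c1, best = cand0, cand1, s
            if best == 1.0:                 # early stop on a perfect split
                break
    return c0, c1

if __name__ == "__main__":
    random.seed(0)
    animals = {"cat", "dog"}
    # toy score: 1.0 if one group contains only animal classes, else 0.5
    toy_score = lambda a, b: 1.0 if set(a) <= animals or set(b) <= animals else 0.5
    print(potr(["cat", "dog", "car", "bus", "bike"], toy_score))
```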
Split-randomly-then-regroup: The algorithm described in Algorithm 3 begins by shuffling the given set of classes and splitting it at a random point. The initial classification performance of this partition is recorded. Next, each element is individually translocated, with the condition that no set becomes empty after the translocation. The algorithm then monitors the change in classification performance resulting from each translocation. If the classification performance improves, the element remains in the set to which it was transferred. The algorithm continues this process for each element in the set.
Overall, both potr and srtr approaches provide a systematic way to find an optimized partition of the classes by iteratively evaluating the impact of translocating elements between clusters and updating the classification performance accordingly. The potr approach reduces the number of possibilities to \(|C|\)-1, while the srtr approach reduces it to \(|C|\). Both approaches significantly reduce the computational cost compared to exhaustively considering all possible partitions, resulting in linear computational complexity.
By considering the performance improvement achieved by translocating elements, both approaches efficiently find a partition that maximizes the classification performance. These approaches offer efficient alternatives to handle the exponential number of possibilities and effectively identify an optimal partition of the classes.
Leave-salient-one-out: This approach specifically leaves one member out rather than translocating members between clusters. As described in Algorithm 4, it starts by shuffling the given set and iteratively leaves each member out. The member whose removal yields the maximum classification performance is identified. The lsoo approach provides an efficient way to find an optimized partition of the classes by iteratively evaluating the impact of leaving each member out and selecting the optimal configuration. The leave-one-out approach reduces the number of possibilities to \(|C|\), resulting in a linear computational complexity.
A routine that updates the maximum score and the child groups (see Algorithm 5) is used in all split functions. If the current score is better than the maximum score so far, the groups and the maximum score are updated. A stopping criterion is also applied whenever a performance of 100% is achieved: in that case there is no need to keep searching for the best split, and the algorithm terminates. This further reduces the number of evaluated possibilities to fewer than \(|C|\).
```
Data:   a dataset with X and Y, and a set of classes C to be partitioned
Result: siblings C_0 and C_1 partitioned from C

C ← shuffle(C)
score_max ← 0
for c ∈ C do
    C'_0 ← {c}
    C'_1 ← C \ {c}
    score ← g({Y_C'0, Y_C'1}, f({X_C'0, X_C'1}))
    C_0, C_1, score_max ← updateScoreAndGroups()
end for
```
**Algorithm 4** lsoo: Leave-salient-one-out
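A corresponding sketch of lsoo, with the same hypothetical `score_fn`, is shown below.

```python
# Minimal sketch of leave-salient-one-out (lsoo, Algorithm 4).
import random

def lsoo_split(classes, score_fn, rng=None):
    rng = rng or random.Random(0)
    classes = list(classes)
    rng.shuffle(classes)
    best, c0, c1 = float("-inf"), set(), set()
    for c in classes:                          # leave each class out in turn
        cand0, cand1 = {c}, set(classes) - {c}
        score = score_fn(cand0, cand1)
        if score > best:                       # keep the most separable singleton
            c0, c1, best = cand0, cand1, score
            if best >= 100:                    # early stop on a perfect (percentage) score
                break
    return c0, c1, best
```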
```
Data:   maximum score score_max, current score score, and siblings C'_0 and C'_1
Result: maximum score score_max and siblings C_0 and C_1

if score > score_max then
    score_max ← score
    C_0 ← C'_0
    C_1 ← C'_1
    if score == 100 then
        break
    end if
end if
```
**Algorithm 5** updateScoreAndGroups: Routine to update score and groups
### Evaluation, Hyperparameters, and Cross Validation
In this study, two different classifiers were utilized: rocket [16] and svm with a linear kernel [17]. These classifiers were chosen for their efficiency and suitability for time series classification tasks.
For the rocket classifier, the number of kernels was set to 512. This choice allows for a balance between computational efficiency and capturing relevant features from the time series data.
To ensure compatibility and reproducibility, the random number generator seed was set to a default value, which in this case is zero. Setting the seed value helps in obtaining consistent results across different runs of the experiment.
The classification performance was evaluated using the f1 score, which is a commonly used metric for binary classification tasks. In the case of multi-class classification, the f1 macro score was utilized. The f1 macro score calculates the f1 score for each class and then takes the average, giving equal weight to each class. By employing the f1 score and f1 macro score as evaluation metrics, the study assesses the classification performance of the chosen classifiers in a comprehensive and robust manner.
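As an illustration of how the score \(g(f(\cdot))\) used by the splitting functions could be assembled from the components named here, the sketch below relabels each instance by the super-class its original label falls into, fits a linear-kernel svm, and returns the macro f1 score as a percentage. The helper name `make_score_fn`, the held-out validation split, and the plain svm pipeline are assumptions for illustration; the rocket variant would additionally transform the series with 512 random convolutional kernels before fitting a linear classifier.

```python
# Hedged sketch of a score function for one candidate two-way split of classes.
import numpy as np
from sklearn.metrics import f1_score
from sklearn.svm import SVC

def make_score_fn(X_train, y_train, X_val, y_val):
    def to_super(y, group0):
        # map each original class label to super-class 0 or 1
        return np.array([0 if label in group0 else 1 for label in y])

    def score_fn(group0, group1):
        # group1 is the complement of group0, so the binary relabeling only needs group0
        clf = SVC(kernel="linear", random_state=0)
        clf.fit(X_train, to_super(y_train, group0))
        pred = clf.predict(X_val)
        return 100.0 * f1_score(to_super(y_val, group0), pred, average="macro")

    return score_fn
```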
The proposed hierarchy generation model makes use of stochastic splitting algorithms, which require executing the tree generation procedure multiple times in order to find a suboptimal tree. The number of iterations, which is the model's hyperparameter, is specified manually and experimented with over different values to find the best tree. The goal is to maximize the performance of the model on unseen data, represented by the test set. To achieve this, a nested cross-validation (CV) approach, also known as double CV [18], is employed.
The nested CV consists of an outer and an inner CV loop (see Algorithm 6). The outer loop is responsible for partitioning the dataset into training and testing subsets. In this study, a 5-fold cross-validation is applied, where in each iteration of the outer loop, one fold is held out as the test set, and the remaining 4 folds are used as the training set.
```
Data:   a dataset with X and Y
Result: nested CV score score_nCV and set of selected trees T

scores_out ← {}
T ← {}
for ko ∈ {1, ..., N_te} do
    X^ko, Y^ko ← splitData(X, Y, shuffle=0)
    scores_out_ko ← 0
    for i ∈ {1, ..., N_iter} do
        X^i, Y^i ← splitData(X^ko_tr, Y^ko_tr, shuffle=1)
        P ← fitLcpnTree(X^i, Y^i)
        checkDuplicatesAndLimit()
        scores_in ← {}
        for ki ∈ {1, ..., N_va} do
            X^ki, Y^ki ← splitData(X^ko_tr, Y^ko_tr, shuffle=0)
            scores_in_ki ← g(Y^ki_te, h(P, X^ki_te))
        end for
        if mean(scores_in) > scores_out_ko then
            scores_out_ko ← mean(scores_in)
            T_ko ← P
        end if
    end for
end for
score_nCV ← mean(scores_out)
```
**Algorithm 6** Nested CV procedure
Before the inner loop, there is an additional loop that iterates over the specified number of iterations for the model. This results in three loops in total: the outer fold loop, the iteration loop in the middle, and the innermost fold loop. The loop in the middle and the innermost loop are specifically designed to find the best tree.
In each iteration of the loop in the middle, a new tree is generated using the training set from the outer loop. The training set is shuffled and then divided into 4 folds, with one fold used as the validation set and the remaining 3 folds used for training.
For each generated tree, the innermost loop calculates an inner CV score. This is done by performing 4 iterations, where each iteration uses one fold as the validation set and the remaining 3 folds as the training set. The mean score of the inner CV represents the algorithm's performance on different subsets of the training data. The goal is to select the optimal tree that maximizes this mean score. By evaluating the performance of the algorithm on multiple subsets of the training data, the mean score provides an estimate of how well the algorithm generalizes to unseen data. The tree with the highest mean score is considered the optimal tree, as it demonstrates the best performance on the validation sets within the inner CV loop.
By employing this nested CV approach, the proposed model can evaluate and compare the performance of different trees while accounting for variations in the training and validation subsets. This helps to mitigate potential overfitting and provides a more reliable estimate of the model's performance on unseen data.
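A compact Python sketch of this nested procedure is given below. The functions `fit_lcpn_tree` and `evaluate_tree` are hypothetical stand-ins for fitLcpnTree and \(g(\cdot,h(\cdot,\cdot))\), duplicate handling is omitted, and the final evaluation of each selected tree on the outer test fold is included to reflect how the testing scores reported later are obtained.

```python
# Sketch of the nested CV loop (Algorithm 6), assuming numpy arrays X, y and
# hypothetical callables fit_lcpn_tree(X, y) and evaluate_tree(tree, X, y).
import numpy as np
from sklearn.model_selection import StratifiedKFold

def nested_cv(X, y, fit_lcpn_tree, evaluate_tree, n_iter=10, seed=0):
    outer = StratifiedKFold(n_splits=5, shuffle=False)
    outer_scores, selected_trees = [], []
    for tr_idx, te_idx in outer.split(X, y):
        X_tr, y_tr, X_te, y_te = X[tr_idx], y[tr_idx], X[te_idx], y[te_idx]
        best_mean, best_tree = -np.inf, None
        for i in range(n_iter):                       # middle loop: regenerate candidate trees
            gen = StratifiedKFold(n_splits=4, shuffle=True, random_state=seed + i)
            fit_idx, _ = next(gen.split(X_tr, y_tr))  # shuffled 3-of-4-fold fitting portion
            tree = fit_lcpn_tree(X_tr[fit_idx], y_tr[fit_idx])
            inner = StratifiedKFold(n_splits=4, shuffle=False)
            scores_in = [evaluate_tree(tree, X_tr[va], y_tr[va])
                         for _, va in inner.split(X_tr, y_tr)]
            if np.mean(scores_in) > best_mean:        # keep the tree with the best mean inner score
                best_mean, best_tree = float(np.mean(scores_in)), tree
        selected_trees.append(best_tree)
        outer_scores.append(evaluate_tree(best_tree, X_te, y_te))  # held-out test estimate
    return float(np.mean(outer_scores)), selected_trees
```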
To assess the generalization performance of the model, in addition to the nested CV scheme, a separate scheme known as flat CV, as described in the work by Wainer (2021) [19], is employed. The flat CV scheme involves dividing the dataset into multiple folds using a single k-fold CV step, without employing a nested loop. Each flat CV fold iteration is dedicated to both selecting the optimal hyperparameters and evaluating the average performance across all folds. It is important to note that this estimation may be biased, but previous research has shown that it can still be used to determine the best hyperparameters, which aligns with the results obtained from nested CV in certain cases [19]. However, in this particular work, the flat CV scheme is not utilized to assess the generalization performance of the proposed model. Rather, its purpose is to compare the performance of the flat CV approach with that of the nested CV approach. This comparison provides insights into the relative performance of the proposed model when evaluated using different CV schemes.
The flat CV used in this study has two loops (see Algorithm 7). As in nested CV, the outer loop is responsible for partitioning the dataset into training and testing subsets. The inner loop, on the other hand, is dedicated to tree generation and iterates over the specified number of iterations.
```
Data:   a dataset with X and Y
Result: flat CV score score_fCV and set of selected trees T

scores_out ← {}
T ← {}
for ko ∈ {1, ..., N_te} do
    X^ko, Y^ko ← splitData(X, Y, shuffle=0)
    scores_out_ko ← 0
    for i ∈ {1, ..., N_iter} do
        X^i, Y^i ← splitData(X^ko_tr, Y^ko_tr, shuffle=1)
        P ← fitLcpnTree(X^i, Y^i)
        checkDuplicatesAndLimit()
        score ← g(Y^ko_te, h(P, X^ko_te))
        if score > scores_out_ko then
            scores_out_ko ← score
            T_ko ← P
        end if
    end for
end for
score_fCV ← mean(scores_out)
```
**Algorithm 7** Flat CV procedure
In the inner loop of flat CV, the training set is shuffled and divided into 4 folds. One fold is used as the validation set, while the remaining 3 folds are utilized for training. It is important to note that tree generation is performed using both the training and validation sets, without including the holdout data. The evaluation of the generated trees, however, is conducted using the test data, and this is where the bias in the flat CV approach arises. While the tree generation process itself is unbiased, the best tree is selected based on its performance on the test data and is then evaluated on that same test data, potentially leading to an overly optimistic estimate of the model's performance. Overall, the flat CV approach used in this study introduces bias in the evaluation of the selected tree, but not in the tree generation process itself.
To optimize computational efficiency during the process of generating multiple trees and finding the optimal tree, two critical aspects are considered: managing duplicate trees and determining when to stop the iteration based on the total number of distinct trees processed. The checkDuplicatesAndLimit function serves the purpose of addressing these considerations. It performs two essential tasks:
Handling Duplicate Trees: To handle duplicate trees during the iteration process, a notion of "tree similarity" specific to the problem at hand is utilized. It is important to note that tree similarity is not the same as tree equality, where the positions of all parent and leaf nodes need to be identical. Instead, tree similarity focuses on whether the parent nodes contain the same classes, disregarding the order of the classes within the subgroups as well as the super-groups. For example, consider two trees: \(P^{1}=\{\{\{c_{0},c_{1}\},\{c_{2},c_{3}\}\},\{\{c_{0}\},\{c_{1}\}\},\{\{c_{2}\},\{c_{3}\}\}\}\) and \(P^{2}=\{\{\{c_{3},c_{2}\},\{c_{1},c_{0}\}\},\{\{c_{3}\},\{c_{2}\}\},\{\{c_{1}\},\{c_{0}\}\}\}\). Despite having different class orders within the subgroups and super-groups, these trees are considered similar. To detect tree similarity, a pre-order traversal of the trees is performed. The corresponding parent nodes of the trees are then compared to determine whether they contain the same classes, irrespective of their order. This comparison allows for the identification of similar trees and helps in managing duplicates during the iteration process. For a more detailed understanding of the implementation and the exact code logic, referring to the source code is recommended.[2] It provides further insights into how tree similarity is assessed and utilized to handle duplicate trees efficiently.
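One simple realization of this similarity test, consistent with the worked example above, is to build an order-insensitive key for every parent node and compare the resulting collections of keys; the tree encoding used here (a tree as a list of parent nodes, each a pair of class groups) is an assumption for illustration, and the released code may differ in detail.

```python
# Sketch of the "tree similarity" test: parent nodes match when they contain the
# same classes, regardless of the order of classes within each child group or of
# the two groups themselves.
from collections import Counter

def node_key(parent):
    left, right = parent
    return frozenset({frozenset(left), frozenset(right)})

def trees_similar(tree_a, tree_b):
    """Trees are given as collections of parent nodes (e.g. in pre-order)."""
    return Counter(map(node_key, tree_a)) == Counter(map(node_key, tree_b))

# The worked example from the text: P1 and P2 differ only in the ordering of
# classes inside the sub- and super-groups, so they are treated as similar.
P1 = [(("c0", "c1"), ("c2", "c3")), (("c0",), ("c1",)), (("c2",), ("c3",))]
P2 = [(("c3", "c2"), ("c1", "c0")), (("c3",), ("c2",)), (("c1",), ("c0",))]
assert trees_similar(P1, P2)
```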
Iteration Termination: The function includes a mechanism for stopping the iteration once the total number of distinct trees processed reaches a certain threshold. This enables the termination of the iteration process once all distinct trees have been examined, effectively saving computational resources. The threshold is determined by the number of classes in the problem and can be evaluated recursively as:
\[T_{|C|}=\begin{pmatrix}|C|\\ |C|-1\end{pmatrix}T_{|C|-1}+\begin{pmatrix}|C|\\ |C|-2\end{pmatrix}T_{|C|-2}+\cdots+\alpha\begin{pmatrix}|C|\\ \lceil|C|/2\rceil\end{pmatrix}T_{\lceil|C|/2\rceil} \tag{4}\]
where \(\alpha\) is 1 if \(|C|\) is odd and 1/2 otherwise. Due to the recursive nature of the estimation, the computational cost of evaluating it grows rapidly with an increasing number of classes. To mitigate this issue and save computational resources, a lookup table is utilized. This lookup table stores and retrieves previously computed results, eliminating the need for redundant computations. For further details on the implementation of the program, more information can be found in the source code.[2]
In summary, the proposed approach involves using 4-fold-within-5-fold nested CV and 5-fold flat CV to compare with the nested CV and assess an evaluation of model performance. In both CV procedures, the best tree is selected for each training set from the most outer loop, which consists of 5 training sets in this study. Each of these CV procedures includes an additional loop
for hyperparameter search, a common practice in hyperparameter tuning. It is worth noting that the hyperparameter selection loop performs data partitioning by shuffling the available data. However, apart from this specific partitioning performed in the hyperparameter selection loop, all other partitions across all datasets remain fixed. This ensures compatibility between the flat and HC schemes and enables an accurate comparison between them.
A stratified split is utilized in all partitions of the dataset. This means that the splitting process ensures that each partition maintains the same class distribution as the original dataset.
### Computational Complexity Analysis
Computational cost can be separately considered for preprocessing, training, and prediction phases. Determining the time complexity of the problem is not straightforward since there are several factors that can not be controlled directly. Some of those can be listed as (i) split functions are non-deterministic and early stop conditions may occur, (ii) data-points allocation varies with tree structure, (iii) the dataset may exhibit class imbalance, indicating that there is an uneven distribution of data points among the different classes, (iv) classifier performance during testing phase affects the pathway for the testing data instances. On the other hand, it is possible to exploit the preprocessing and the training phases to utilize process-based parallelism.
The following notations will be helpful in the clarity and comprehensibility of the forthcoming analysis. Let \(M\) be time-points length of data instances and let \(\phi_{tr}\) and \(\phi_{te}\) represent time complexity of a classifier per each data instance with a single time point (i.e. length of 1) during training and testing phases, respectively. Let \(N_{iter}\) denote the number of iterations specified for the stochastic tree generation procedure.
Tree generation is considered as the preprocessing phase, where an iterative splitting operation is carried out using a top-down approach. In the first iteration, all data instances from the training set are utilized. In the subsequent iterations, the total number of datapoints to be processed can vary depending on the balance of datapoint allocation resulting from the previous iteration, specifically from the parent node. In particular, the more imbalanced the grouping of datapoints, the more datapoints are processed by the classifier in question. Therefore, a minimum number of datapoints to be processed can be approximated by assuming that each parent maintains a perfectly balanced cluster allocation; conversely, a maximum number of datapoints to be processed can be approximated by assuming maximum imbalance of cluster allocation at each parent node, where a single datapoint falls into one cluster while the rest belong to the other:
\[\approx\begin{cases}2|X_{tr}||C|,&\text{if }BFC(P)=BFD(P,X_{tr})=0,\\ |X_{tr}||C|^{2}/2,&\text{if }BFC(P)=BFD(P,X_{tr})\in\{-1,1\}.\end{cases} \tag{5}\]
Since the training set is split further into training and validation sets in the preprocessing phase, let \(|X_{trval}|\) and \(|X_{val}|\) denote the number of data instances in the training portion used during validation and in the validation set, respectively. Hence, for the preprocessing phase, the time complexity is expected to vary between \(\mathcal{O}(N_{iter}|C|M(|X_{trval}|\phi_{tr}+|X_{val}|\phi_{te}))\) and \(\mathcal{O}(N_{iter}|C|^{2}M(|X_{trval}|\phi_{tr}+|X_{val}|\phi_{te}))\). The tree structure has a similar effect on the number of datapoints to be processed in the training phase, so the time complexity of the training phase is expected to vary between \(\mathcal{O}(|X_{tr}||C|M\phi_{tr})\) and \(\mathcal{O}(|X_{tr}||C|^{2}M\phi_{tr})\), as determined by the aforementioned derivations. It is worth noting that the actual execution time is expected to decrease due to the highly parallelizable nature of hierarchical training.
As for the testing phase, the complexity of predicting each data instance depends on the tree depth reached by that instance, specifically how many levels it traverses down the tree until it reaches a leaf node. By considering all test instances, an average tree depth per flat class label can be estimated assuming there is no class imbalance. A shortest average path can be approximated assuming maximum balance of class labels at parent nodes (i.e. \(BFC(P)=0\)), while a longest average path can be approximated assuming maximum imbalance of class labels at parent nodes (i.e. \(BFC(P)\in\{-1,1\}\)):
\[\approx\begin{cases}log_{2}|C|,&\text{if }BFC(P)=BFD(P,X_{te})=0.\\ |C|/2,&\text{if }BFC(P)\in\{\text{-}1,1\},BFD(P,X_{te})=0.\end{cases} \tag{6}\]
The time complexity of the testing phase is then expected to vary between \(\mathcal{O}(|X_{te}|Mlog_{2}|C|\phi_{te})\) and \(\mathcal{O}(|X_{te}|M|C|\phi_{te})\).
Overall, the complexity of HC is expected to vary between \(\mathcal{O}(M(|C|(N_{iter}(|X_{trval}|\phi_{tr}+|X_{val}|\phi_{te})+|X_{tr}|\phi_{tr})+|X_{te}|\log_{2}|C|\,\phi_{te}))\) and \(\mathcal{O}(M|C|(|C|(N_{iter}(|X_{trval}|\phi_{tr}+|X_{val}|\phi_{te})+|X_{tr}|\phi_{tr})+|X_{te}|\phi_{te}))\). These can be compared with the FC cost \(\mathcal{O}(M(|X_{tr}|\phi_{tr}+|X_{te}|\phi_{te}))\). The overall cost is expected to increase, in that respective order, for trees generated using the split functions srtr, potr, and lsoo; specifically, the algorithm srtr is expected to be the most efficient among them.
## 4 Experiments on Time Series Data
The efficacy of the proposed approach is tested on a selection of datasets from the UCR archive [12]. The UCR archive contains a collection of 128 univariate time series datasets. For this study, only multi-class cases with more than two classes are considered. Additionally, datasets with classification accuracy greater than 99.5% for both svm and rocket classifiers were excluded, as there is little room for improvement in such cases.
After applying these criteria, a total of 46 datasets out of the original 128 were selected for evaluation. The selected datasets were downloaded and handled using the sktime library [20], which provides tools for handling and analyzing time series data. These datasets were used to assess the performance of the proposed approach for HC.
Figure 2 displays the number of improvements achieved using different classifiers (svm or rocket) and SSFs (potr, srtr, or lsoo) during hierarchical divisive clustering, along with nested or flat CV evaluation schemes. The results were compared for 3, 5, 10, and 50 iterations for both rocket and svm. It can be seen that the number of improvements achieved declines as the number of iterations exceeds 20 for both rocket and svm. For the following analyses, the focus was on 10 iterations, since this seems a reasonable choice; the corresponding results are included in the Appendix, presented in Table 2 for svm and Table 3 for rocket. These tables display the FC and HC scores obtained using each splitting algorithm, with the improvement state highlighted in bold.
In the initial observations, a comparison between the classifiers showed that rocket outperforms svm in terms of the number of improved datasets. As anticipated, the flat CV scheme resulted in more improvements compared to the nested CV scheme. Approximately, rocket achieved improvements in around 50% of the datasets using the nested CV scheme, while svm achieved improvements in around 30%. With the flat CV scheme, these numbers increased to approximately 85% and 70% for rocket and svm, respectively. Regarding the splitting functions, all of them yielded overall comparable results.
Regarding the flat CV, the number of improvements generally increases with an increasing number of iterations, but the rate of increase decreases for larger iteration values. This trend suggests that a saturation point might be reached for even higher iterations. On the other hand, in nested CV, the number of improvements fluctuates for smaller iteration values such as 3, 5, and 10. For example, in the case of rocket, the number of improvements is higher when the number of iterations is 3 compared to 5. For svm, a consistent decline in the number of improvements is observed when the number of iterations increases from 20 to 50. This indicates that using a number of iterations greater than 20 is not advisable. For the forthcoming analyses, a feasible choice would be to consider 10 iterations (\(N_{iter}=10\)) for both classifiers. This range can be narrowed down to 3 to 10 iterations for svm. Hence, 10 iterations will be adopted for the subsequent analyses.
The further analyses involve exploring the relationship between dataset features and the improvement obtained using HC over FC. The considered dataset features include the number of classes and FC score, along with the tree balance metrics BFC and BFD introduced in this study. The outcome of interest can be either the number of improvements or the difference in classification performance, specifically f1 score evaluated by the function \(g(.)\), between HC and FC, denoted as \(\Delta g\). In other words, the dependent variable can be treated as either a categorical variable (indicating whether there is an improvement or not) or a continuous interval variable. Both types of outputs are considered in this study.
To assess the association between dataset features and the outcome, statistical tests are conducted. For the continuous interval variable outcome \(\Delta g\), Pearson correlation analysis is performed since the independent variables are also continuous interval variables. To obtain a second opinion, multiple linear regression analysis is also implemented. When the outcome is treated as categorical (indicating whether there is an improvement or not), multiple logistic regression analysis is used. The dataset includes 230 samples for the statistical analysis, obtained from the scores during the testing phase of the nested CV scheme for 46 datasets with 5 observations each. The statistical significance threshold is set at \(p<0.001\), meaning that any p-value less than 0.001 is considered statistically significant in the analyses.
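The tests described above can be reproduced along the following lines with scipy and statsmodels; the column names and the exact data layout are assumptions made for illustration.

```python
# Hedged sketch of the statistical analysis: Pearson correlations, a multiple
# linear regression on the continuous outcome, and a multiple logistic
# regression on the binary (improved or not) outcome.
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.api as sm

def analyze(df: pd.DataFrame):
    """df: one row per (dataset, outer fold) with columns
    ['n_class', 'fc_score', 'bfd', 'bfc', 'delta_g']."""
    features = ["n_class", "fc_score", "bfd", "bfc"]
    # Pearson correlation between each feature and the continuous outcome delta_g
    corr = {f: pearsonr(df[f], df["delta_g"]) for f in features}
    X = sm.add_constant(df[features])
    linear = sm.OLS(df["delta_g"], X).fit()           # continuous outcome
    improved = (df["delta_g"] > 0).astype(int)
    logistic = sm.Logit(improved, X).fit(disp=0)      # categorical outcome
    return corr, linear, logistic
```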
Table 1 presents the results of Pearson correlation tests, including correlation coefficients and corresponding p-values. For the svm classifier, the number of classes is found to be statistically significant in all cases, showing a negative correlation with \(\Delta g\). Additionally, the FC score is also statistically significant in all cases for the rocket classifier, demonstrating a negative correlation with \(\Delta g\).
For the svm classifier, the BFD metric is positively correlated with \(\Delta g\) when using srtr, while it is negatively correlated when using lsoo. On the other hand, for the svm classifier, the BFC metric is positively correlated with \(\Delta g\) when using potr and srtr as the SSFs.
The multiple linear regression analysis results presented in Appendix Table 4 confirm the findings from the Pearson correlation analysis regarding the effect of the number of classes on svm and the FC score on rocket. However, for svm, BFC was found to be significant when using only potr, whereas BFD was found to be significant when using only lsoo.
The multiple logistic regression results, which were conducted when the output was considered as categorical, are presented in Appendix Table 5. These results confirm the findings from the Pearson correlation analysis regarding the effect of the number of classes on svm and the FC score on rocket. However, for svm, no significant relationship was found for BFD and BFC. Additionally, in contrast to the Pearson correlation results, the FC score was found to be significant when using potr and srtr as the SSFs.
Figure 3 provides a graphical representation of the comparison between the number of classes and the improvement obtained using HC over FC using a grouped bar chart. For svm, it is evident that the number of improvements decreases as the number of classes increases, which aligns with the negative correlation identified through the Pearson correlation test. Conversely, for rocket, no significant change in the number of improvements is observed with varying the number of classes, consistent with the finding of no significant relation from the Pearson correlation test.
In order to have a comprehensive visual overview of the data distribution and relationships between the features and the outcome, Figure 4 presents a visual comparison between the remaining dataset features and the improvement obtained using HC over FC, represented as a point cloud. In the case of rocket, the dark-colored markers are concentrated more towards the bottom of the point cloud space, which aligns with the negative correlation coefficient identified through the Pearson correlation test.
Figure 2: Comparison of the number of iterations versus the number of improved results for svm (a) and rocket (b) when using different splitting functions, such as potr, srtr, or lsoo, and different evaluation schemes, such as nested CV or flat CV. The results obtained with flat CV are shown with lighter marker colors.
However, for the other features, no significant concentration or patterns are observed in the point cloud sets. This indicates that there may not be strong linear relationships between these features and the improvement obtained using HC over FC.
Figures 5, 6, and 7 in the Appendix present graphical representations of the comparison between each individual feature (i.e., BFC, BFD, and FC score) and \(\Delta g\), which represents the difference between HC and FC scores, using 2D point clouds with fitted linear regression lines.
In Figure 5, showing the relationship between BFC and \(\Delta g\), a positive-sloped line is observed for svm in both potr and srtr cases, while a flattish line is observed for all cases of rocket.
In Figure 6, illustrating the relationship between BFD and \(\Delta g\), all cases exhibit flattish lines except for svm, where a positive-sloped line is observed for srtr and a negative-sloped line is observed for lsoo.
Lastly, in Figure 7, displaying the relationship between FC score and \(\Delta g\), all lines are found to be flattish for svm, while they are negatively sloped for rocket.
These graphical representations provide visual confirmation of the findings obtained from the Pearson correlation tests, supporting the presence of certain relationships between the features and the improvement achieved using HC over FC.
## 5 Discussions
### Discussions of the Results
The results obtained from this study offer valuable insights into the effectiveness and behavior of the proposed hierarchical divisive clustering approach with SSFs for enhancing classification performance in multi-class datasets. The comparison between the classifiers svm and rocket indicates that rocket consistently outperforms svm in terms of the number of improved datasets. This observation suggests that rocket is better equipped to handle the hierarchical data structure of time series data. The advantage of rocket can be attributed to its specialized design tailored for time series data, enabling it to capture temporal dependencies and patterns effectively. As a result, it achieves enhanced classification performance in the hierarchical divisive clustering, which strongly relies on the performance of the underlying classifier.
Conversely, svm, being a more generic classifier, may not fully capitalize on the time series-specific information, leading to comparatively fewer improvements in this hierarchical context. The observed trend further highlights the importance of using domain-specific classifiers for time series datasets, where the inherent temporal nature of the data can significantly impact the classification results. The findings underscore the benefits of employing specialized classifiers like rocket when dealing with time series data in hierarchical clustering scenarios.
Furthermore, the flat CV scheme demonstrated a higher number of improvements compared to the nested CV scheme, as anticipated due to the increased optimization opportunities provided by the former. Specifically, in the flat CV scheme, an inherent bias was introduced during the evaluation phase when searching for the optimal tree. This observation provides valuable insight, indicating that there is potential to enhance the generalization performance of the proposed approach. Although some trees were generated during the nested CV scheme, they
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c}
\hline \hline
 & \multicolumn{6}{c}{svm} & \multicolumn{6}{c}{rocket} \\
\cline{2-13}
 & \multicolumn{2}{c}{potr} & \multicolumn{2}{c}{srtr} & \multicolumn{2}{c}{lsoo} & \multicolumn{2}{c}{potr} & \multicolumn{2}{c}{srtr} & \multicolumn{2}{c}{lsoo} \\
\cline{2-13}
Features & \(r\) & \(p\) & \(r\) & \(p\) & \(r\) & \(p\) & \(r\) & \(p\) & \(r\) & \(p\) & \(r\) & \(p\) \\
\hline
\#class & **-0.605** & **0.000** & **-0.568** & **0.000** & **-0.623** & **0.000** & -0.054 & 0.417 & -0.102 & 0.124 & 0.026 & 0.699 \\
FC score & -0.032 & 0.625 & -0.026 & 0.696 & -0.009 & 0.888 & **-0.272** & **0.000** & **-0.340** & **0.000** & **-0.307** & **0.000** \\
BFD & -0.073 & 0.272 & **0.209** & **0.001** & **-0.510** & **0.000** & -0.081 & 0.224 & -0.040 & 0.544 & -0.064 & 0.337 \\
BFC & **0.309** & **0.000** & **0.270** & **0.000** & - & - & -0.139 & 0.035 & -0.034 & 0.604 & - & - \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Results of the Pearson correlation tests assessing the correlation coefficients and p-values for independence between dataset features and the performance difference between HC and FC, denoted as \(\Delta g\). The tests were conducted with \(N_{iter}=10\) iterations, considering the individual testing scores of the nested CV scheme. Note that the test is not applicable to lsoo, as the BFC metric always results in a value of 1.
Figure 3: The figure illustrates a comparison of the number of classes versus the number of improved results for svm (a) and rocket (b) when utilizing different splitting functions, namely potr, srtr, and lsoo. The tests were performed with 10 iterations, considering the individual testing scores of the nested CV scheme. To ensure a balanced representation of the total observations in each bin of the grouped bar chart, some classes were grouped together.
escaped notice during the evaluation phase. This realization serves as motivation to further refine and improve the technique for finding the optimal tree in the hierarchical divisive clustering process.
Regarding the number of iterations in the hierarchical divisive clustering process, it was found that using 3, 5, or 10 iterations is sufficient to achieve meaningful improvements for both rocket and svm. The number of improvements declined when the number of iterations exceeded 20. Therefore, the recommendation is to use 10 iterations for svm to strike a balance between improvement and computational cost.
The correlation and regression analyses provide additional insights into the relationship between dataset features and the improvements achieved through hierarchical divisive clustering. The results consistently demonstrate that the number of classes and FC score are significant features for svm and rocket, respectively. However, when it comes to BFC and BFD metrics, there were discrepancies in their significance depending on the classifier and SSF used, as well as the specific statistical tests applied. These variations may be attributed to the intricate relationships between the dataset features and the hierarchical structure, as well as the specific characteristics of each classifier and splitting function.
The findings from Tables 2 and 3 highlight an interesting pattern: for datasets with a smaller number of classes, consistently better results are observed when imposing a hierarchy over FC, particularly in cases involving at least one of the SSFs. This observation suggests that for datasets with fewer classes, there may exist at least one hierarchical structure that outperforms the FC approach.
One possible explanation for this pattern is the reduced search space for trees when dealing with datasets with a smaller number of classes. With fewer classes, the number of possible hierarchical structures to consider is smaller, making it more feasible to explore the space of potential hierarchies and identify ones that lead to improved performance.
As a result, the focus of future works should be on improving the efficiency and efficacy of the proposed approach in finding these hierarchical structures, even for datasets with a larger number of classes. Developing more efficient algorithms or optimization techniques that can effectively search through the larger search
Figure 4: Point cloud visualization of dataset features extracted using the classifiers svm (a) and rocket (b) and SSFs potr, strtr, and lsoo as indicated (on the left, center, and right, respectively). The tests were conducted with 10 iterations, taking into account the individual testing scores of the nested CV scheme. The x-axis and y-axis of the point cloud represent BFD and BFC, respectively. The z-axis shows FC score. The marker color indicates the improvement state; dark color shows there is an improvement, and light color shows there is no improvement. The marker size represents the difference of performance evaluation between HC and FC, denoted as \(\Delta\textsl{g}\); a larger marker size indicates a larger \(\Delta\textsl{g}\).
space of trees could lead to the discovery of valuable hierarchical structures that enhance the classification performance.
### Limitations of the Approach
Indeed, the stochastic nature of the approach, while beneficial in enhancing the exploration aspect, also introduces certain limitations. These limitations can be summarized as follows: i) Efficiency: The proposed approach requires multiple iterations, which can decrease efficiency compared to deterministic methods that converge in fewer iterations. ii) Convergence: Convergence of the proposed approach is not guaranteed, and the convergence behavior can be erratic. It may take a varying number of iterations for the algorithm to converge, making it challenging to determine when the optimal solution is reached. iii) Solution Quality: The proposed approach does not guarantee finding the true solution. The obtained solution may only be a local optimum or a suboptimal solution, depending on the specific problem and the randomness introduced during the iterations. iv) Sampling Bias: The stochastic nature of the approach introduces sampling bias, which can affect the generalization performance of the algorithm. The sampled subset may not be representative of the entire dataset, leading to suboptimal performance on unseen data.
To address this limitation, future research can focus on implementing advanced optimization techniques that strike a balance between exploration and exploitation. Techniques such as adaptive learning rates, dynamic exploration probabilities, or guided search strategies can be explored to optimize the trade-off between stochasticity and convergence.
An additional noteworthy limitation of the approach is its computational cost, which can be up to approximately \(N_{iter}|C|^{2}\) times more expensive than FC (as discussed in Section 3.3). Nonetheless, the impact of this cost on the overall efficiency can be moderated by utilizing multiprocessing techniques. Moreover, the implementation of the proposed approach is not impractical, as it is relatively easy to implement. The algorithms and techniques used in the approach, such as the SSFs and hierarchical divisive clustering, can be implemented using standard programming libraries and tools. The source code[2] and detailed explanations provided in the study's materials can further aid in the implementation process. This simplicity in implementation makes the approach accessible and user-friendly for researchers and practitioners interested in exploring HC methods.
### Future Works
The proposed hierarchical divisive clustering approach with SSFs offers a promising framework for improving classification performance in multi-class time series datasets. However, there are several avenues for future exploration and improvement:
Improving Efficacy, Generalizability, and Efficiency:Exploring additional dataset features and metrics, as well as novel splitting functions, could provide deeper insights into their impact on clustering performance. Incorporating a wider range of classifiers specifically tailored for time series data and exploring ensemble techniques could enhance the generalizability of the approach. Efforts to reduce computational cost through more efficient algorithms and parallelization techniques will be essential to scale the approach to larger datasets and make it practically applicable in various domains.
Exploring Different Hierarchy Structures:Currently, the approach follows a Local Classifier per Node (LCPN) configuration, but it can be adapted to different hierarchy setups, such as a global approach. However, incorporating a global approach for analyzing time series data comes with challenges, especially when adapting traditional algorithms not explicitly designed for hierarchical structures. Exploring decision trees or uncertainty forests as base classifiers may be potential solutions to effectively handle class hierarchies and temporal dependencies in time series data.
Applying the Approach to Different Domains:The proposed approach can be extended to various domains and applications beyond time series data. Comparing its performance with state-of-the-art hierarchical clustering techniques in different domains will provide valuable insights into its effectiveness and competitiveness.
In summary, future research should focus on refining the efficacy, generalizability, and efficiency of the proposed approach by exploring new features, classifiers, and optimization techniques. Additionally, investigating different hierarchy structures and adapting the approach to various domains will further expand its applicability and potential impact in real-world scenarios. With continuous efforts and advancements, the proposed approach can become a valuable tool for hierarchical clustering and classification tasks in diverse fields.
## 6 Conclusions
In this study, a novel approach was presented for enhancing HC by generating the hierarchy of classes from a given set of flat classes, without requiring explicit hierarchical information. The proposed method employed a hierarchical divisive clustering approach with SSFs, which systematically divided the set of classes into two subsets based on their similarity in terms of discriminability by the classifier. This allowed for the construction of a binary tree representation of hierarchical classes that maximized the classification score between the formed groups.
The efficacy of the approach was evaluated on a diverse collection of 46 multi-class datasets from the UCR archive. Two popular classifiers, svm and rocket, and three SSFs, potr, srtr, and lsoo, were utilized to assess the impact on HC performance.
The results demonstrated that the proposed method significantly improved HC performance, particularly when rocket was used as the classifier. This finding underscored the importance of effectively leveraging the hierarchical structure when dealing with complex and diverse datasets. Furthermore, the use of the flat CV scheme led to more improvements compared to the nested CV scheme, indicating the potential of the approach in optimizing hierarchical classifiers.
A key advantage of the approach was its ability to generate the hierarchy of classes when explicit hierarchical information was not available. By automatically partitioning the classes into two subsets based on their similarity, the approach offered a systematic and efficient solution for constructing the hierarchical tree.
Additionally, the concepts of BFC and BFD were introduced to characterize the balance within the hierarchical tree structure, providing valuable insights for further analysis.
The study also investigated the relationship between dataset features and the improvement in HC. While the number of classes and FC score were found to be significant factors for svm and rocket, respectively, some variations in the results were observed depending on the choice of splitting function.
In conclusion, the proposed hierarchical divisive clustering approach with SSFs offers an effective strategy for enhancing HC, particularly in cases where explicit hierarchy information is not provided. By optimizing the hierarchical tree structure and effectively utilizing the advantages of the hierarchical organization, the approach shows promising potential in improving classification performance. Future research can further explore different splitting functions and classifiers to achieve even better performance in HC tasks.
|
2306.00033 | Sign-Balanced Pattern-Avoiding Permutation Classes | A set of permutations is called sign-balanced if the set contains the same
number of even permutations as odd permutations. Let $S_n(\sigma_1, \sigma_2,
\ldots, \sigma_r)$ be the set of permutations in the symmetric group $S_n$
which avoids patterns $\sigma_1, \sigma_2, \ldots, \sigma_r$. The aim of this
paper is to investigate when, for certain patterns $\sigma_1, \sigma_2, \ldots,
\sigma_r$, $S_n(\sigma_1, \sigma_2, \ldots, \sigma_r)$ is sign-balanced for
every integer $n>1$. We prove that for any $\{\sigma_1, \sigma_2, \ldots,
\sigma_r\}\subseteq S_3$, if $\{\sigma_1, \sigma_2, \ldots, \sigma_r\}$ is
sign-balanced except $\{132, 213, 231, 312\}$, then $S_n(\sigma_1, \sigma_2,
\ldots, \sigma_r)$ is sign-balanced for every integer $n>1$. In addition, we
give some results in the case of avoiding some patterns of length $4$. | Junyao Pan, Pengfei Guo | 2023-05-31T07:32:14Z | http://arxiv.org/abs/2306.00033v1 | # Sign-Balanced Pattern-Avoiding Permutation Classes
# Sign-Balanced Pattern-Avoiding Permutation Classes
**Junyao Pan\({}^{1}\), Pengfei Guo\({}^{2,}\)1**
Footnote 1: Corresponding author.
E-mail addresses: [email protected] (Pengfei Guo), Junyao\({}_{-}\)[email protected] (Junyao Pan).
1. School of Sciences, Wuxi University, Wuxi, 214105, P. R. China
2. School of Mathematics and Statistics, Hainan Normal University, Haikou 571158, P. R. China
**Abstract:** A set of permutations is called sign-balanced if the set contains the same number of even permutations as odd permutations. Let \(S_{n}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})\) be the set of permutations in the symmetric group \(S_{n}\) which avoids patterns \(\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\). The aim of this paper is to investigate when, for certain patterns \(\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\), \(S_{n}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})\) is sign-balanced for every integer \(n>1\). We prove that for any \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\}\subseteq S_{3}\), if \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\}\) is sign-balanced except \(\{132,213,231,312\}\), then \(S_{n}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})\) is sign-balanced for every integer \(n>1\). In addition, we give some results in the case of avoiding some patterns of length \(4\).
**Keywords:** Permutation, Sign-balanced, Symmetric group, Avoid patterns.
**Mathematics Subject Classification (2020):** 05A05, 06A07.
## 1 Introduction
In this paper, \(S_{n}\) always denotes the symmetric group of degree \(n\), its identity element is denoted by \(id_{n}\), and the binomial coefficient is denoted by \(C_{n}^{k}\).
Fix a permutation \(\sigma=\sigma_{1}\sigma_{2}\cdots\sigma_{k}\in S_{k}\) and let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\in S_{n}\) with \(k\leq n\). If there exists a subset of indices \(1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n\) such that \(\pi_{i_{s}}>\pi_{i_{t}}\) if and only if \(\sigma_{s}>\sigma_{t}\) for any \(1\leq s<t\leq k\), then we call that \(\pi\) contains _the pattern_\(\sigma\), and the subsequence \(\pi_{i_{1}}\pi_{i_{2}}\cdots\pi_{i_{k}}\) is called an _occurrence_ of \(\sigma\) in \(\pi\), and expressed by \(\sigma\leq\pi\). Otherwise, we call that \(\pi\)_avoids_\(\sigma\). For instance, \(132\leq 24153\) since \(253\) is an occurrence of \(132\) in \(24153\), and \(53412\) avoids \(132\). In fact, the investigation of pattern avoidance in permutations began in 1968, when Knuth [9] introduced a stack-sorting machine and showed that a permutation can be sorted to the identity permutation using this machine if and only if it avoids the pattern \(231\). Henceforth, pattern avoidance in permutations has been studied by many scholars, for some details see [2, 3, 4, 5, 11, 14, 15].
Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\) be a permutation in \(S_{n}\). A pair of indices \(i<j\) forms an _inversion_ in the permutation \(\pi\) if \(\pi_{i}>\pi_{j}\), otherwise \(i<j\) forms a _noninversion_ in the permutation \(\pi\). Additionally, we denote by \(\tau(\pi)\) (\(\theta(\pi)\)) the number of inversions (noninversions) in \(\pi\). If \(\tau(\pi)\) is an even number, then we say that \(\pi\) is an _even permutation_, and otherwise \(\pi\) is an _odd permutation_. In addition, we say that a set of permutations is _sign-balanced_ if the number of even permutations equals the number of odd permutations in this set. Let \(S_{n}(\sigma)\) denote the set of permutations in \(S_{n}\) that avoids pattern \(\sigma\). Simion and Schmidt [13] proved that \(S_{n}(321)\) is sign-balanced if \(n\) is even, and the number of even permutations in \(S_{n}(321)\) exceeds the number of odd permutations by the Catalan number \(C_{\frac{1}{2}(n-1)}\) if \(n\) is odd. Afterwards, the sign-balance property of permutation class that avoids one pattern was studied under various conditions in [7, 8, 12, 14], and further the sign-balance of permutation class with respect to certain statistics was studied in [1, 10].
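These definitions are easy to verify computationally for small \(n\); the following brute-force script (an illustration, not part of the paper) counts the even and odd permutations in \(S_{n}(\sigma_{1},\ldots,\sigma_{r})\).

```python
# Brute-force check of sign-balance for pattern-avoiding permutation classes.
from itertools import combinations, permutations

def contains(pi, sigma):
    """True if pi contains an occurrence of the pattern sigma."""
    k = len(sigma)
    for idx in combinations(range(len(pi)), k):
        sub = [pi[i] for i in idx]
        if all((sub[s] > sub[t]) == (sigma[s] > sigma[t])
               for s in range(k) for t in range(s + 1, k)):
            return True
    return False

def inversions(pi):
    return sum(pi[i] > pi[j] for i in range(len(pi)) for j in range(i + 1, len(pi)))

def even_odd_avoiders(n, patterns):
    even = odd = 0
    for pi in permutations(range(1, n + 1)):
        if any(contains(pi, sigma) for sigma in patterns):
            continue
        if inversions(pi) % 2 == 0:
            even += 1
        else:
            odd += 1
    return even, odd

# e.g. S_5(132, 231) contains equally many even and odd permutations
print(even_odd_avoiders(5, [(1, 3, 2), (2, 3, 1)]))
```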
In this note, we consider the permutation class that avoids several patterns. Let \(S_{n}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})\) be the set of permutations in \(S_{n}\) which avoids all patterns \(\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\). We focus on the following problem and further obtain some results.
**Question 1.1**: _Are there some patterns \(\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\) such that \(S_{n}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})\) is sign-balanced for every integer \(n>1\)?_
**Theorem 1.2**: _Suppose \(\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\in S_{3}\). Then, \(S_{n}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})\) is sign-balanced for every integer \(n>1\) if and only if \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\}\neq\{132\), \(213\), \(231\), \(312\}\) and \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\}\) is sign-balanced._
**Theorem 1.3**: _All of \(S_{n}(1234,3214)\), \(S_{n}(4321,4123)\), \(S_{n}(4321,2341)\), \(S_{n}(1234,1432)\), \(S_{n}(1243,2143)\), \(S_{n}(3421,3412)\), \(S_{n}(4312,3413)\), \(S_{n}(1423,1432)\), \(S_{n}(3241,2341)\), \(S_{n}(4132,4123)\) and \(S_{n}(2314,3214)\) are sign-balanced for every integer \(n>1\)._
## 2 Preliminaries
Recall some notions and notations. Consider \(\sigma\in S_{l}\) and \(\pi\in S_{m}\). The _direct sum_ of \(\sigma\) and \(\pi\) is denoted by \(\sigma\oplus\pi\), that is,
\[\sigma\oplus\pi(i)=\begin{cases}\sigma(i),&\text{if $1\leq i\leq l$};\\ \pi(i)+l,&\text{if $l+1\leq i\leq l+m$}.\end{cases}\]
and the _skew sum_ of \(\sigma\) and \(\pi\) is denoted by \(\sigma\ominus\pi\), that is,
\[\sigma\ominus\pi(i)=\begin{cases}\sigma(i)+m,&\text{if $1\leq i\leq l$};\\ \pi(i-l),&\text{if $l+1\leq i\leq l+m$}.\end{cases}\]
Moreover, for any \(\sigma\in S_{n}\), its reversal \(\overline{\sigma}\) is given by \(\overline{\sigma}(i)=\sigma(n+1-i)\); its complement \(\sigma^{*}\in S_{n}\) is the permutation \(\sigma^{*}(i)=n+1-\sigma(i)\); the inverse \(\sigma^{-1}\) is the usual group
theoretic inverse permutation. Let \(R=\{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\}\) be a set of permutations in \(S_{n}\). We use \(R^{-1}\) and \(R^{*}\) and \(\overline{R}\) to denote \(\{\sigma_{1}^{-1},\sigma_{2}^{-1},\ldots,\sigma_{r}^{-1}\}\) and \(\{\sigma_{1}^{*},\sigma_{2}^{*},\ldots,\sigma_{r}^{*}\}\) and \(\{\overline{\sigma_{1}},\overline{\sigma_{2}},\ldots,\overline{\sigma_{r}}\}\) respectively.
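For concreteness, these operations can be realized as small Python helpers on one-line permutations written as tuples with entries \(1,\ldots,n\); they are illustrative only and not part of the paper.

```python
# Direct sum, skew sum, reversal, complement, and inverse of permutations.
def direct_sum(sigma, pi):
    size = len(sigma)
    return tuple(sigma) + tuple(p + size for p in pi)

def skew_sum(sigma, pi):
    size = len(pi)
    return tuple(s + size for s in sigma) + tuple(pi)

def reverse(sigma):
    return tuple(reversed(sigma))

def complement(sigma):
    n = len(sigma)
    return tuple(n + 1 - s for s in sigma)

def inverse(sigma):
    inv = [0] * len(sigma)
    for i, s in enumerate(sigma, start=1):
        inv[s - 1] = i
    return tuple(inv)

# e.g. 132 ⊕ 21 = 13254, 132 ⊖ 21 = 35421, and 231 and 312 are mutual inverses
assert direct_sum((1, 3, 2), (2, 1)) == (1, 3, 2, 5, 4)
assert skew_sum((1, 3, 2), (2, 1)) == (3, 5, 4, 2, 1)
assert inverse((2, 3, 1)) == (3, 1, 2)
```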
Next we give some results which are useful in proving Theorem 1.2 and Theorem 1.3 by inductive method.
**Lemma 2.1**: _Let \(\pi\in S_{n}\) and \(\mu\) be a permutation that is from exchanging the numbers in two different positions of \(\pi\). Then the parity of \(\pi\) and \(\mu\) is opposite._
**Proof** It is well-known that the parity of \(\tau(\pi)\) and \(\tau(\mu)\) is opposite. Thus, the parity of \(\pi\) and \(\mu\) is opposite. \(\square\)
**Lemma 2.2**: _Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{n}\in S_{n}\) and \(\mu=\pi_{1}\pi_{2}\cdots\pi_{i}(n+1)\pi_{i+1}\cdots\pi_{n}\in S_{n+1}\) for some \(i=0,1,\ldots,n\). In particular, if \(i=0\) then \(\mu=(n+1)\pi_{1}\pi_{2}\cdots\pi_{n}\in S_{n+1}\). If \(n-i\) is an even number, then \(\pi\) and \(\mu\) have the same parity; if \(n-i\) is an odd number, then \(\pi\) and \(\mu\) have the opposite parity._
**Proof** Clearly, \(\tau(\mu)=\tau(\pi)+n-i\). Thus, if \(n-i\) is an even number, then \(\pi\) and \(\mu\) have the same parity; if \(n-i\) is an odd number, then \(\pi\) and \(\mu\) have the opposite parity. \(\square\)
**Lemma 2.3**: _Let \(R\) be a set of permutations. Then \(S_{n}(R^{*})=S_{n}^{*}(R)\) and \(S_{n}(\overline{R})=\overline{S_{n}(R)}\)._
**Proof** Let \(R=\{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\}\) and \(\pi\in S_{n}^{*}(R)\). If \(\pi\notin S_{n}(R^{*})\), then \(\pi\) contains \(\sigma_{i}^{*}\) that is in \(R^{*}\). Thus, \(\pi^{*}\) contains \(\sigma_{i}\), in other words, \(\pi^{*}\notin S_{n}(R)\). However, \(\pi\in S_{n}^{*}(R)\) means that \(\pi^{*}\in S_{n}(R)\), a contradiction. Therefore, \(S_{n}^{*}(R)\subseteq S_{n}(R^{*})\). Since \(R=(R^{*})^{*}\), it follows that \(S_{n}^{*}(R^{*})\subseteq S_{n}(R)\), and which implies that \(S_{n}(R^{*})\subseteq S_{n}^{*}(R)\). Hence, \(S_{n}(R^{*})=S_{n}^{*}(R)\). Similarly, we can infer that \(S_{n}(\overline{R})=\overline{S_{n}(R)}\). \(\square\)
**Lemma 2.4**: _Let \(\sigma\in S_{n}\). If \(C_{n}^{2}\) is an even number, then \(\sigma^{*}\), \(\overline{\sigma}\) and \(\sigma\) have the same parity; otherwise, the parity of \(\sigma\) is opposite to both of \(\sigma^{*}\) and \(\overline{\sigma}\)._
**Proof** Note that \(\theta(\sigma)+\tau(\sigma)=C_{n}^{2}\) and \(\theta(\sigma)=\tau(\sigma^{*})=\tau(\overline{\sigma})\). Thus, if \(C_{n}^{2}\) is an even number, then \(\tau(\sigma^{*})\), \(\tau(\overline{\sigma})\) and \(\tau(\sigma)\) have the same parity, and otherwise the parity of \(\tau(\sigma)\) is opposite to both of \(\tau(\sigma^{*})\) and \(\tau(\overline{\sigma})\), as desired. \(\square\)
**Corollary 2.5**: _Let \(R\) be a set of permutations such that \(S_{n}(R)\) is sign-balanced. Then \(S_{n}(R^{*})\) and \(S_{n}(\overline{R})\) are sign-balanced._
**Proof** Let \(\Omega_{+}=\{\pi\in S_{n}(R):\pi\) is an even permutation\(\}\) and \(\Omega_{-}=\{\pi\in S_{n}(R):\pi\) is an odd permutation\(\}\). Since \(S_{n}(R)\) is sign-balanced, we deduce that \(|\Omega_{+}|=|\Omega_{-}|\). Applying Lemma 2.3, it follows that \(S_{n}(R^{*})=\Omega_{+}^{*}\cup\Omega_{-}^{*}\). Set \(\Delta_{+}=\{\pi\in S_{n}(R^{*}):\pi\) is an even permutation\(\}\) and \(\Delta_{-}=\{\pi\in S_{n}(R^{*}):\pi\) is an odd permutation\(\}\). Then by Lemma 2.4, we see that \(\Delta_{+}=\Omega_{+}^{*}\) and \(\Delta_{-}=\Omega_{-}^{*}\) if \(C_{n}^{2}\) is an even number, and \(\Delta_{-}=\Omega_{+}^{*}\) and \(\Delta_{+}=\Omega_{-}^{*}\) if \(C_{n}^{2}\) is an odd number. In either case, \(|\Delta_{+}|=|\Delta_{-}|\) holds. Similarly, we can deduce that \(S_{n}(\overline{R})\) is also sign-balanced. \(\square\)
**Lemma 2.6**: _Let \(\sigma\in S_{l}\) and \(\pi\in S_{m}\). If \(ml\) is an even number, then \(\tau(\sigma\ominus\pi)\) and \(\tau(\sigma)+\tau(\pi)\) have the same parity; if \(ml\) is an odd number, then the parity of \(\tau(\sigma\ominus\pi)\) is opposite to that of \(\tau(\sigma)+\tau(\pi)\)._
**Proof** The lemma follows from the fact that \(\tau(\sigma\ominus\pi)=\tau(\sigma)+\tau(\pi)+ml\). \(\Box\)
**Lemma 2.7**: [13, Proposition 17] If \(R\subseteq S_{3}\) and \(|R|=4\), then
(a) \(|S_{n}(R)|=0\) if \(R\supset\{123,321\}\) and \(n\geq 5\);
(b) \(|S_{n}(R)|=2\) if \(R\not\supset\{123,321\}\), \(n\geq 2\).
More precisely, the permutations counted in (b) are the appropriate two, depending on \(R\), from among the identity, \(23\cdots n1\), \(n(n-1)\cdots 312\), their reversals and complements. For the values of \(n\) omitted in part (a), we have: \(|S_{1}(R)|=1\), \(|S_{2}(R)|=2\), \(|S_{4}(R)|=0\) except for \(|S_{4}(123,321,132,213)|=|S_{4}(123,321,231,312)|=1\).
**Lemma 2.8**: [6, Erdos-Szekeres Theorem] Any sequence of \(mk+1\) distinct real numbers contains either an increasing subsequence of \(m+1\) terms or a decreasing subsequence of \(k+1\) terms.
## 3 Main results
We first consider Question 1.1 under the assumption of \(\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\in S_{3}\). In this case, we see that if \(S_{n}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})\) is sign-balanced for every integer \(n>1\), then \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\}\) is sign-balanced because \(S_{3}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})=S_{3}\setminus\{\sigma_{1}, \sigma_{2},\ldots,\sigma_{r}\}\). Thus for Theorem 1.2, it suffices to check that \(S_{n}(\sigma_{1},\sigma_{2},\ldots,\sigma_{r})\) is sign-balanced for every integer \(n>1\) if \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\}\neq\{132,213,231,312\}\) and \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\}\) is sign-balanced, and further \(S_{n}(132,213,231,312)\) is not sign-balanced for some integer \(n>1\). Now we start from the case that \(r=2\).
**Proposition 3.1**: _All of \(S_{n}(132,231)\), \(S_{n}(312,213)\), \(S_{n}(132,312)\) and \(S_{n}(231,213)\) are sign-balanced for every integer \(n>1\)._
**Proof** Since \(132\) is an odd permutation while \(231\) is an even permutation, we infer that \(S_{n}(132,231)\) is sign-balanced for \(n=2,3\). Next we prove this proposition by induction on \(n\). Assume that \(S_{n}(132,231)\) is sign-balanced for \(n=k>1\). Consider \(\pi=\pi_{1}\pi_{2}\cdots\pi_{k}\pi_{k+1}\in S_{k+1}\). Note that if \(\pi\in S_{k+1}(132,231)\), then \(\pi_{1}=k+1\) or \(\pi_{k+1}=k+1\), otherwise \(\pi\) contains either \(132\) or \(231\). Suppose \(\pi_{1}=k+1\). It is clear that \(\pi\in S_{k+1}(132,231)\) if and only if \(\pi^{\prime}=\pi_{2}\pi_{3}\cdots\pi_{k+1}\in S_{k}(132,231)\). Also, if \(\pi_{k+1}=k+1\), then \(\pi\in S_{k+1}(132,231)\) if and only if \(\pi^{\prime\prime}=\pi_{1}\pi_{2}\cdots\pi_{k}\in S_{k}(132,231)\). Therefore, we deduce that
\[S_{k+1}(132,231)=\{1\ominus\mu|\mu\in S_{k}(132,231)\}\cup\{\mu\oplus 1|\mu\in S _{k}(132,231)\}.\]
Let \(\Omega_{+}=\{1\ominus\mu|\mu\in S_{k}(132,231),\mu\) is an even permutation\(\}\) and \(\Omega_{-}=\{1\ominus\mu|\mu\in S_{k}(132,231),\mu\) is an odd permutation\(\}\). Then by Lemma 2.2 we infer that if \(k\) is even, \(\Omega_{+}\) is an even permutation set and \(\Omega_{-}\) is an odd permutation set; and if \(k\) is
odd, \(\Omega_{+}\) is an odd permutation set and \(\Omega_{-}\) is an even permutation set. Applying the inductive hypothesis, we deduce that \(\{1\ominus\mu|\mu\in S_{k}(132,231)\}\) is sign-balanced no matter what the parity of \(k\) is. Similarly, we see that \(\{\mu\oplus 1|\mu\in S_{k}(132,231)\}\) is sign-balanced. Therefore, \(S_{k+1}(132,231)\) is sign-balanced, and so \(S_{n}(132,231)\) is sign-balanced for every integer \(n>1\).
Note \(\{312,213\}=\{132^{*},231^{*}\}\). It follows from Corollary 2.5 that \(S_{n}(312,213)\) is sign-balanced for every integer \(n>1\). Additionally, we see that \(\{132,312\}=\{132^{-1},231^{-1}\}\) and \(\{231,213\}=\{312^{-1},213^{-1}\}\). Using a result in [13, Lemma 1], we deduce that \(S_{n}(132,312)=S_{n}^{-1}(132,231)\) and \(S_{n}(231,213)=S_{n}^{-1}(312,213)\). Since the permutations \(\sigma\) and \(\sigma^{-1}\) have the same parity, it follows that \(S_{n}(132,312)\) and \(S_{n}(231,213)\) are sign-balanced for every integer \(n>1\). \(\Box\)
**Proposition 3.2**: _All of \(S_{n}(123,213)\), \(S_{n}(321,231)\), \(S_{n}(123,132)\) and \(S_{n}(321,312)\) are sign-balanced for every integer \(n>1\)._
**Proof** It is clear that \(S_{n}(123,213)\) is sign-balanced for \(n=2,3\) because \(123\) is an even permutation while \(213\) is an odd permutation. By induction on \(n\), we assume that \(S_{n}(123,213)\) is sign-balanced for \(n=k>1\). Consider \(\pi=\pi_{1}\pi_{2}\cdots\pi_{k}\pi_{k+1}\in S_{k+1}\). We see that if \(\pi\in S_{k+1}(123,213)\) then \(\pi_{1}=k+1\) or \(\pi_{2}=k+1\), otherwise \(\pi\) contains either \(123\) or \(213\). In addition, it is straightforward to show that if \(\pi_{1}=k+1\) then \(\pi\in S_{k+1}(123,213)\) if and only if \(\pi^{\prime}=\pi_{2}\pi_{3}\cdots\pi_{k+1}\in S_{k}(123,213)\), and if \(\pi_{2}=k+1\) then \(\pi\in S_{k+1}(123,213)\) if and only if \(\pi^{\prime\prime}=\pi_{1}\pi_{3}\cdots\pi_{k+1}\in S_{k}(123,213)\). Proceeding as in the proof of Proposition 3.1, we deduce that \(S_{k+1}(123,213)\) is sign-balanced from Lemma 2.2 and the inductive assumption. Therefore, \(S_{n}(123,213)\) is sign-balanced for every integer \(n>1\).
In addition, we see that \(\{321,231\}=\{123^{*},213^{*}\}\), \(\{123,132\}=\{321^{-1},231^{-1}\}\) and \(\{321,312\}=\{123^{-1},213^{-1}\}\). An argument similar to the one used in the proof of Proposition 3.1 shows that all of \(S_{n}(321,231)\), \(S_{n}(123,132)\) and \(S_{n}(321,312)\) are sign-balanced for every integer \(n>1\). \(\Box\)
**Proposition 3.3**: \(S_{n}(123,321)\) _is sign-balanced for every integer \(n>1\)._
**Proof** It is obvious that \(321\) is an odd permutation while \(123\) is an even permutation, and thus \(S_{n}(123,321)\) is sign-balanced for \(n=2,3\). Additionally, it follows from Lemma 2.8 that it suffices to consider \(S_{4}(123,321)\). One easily checks that \(S_{4}(123,321)\cap A_{4}=\{2143,3412\}\) and \(S_{4}(123,321)\cap(S_{4}\setminus A_{4})=\{3142,2413\}\), where \(A_{4}\) is the alternating group of degree \(4\). This completes the proof. \(\Box\)
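The parity split used in this proof is small enough to check mechanically. The following Python sketch (the helper functions `contains` and `sign` are ours, written only for this check) enumerates \(S_{4}(123,321)\) and prints the signs of its four elements.

```python
# Brute-force check (sketch) of the parity split in the proof of Proposition 3.3.
from itertools import permutations, combinations


def contains(perm, pattern):
    """Return True if perm contains an occurrence of the given pattern."""
    k = len(pattern)
    return any(all((perm[idx[a]] < perm[idx[b]]) == (pattern[a] < pattern[b])
                   for a in range(k) for b in range(k))
               for idx in combinations(range(len(perm)), k))


def sign(perm):
    """Sign of a permutation, computed from its number of inversions."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1


avoiders = [p for p in permutations(range(1, 5))
            if not contains(p, (1, 2, 3)) and not contains(p, (3, 2, 1))]
for p in sorted(avoiders):
    print(p, sign(p))   # 2143 and 3412 are even, 3142 and 2413 are odd
```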
So far, we have verified all cases of \(S_{n}(\sigma_{1},\sigma_{2})\) under the assumption that \(\sigma_{1},\sigma_{2}\in S_{3}\). Secondly, we consider \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\) in the case when \(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\in S_{3}\), and obtain the following result.
**Proposition 3.4**: _Let \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\subseteq S_{3}\). Then, \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\) is sign-balanced for every integer \(n>1\) if and only if \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\neq\{132,213,231,312\}\) and \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\) is sign-balanced._
**Proof** If \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}=\{132,213,231,312\}\), then \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\not\supset\{123,321\}\) and further \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\cap\{123,321\}=\emptyset\). Applying Lemma 2.7, we deduce that \(S_{n}(132,213,231,312)=\{12\cdots n,n\cdots 21\}\). However, Lemma 2.4 shows that \(12\cdots n\) and \(n\cdots 21\) have the same parity if \(C_{n}^{2}\) is an even number, and so \(S_{n}(132,213,231,312)\) is not sign-balanced when \(C_{n}^{2}\) is an even number.
Assume that \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\cap\{123,321\}\neq\emptyset\) and \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\) is sign-balanced. It suffices to prove that \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\) is sign-balanced for every integer \(n>1\). It is obvious that \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\) is sign-balanced when \(n=2,3\). Next we consider \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\) for \(n>3\).
Consider \(|\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\cap\{123,321\}|=2\). Namely \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\supset\{123,321\}\). According to Lemma 2.7, we deduce that \(|S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})|=0\) for every integer \(n\geq 5\). In addition, Lemma 2.7 shows that \(|S_{4}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})|\neq 0\) if and only if \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}=\{123,321,132,213\}\) or \(\{123,321,231,312\}\). However, we see that \(\{123,321,132,213\}\) and \(\{123,321,231,312\}\) are not sign-balanced. Thus, we deduce that \(|S_{4}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})|=0\). Therefore, \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\) is sign-balanced for every integer \(n>1\).
Consider the case that \(|\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\cap\{123,321\}|=1\). Note \(321=123^{*}\). By Corollary 2.5, we can assume that \(123\notin\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\) and \(321\in\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\). Since \(\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\) is sign-balanced, it follows that either \(123,132\notin\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\) or \(123,213\notin\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\). If \(123,132\notin\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\), then \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})=\{1234\cdots n,1234\cdots(n -2)n(n-1)\}\) by Lemma 2.7. Clearly, \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\) is sign-balanced in this case. If \(123,213\notin\{\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4}\}\), then \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})=\{1234\cdots n,2134\cdots n\}\) by Lemma 2.7. In this case, \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\) is also sign-balanced. The proof of this proposition is completed. \(\Box\)
**Proof of Theorem 1.2** Consider \(S_{n}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{5},\sigma_{6})=S_{n} (S_{3})\). It is obvious that \(S_{n}(S_{3})\) is sign-balanced for every integer \(n>1\) because \(S_{n}(S_{3})=\emptyset\) when \(n\geq 3\). So we have solved Question 1.1 under the assumption of \(\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\in S_{3}\). At the same time, we derive Theorem 1.2 immediately by Propositions 3.1-3.4. \(\Box\)
Thirdly, we solve Question 1.1 in the case when \(\sigma_{1},\sigma_{2},\ldots,\sigma_{r}\in S_{4}\), and obtain the following results.
**Proposition 3.5**: _Let \(R=\{\{1234,3214\},\{4321,4123\},\{4321,2341\},\{1234,1432\}\}\). Then for any \(\{\sigma_{1},\sigma_{2}\}\in R\), \(S_{n}(\sigma_{1},\sigma_{2})\) is sign-balanced for every integer \(n>1\)._
**Proof** Note that \(\{4321,4123\}=\{\overline{1234},\overline{3214}\}\), \(\{4321,2341\}=\{1234^{*},3214^{*}\}\) and \(\{1234,\ 1432\}=\{\overline{4321},\overline{2341}\}\). Then by Corollary 2.5, it suffices to prove that \(S_{n}(1234,3214)\) is sign-balanced for every integer \(n>1\). Since \(1234\) is an even permutation while \(3214\) is an odd permutation, it follows that \(S_{n}(1234,3214)\) is sign-balanced for \(n=2,3,4\). Assume that \(S_{n}(1234,3214)\) is sign-balanced for \(1<n\leq k\) with \(k>3\). Let \(X^{(i)}=\{\pi\in S_{k+1}(1234,3214):\pi(i)=k+1\}\). It is clear that \(S_{k+1}(1234,3214)\) is the disjoint union \(\bigcup_{i=1}^{k+1}X^{(i)}\). It follows from Lemma 2.8 that
\(X^{(i)}=\emptyset\) if \(i>5\). Therefore, we deduce that
\[S_{k+1}(1234,3214)=\bigcup_{i=1}^{5}X^{(i)}.\]
Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in S_{k+1}\) with \(\pi_{i}=k+1\) and \(1\leq i\leq 3\). Clearly, \(\pi\in X^{(i)}\) if and only if \(\pi^{\prime}=\pi_{1}\pi_{2}\cdots\pi_{i-1}\pi_{i+1}\cdots\pi_{k+1}\in S_{k}(1234,3214)\). In particular, if \(i=1\) then \(\pi^{\prime}=\pi_{2}\pi_{3}\cdots\pi_{k+1}\). Proceeding as in the proof of Proposition 3.1, we deduce that all of \(X^{(1)},X^{(2)}\) and \(X^{(3)}\) are sign-balanced by Lemma 2.2 and the inductive assumption. Next we prove that \(X^{(4)}\) and \(X^{(5)}\) are also sign-balanced.
Consider \(\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in S_{k+1}\) with \(\pi_{4}=k+1\). Let \(\pi^{\prime}=\pi_{1}\pi_{2}\pi_{3}\pi_{5}\cdots\pi_{k+1}\). Clearly, if \(\pi\in X^{(4)}\) then \(\pi^{\prime}\in S_{k}(1234,3214)\). For convenience, we set
\[A^{(4)}=\{\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in S_{k+1}:\pi_{4}=k+1,\pi_{1}\pi_ {2}\pi_{3}\pi_{5}\cdots\pi_{k+1}\in S_{k}(1234,3214)\}.\]
Proceeding as in the proof of Proposition 3.1, we deduce that \(A^{(4)}\) is sign-balanced by Lemma 2.2 and the inductive assumption. Suppose that \(\pi\notin X^{(4)}\) and \(\pi^{\prime}\in S_{k}(1234,3214)\). It is simple to see that \(\pi_{1}\pi_{2}\pi_{3}\pi_{4}\) is an occurrence of \(1234\) or \(3214\). Moreover, if \(\pi_{1}\pi_{2}\pi_{3}\pi_{4}\) is an occurrence of \(1234\), then \(\pi_{1}<\pi_{2}<\pi_{3}=k\) otherwise \(\pi^{\prime}\) contains \(1234\); if \(\pi_{1}\pi_{2}\pi_{3}\pi_{4}\) is an occurrence of \(3214\), then \(\pi_{1}=k>\pi_{2}>\pi_{3}\) otherwise \(\pi^{\prime}\) contains an occurrence of \(3214\). Indeed, if \(\pi^{\prime}\in S_{k}(1234,3214)\), then \(\pi\notin X^{(4)}\) if and only if \(\pi_{1}<\pi_{2}<\pi_{3}=k\) or \(\pi_{1}=k>\pi_{2}>\pi_{3}\). Let \(B^{(4)}=\{\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in A^{(4)}:\pi_{1}=k>\pi_{2}>\pi_{3}\}\) and \(C^{(4)}=\{\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in A^{(4)}:\pi_{1}<\pi_{2}<\pi_{3}=k\}\). Thus, we have
\[X^{(4)}=A^{(4)}\setminus(B^{(4)}\cup C^{(4)}).\]
Consider \(B^{(4)}\cup C^{(4)}\). We observe that \(B^{(4)}=\big{\{}\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in S_{k+1}:\pi_{1}=k,\pi_{2} >\pi_{3},\pi_{4}=k+1,\pi_{2}\pi_{3}\pi_{5}\cdots\pi_{k+1}\in S_{k-1}(1234,3214) \big{\}}\) and \(C^{(4)}=\big{\{}\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}:\pi_{1}<\pi_{2},\pi_{3}=k, \pi_{4}=k+1,\pi_{1}\pi_{2}\pi_{5}\cdots\pi_{k+1}\in S_{k-1}(1234,3214)\big{\}}\). Since \(k-1\) and \(k-3\) have the same parity, it follows that \(B^{(4)}\cup C^{(4)}\) is sign-balanced from Lemma 2.2 and the inductive assumption. Therefore, \(X^{(4)}\) is sign-balanced.
Suppose that \(\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in X^{(5)}\) with \(\pi_{5}=k+1\). Clearly, \(\pi_{1}\pi_{2}\pi_{3}\pi_{4}\) avoids \(123\) and \(321\). Therefore, we deduce that either \(\pi_{1}>\pi_{2},\pi_{3}>\pi_{4},\pi_{3}>\pi_{1},\pi_{4}>\pi_{2}\) or \(\pi_{1}<\pi_{2},\pi_{3}<\pi_{4},\pi_{3}<\pi_{1},\pi_{4}<\pi_{2}\), otherwise \(\pi_{1}\pi_{2}\pi_{3}\pi_{4}\) contains an occurrence of \(123\) or \(321\). Moreover, for any \(\pi=\pi_{1}\pi_{2}\pi_{3}\pi_{4}\pi_{5}\cdots\pi_{k+1}\in X^{(5)}\), we observe that \(\pi_{1}\pi_{3}\pi_{2}\pi_{4}\pi_{5}\cdots\pi_{k+1}\in X^{(5)}\). In other words, exchanging the entries in positions \(2\) and \(3\) is a bijection from \(X^{(5)}\) to itself, and the parity of the image is opposite to that of the preimage under this bijection. Thus, \(X^{(5)}\) is sign-balanced. According to the above arguments, it follows that \(S_{k+1}(1234,3214)\) is sign-balanced. The proof of this proposition is completed. \(\Box\)
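Beyond the base cases \(n=2,3,4\), the statement of Proposition 3.5 can also be checked by direct enumeration for small \(n\); the following Python sketch (helper functions are ours) does this for \(n=5,6\).

```python
# Sanity check (sketch): count even and odd elements of S_n(1234, 3214) for small n.
from itertools import permutations, combinations


def contains(perm, pattern):
    k = len(pattern)
    return any(all((perm[idx[a]] < perm[idx[b]]) == (pattern[a] < pattern[b])
                   for a in range(k) for b in range(k))
               for idx in combinations(range(len(perm)), k))


def is_even(perm):
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return inv % 2 == 0


for n in (5, 6):
    avoiders = [p for p in permutations(range(1, n + 1))
                if not contains(p, (1, 2, 3, 4)) and not contains(p, (3, 2, 1, 4))]
    even = sum(1 for p in avoiders if is_even(p))
    # by Proposition 3.5 the two counts should coincide
    print(n, even, len(avoiders) - even)
```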
**Proposition 3.6**: _Let \(R=\{\{1243,2143\},\{3421,3412\},\{4312,3412\},\{2134,2143\}\}\). Then for any \(\{\sigma_{1},\sigma_{2}\}\in R\), \(S_{n}(\sigma_{1},\sigma_{2})\) is sign-balanced for every integer \(n>1\)._
**Proof** Note that \(\{3421,3412\}=\{\overline{1243},\overline{2143}\}\), \(\{4312,3412\}=\{1243^{*},2143^{*}\}\) and \(\{2134,2143\}=\{3421^{*},3412^{*}\}\). According to Corollary 2.5, it suffices to confirm
that \(S_{n}(1243,2143)\) is sign-balanced for every integer \(n>1\). Since \(2143\) is an even permutation while \(1243\) is an odd permutation, we have that \(S_{n}(1243,2143)\) is sign-balanced for \(n=2,3,4\). By induction on \(n\), we assume that \(S_{n}(1243,2143)\) is sign-balanced for \(1<n\leq k\) with \(k>3\). Now we consider \(S_{k+1}(1243,2143)\). Let \(X^{(i)}=\{\pi\in S_{k+1}(1243,2143):\pi(i)=k+1\}\). It is easy to see that \(S_{k+1}(1243,2143)\) is the disjoint union \(\bigcup_{i=1}^{k+1}X^{(i)}\). We claim that \(X^{(i)}\) is sign-balanced for \(i=1,2,...,k+1\).
Consider \(\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in S_{k+1}\) with \(\pi_{1}=k+1\). Note that \(\pi\in X^{(1)}\) if and only if \(\pi_{2}\cdots\pi_{k+1}\in S_{k}(1243,2143)\). By Lemma 2.2 and the inductive assumption, we obtain that \(X^{(1)}\) is sign-balanced. Similarly, we can infer that \(X^{(2)}\) and \(X^{(k+1)}\) are sign-balanced.
Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in X^{(i)}\) with \(2<i<k+1\). Then there is at most one number in \(\{\pi_{1},\pi_{2},\ldots,\pi_{i-1}\}\) which is smaller than the biggest number in \(\{\pi_{i+1},\ldots,\pi_{k+1}\}\), otherwise \(\pi\) contains an occurrence of \(1243\) or \(2143\). Thus, \(\{\pi_{1},\pi_{2},...,\pi_{i-1}\}\) is either \(\{k-i+2,k-i+3,...,k\}\) or \(\{m,k-i+3,...,k\}\) for some \(m\in\{1,2,...,k-i+1\}\). In the case that \(\{\pi_{1},\pi_{2},...,\pi_{i-1}\}=\{k-i+2,k-i+3,...,k\}\), set
\[\Omega_{1}=\big{\{}\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in X^{(i)}:\{\pi_{1}, \pi_{2},...,\pi_{i-1}\}=\{k-i+2,k-i+3,...,k\}\big{\}}.\]
We see that \(\Omega_{1}=\{(\sigma\oplus 1)\ominus\rho:\sigma\in S_{i-1}(1243,2143),\rho\in S _{k-i+1}(1243,2143)\}\). Proceeding as in the proof of Proposition 3.1, we deduce that \(\Omega_{1}\) is sign-balanced by Lemma 2.2, Lemma 2.6 and the inductive assumption. In the case that \(\{\pi_{1},\pi_{2},...,\pi_{i-1}\}=\{m,k-i+3,...,k\}\) for every \(m\in\{1,2,...,k-i+1\}\), we set
\[\Omega^{m}=\big{\{}\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in X^{(i)}:\{\pi_{1},\pi_ {2},...,\pi_{i-1}\}=\{m,k-i+3,...,k\}\big{\}}.\]
Notice that exchanging the entries that are \(k-i+2\) and \(m\) is a bijection from \(\Omega^{m}\) to \(\Omega_{1}\), and further the parity of the image is opposite to that of the preimage under this bijection. Since \(\Omega_{1}\) is sign-balanced, it follows that \(\Omega^{m}\) is sign-balanced. Therefore, \(X^{(i)}\) is sign-balanced by the fact that \(X^{(i)}\) is the disjoint union \(\Omega_{1}\bigcup(\bigcup_{m=1}^{k-i+1}\Omega^{m})\). According to the above arguments, it follows that \(S_{k+1}(1243,2143)\) is sign-balanced. The proof of this proposition is completed. \(\Box\)
**Proposition 3.7**: _Let \(R=\{\{1423,1432\},\{3241,2341\},\{4132,4123\},\{2314,3214\}\}\). Then for any \(\{\sigma_{1},\sigma_{2}\}\in R\), \(S_{n}(\sigma_{1},\sigma_{2})\) is sign-balanced for every integer \(n>1\)._
**Proof** Note that \(\{3241,2341\}=\{\overline{1423},\overline{1432}\}\), \(\{4132,4123\}=\{1423^{*},1432^{*}\}\) and \(\{2314,3214\}=\{3241^{*},2341^{*}\}\). So it suffices to prove that \(S_{n}(1423,1432)\) is sign-balanced for every integer \(n>1\). Since \(1423\) is an even permutation while \(1432\) is an odd permutation, we have that \(S_{n}(1423,1432)\) is sign-balanced for \(n=2,3,4\). By induction on \(n\), we assume that \(S_{n}(1423,1432)\) is sign-balanced for \(1<n\leq k\) with \(k>3\). Now we consider \(S_{k+1}(1423,1432)\). Let \(X^{(i)}=\{\pi\in S_{k+1}(1423,1432):\pi(i)=k+1\}\). It is easy to see that \(S_{k+1}(1423,1432)\) is the disjoint union \(\bigcup_{i=1}^{k+1}X^{(i)}\). We claim that \(X^{(i)}\) is sign-balanced for \(i=1,2,...,k+1\).
Consider \(\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in S_{k+1}\) with \(\pi_{1}=k+1\). Note that \(\pi\in X^{(1)}\) if and only if \(\pi_{2}\cdots\pi_{k+1}\in S_{k}(1423,1432)\). Applying Lemma 2.2 and the inductive
assumption, we obtain that \(X^{(1)}\) is sign-balanced. Similarly, we can infer that \(X^{(k)}\) and \(X^{(k+1)}\) are sign-balanced.
Let \(\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in X^{(i)}\) with \(1<i<k\). Then there is at most one number in \(\{\pi_{i+1},\pi_{i+2},\ldots,\pi_{k+1}\}\) which is bigger than the smallest number in \(\{\pi_{1},\ldots,\pi_{i-1}\}\), otherwise \(\pi\) contains an occurrence of \(1423\) or \(1432\). Therefore, \(\{\pi_{i+1},\pi_{i+2},\ldots,\pi_{k+1}\}\) is either \(\{1,2,\ldots,k-i+1\}\) or \(\{1,2,\ldots,k-i,m\}\) for some \(m\in\{k-i+2,k-i+3,...,k\}\). In the case that \(\{\pi_{i+1},\pi_{i+2},\ldots,\pi_{k+1}\}=\{1,2,\ldots,k-i+1\}\), set
\[\Omega_{1}=\big{\{}\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in X^{(i)}:\{\pi_{i+1},\pi_{i+2},\ldots,\pi_{k+1}\}=\{1,2,\ldots,k-i+1\}\big{\}}.\]
We see that \(\Omega_{1}=\{(\sigma\oplus 1)\ominus\rho:\sigma\in S_{i-1}(1423,1432),\rho\in S_{k-i+1}(1423,1432)\}\). Proceeding as in the proof of Proposition 3.1, we deduce that \(\Omega_{1}\) is sign-balanced by Lemma 2.2, Lemma 2.6 and the inductive assumption. In the case that \(\{\pi_{i+1},\pi_{i+2},\ldots,\pi_{k+1}\}=\{1,2,\ldots,k-i,m\}\) for every \(m\in\{k-i+2,k-i+3,...,k\}\), we set
\[\Omega^{m}=\big{\{}\pi=\pi_{1}\pi_{2}\cdots\pi_{k+1}\in X^{(i)}:\{\pi_{i+1},\pi_{i+2},\ldots,\pi_{k+1}\}=\{1,2,\ldots,k-i,m\}\big{\}}.\]
Notice that exchanging the entries that are \(k-i+1\) and \(m\) is a bijection from \(\Omega^{m}\) to \(\Omega_{1}\), and further the parity of the image is opposite to that of the preimage under this bijection. Since \(\Omega_{1}\) is sign-balanced, it follows that \(\Omega^{m}\) is sign-balanced. Therefore, \(X^{(i)}\) is sign-balanced by the fact that \(X^{(i)}\) is the disjoint union \(\Omega_{1}\bigcup(\bigcup_{m=k-i+2}^{k}\Omega^{m})\). According to the above arguments, it follows that \(S_{k+1}(1423,1432)\) is sign-balanced. The proof of this proposition is completed. \(\Box\)
**Proof of Theorem 1.3** Applying Propositions 3.5-3.7, we have that Theorem 1.3 holds immediately. \(\Box\)
However, we observe that \(S_{n}(\sigma_{1},\sigma_{2})\) is not always sign-balanced for \(\sigma_{1},\sigma_{2}\in S_{4}\). Finally, we give an example which suggests that Question 1.1 becomes more and more complicated as the length of the avoided patterns increases.
**Example 3.8**: _Both \(S_{n}(1324,2143)\) and \(S_{n}(4231,3412)\) are not sign-balanced for some integer \(n>1\)._
**Proof** Note that \(\{4231,3412\}=\{\overline{1324},\overline{2143}\}\). So it suffices to verify \(S_{n}(1324,2143)\). In addition, it is clear that \(S_{n}(1324,2143)\) is sign-balanced for every integer \(1<n\leq 4\). Next we observe that \(S_{5}(1324,2143)\) is not sign-balanced.
Let \(X^{(i)}=\{\pi\in S_{5}(1324,2143):\pi(i)=5\}\). It is clear that \(5\pi_{1}\pi_{2}\pi_{3}\pi_{4}\in X^{(1)}\) if and only if \(\pi_{1}\pi_{2}\pi_{3}\pi_{4}\in S_{4}(1324,2143)\). Thus \(X^{(1)}\) is sign-balanced. Similarly, we deduce that \(X^{(2)}\) is sign-balanced. By direct computation, it follows that \(X^{(3)}=\{12534,13542,14523,23514,24531,34512,41532,42513,43521\}\cup\{12543,14532,23541,24513,34521,41523,42531,43512\}\). Note that the number of even permutations is one more than that of odd permutations in \(X^{(3)}\). Similarly, \(X^{(4)}=\{12453,23451,31452,34251,41253,42351,43152\}\cup\{12354,13452,32451,34152,41352,43251\}\) and the number of even permutations is also one more than that of odd permutations in \(X^{(4)}\); \(X^{(5)}=\{12345,23145,31245,32415,34125,42135,43215\}\cup\{21345,23415,32145,34215,41235,42315,43125\}\) and the number of even permutations is equal to that of odd permutations. Hence, \(S_{5}(1324,2143)\) is not sign-balanced. \(\Box\)
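The enumeration above is small enough to reproduce mechanically; the following Python sketch (with helper functions of our own) counts the even and odd elements of \(S_{5}(1324,2143)\) directly.

```python
# Brute-force confirmation (sketch) that S_5(1324, 2143) is not sign-balanced.
from itertools import permutations, combinations


def contains(perm, pattern):
    k = len(pattern)
    return any(all((perm[idx[a]] < perm[idx[b]]) == (pattern[a] < pattern[b])
                   for a in range(k) for b in range(k))
               for idx in combinations(range(len(perm)), k))


def is_even(perm):
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return inv % 2 == 0


avoiders = [p for p in permutations(range(1, 6))
            if not contains(p, (1, 3, 2, 4)) and not contains(p, (2, 1, 4, 3))]
even = sum(1 for p in avoiders if is_even(p))
odd = len(avoiders) - even
print(len(avoiders), even, odd)   # the two parity counts differ
```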
The work was supported by the National Natural Science Foundation of China (No. 12061030) and Hainan Provincial Natural Science Foundation of China (No. 122RC652).
|
2309.09692 | A Method for Finding Exact Solutions to the 2D and 3D Euler-Boussinesq
Equations in Lagrangian Coordinates | We study the Boussinesq approximation for the incompressible Euler equations
using Lagrangian description. The conditions for the Lagrangian fluid map are
derived in this setting, and a general method is presented to find exact fluid
flows in both the two-dimensional and the three-dimensional case. There is a
vast amount of solutions obtainable with this method and we can only showcase a
handful of interesting examples here, including a Gerstner type solution to the
two-dimensional Euler-Boussinesq equations. In two earlier papers we used the
same method to find exact Lagrangian solutions to the homogeneous Euler
equations, and this paper serves as an example of how these same ideas can be
extended to provide solutions also to related, more involved models. | Tomi Saleva, Jukka Tuomela | 2023-09-18T11:57:56Z | http://arxiv.org/abs/2309.09692v1 | A Method for Finding Exact Solutions to the 2D and 3D Euler-Boussinesq Equations in Lagrangian Coordinates
###### Abstract
We study the Boussinesq approximation for the incompressible Euler equations using Lagrangian description. The conditions for the Lagrangian fluid map are derived in this setting, and a general method is presented to find exact fluid flows in both the two-dimensional and the three-dimensional case. There is a vast amount of solutions obtainable with this method and we can only showcase a handful of interesting examples here, including a Gerstner type solution to the two-dimensional Euler-Boussinesq equations. In two earlier papers we used the same method to find exact Lagrangian solutions to the homogeneous Euler equations, and this paper serves as an example of how these same ideas can be extended to provide solutions also to related, more involved models.
Euler equations, Boussinesq equations, Explicit solutions, Lagrangian formulation, Stratified fluids
The first author was supported by the Finnish Cultural Foundation.
## 1 Introduction
We continue our investigations of finding explicit solutions to the incompressible Euler equations in the Lagrangian framework. In [12, 13] we considered the homogeneous case while here we treat the Boussinesq approximation of the heterogeneous case. The idea behind the Boussinesq approximation is that the density of the fluid does not fluctuate very much around the mean value, so that one supposes that the density is constant, except in the term that represents the gravity. There are many physical situations where this simplification is justified. The general introduction to this type of models can be found in [11], while in regard to the Lagrangian description in general we refer to [3].
One of the first explicit solutions to the homogeneous Euler equations in Lagrangian formulation is due to Gerstner [9], and it turns out that solutions of the Gerstner type show up in many related models. For example the Gerstner solution plays a prominent role also in the heterogeneous Euler equations, even without using the Boussinesq approximation. In [8] it was shown that the Gerstner wave can also be a barotropic flow, i.e. a flow where pressure and density are functions of each other, see also a modern exposition by Stuhlmeier [14]. Gerstner type solutions are also relevant in many physical models that are somehow modifications of the Euler equations. For example Constantin [4] found a barotropic Gerstner type exact solution to the equatorial water wave equations with the beta-plane approximation of the Coriolis effect. For more papers
on the applicability of the Gerstner waves see for example [10, 7, 6, 5] and especially the survey articles [2, 15]. In [16] there is also a Gerstner type solution to the first and second terms of the asymptotic expansion to the heterogeneous Euler equations. One remarkable property of Gerstner type solutions that makes them popular, is that they can in many cases be interpreted as free surface solutions, or in other words one can model the interface of two different fluids. Apparently there are no other known explicit solutions with this property.
Many explicit solutions to stratified fluid flows are shear flows, which are flows with two-dimensional horizontal motion that only depends on height. Most of the studies use the Eulerian framework. As for the Lagrangian description, Yakubovich and Shrira [17] found solutions with columnar motion using the Boussinesq approximation, but it seems that elsewhere the Eulerian framework is used.
As stated above there are numerous models for geophysical fluid flows, depending on the particular context. We have chosen to analyze the Euler-Boussinesq model in the present article. For a thorough discussion of this model and its applicability we refer to [11]. As in [12, 13], we use a separation of variables method to compute our solutions, that is, we search for fluid particle maps that can be expressed as the product of a time-only dependent matrix and a spatial-only dependent vector. This approach leads for example to a two-dimensional Gerstner type flow (4.2), as well as a plethora of other solutions both in the 2D and the 3D setting. Solution (4.2) does not satisfy the free boundary condition like most Gerstner type flows, but it still gives valid internal flows, which are an important application of the Euler-Boussinesq equations. Note in particular that our approach is not restricted to the Euler equations and Euler-Boussinesq equations, and we are confident that it could be used profitably to analyze other related models. For example Abrashkin [1] derived the governing Lagrangian equations for equatorial beta-plane flows, allowing anyone to readily attempt the separation of variables strategy that we use below. Also the Gerstner type solutions in all those different models discussed above could have been found by our method.
This paper is organized as follows. In Section 2 we give basic definitions and specify the model which will be analyzed. In Section 3 we discuss columnar flows and generalize one family of solutions which was given in [17]. In Section 4 we find exact solutions to the two-dimensional Euler-Boussinesq equations while utilizing the similarities to the homogeneous situation [12]. In Section 5 we consider the three-dimensional case and use the framework outlined in [13] to find solutions to the Euler-Boussinesq equations.
In some cases we are able to find explicit periodic and nonperiodic solutions. In other cases one can show that a solution exists for all times for convenient parameter values. Finally there are cases where all one can say is that a local solution is well-defined. We allow for both stably and unstably stratified fluids as both types of situations can often be covered by a single formula, though when we give specific examples we concentrate on the stably stratified case. Some solutions that we find are neither stably nor unstably stratified. We also note that it has not been possible to explore all different cases in the present article.
## 2 Euler-Boussinesq equations
### Notation
Let \(A\,:\,\mathbb{R}\to\mathbb{R}^{n\times m}\) and \(v\,:\,\mathbb{R}^{n}\to\mathbb{R}^{m}\). We denote the columns of \(A\) by \(A_{i}\) and its entries by \(a_{ij}\). Now the minors of \(A\) are denoted by
\[\begin{split} p_{ij}=&\det(A_{i},A_{j})\,\,\text{if}\,\,n=2\,\\ p_{ijk}=&\det(A_{i},A_{j},A_{k})\,\,\text{if}\,\,n=3. \end{split} \tag{2.1}\]
Similarly the minors of \(dv\) are
\[\begin{split} g_{ij}=&\det(\nabla v^{i},\nabla v^{ j})\,\,\text{if}\,\,n=2\,\\ g_{ijk}=&\det(\nabla v^{i},\nabla v^{j},\nabla v^{ k})\,\,\text{if}\,\,n=3\.\end{split} \tag{2.2}\]
It will also be useful to define
\[\begin{split} Q_{ij}=&\langle A^{\prime}_{i},A_{j} \rangle-\langle A^{\prime}_{j},A_{i}\rangle\,\\ G_{ij}=&\nabla v^{i}\times\nabla v^{j}\,\end{split} \tag{2.3}\]
where \(A^{\prime}_{i}\) is the (time) derivative of \(A_{i}\) and \(G_{ij}\) is useful only when \(n=3\). For derivatives of \(v\) we use multiindices so that for example
\[v^{1}_{201}=\frac{\partial^{3}v^{1}}{\partial z^{2}_{1}\partial z_{3}}\.\]
At times we will also say that functions \(v^{1}\) and \(v^{2}\) are an anti-Cauchy-Riemann pair, or an anti-CR pair, if
\[\begin{cases}v^{1}_{10}+v^{2}_{01}=0\\ v^{1}_{01}-v^{2}_{10}=0\end{cases}\.\]
### Model
The \(n\)-dimensional heterogeneous incompressible Euler equations are given by the system
\[\begin{split}\nabla\cdot u&=0\\ \tilde{\rho}(u_{t}+(u\cdot\nabla)u+ge_{n})+\nabla p&=0\\ \tilde{\rho}_{t}+\langle u,\nabla\tilde{\rho}\rangle&=0\.\end{split} \tag{2.4}\]
Here \(u\) is the velocity field, \(\tilde{\rho}\) is the density, \(g\) is the acceleration due to gravity, and \(e_{n}\) is the vertical unit vector parallel to gravity. Let us write these equations in the Lagrangian framework. Let \(D\subset\mathbb{R}^{n}\) be a domain and let us consider a family of diffeomorphisms \(\varphi^{t}\,:\,D\to\Omega_{t}=\varphi^{t}(D)\). The coordinates in \(D\) are denoted by \(z\) and in \(\Omega_{t}\) by \(x\). We can also define
\[\varphi\,:\,D\times\mathbb{R}\to\mathbb{R}^{n}\quad,\quad\varphi(z,t)=\varphi ^{t}(z)\,.\]
Now given such \(\varphi\) we can define the associated vector field \(u\) by the formula
\[\frac{\partial}{\partial t}\varphi(z,t)=u(\varphi(z,t),t)\,. \tag{2.5}\]
Then \((u,\tilde{\rho},p)\) solves (2.4) if \(\det(d\varphi)\neq 0\) and
\[\frac{d}{dt}\det(d\varphi) =0 \tag{2.6a}\] \[\hat{\rho}\big{(}d\varphi^{T}(\varphi^{\prime\prime}+ge_{n})\big{)} +\nabla\hat{p} =0\] (2.6b) \[\frac{d}{dt}\hat{\rho} =0. \tag{2.6c}\]
where \(\hat{p}=p\circ\varphi\) and \(\hat{\rho}=\tilde{\rho}\circ\varphi\). Hence \(\hat{\rho}\) is a function of spatial variables only, which is one great advantage of using the Lagrangian description.1 Typically one cannot explicitly recover \(u\) from \(\varphi\), since it requires computing the inverse of \(\varphi^{t}\). One exception to this is obtained when \(\varphi=A(t)z\) for some square matrix \(A\), yielding \(u=A^{\prime}A^{-1}x\).
Footnote 1: Strictly speaking in the Lagrangian description we should have \(\det(d\varphi)=1\). However, given \(\varphi\) as above we can define \(\Phi^{t}=\varphi^{t}\circ(\varphi^{0})^{-1}\).
The standard way to apply the Boussinesq approximation is to assume that \(\hat{\rho}\) is constant in every term except when it is multiplied by \(g\). Thus the Boussinesq approximation takes into account how density variations affect buoyancy. This would mean that Newton's second law (2.6b) would be replaced by the equation
\[d\varphi^{T}(\overline{\rho}\varphi^{\prime\prime}+\hat{\rho}ge_{n})+\nabla \hat{p}=0\, \tag{2.7}\]
where \(\overline{\rho}\) is the average density. However, as was shown in [17], we can make the model slightly more accurate without making the equations any more difficult to study. Supposing that \(n=3\) and taking the curl of (2.6b), we obtain
\[\nabla\hat{\rho}\times(d\varphi^{T}\varphi^{\prime\prime})+\hat{\rho}\Big{(} \sum_{i=1}^{3}\nabla\varphi_{i}^{\prime\prime}\times\nabla\varphi_{i}\Big{)}+g \nabla\hat{\rho}\times\nabla\varphi_{3}=0\,. \tag{2.8}\]
Now assuming only \(\nabla\hat{\rho}\times(d\varphi^{T}\varphi^{\prime\prime})=0\) in (2.8), dividing by \(\hat{\rho}\) and defining \(\rho=g\ln\hat{\rho}\) we obtain
\[\sum_{i=1}^{3}\nabla\varphi_{i}^{\prime\prime}\times\nabla\varphi_{i}+\nabla \rho\times\nabla\varphi_{3}=0. \tag{2.9}\]
If we had used (2.7), we would have obtained the same equation but with \(\rho\) defined as \(\rho=g\hat{\rho}/\overline{\rho}\) instead of \(\rho=g\ln\hat{\rho}\). Hence it should be kept in mind that \(\rho\) is not really density, but the density \(\hat{\rho}\) can be recovered from \(\rho\) using either \(\rho=g\hat{\rho}/\overline{\rho}\) or \(\rho=g\ln\hat{\rho}\).
Integrating the left hand side of (2.9) with respect to \(t\) we obtain the equivalents of what are the Cauchy invariants for the Euler equations:
\[h=(h^{1},h^{2},h^{3})=\sum_{i=1}^{3}\nabla\varphi_{i}^{\prime}\times\nabla\varphi _{i}+\nabla\rho\times\nabla\int\varphi_{3}\ dt\,. \tag{2.10}\]
This integrated form is often useful since it removes the second-order dependence of \(\varphi_{1}\) and \(\varphi_{2}\). But calling the components of \(h\) the "Cauchy invariants of the Euler-Boussinesq equations" is perhaps a bit of a stretch since the time integral of \(\varphi_{3}\) produces an arbitrary function of \(z\) and so there is no canonical way to choose \(h\).
In the two-dimensional case \(h\) is just a scalar and it is convenient to write it in the following form:
\[h=\sum_{i=1}^{2}\det\left(\nabla\varphi_{i}^{\prime},\nabla\varphi_{i}\right) +\det\left(\nabla\rho,\nabla\int\varphi_{2}\ dt\right). \tag{2.11}\]
We have now established the conditions for a solution to the 2D and 3D Euler-Boussinesq equations:
**Theorem 2.1**.: _The pair \((\varphi,\rho)\) provides a solution to the Euler-Boussinesq equations (2.4) via (2.5) if and only if \(\det(d\varphi)\neq 0\) everywhere, and \(\det(d\varphi)\), \(\rho\) and \(h\) are independent of time, where \(h\) is given by (2.11) in the 2D case and by (2.10) in the 3D case._
### Separation of variables
As in our previous papers [12, 13], we try to find solutions of the form \(\varphi(z,t)=A(t)v(z)\), where \(A\,:\,\mathbb{R}\rightarrow\mathbb{R}^{n\times m}\) and \(v\,:\,D\subset\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\). To this end we need convenient formulas for \(\det(d\varphi)\) and \(h\). Using (2.1), (2.2) and the Cauchy-Binet formula we obtain
\[\begin{split}\det(d\varphi)&=\sum_{1\leq i<j\leq m }p_{ij}g_{ij}\,\ \text{if}\ n=2\,\\ \det(d\varphi)&=\sum_{1\leq i<j<k\leq m}p_{ijk}g_{ ijk}\,\ \text{if}\ n=3\.\end{split} \tag{2.12}\]
Then using (2.3) we compute that
\[\begin{split} h&=\sum_{1\leq i<j\leq m}Q_{ij}g_{ij} +\sum_{i=1}^{m}y_{i}\det(\nabla\rho,\nabla v^{i})\,\ \text{if}\ n=2\,\\ h&=\sum_{1\leq i<j\leq m}Q_{ij}G_{ij}+\sum_{i=1}^{m }y_{i}\nabla\rho\times\nabla v^{i}\ \,\ \text{if}\ n=3\,\end{split} \tag{2.13}\]
where \(y_{i}\) is a function such that \(y_{i}^{\prime}=a_{ni}\).
Some properties of \(\varphi\) and \(\rho\) are gathered in the following Lemma.
**Lemma 2.2**.: _Let \((\varphi,\rho)\) be a solution to the Euler-Boussinesq equations._
1. _Let_ \(\psi\,:\,\hat{D}\to D\) _be an arbitrary diffeomorphism and let_ \(\tilde{\varphi}^{t}=\varphi^{t}\circ\psi\)_,_ \(\rho_{0}=\rho\circ\psi\)_. Then_ \((\tilde{\varphi},\rho_{0})\) _is a solution to the Euler-Boussinesq equations if and only if_ \((\varphi,\rho)\) _is._
2. _Let_ \(\varphi=Av\)_. If_ \(H\) _is a regular_ \(m\times m\) _matrix with constant entries and_ \(\tilde{v}=Hv\)_,_ \(\tilde{A}=AH^{-1}\)_, then_ \((\tilde{A}\tilde{v},\rho)\) _is a solution._
3. _In the 3D case, if_ \(R\) _is a constant rotation matrix such that_ \((R\varphi)_{3}=\varphi_{3}\)_, then_ \((R\varphi,\rho)\) _is also a solution. In the 2D case there is no nontrivial rotation_ \(R\) _for which_ \(R\varphi\) _is always a solution._
The change of coordinates in the first part of this Lemma typically allows us to assume without loss of generality that \((v^{1},v^{2})=(z_{1},z_{2})\) in the 2D case and \((v^{1},v^{2},v^{3})=(z_{1},z_{2},z_{3})\) in the 3D case. In the present paper we always use this simplification, but in practice the general form allows for more flexibility since the inverse map needed for this transformation cannot always be explicitly computed. The second part of the Lemma can be used to bring the problems to simpler form without loss of generality. From this part it also
follows that if \(A_{i}\) or \(\nabla v^{i}\) are linearly dependent over \(\mathbb{R}\), the solution reduces to a case of lower \(m\). On the other hand, \(\nabla\rho\) being an \(\mathbb{R}\)-linear combination of \(\nabla v^{i}\) does not imply that the solution reduces in this way.
The formulas (2.12) and (2.13) allow us to deduce what kind of constraints we should set for the spatial functions \(v^{i}\) and \(\rho\). We want the time derivatives of \(\det(d\varphi)\) and \(h\) to vanish for all \(t\), so, for example in the two-dimensional case, fixing any \(t\) produces constraints of the form
\[\sum_{1\leq i<j\leq m}\beta_{ij}g_{ij} =0\,\] \[\sum_{1\leq i<j\leq m}\gamma_{ij}g_{ij}+\sum_{i=1}^{m}\gamma_{i} \det(\nabla\rho,\nabla v^{i}) =0\,\]
where \(\beta_{ij}\), \(\gamma_{ij}\) and \(\gamma_{i}\) are constants. Thus the spatial functions need to satisfy a number of constraints like these. By substituting these constraint equations back to the formulas of \(\det(d\varphi)\) and \(h\) we also immediately obtain the conditions that the time functions \(a_{ij}\) are required to satisfy. In [12] and [13] we derived the spatial constraints from the formulas of \(\det(d\varphi)\) and \(h\) for the homogeneous Euler equations, and especially in [12] we went into greater detail in the 2D case to show what were all possible constraint sets that are essentially different for the cases that we studied. In the case of the Euler-Boussinesq equations this analysis, which is done using the second part of Lemma 2.2, yields very similar results, so in the present paper we mainly consider cases that are similar to those in [12] and [13]. Here the presence of \(\rho\) in the formula of \(h\) makes the analysis slightly different, and while we consider several different possibilities for \(\rho\) as well, we omit large amount of situations that where the interplay between \(\rho\) and \(v\) is more intricate.
The equations for \(h\) are second-order differential equations for \(a_{nj}\). This makes the problem very hard in general, but we can still find some exact solutions, though we often need to restrict to special cases where there are not many terms in \(\varphi_{n}\). Even so, the number of different solutions we can find is so vast that we have to restrict to the cases that seem interesting as well as sufficiently simple. Many times, when we have an underdetermined system and are able to choose some functions arbitrarily, we choose the ones that have these second-order derivatives. Thus the system becomes a first-order system in terms of the unknown functions and is more easily solvable. However, the resulting solution formulas are sometimes quite ugly, and it is possible to analyze the systems with other approaches as well.
## 3 Columnar flows
Before studying any specific cases, we would like to show a general method of obtaining solutions to the Euler-Boussinesq equations from a specific type of flows that satisfy the homogeneous Euler equations. We consider the 3D case first, the 2D case is similar.
In many cases that we considered in [13], we could find solutions to the 3D Euler equations that were of the form
\[\varphi=\left(\varphi_{1}(z_{1},z_{2},t),\varphi_{2}(z_{1},z_{2},t),a(t)z_{3} \right)\,, \tag{3.1}\]
where \(a\) is always nonzero. Flows like this feature columnar motion, where the vertical columns can stretch and contract but otherwise remain intact as the flow evolves. These are also solutions to the Euler-Boussinesq equations when density is an arbitrary function of \(z_{3}\), as \(\nabla\rho\times\nabla\varphi_{3}\) vanishes and condition (2.10) reduces to the usual Cauchy invariants of the Euler equations. But in this case we can look for more solutions to the Euler-Boussinesq equations by choosing \(\rho=c_{0}z_{3}\) and
\[\varphi=\left(\varphi_{1}(z_{1},z_{2},t),\varphi_{2}(z_{1},z_{2},t),f(z_{1},z_ {2},t)+a(t)z_{3}\right)\,.\]
In [17] a particular solution of this form was presented in the stably stratified special case \(a(t)=1\), \(c_{0}<0\). In this case the general solution of \(\varphi_{3}\) is
\[\varphi_{3}=z_{3}+f^{1}(z_{1},z_{2})\cos(Nt)+f^{2}(z_{1},z_{2})\sin(Nt)\, \tag{3.2}\]
where the constant \(N\), given by \(N^{2}=-c_{0}\), is the Brunt–Väisälä frequency, \(f^{i}\) are arbitrary, and \(\varphi_{1}\) and \(\varphi_{2}\) have to satisfy the usual 2D Euler equations.
Actually one can describe the solutions for arbitrary \(a\). First we note that \(f\) has no effect on \(\det(d\varphi)\) and \(h^{3}\), whereas for the time derivatives of \(h^{1}\) and \(h^{2}\) we have
\[(h^{1})^{\prime} =f^{\prime\prime}_{010}a-a^{\prime\prime}f_{010}-c_{0}f_{010}\] \[-(h^{2})^{\prime} =f^{\prime\prime}_{100}a-a^{\prime\prime}f_{100}-c_{0}f_{100}\.\]
Both \((h^{1})^{\prime}\) and \((h^{2})^{\prime}\) vanish if and only if
\[af^{\prime\prime}-(a^{\prime\prime}+c_{0})f=0. \tag{3.3}\]
This is a linear ODE where \(z\) appears only as a parameter so the general solution is of the form
\[f=a_{1}(t)f^{1}(z_{1},z_{2})+a_{2}(t)f^{2}(z_{1},z_{2})\.\]
Substituting this back to (3.3) we see that \(a_{i}\) are two linearly independent solutions of
\[y^{\prime\prime}+qy=0\,\text{where }q=-(a^{\prime\prime}+c_{0})/a\,. \tag{3.4}\]
Note that \(q\) and hence \(a_{i}\) are well-defined since we must anyway choose \(a\) such that \(a\neq 0\) for all \(t\).
So the solution set is parametrized by \(a\) via \(q\), and now by classical theorems if we choose \(a\) such that \(q>0\) then \(a_{j}\) are oscillating and bounded solutions. Note that there is no condition whatsoever for \(f^{1}\) and \(f^{2}\), other than that they do not depend on \(z_{3}\).
Of course we can also choose \(c_{0}=0\) to obtain new solutions to the usual Euler equations, although in that case \(a\), \(a_{1}\), and \(a_{2}\) are linearly dependent and we may assume \(a_{2}=0\).
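To illustrate the construction, the following Python sketch integrates (3.4) numerically for one admissible choice of \(a\); the specific values \(a(t)=2+\cos t\) and \(c_{0}=-4\) are ours and serve only to guarantee \(a\neq 0\) and \(q>0\).

```python
# Numerical sketch: build q = -(a'' + c0)/a for a chosen nonvanishing a(t)
# and integrate y'' + q y = 0 to obtain two independent solutions a1, a2.
import numpy as np
from scipy.integrate import solve_ivp

c0 = -4.0
a = lambda t: 2.0 + np.cos(t)
a_dd = lambda t: -np.cos(t)            # second derivative of a
q = lambda t: -(a_dd(t) + c0) / a(t)   # q > 0 since a'' + c0 < 0 and a > 0


def rhs(t, y):
    # y = (a1, a1', a2, a2'); both components solve y'' + q(t) y = 0
    return [y[1], -q(t) * y[0], y[3], -q(t) * y[2]]


# linearly independent initial data for a1 and a2
sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0, 1.0], max_step=0.01)
a1, a2 = sol.y[0], sol.y[2]
print(a1.max(), a2.max())   # inspect the amplitudes attained on [0, 50]
```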
In the 2D situation choosing \(\rho=c_{0}z_{2}\) the analogous \(\varphi\) is given by
\[\varphi=\big{(}\varphi_{1}(z_{1},t),f(z_{1},t)+a(t)z_{2}\big{)}. \tag{3.5}\]
The incompressibility condition (2.6a) already says that \(\det(d\varphi)=(\varphi_{1})_{10}a(t)\) is independent of time, so we may assume by a coordinate transformation that \(\varphi_{1}=z_{1}/a(t)\). Then the formula of \(h\) gives us the same condition as in the three-dimensional case so the solution can be written as
\[\varphi=\big{(}z_{1}/a,a_{1}f^{1}(z_{1})+a_{2}f^{2}(z_{1})+az_{2}\big{)}=Av=\begin{pmatrix}1/a&0&0&0\\ 0&a&a_{1}&a_{2}\end{pmatrix}\begin{pmatrix}z_{1}\\ z_{2}\\ f^{1}\\ f^{2}\end{pmatrix}\, \tag{3.6}\]
where \(a_{i}\) are the solutions of (3.4) and \(f^{i}\) are arbitrary. This is the only 2D solution of this form, so below we will only use the generalization method of this Section in 3-dimensional situations.
## 4 2-dimensional case
### \(m=2\)
Let us start by studying the two-dimensional case, which is the easier case. In case \(m=2\) we always have \(v=(z_{1},z_{2})\) by part 1 of Lemma 2.2, and by (2.12) and (2.13) \(A\) satisfies
\[\det(d\varphi) =a_{11}a_{22}-a_{12}a_{21}=1 \tag{4.1}\] \[h =a^{\prime}_{11}a_{12}-a^{\prime}_{12}a_{11}+a^{\prime}_{21}a_{22 }-a^{\prime}_{22}a_{21}-y_{1}\rho_{01}+y_{2}\rho_{10}\.\]
Here \(\det(d\varphi)=1\) can be assumed by scaling \(A\). By a linear transformation described in part 2 of Lemma 2.2 we may also assume that either \(a_{21}=0\), or \(a_{21}\) and \(a_{22}\) are linearly independent.
If \(a_{21}=0\), the incompressibility condition requires \(a_{22}\neq 0\), and in this case we must have \(\rho=c_{0}z_{1}+f(z_{2})\) where \(f\) is arbitrary. Then the equations are
\[a_{11}a_{22} =1\] \[a^{\prime}_{11}a_{12}-a^{\prime}_{12}a_{11}+c_{0}y_{2} =c\,\]
where \(c\) is a constant. If \(a_{22}\) is chosen arbitrarily, then \(a_{11}=1/a_{22}\) and \(a_{12}=\frac{1}{a_{22}}\int(c_{0}y_{2}-c)a_{22}^{2}\ dt\). We can also derive the Eulerian description for this solution by computing
\[\rho\circ\varphi^{-1} =c_{0}\Big{(}a_{22}x_{1}-(x_{2}/a_{22})\int(c_{0}y_{2}-c)a_{22}^{ 2}\ dt\Big{)}+f(x_{2}/a_{22})\,\] \[u =A^{\prime}A^{-1}x=\begin{pmatrix}-a^{\prime}_{22}/a_{22}&c_{0}y_ {2}-c\\ 0&a^{\prime}_{22}/a_{22}\end{pmatrix}x\.\]
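This solution is easy to verify symbolically; the following SymPy sketch checks the illustrative choice \(a_{22}(t)=e^{t}\) (our own, made only so that the integrals close in elementary form).

```python
# Symbolic spot-check (sketch) of the a_{21} = 0 solution with a_{22}(t) = exp(t).
import sympy as sp

t, c0, c = sp.symbols('t c_0 c')
a22 = sp.exp(t)
a11 = 1 / a22
y2 = sp.integrate(a22, t)                        # y2' = a22
a12 = sp.integrate((c0 * y2 - c) * a22**2, t) / a22

det = sp.simplify(a11 * a22)                     # incompressibility: should be 1
h = sp.simplify(sp.diff(a11, t) * a12 - sp.diff(a12, t) * a11 + c0 * y2)
print(det, h)                                    # prints 1 and c
```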
If \(a_{21}\) and \(a_{22}\) are both nonzero and linearly independent, then \(\rho\) must be linear. In this case we may assume by a linear transformation that \(\rho=c_{0}z_{2}\). If we write \(A\) as
\[A=\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{pmatrix}\begin{pmatrix}b&\ell b\\ 0&1/b\end{pmatrix}\,\]
then the incompressibility condition is satisfied and the equation for \(h\) gives
\[b^{2}\ell^{\prime}=2\theta^{\prime}-c-c_{0}y_{1}\,\]
where now \(y_{1}^{\prime}=a_{21}=b\sin(\theta)\). Thus we can choose \(b\) and \(\theta\) arbitrarily and solve for \(\ell\) to obtain all the solutions. In Eulerian coordinates the solution is
\[\rho\circ\varphi^{-1} =c_{0}b\big{(}\cos(\theta)x_{2}-\sin(\theta)x_{1}\big{)}\,\] \[u =\begin{pmatrix}\cos(2\theta)&-\sin(2\theta)\\ \sin(2\theta)&\cos(2\theta)\end{pmatrix}\begin{pmatrix}b^{\prime}/b&\theta^{ \prime}\\ \theta^{\prime}&-b^{\prime}/b\end{pmatrix}x+(c_{0}y_{1}+c)\begin{pmatrix}\sin (\theta)\cos(\theta)&-\cos^{2}(\theta)\\ \sin^{2}(\theta)&-\sin(\theta)\cos(\theta)\end{pmatrix}x\.\]
### \(m=4\)
We turn immediately to case \(m=4\), skipping the analysis of case \(m=3\) completely. There are solutions in case \(m=3\) for several choices of \(\rho\) but there is no room for us to consider them here.
We meticulously showed in [12] (Theorems 4.1 and 4.2) for the homogeneous Euler equations that when \(A\) is a \(2\times 4\) matrix, then without loss of generality the spatial component \(v\) may be assumed to satisfy one of these four cases:
1. \(g_{13}+g_{24}=g_{14}-g_{23}=0\),
2. \(g_{13}=g_{24}=0\),
3. \(g_{13}+g_{24}=g_{14}=0\), or
4. \(g_{14}=g_{24}=0\).
The same is true for the Euler-Boussinesq equations with an identical proof since the incompressibility condition (2.12), which is the same for both systems, is all that is required to prove this claim.
We will consider all four cases in this Section. The equations for the entries of \(A\) are not underdetermined in the first three cases, so the second-order ODEs make them difficult to study. We still find one special solution in each of these cases. The fourth case, however, is underdetermined and we are able to present a general solution for one choice of \(\rho\).
#### 4.2.1 Case 1
Let \(v\) satisfy \(g_{13}+g_{24}=g_{14}-g_{23}=0\), in which case \(v=(z_{1},z_{2},f^{1},f^{2})\), where \(f^{1}\) and \(f^{2}\) are an anti-CR pair. It is shown in [12] how we can use linear transformations to bring \(A\) to a simpler form without loss of generality. In this case, [12, Lemma 5.3] implies that we may assume that
\[A=\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{pmatrix}\begin{pmatrix}b_{11}&b_{12}&\cos(\mu)b _{11}+\sin(\mu)b_{12}&\cos(\mu)b_{12}-\sin(\mu)b_{11}\\ 0&1/b_{11}&\sin(\mu)/b_{11}&\cos(\mu)/b_{11}\end{pmatrix}\.\]
In this form \(A\) satisfies the incompressibility condition (2.12) with
\[\det(d\varphi)=1-|\nabla f^{1}|^{2}\,\]
which has to be nonzero in \(D\).
Here \(\rho\) could be chosen to be for example an arbitrary linear combination of \(v^{i}\), but the equations would become too difficult for us to find any exact solutions. Instead we choose the simpler \(\rho=c_{0}z_{2}\), in which case (2.13) yields
\[h= (Q_{12}-c_{0}y_{1})g_{12}+Q_{34}g_{34}+(Q_{13}-Q_{24}-c_{0}y_{4})g _{13}+(Q_{14}+Q_{23}+c_{0}y_{3})g_{14}\] \[= c_{12}g_{12}+c_{34}g_{34}+c_{13}g_{13}+c_{14}g_{14}\.\]
We have chosen the subscripts of the constants \(c_{ij}\) according to the corresponding \(g_{ij}\). In later cases we often use this notation without further notice.
The conditions we collect from the formula of \(h\) are explicitly written as
\[Q_{12}-c_{0}y_{1}=2\theta^{\prime}+b^{\prime}_{11}b_{12}-b^{\prime }_{12}b_{11}-c_{0}y_{1} =c_{12}\] \[Q_{34}=2\theta^{\prime}+b^{\prime}_{11}b_{12}-b^{\prime}_{12}b_{ 11}+\mu^{\prime}(b^{2}_{11}+b^{2}_{12}+1/b^{2}_{11}) =c_{34}\] \[Q_{13}-Q_{24}-c_{0}y_{4}=\mu^{\prime}\sin(\mu)(b^{2}_{11}-b^{2}_{ 12}-1/b^{2}_{11})-2\mu^{\prime}\cos(\mu)b_{11}b_{12}-c_{0}y_{4} =c_{13}\] \[Q_{14}+Q_{23}+c_{0}y_{3}=\mu^{\prime}\cos(\mu)(b^{2}_{11}-b^{2}_{ 12}-1/b^{2}_{11})+2\mu^{\prime}\sin(\mu)b_{11}b_{12}+c_{0}y_{3} =c_{14}\.\]
We can bring this system to the form
\[b^{\prime}_{11}= -\big{(}c_{0}b_{11}b_{12}\cos(\theta)+2b_{12}s^{2}+c_{0}\sin( \theta)\big{)}/\big{(}4s\big{)}\] \[b^{\prime}_{12}= \big{(}c_{0}b^{2}_{11}b_{12}\sin(\theta)-c_{0}b^{3}_{11}b^{2}_{12 }\cos(\theta)-2c_{0}b_{11}\cos(\theta)+2(b^{4}_{11}-1)s^{2}\big{)}/\big{(}4b^ {3}_{11}s\big{)}\] \[s^{\prime}= c_{0}\big{(}b_{11}b_{12}\cos(\theta)-\sin(\theta)\big{)}/(2b_{11})\] \[\theta^{\prime}_{0}= \Big{(}2c^{2}_{0}b^{2}_{11}b_{12}\sin^{2}(\theta)+c^{2}_{0}b^{2} _{11}b_{12}-8b_{12}s^{4}+\big{(}4c_{0}b^{3}_{11}b_{12}s\theta_{0}-10c_{0}b_{11 }b_{12}s^{2}\big{)}\cos(\theta)\] \[+\big{(}4c_{0}b^{2}_{11}s\theta_{0}+2(3c_{0}b^{4}_{11}-c_{0})s^{2 }-(3c^{2}_{0}b^{3}_{11}b^{2}_{12}+5c^{2}_{0}b_{11})\cos(\theta)\big{)}\sin( \theta)\Big{)}/\big{(}16b^{3}_{11}s^{2}\big{)}\] \[\theta^{\prime}= \theta_{0}\,\]
where \(s=\mu^{\prime}\). The solution blows up if \(b_{11}\) or \(s\) reaches zero, and it is difficult to analyze with which initial data this happens. However, the equilibrium point
\[\big{(}b_{11},b_{12},s,\theta_{0},\theta\big{)}=\big{(}c_{1},0,\mu_{0},0,0 \big{)}\quad,\text{where}\quad\mu_{0}^{2}=\frac{c_{0}c_{1}}{c_{1}^{4}-1}\,\]
leads to a simple solution. We may assume that \(c_{1}>0\). Thus, if \(c_{0}>0\) then \(c_{1}>1\), and if \(c_{0}<0\) then \(c_{1}<1\). This gives
\[A=\begin{pmatrix}c_{1}&0\\ 0&1/c_{1}\end{pmatrix}\begin{pmatrix}1&0&\cos(\mu_{0}t)&-\sin(\mu_{0}t)\\ 0&1&\sin(\mu_{0}t)&\cos(\mu_{0}t)\end{pmatrix}\, \tag{4.2}\]
This looks like a usual Gerstner type solution from the case of Euler equations [12, Theorem 5.1], except that \(\varphi_{1}\) and \(\varphi_{2}\) have been scaled. In case \(c_{0}<0\) and \(c_{1}<1\), which describes the stably stratified situation, the particle trajectories are ellipses stretched in the vertical direction. The curves of constant density are formed by the particles whose centers of trajectory are on the same horizontal line, see Figure 4.1.
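The trajectories of Figure 4.1 are straightforward to reproduce; the sketch below uses the same \(f^{1},f^{2}\) as the figure, with the illustrative parameter values \(c_{0}=-1\) and \(c_{1}=0.8\) chosen by us.

```python
# Particle trajectories (sketch) for the Gerstner type solution (4.2) in the
# stably stratified case c0 < 0, 0 < c1 < 1, with f1 = e^{z2} cos(z1),
# f2 = e^{z2} sin(z1) as in Figure 4.1.
import numpy as np
import matplotlib.pyplot as plt

c0, c1 = -1.0, 0.8
mu0 = np.sqrt(c0 * c1 / (c1**4 - 1.0))
t = np.linspace(0.0, 2.0 * np.pi / mu0, 300)


def trajectory(z1, z2):
    f1 = np.exp(z2) * np.cos(z1)
    f2 = np.exp(z2) * np.sin(z1)
    x1 = c1 * (z1 + f1 * np.cos(mu0 * t) - f2 * np.sin(mu0 * t))
    x2 = (1.0 / c1) * (z2 + f1 * np.sin(mu0 * t) + f2 * np.cos(mu0 * t))
    return x1, x2


for z1 in np.linspace(0.0, 2.0 * np.pi, 8):
    plt.plot(*trajectory(z1, -1.0))   # ellipses stretched in the vertical direction
plt.gca().set_aspect('equal')
plt.show()
```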
Unfortunately this equilibrium point is not hyperbolic so the linearization is inconclusive in regard to the stability of the equilibrium. However, the Jacobian has one zero eigenvalue and four purely imaginary eigenvalues if \(c_{0}<0\) and \(\alpha<c_{1}<1\), where \(\alpha\approx 0.71\) is the smaller positive root of \(q=y^{8}-12y^{4}+3\). So one might suspect that the flow is "stable" for \(\alpha<c_{1}<1\) and "unstable" for \(c_{1}<\alpha\). Indeed the numerical computations seem to suggest this, see Figures 4.2 and 4.3. We have varied \(s\) in these examples; varying other initial values gives similar results.
It is of interest to check whether the Gerstner type solution (4.2) satisfies the free boundary condition for some choice of \(f^{1}\) and \(f^{2}\). Assuming that \(\beta(s)\), \(s\in I\subset\mathbb{R}\), is a regular curve in the \(z\) plane such that \(\partial D=\beta(I)\), pressure should be constant along \(\beta\); in other words, we need to satisfy
\[\langle\nabla p(\beta(s),t),\beta^{\prime}\rangle=0 \tag{4.3}\]
for all \(t\) and \(s\). Unfortunately it turns out that this condition cannot be met as we will show. The type of Boussinesq approximation we use does not matter so let us calculate the partial derivatives of \(p\) from the
Figure 4.1: Example of solution (4.2): some ellipse-shaped trajectories and a curve of constant density at a fixed time. In this example \(f^{1}=e^{z_{2}}\cos(z_{1})\) and \(f^{2}=e^{z_{2}}\sin(z_{1})\).
standard Boussinesq approximation (2.7). Putting \(\gamma=c_{1}^{2}-1/c_{1}^{2}\) we obtain
\[\begin{split} p_{10}/(\overline{\rho}\mu_{0}^{2})=&\ \gamma(f^{1}f_{10}^{1}-f^{2}f_{01}^{1})\cos^{2}(\mu_{0}t)-\gamma(f^{1}f_{01}^{1}+f^{2}f_{10}^{1})\cos(\mu_{0}t)\sin(\mu_{0}t)\\ &+(c_{1}^{2}f^{1}-\gamma z_{2}f_{01}^{1})\cos(\mu_{0}t)-(\gamma z_{2}f_{10}^{1}+c_{1}^{2}f^{2})\sin(\mu_{0}t)+c_{1}^{2}f^{2}f_{01}^{1}+f^{1}f_{10}^{1}/c_{1}^{2}\,\\ p_{01}/(\overline{\rho}\mu_{0}^{2})=&\ \gamma(f^{1}f_{01}^{1}+f^{2}f_{10}^{1})\cos^{2}(\mu_{0}t)+\gamma(f^{1}f_{10}^{1}-f^{2}f_{01}^{1})\cos(\mu_{0}t)\sin(\mu_{0}t)\\ &+(\gamma z_{2}f_{10}^{1}+f^{2}/c_{1}^{2})\cos(\mu_{0}t)+(f^{1}/c_{1}^{2}-\gamma z_{2}f_{01}^{1})\sin(\mu_{0}t)+f^{1}f_{01}^{1}/c_{1}^{2}-c_{1}^{2}f^{2}f_{10}^{1}-\gamma z_{2}\.\end{split}\]
Thus \(\nabla p/(\overline{\rho}\gamma\mu_{0}^{2})\) contains the expression
\[G(z)a(t)=\begin{pmatrix}g_{1}&-g_{2}\\ g_{2}&g_{1}\end{pmatrix}\begin{pmatrix}\cos^{2}(\mu_{0}t)\\ \cos(\mu_{0}t)\sin(\mu_{0}t)\end{pmatrix}\,\]
where \(g_{1}=f^{1}f_{10}^{1}-f^{2}f_{01}^{1}\) and \(g_{2}=f^{1}f_{01}^{1}+f^{2}f_{10}^{1}\). Now the free surface condition (4.3) implies \(G^{T}\beta^{\prime}=0\) for all \(s\in I\). Thus
\[\det(G)=g_{1}^{2}+g_{2}^{2}=|\nabla f^{1}|^{2}\big{(}(f^{1})^{2}+(f^{2})^{2} \big{)}=0\.\]
This implies that \(f^{1}\) and \(f^{2}\) are constant in \(\beta(I)\) and thus constant everywhere since they are an anti-CR pair. This leads to a trivial solution as we further obtain \(f^{1}=f^{2}=0\).
#### 4.2.2 Case 2
Suppose that we have the constraints \(g_{13}=g_{24}=0\), or that \(v=\big{(}z_{1},z_{2},f^{1}(z_{1}),f^{2}(z_{2})\big{)}\). Reducing as in [12, Lemma 5.4], we may suppose at the outset that
\[A=\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{pmatrix}\begin{pmatrix}b_{11}&b_{12}&\ell b_{12} &b_{11}/\ell\\ 0&1/b_{11}&\ell/b_{11}&0\end{pmatrix}\.\]
This satisfies the incompressibility condition (2.12) with
\[\det(d\varphi)=1-(f^{1})^{\prime}(f^{2})^{\prime}\neq 0\.\]
Again for density we choose the simplest situation \(\rho=c_{0}z_{2}\) and moreover we choose \(\theta=0\). Inspecting condition (2.13), we first find that \(b_{12}\) and \(b_{11}/\ell\) are linearly dependent, and by a linear transformation we may assume that \(b_{12}=0\). Then we obtain for \(\ell\) and \(b_{11}\)
\[(\ell^{\prime}/\ell^{2})b_{11}^{2} =c_{14}\] \[-\ell^{\prime}/b_{11}^{2}+c_{0}\int\ell/b_{11}\ dt =c_{23}\.\]
Further let \(b_{11}=y^{\prime}\). This can be transformed to the system
\[\ell^{\prime}= (c_{0}/2)y\ell\] \[y^{\prime}= \frac{k_{0}\sqrt{y}+c_{0}y^{3}}{10y}\.\]
If \(k_{0}=0\) there is an explicit solution
\[\ell=\frac{k_{3}}{(c_{0}t+k_{2})^{5}}\quad,\quad y=k_{1}-\frac{10}{k_{2}+c_{0 }t}\.\]
Here \(k_{1}\) does not appear in \(A\), \(k_{2}=0\) can be assumed by translation of \(t\) and by scaling we may take \(k_{3}=c_{0}^{5}\). Thus this solution can be written as
\[A=\begin{pmatrix}10/c_{0}&0\\ 0&c_{0}/10\end{pmatrix}\begin{pmatrix}t^{-2}&0&0&t^{3}\\ 0&t^{2}&t^{-3}&0\end{pmatrix}\.\]
This solution is unstably stratified regardless of how \(c_{0}\) is chosen.
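This solution can be verified directly; the SymPy sketch below checks, for arbitrary \(f^{1},f^{2}\), that \(\det(d\varphi)\) is independent of time and that the time derivative of \(h\) in (2.11) vanishes identically.

```python
# Symbolic verification (sketch) of the explicit solution above with
# v = (z1, z2, f1(z1), f2(z2)) and rho = c0*z2.
import sympy as sp

t, z1, z2, c0 = sp.symbols('t z_1 z_2 c_0')
f1, f2 = sp.Function('f1')(z1), sp.Function('f2')(z2)

phi1 = (10 / c0) * (z1 * t**-2 + f2 * t**3)
phi2 = (c0 / 10) * (z2 * t**2 + f1 * t**-3)
rho = c0 * z2

grad = lambda w: sp.Matrix([sp.diff(w, z1), sp.diff(w, z2)])
det2 = lambda u, v: u[0] * v[1] - u[1] * v[0]

jac = sp.Matrix([[sp.diff(p, z) for z in (z1, z2)] for p in (phi1, phi2)])
print(sp.simplify(sp.diff(jac.det(), t)))        # 0: det(d phi) is time independent

# time derivative of h in (2.11): sum_i det(grad phi_i'', grad phi_i)
# plus det(grad rho, grad phi_2) must vanish
cond = sum(det2(grad(sp.diff(p, t, 2)), grad(p)) for p in (phi1, phi2)) \
       + det2(grad(rho), grad(phi2))
print(sp.simplify(cond))                          # 0
```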
#### 4.2.3 Case 3
Now we consider the case \(g_{13}+g_{24}=g_{14}=0\), where \(v=\big{(}z_{1},z_{2},z_{2}(f^{1})^{\prime}(z_{1})+f^{2}(z_{1}),f^{1}(z_{1}) \big{)}\), and we see from [12, Lemma 5.5] that
\[A=\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{pmatrix}\begin{pmatrix}\ell b_{12}&b_{12}&b_{13} &\ell b_{13}\\ -\ell/b_{13}&-1/b_{13}&0&0\end{pmatrix}\]
is the general form that satisfies condition (2.12) with
\[\det(d\varphi)=-z_{2}(f^{1})^{\prime\prime}-(f^{2})^{\prime}\neq 0\.\]
Let us suppose that \(\theta=0\) and that \(\rho=c_{1}z_{1}+c_{2}z_{2}\). Then we get the following system from (2.13):
\[\ell^{\prime}= \frac{k_{0}}{b_{13}^{2}}\] \[b_{13}^{\prime}= \frac{(c_{2}\ell-c_{1})\,b_{13}^{4}}{4k_{0}}\] \[b_{12}^{\prime}= \frac{(c_{2}\ell-c_{1})\,b_{12}b_{13}^{3}}{4k_{0}}\.\]
Eliminating \(\ell\) from the second equation we obtain
\[4b_{13}b_{13}^{\prime\prime}-16(b_{13}^{\prime})^{2}-c_{2}b_{13}^{3}=0\.\]
If \(c_{2}=0\) the above equation can be solved explicitly. Putting moreover \(b_{12}=0\) gives the stably stratified solution
\[A=\begin{pmatrix}0&0&t^{-1/3}&(9/20)c_{1}t^{4/3}\\ -(9/20)c_{1}t^{2}&-t^{1/3}&0&0\end{pmatrix}\.\]
#### 4.2.4 Case 4
Studying the homogeneous situation in [12], we dismissed the fourth case as it only provided solutions that reduced to case \(m=3\). Here this does not happen; in fact solution (3.6) is already an example of this case, albeit with spatial variables named differently.
In this case we can rewrite the spatial constraints as \(g_{23}=g_{24}=0\) by changing the order of the components of \(v\), so that \(v=\big{(}z_{1},z_{2},f^{1}(z_{2}),f^{2}(z_{2})\big{)}\), and due to the incompressibility condition (2.12) \(A\) can be assumed to be of the form
\[A=\begin{pmatrix}\cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{pmatrix}\begin{pmatrix}b_{1}&b_{2}&b_{3}&b_{4} \\ 0&1/b_{1}&0&0\end{pmatrix}\,\]
which gives simply \(\det(d\varphi)=1\). If we choose \(\rho=c_{0}f^{2}\), (2.13) yields the conditions
\[b^{\prime}_{1}b_{2}-b^{\prime}_{2}b_{1}+2\theta^{\prime} =c_{12}\] \[b^{\prime}_{1}b_{3}-b^{\prime}_{3}b_{1} =c_{13}\] \[b^{\prime}_{1}b_{4}-b^{\prime}_{4}b_{1}-c_{0}y_{1} =c_{14}\,\]
where \(y^{\prime}_{1}=a_{21}=b_{1}\sin(\theta)\). We can choose for example \(b_{1}\) and \(\theta\) arbitrarily and easily solve these equations for \(b_{2}\), \(b_{3}\) and \(b_{4}\), yielding
\[b_{2} =b_{1}\int\frac{2\theta^{\prime}-c_{12}}{b_{1}^{2}}\ dt\] \[b_{3} =-b_{1}\int\frac{c_{13}}{b_{1}^{2}}\ dt\] \[b_{4} =-b_{1}\int\frac{c_{0}y_{1}+c_{14}}{b_{1}^{2}}\ dt\.\]
We may assume \(c_{12}=0\) without loss of generality.
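The same quadrature pattern recurs in the three-dimensional case below, and each formula is easily verified symbolically; for instance, the following SymPy sketch checks the expression for \(b_{3}\) with an arbitrary \(b_{1}\).

```python
# Symbolic check (sketch) that b_3 = -b_1 * Integral(c_13 / b_1**2) satisfies
# b_1' b_3 - b_3' b_1 = c_13 for an arbitrary function b_1(t).
import sympy as sp

t, c13 = sp.symbols('t c_13')
b1 = sp.Function('b1')(t)
b3 = -b1 * sp.Integral(c13 / b1**2, t)
print(sp.simplify(sp.diff(b1, t) * b3 - sp.diff(b3, t) * b1 - c13))  # 0
```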
## 5 3-dimensional case
### \(m=3\)
Now we turn to the 3D case and once again we take the simplest case to be our first example and consider the case \(m=3\). Thus we have \(v=(z_{1},z_{2},z_{3})\). We apply the QR-decomposition to \(A\) and write \(A=RB\), where \(R\in\mathbb{SO}(3)\) is a rotation matrix and \(B\) is an upper triangular matrix. By the incompressibility condition (2.12) we then have
\[\det(d\varphi)=\det(B)=b_{11}b_{22}b_{33}=1.\]
For condition (2.13) we have, after substituting \(b_{33}=1/(b_{11}b_{22})\),
\[h^{1} =b^{\prime}_{12}b_{13}-b^{\prime}_{13}b_{12}+b^{\prime}_{22}b_{23 }-b^{\prime}_{23}b_{22}+\frac{w_{1}}{b_{11}}-\frac{w_{2}b_{12}}{b_{11}b_{22}}+ w_{3}(b_{12}b_{23}-b_{13}b_{22})+y_{3}\rho_{010}-y_{2}\rho_{001} \tag{5.1}\] \[h^{2} =-b^{\prime}_{11}b_{13}+b^{\prime}_{13}b_{11}+w_{2}/b_{22}-w_{3}b _{11}b_{23}+y_{1}\rho_{001}-y_{3}\rho_{100}\] \[h^{3} =b^{\prime}_{11}b_{12}-b^{\prime}_{12}b_{11}+w_{3}b_{11}b_{22}+y _{2}\rho_{100}-y_{1}\rho_{010}\,\]
where
\[w=2\big{(}\langle R^{\prime}_{2},R_{3}\rangle,-\langle R^{\prime}_{1},R_{3} \rangle,\langle R^{\prime}_{1},R_{2}\rangle\big{)}\.\]
As in the first 2D case (4.1), we may use linear transformations to assume without loss of generality that either \(\rho=f(z_{3})+c_{0}z_{2}\), in which case \(a_{31}=a_{32}=0\), or \(\rho=c_{0}z_{3}\).
Suppose first that \(\rho=f(z_{3})+c_{0}z_{2}\). Since \(a_{31}=a_{32}=0\), the rotation matrix \(R\) must be of the form
\[R=\begin{pmatrix}\cos(\theta)&-\sin(\theta)&0\\ \sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{pmatrix} \tag{5.2}\]
for some function \(\theta(t)\), and \(y^{\prime}_{3}=b_{33}\). In this case \(w=(0,0,2\theta^{\prime})\) and \(h\) can be written as
\[h^{1} =b^{\prime}_{12}b_{13}-b^{\prime}_{13}b_{12}+b^{\prime}_{22}b_{23 }-b^{\prime}_{23}b_{22}+2\theta^{\prime}(b_{12}b_{23}-b_{13}b_{22})+c_{0}y_{3} =c_{23}\] \[h^{2} =-b^{\prime}_{11}b_{13}+b^{\prime}_{13}b_{11}-2\theta^{\prime}b_ {11}b_{23}=c_{13}\] \[h^{3} =b^{\prime}_{11}b_{12}-b^{\prime}_{12}b_{11}+2\theta^{\prime}b_ {11}b_{22}=c_{12}\.\]
There are different ways to find partial solutions to this system, but one fairly general solution can be found by assuming only that \(c_{12}=0\). We may then take \(b_{11}\), \(b_{22}\), and \(\theta\) as arbitrary functions and solve for \(b_{12}\), \(b_{23}\), and \(b_{13}\):
\[b_{12} =b_{11}\int\frac{2\theta^{\prime}b_{22}}{b_{11}}\ dt\] \[b_{23} =b_{22}\int\frac{-c_{13}b_{12}/b_{11}+c_{0}y_{3}-c_{23}}{b_{22}^{ 2}}\ dt\] \[b_{13} =b_{11}\int\frac{2\theta^{\prime}b_{11}b_{23}+c_{13}}{b_{11}^{2}} \ dt\.\]
Suppose instead that \(\rho=c_{0}z_{3}\) is linear. Once again we present the solution to (5.1) in the case \(h^{3}=0\). We choose \(b_{11}\), \(b_{22}\), and the whole rotation matrix \(R\) arbitrarily, and solve for \(b_{12}\), \(b_{23}\), and \(b_{13}\) to obtain
\[b_{12} =b_{11}\int\frac{w_{3}b_{22}}{b_{11}}\ dt\] \[b_{23} =b_{22}\int\frac{(w_{1}+b_{12}(c_{0}y_{1}-c_{13}))/b_{11}-c_{0}y_ {2}-c_{23}}{b_{22}^{2}}\ dt\] \[b_{13} =b_{11}\int\frac{-w_{2}/b_{22}+w_{3}b_{11}b_{23}-c_{0}y_{1}+c_{13 }}{b_{11}^{2}}\ dt\.\]
Finally, if \(\rho=c_{0}z_{3}\) and we choose \(b_{13}=b_{23}=0\) and \(R\) as in (5.2), we have a flow of the form (3.1). Thus we can add two terms to \(\varphi_{3}\) as shown in Section 3 and obtain \(\varphi=Av\), where
\[A =\begin{pmatrix}\cos(\theta)&-\sin(\theta)&0\\ \sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}b_{11}&b_{12}&0&0&0\\ 0&b_{22}&0&0&0\\ 0&0&1/(b_{11}b_{22})&a_{1}&a_{2}\end{pmatrix}\,\] \[v =(z_{1},z_{2},z_{3},f^{1}(z_{1},z_{2}),f^{2}(z_{1},z_{2}))\,\]
where again \(b_{11}\), \(b_{22}\), and \(\theta\) are arbitrary,
\[b_{12}=b_{11}\int\frac{w_{3}b_{22}}{b_{11}}\ dt\]
and \(a_{i}\) are linearly independent solutions of (3.4) with \(a=1/(b_{11}b_{22})\).
### \(m=5\)
Much like for the 2D cases with \(m=4\), we will only find a rather small subset of the solutions in the 3D case when \(m\geq 5\). The three cases considered in [13, Section 5] readily yield solutions to the Euler-Boussinesq equations as well. For example in [13, Section 5.1] there was the solution \(\varphi=Av\) with
\[A =\begin{pmatrix}\cos(\theta)/\sqrt{\theta^{\prime}}&-\sin(\theta )/\sqrt{\theta^{\prime}}&0&\cos(\theta)/\sqrt{\theta^{\prime}}&\sin(\theta)/ \sqrt{\theta^{\prime}}\\ \sin(\theta)/\sqrt{\theta^{\prime}}&\cos(\theta)/\sqrt{\theta^{\prime}}&0&- \sin(\theta)/\sqrt{\theta^{\prime}}&\cos(\theta)/\sqrt{\theta^{\prime}}\\ 0&0&\theta^{\prime}&0&0\end{pmatrix}\,\] \[v =\left(z_{1},z_{2},z_{3},f^{1}(z_{1},z_{2},z_{3}),f^{2}(z_{1},z_{ 2},z_{3})\right)\,,\] \[\det(d\varphi) =1-\left(f_{100}^{1}\right)^{2}-\left(f_{010}^{1}\right)^{2}\neq 0\,\]
where \(\theta(t)\) is arbitrary and \(f^{1}\) and \(f^{2}\) are an anti-CR pair with respect to \(z_{1}\) and \(z_{2}\). Recall that a solution to the Euler equations is also a solution to the Euler-Boussinesq equations if \(\nabla\rho\times\nabla\varphi_{3}=0\). Here \(\varphi_{3}=\theta^{\prime}z_{3}\), so choosing \(\rho\) to be a function of \(z_{3}\) we obtain a solution to the Euler-Boussinesq equations. The other two cases in [13, Section 5] are
\[A =\begin{pmatrix}b&0&0&0&b\ell\\ 0&b\ell&0&b&0\\ 0&0&1/(b^{2}\ell)&0&0\end{pmatrix}\,\] \[v =\left(z_{1},z_{2},z_{3},f^{1}(z_{1},z_{3}),f^{2}(z_{2},z_{3}) \right)\,,\] \[\det(d\varphi) =1-f_{100}^{1}f_{010}^{2}\neq 0\,\]
where \(b^{2}\ell^{\prime}=1\); and
\[A =\begin{pmatrix}0&0&0&b&b\ell\\ b\ell&b&0&0&0\\ 0&0&1/b^{2}&0&0\end{pmatrix}\,\] \[v =\begin{pmatrix}z_{1},z_{2},z_{3},f^{1}(z_{1},z_{3})+z_{2}f_{100}^{ 2}(z_{1},z_{3}),f^{2}(z_{1},z_{3})\end{pmatrix}\,,\] \[\det(d\varphi) =f_{100}^{1}+z_{2}f_{200}^{2}\neq 0\,\]
where \(b^{2}\ell^{\prime}=1\). These are also solutions to the Euler-Boussinesq equations with \(\rho=\rho(z_{3})\).
We can also find similar types of solutions in the other three cases mentioned at the start of [13, Section 5] if we require that \(\varphi_{3}=a_{33}z_{3}\). The derivation of the following formulas is very similar to that of the above three cases, which were derived in [13], so we present them without proof. In the following formulas, \(\theta\) is an arbitrary function with \(\theta^{\prime}>0\). If \(G_{14}+G_{25}=G_{15}-G_{24}=0\) and \(\varphi_{3}=a_{33}z_{3}\), then the solutions of the Euler equations, and the solutions of the Euler-Boussinesq equations with \(\rho=\rho(z_{3})\), are
\[A =\begin{pmatrix}\cos(k_{1}\theta)/\sqrt{\theta^{\prime}}&-\sin(k_ {1}\theta)/\sqrt{\theta^{\prime}}&0&\cos(k_{2}\theta)/\sqrt{\theta^{\prime}}&- \sin(k_{2}\theta)/\sqrt{\theta^{\prime}}\\ \sin(k_{1}\theta)/\sqrt{\theta^{\prime}}&\cos(k_{1}\theta)/\sqrt{\theta^{ \prime}}&0&\sin(k_{2}\theta)/\sqrt{\theta^{\prime}}&\cos(k_{2}\theta)/\sqrt{ \theta^{\prime}}\\ 0&0&\theta^{\prime}&0&0\end{pmatrix}\, \tag{5.3}\] \[v =\begin{pmatrix}z_{1},z_{2},z_{3},f^{1}(z_{1},z_{2}),f^{2}(z_{1},z_{2})\end{pmatrix}\,,\] \[\det(d\varphi) =1-|\nabla f|^{2}\neq 0\,\]
where \(f^{1}\) and \(f^{2}\) are an anti-CR pair.
If \(G_{14}=G_{25}=0\), we have
\[A =\begin{pmatrix}\cos(\theta)&-\sin(\theta)&0\\ \sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}e^{\theta}/\sqrt{\theta^{\prime}}&0&0&0&e^{- \theta}/\sqrt{\theta^{\prime}}\\ 0&e^{-\theta}/\sqrt{\theta^{\prime}}&0&e^{\theta}/\sqrt{\theta^{\prime}}&0\\ 0&0&\theta^{\prime}&0&0\end{pmatrix}\, \tag{5.4}\] \[v =\begin{pmatrix}z_{1},z_{2},z_{3},f^{1}(z_{1}),f^{2}(z_{2}) \end{pmatrix}\,,\] \[\det(d\varphi) =1-(f^{1})^{\prime}(f^{2})^{\prime}\neq 0\.\]
And if \(G_{14}+G_{25}=G_{15}=0\), then we have
\[A =\begin{pmatrix}\cos(\theta)&-\sin(\theta)&0\\ \sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}0&0&0&1/\sqrt{\theta^{\prime}}&\theta/ \sqrt{\theta^{\prime}}\\ \theta/\sqrt{\theta^{\prime}}&1/\sqrt{\theta^{\prime}}&0&0&0\\ 0&0&\theta^{\prime}&0&0\end{pmatrix}\, \tag{5.5}\] \[v =\begin{pmatrix}z_{1},z_{2},z_{3},f^{1}(z_{1})+z_{2}(f^{2})^{ \prime}(z_{1}),f^{2}(z_{1})\end{pmatrix}\,,\] \[\det(d\varphi) =(f^{1})^{\prime}+z_{2}(f^{2})^{\prime\prime}\neq 0\.\]
In the above three cases \(\varphi\) is of the form (3.1) so we can extend these solutions as shown in Section 3; for example the solution (5.3) can be extended as follows: \(\varphi=Av\), where
\[A=\begin{pmatrix}\cos(k_{1}\theta)/\sqrt{\theta^{\prime}}&-\sin(k_{1}\theta)/ \sqrt{\theta^{\prime}}&0&\cos(k_{2}\theta)/\sqrt{\theta^{\prime}}&-\sin(k_{2} \theta)/\sqrt{\theta^{\prime}}&0&0\\ \sin(k_{1}\theta)/\sqrt{\theta^{\prime}}&\cos(k_{1}\theta)/\sqrt{\theta^{\prime} }&0&\sin(k_{2}\theta)/\sqrt{\theta^{\prime}}&\cos(k_{2}\theta)/\sqrt{\theta^{ \prime}}&0&0\\ 0&0&\theta^{\prime}&0&0&a_{1}&a_{2}\end{pmatrix}\]
and \(v=\begin{pmatrix}z_{1},z_{2},z_{3},f^{1}(z_{1},z_{2}),f^{2}(z_{1},z_{2}),f^{3 }(z_{1},z_{2}),f^{4}(z_{1},z_{2})\end{pmatrix}\), where \(f^{1}\) and \(f^{2}\) are an anti-CR pair, and \(f^{3}\) and \(f^{4}\) are arbitrary. Here \(a_{i}\) are linearly independent solutions of (3.4) with \(a=\theta^{\prime}\).
We can add the same expression to (5.4) and (5.5) as well.
### \(m=6\), case 1
Let \(v=\begin{pmatrix}z_{1},z_{2},z_{3},f^{1}(z_{3}),f^{2}(z_{1},z_{2}),f^{3}(z_{1},z_{2})\end{pmatrix}\), where \(f^{1}\) is arbitrary and \(f^{2}\) and \(f^{3}\) are an anti-CR pair. We studied this case for the homogeneous Euler equations in [13, Section 7] and a large part of the calculations we need in the present case was already shown there. We choose \(\rho=\sum_{i}c_{i}v^{i}\) and look for solutions of the form
\[A=\begin{pmatrix}\cos(\theta)&-\sin(\theta)&0\\ \sin(\theta)&\cos(\theta)&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}b_{11}&b_{12}&\ell_{1}b_{11}+\ell_{2}b_{12}&- \ell_{2}b_{11}+\ell_{1}b_{12}&\ell_{1}b_{15}&-\ell_{2}b_{15}\\ 0&b_{22}&\ell_{2}b_{22}&\ell_{1}b_{22}&\ell_{1}b_{25}&-\ell_{2}b_{25}\\ 0&0&0&0&\ell_{1}b_{35}&-\ell_{2}b_{35}\end{pmatrix}\,\]
where the conservation of volume requires that
\[b_{11}b_{22}b_{35}\big{(}\ell_{1}^{2}+\ell_{2}^{2}\big{)}+1=0. \tag{5.6}\]
In this case
\[\det(d\varphi)=f_{100}^{2}+f_{010}^{2}f_{001}^{1}\neq 0\.\]
We collect the rest of the constraints from the expressions of \(h^{j}\), which are
\[h^{1}= Q_{23}+Q_{24}(f^{1})^{\prime}-(Q_{35}+c_{3}y_{5})f_{01}^{2}-(Q_{45}+c_ {4}y_{5})(f^{1})^{\prime}f_{01}^{2}+(Q_{36}+c_{3}y_{6})f_{10}^{2}+(Q_{46}+c_{4} y_{6})(f^{1})^{\prime}f_{10}^{2}\] \[h^{2}= -Q_{13}-Q_{14}(f^{1})^{\prime}+(Q_{35}+c_{3}y_{5})f_{10}^{2}+(Q_{4 5}+c_{4}y_{5})(f^{1})^{\prime}f_{10}^{2}+(Q_{36}+c_{3}y_{6})f_{01}^{2}+(Q_{46}+ c_{4}y_{6})(f^{1})^{\prime}f_{01}^{2}\] \[h^{3}= Q_{12}+(Q_{15}-Q_{26}+c_{1}y_{5}-c_{2}y_{6})f_{01}^{2}-(Q_{16}+Q_ {25}+c_{1}y_{6}+c_{2}y_{5})f_{10}^{2}-(Q_{56}+c_{5}y_{6}-c_{6}y_{5})|\nabla f^{ 2}|^{2}\.\]
All the equations that were used to prove [13, Lemma 7.4] are also present here. In that Lemma we showed that from these equations it follows that
\[\ell_{1}=k_{1}+\cos(\mu)\,\quad\ell_{2}=\sin(\mu)\,\] \[\mu^{\prime}b_{11}b_{22}=\sqrt{c_{12}^{2}-c_{13}^{2}-(k_{1}c_{12} +c_{23})^{2}}=:1/k_{2}\]
for some constant \(k_{1}\) and function \(\mu\). We also have
\[\mu^{\prime}b_{11}^{2}=(k_{1}c_{12}+c_{23})\cos(\mu)+c_{13}\sin( \mu)-c_{12}\] \[b_{12}=k_{2}\Big{(}(k_{1}c_{12}+c_{23})\sin(\mu)-c_{13}\cos(\mu) \Big{)}b_{22}\,.\]
We further assume that \(k_{1}=0\). Now condition (5.6) gives \(b_{35}=-k_{2}\mu^{\prime}\). Then \(a_{35}=-k_{2}\mu^{\prime}\cos(\mu)\) and \(a_{36}=k_{2}\mu^{\prime}\sin(\mu)\), which we can integrate to obtain \(y_{5}=-k_{2}\sin(\mu)\), \(y_{6}=-k_{2}\cos(\mu)\). Then we find that we must have \(c_{1}=c_{2}=c_{3}=c_{4}=0\), in which case all equations except the one containing \(Q_{56}\) are the same as in the case of Euler equations. Then, as is shown in [13], we have \(b_{15}=b_{25}=0\). The only remaining equations after this are those that contain \(Q_{12}\) and \(Q_{56}\). From them we solve for \(\mu\) and \(\theta\):
\[k_{2}^{2}(\mu^{\prime})^{3}=k_{2}c_{5}\cos(\mu)-k_{2}c_{6}\sin( \mu)+c_{56}\] \[2k_{2}\theta^{\prime}=-\mu^{\prime}/(c_{23}\cos(\mu)+c_{13}\sin( \mu)-c_{12})=-1/b_{11}^{2}\.\]
Thus we have a semi-explicit periodic solution where one ODE needs to be solved numerically. As \(\mu^{\prime}\) must not be zero, \(|c_{56}|\) has to be large enough for the solution to exist for all \(t\). The simplest example is obtained by choosing \(c_{13}=c_{23}=0\), \(c_{12}=-1\), which gives
\[A=\begin{pmatrix}\cos(\theta)/\sqrt{\theta^{\prime}}&-\sin(\theta)/\sqrt{ \theta^{\prime}}&\cos(\theta)/\sqrt{\theta^{\prime}}&\sin(\theta)/\sqrt{ \theta^{\prime}}&0&0\\ \sin(\theta)/\sqrt{\theta^{\prime}}&\cos(\theta)/\sqrt{\theta^{\prime}}&-\sin (\theta)/\sqrt{\theta^{\prime}}&\cos(\theta)/\sqrt{\theta^{\prime}}&0&0\\ 0&0&0&0&2\theta^{\prime}\cos(2\theta)&2\theta^{\prime}\sin(2\theta)\end{pmatrix}\, \tag{5.7}\]
where
\[8(\theta^{\prime})^{3}+c_{5}\cos(2\theta)+c_{6}\sin(2\theta)+c_{56}=0\.\]
One particular periodic particle path in this case is shown in Figure 5.1. This solution is neither stably nor unstably stratified; instead, the denser particles are periodically found either at the top or at the bottom of the fluid.
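As a rough illustration, the following sketch numerically integrates the above equation for \(\theta\) with SciPy; the constants \(c_{5}\), \(c_{6}\), \(c_{56}\) are arbitrary illustrative values chosen so that \((\theta^{\prime})^{3}\) stays positive, not values tied to Figure 5.1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants; |c56| is taken large enough that theta' never vanishes.
c5, c6, c56 = 1.0, 1.0, -10.0

def theta_rhs(t, theta):
    # From 8*(theta')^3 + c5*cos(2*theta) + c6*sin(2*theta) + c56 = 0
    cube = -(c5 * np.cos(2.0 * theta) + c6 * np.sin(2.0 * theta) + c56) / 8.0
    return np.cbrt(cube)

sol = solve_ivp(theta_rhs, (0.0, 20.0), [0.0], rtol=1e-9, atol=1e-12, dense_output=True)
print(sol.y[0, -1])  # theta is monotonically increasing here since theta' > 0
```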
### \(m=6\), case 2
Let \(v=\big{(}z_{1},z_{2},z_{3},f^{1}(z_{1}),f^{2}(z_{2}),f^{3}(z_{3})\big{)}\). We look for solutions of the form
\[A=\begin{pmatrix}a_{1}&0&0&0&0&\ell_{1}a_{1}\\ 0&a_{2}&0&\ell_{2}a_{2}&0&0\\ 0&0&a_{3}&0&\ell_{3}a_{3}&0\end{pmatrix}\,\]
where the volume preservation condition requires \(a_{1}a_{2}a_{3}=\ell_{1}\ell_{2}\ell_{3}=1\), giving
\[\det(d\varphi)=1+(f^{1})^{\prime}(f^{2})^{\prime}(f^{3})^{\prime}\neq 0\.\]
Now
\[h=\begin{pmatrix}\ell_{3}^{\prime}a_{3}^{2}(f^{2})^{\prime}(z_{2})+y_{3}\rho_{100}-y_{5}(f^{2})^{\prime}(z_{2})\rho_{001}\\ \ell_{1}^{\prime}a_{1}^{2}(f^{3})^{\prime}(z_{3})-y_{3}\rho_{100}\\ \ell_{2}^{\prime}a_{2}^{2}(f^{1})^{\prime}(z_{1})+y_{5}(f^{2})^{\prime}(z_{2})\rho_{100}\end{pmatrix}\,\]
from which we see that generally we must have \(\rho=c_{1}z_{3}+c_{2}f^{2}(z_{2})\). Thus \(A\) is a solution if the following equations are satisfied:
\[\begin{split} a_{1}a_{2}a_{3}&=1\qquad\qquad\qquad \qquad\ell_{1}\ell_{2}\ell_{3}=1\qquad\ell_{1}^{\prime}a_{1}^{2}=-c_{16}\\ \ell_{2}^{\prime}a_{2}^{2}&=-c_{24}\quad\ell_{3}^{ \prime\prime}a_{3}+2\ell_{3}^{\prime}a_{3}^{\prime}+c_{2}-c_{1}\ell_{3}=0\.\end{split} \tag{5.8}\]
Here one function can be given arbitrarily. We can try to give solutions in terms of \(\ell_{3}\) since that is the only function with second-order dependence. We get
\[a_{3}=\frac{k_{0}+\int\frac{c_{1}\ell_{3}-c_{2}}{2\sqrt{|\ell_{3}^{\prime}|}} \ dt}{\sqrt{|\ell_{3}^{\prime}|}} \tag{5.9}\]
from the last equation of (5.8) and then we can further use the remaining equations of (5.8) to solve for the other functions in terms of \(a_{3}\) and \(\ell_{3}\). Eliminating \(a_{1}\), \(a_{2}\), and \(\ell_{2}\), we obtain the equation
\[\ell_{3}(\ell_{1}^{\prime}/\ell_{1})^{2}+\ell_{3}^{\prime}(\ell_{1}^{\prime}/ \ell_{1})+c_{16}c_{24}\ell_{3}^{2}a_{3}^{2}=0\,\]
which is a quadratic polynomial equation for \(\ell_{1}^{\prime}/\ell_{1}\). Assuming that its discriminant \(\delta=\ell_{3}^{\prime 2}-4c_{16}c_{24}\ell_{3}^{3}a_{3}^{2}\geq 0\), we can solve \(\ell_{1}^{\prime}/\ell_{1}\) and integrate to obtain \(\ell_{1}\) and eventually the rest of the functions. Let \(s^{2}=\delta/(4\ell_{3}^{2})\); then we compute
\[\begin{split}\ell_{1}&=\frac{k_{1}}{\sqrt{|\ell_{3 }|}}\exp\Big{(}\int s(t)\ dt\Big{)}\ \ \ \ \ \ell_{2}=1/(\ell_{1}\ell_{3})\\ a_{1}^{2}&=-c_{16}/\ell_{1}^{\prime}\qquad\qquad \qquad\qquad\qquad a_{2}^{2}=-c_{24}/\ell_{2}^{\prime}\.\end{split} \tag{5.10}\]
\(k_{1}\) may be scaled to 1. Formulas (5.9) and (5.10) give a local solution but unfortunately formula (5.9) is meaningful only when \(\ell_{3}^{\prime}\neq 0\). An example where \(\ell_{3}\) is not monotone can be found by supposing that \(a_{3}=1\), \(c_{1}=-N^{2}\) is negative and also \(c_{16}c_{24}<0\). Then the final equation of (5.8) gives
\[\ell_{3}=-c_{2}/N^{2}+k_{2}\cos(Nt)+k_{3}\sin(Nt). \tag{5.11}\]
Choosing the constants such that \(\ell_{3}\) is always positive, (5.11) along with (5.10) gives a global solution describing a stably stratified fluid.
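As a quick symbolic sanity check of (5.11) against the last equation of (5.8) (with \(a_{3}=1\), so \(a_{3}^{\prime}=0\), and \(c_{1}=-N^{2}\)), one can run:

```python
import sympy as sp

t, N, c2, k2, k3 = sp.symbols('t N c2 k2 k3', real=True)
ell3 = -c2 / N**2 + k2 * sp.cos(N * t) + k3 * sp.sin(N * t)   # candidate solution (5.11)
c1 = -N**2
# last equation of (5.8) with a_3 = 1: ell3'' * a_3 + 2 * ell3' * a_3' + c_2 - c_1 * ell3
residual = sp.diff(ell3, t, 2) + c2 - c1 * ell3
print(sp.simplify(residual))  # simplifies to 0
```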
|
2305.19754 | Sentence Simplification Using Paraphrase Corpus for Initialization | Neural sentence simplification methods based on the sequence-to-sequence framework
have become the mainstream approach to the sentence simplification (SS) task.
Unfortunately, these methods are currently limited by the scarcity of parallel
SS corpora. In this paper, we focus on how to reduce the dependence on parallel
corpora by leveraging a careful initialization of neural SS methods from a
paraphrase corpus. Our work is motivated by the following two findings: (1) A
paraphrase corpus includes a large proportion of sentence pairs that belong to an
SS corpus. (2) We can construct large-scale pseudo-parallel SS data by keeping
those sentence pairs with a higher complexity difference. Therefore, we propose
two strategies for initializing neural SS methods using a paraphrase corpus. We
train three different neural SS methods with our initialization and obtain
substantial improvements on the available WikiLarge data compared with the same
methods trained without initialization. | Kang Liu, Jipeng Qiang | 2023-05-31T11:39:10Z | http://arxiv.org/abs/2305.19754v1 | # Sentence Simplification Using Paraphrase Corpus for Initialization
###### Abstract
Neural sentence simplification methods based on the sequence-to-sequence framework have become the mainstream approach to the sentence simplification (SS) task. Unfortunately, these methods are currently limited by the scarcity of parallel SS corpora. In this paper, we focus on how to reduce the dependence on parallel corpora by leveraging a careful initialization of neural SS methods from a paraphrase corpus. Our work is motivated by the following two findings: (1) A paraphrase corpus includes a large proportion of sentence pairs that belong to an SS corpus. (2) We can construct large-scale pseudo-parallel SS data by keeping those sentence pairs with a higher complexity difference. Therefore, we propose two strategies for initializing neural SS methods using a paraphrase corpus. We train three different neural SS methods with our initialization and obtain substantial improvements on the available WikiLarge data compared with the same methods trained without initialization.
Sentence Simplification, Paraphrase Corpus, Seq2Seq
## I Introduction
The goal of the sentence simplification (SS) task is to rephrase a sentence into a form that is easier to read and understand while still retaining its semantic meaning, which can help people with reading difficulties such as non-native speakers [1, 2] and people with dyslexia [3] or autism [4]. Second language learners [5] and people with low literacy [6] can also benefit from it.
Since 2010, the SS task has been addressed as a monolingual machine translation problem, translating from complex sentences to simplified sentences. Existing SS methods have shifted from statistical sentence simplification methods [7, 8, 9] to neural sentence simplification methods [10, 11, 12, 13]. Neural sentence simplification methods adopt sequence-to-sequence (Seq2Seq) models. Seq2Seq models work well only when provided with a massive parallel corpus of complex and simplified sentences. Unfortunately, these approaches are currently limited by the scarcity of such parallel corpora. For example, the biggest and most widely used SS training dataset, WikiLarge [10], is composed of 296,402 sentence pairs, which align sentences from the 'ordinary' English Wikipedia and the 'simple' English Wikipedia. WikiLarge has been criticized recently [14, 15] because it contains a large proportion of noisy data, which leads to systems that generalize poorly. Some work [12, 16, 2, 17] focuses on unsupervised SS methods for alleviating the need for supervised SS corpora. In this paper, we focus on how to reduce the dependence on parallel corpora by leveraging a careful initialization for neural SS methods.
There are large-scale paraphrase datasets [18, 19] for paraphrase generation, whose aim is to generate an output sentence that preserves the meaning of the input sentence but contains variations in word choice and grammar. Compared with a paraphrase dataset, an SS dataset additionally requires that the two sentences of each pair differ in text complexity level. We found that a large proportion of the sentence pairs in a paraphrase dataset satisfy the expectations of the SS task. For example, the paraphrase dataset ParaBank [19] was created automatically from bilingual text by pivoting over the non-English language using neural machine translation (NMT) models. NMT models usually tend to generate more high-frequency tokens and fewer low-frequency tokens [20, 21]. Considering that the higher the word frequency, the simpler the word, this phenomenon could be beneficial to the SS task. Table 1 shows two sentence pairs from the ParaBank paraphrase corpus. We can see that the translated target sentence is simpler than the source sentence.
In this paper, we utilize a paraphrase corpus to initialize neural SS methods and then fine-tune these methods on a real SS dataset. Specifically, we design two strategies for initialization. (1) We directly utilize the whole paraphrase corpus to train an initial SS method. (2) Considering that many sentence pairs in a paraphrase corpus do not satisfy the expectations of the SS task, we only select those sentence pairs with a higher complexity difference using a text readability formula (the Flesch reading ease score [22]), which is designed to indicate how difficult a sentence is to understand. Experimental results show that neural SS methods with our initialization outperform the same methods without initialization.
The following sections are organized as follows: Section 2 describes the related work; Section 3 presents how to initialize neural SS methods; Section 4 shows the experimental results; Section 5 summarizes the paper.
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Source** & This **proposal** will be **communicated** to the trader's credentials. \\ \hline
**Target** & This **plan** will be **sent** to the trader's creditors. \\ \hline \hline
**Source** & It would be **prudent** for you not to be **deceived** by your masquerade. \\ \hline
**Target** & It would be **wise** for you not to be **fooled** by your own masquerade. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Two example sentence pairs from the ParaBank paraphrase corpus.
## II Related Work
### _Sentence Simplification_
Automatic SS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels. It has attracted much attention recently as it can make texts more accessible to wider audiences and, used as a pre-processing step, improve the performance of various NLP tasks and systems. Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) [23] are utilized for extracting simplification rules. It is easy to mix up the automatic text simplification (TS) task and the automatic summarization task [7, 24]. TS is different from text summarization, as the focus of text summarization is to reduce the length and redundant content.
At the lexical level, lexical simplification often substitutes difficult words with more common words, which only requires a large corpus of regular text for obtaining word embeddings used to find words similar to the complex word [25, 26, 27, 28]. Woodsend and Lapata [29] presented a data-driven model based on a quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. Wubben et al. [30] proposed a phrase-based machine translation (PBMT) model that is trained on ordinary-simplified sentence pairs. Xu et al. [9] proposed a syntax-based machine translation model using simplification-specific objective functions and features to encourage simpler output.
Neural machine translation, based on the sequence-to-sequence (Seq2Seq) architecture, has been shown to produce state-of-the-art results [21, 31, 32]. In recent years, many neural SS models based on Seq2Seq have been proposed and achieve good results [10, 11, 12, 13, 33, 34]. The main limitation of the aforementioned neural SS models is their dependence on parallel ordinary-simplified sentence pairs [15]. Because ordinary-simplified sentence pairs are expensive and time-consuming to build, the largest available dataset is WikiLarge [10], which only has 296,402 sentence pairs. This dataset is insufficient for neural SS models to obtain the best parameters. However, paraphrase corpora include a large number of sentence pairs that satisfy the expectations of the SS task. In this paper, we therefore investigate the use of paraphrase data for text simplification. We are the first to show that paraphrase data can be effectively adapted for the SS task.
### _Unsupervised Sentence Simplification_
To overcome the scarcity of parallel SS corpus, unsupervised SS methods without using any parallel corpus have attracted much attention. Existing unsupervised SS methods can be divided into two classifications. The first scheme focuses on how to design an unsupervised SS method, and the second scheme concentrates on how to build a parallel SS corpus.
[35] and [36] are pipeline-based unsupervised frameworks: the pipeline of Narayan and Gardent is composed of lexical simplification, sentence splitting, and phrase deletion, while the pipeline of Kumar et al. includes deletion, reordering, and lexical simplification. [37] proposed an unsupervised neural text simplification method based on a shared encoder and two decoders, which learns the neural network parameters only from a set of simple sentences and a set of complex sentences. In other languages, there are unsupervised statistical machine translation approaches for Japanese [38] and back-translation for Spanish and Italian [39]. The performance of the above unsupervised SS methods is, however, often below that of their supervised counterparts.
Some work [16, 12] constructed SS corpora by searching for the most similar sentences using sentence embedding models and trained SS methods on the constructed corpora. [16] calculated the similarity between sentences from English Wikipedia using Word Mover's distance [40]. [12] adopted the multilingual sentence embedding model LASER [32] to calculate similarities among 1 billion sentences from CCNET [41]. Since the aim of the two works is to find the most similar sentences in a large corpus, they cannot guarantee that the aligned sentences preserve the same meaning. Lv et al. [42] construct large-scale pseudo-parallel SS data by pairing the source sentences of a translation corpus with the translations of their references in a bridge language.
### _Paraphrase Mining_
Some work has focused on generating paraphrase corpora for neural machine translation (NMT) systems using back-translation, where back-translation [43] is a technique widely used in NMT to enhance the target monolingual data during the training process. Specifically, the back-translation technique is applied by translating the non-English side of bitexts back to English [44] and pairing the translations with the references. Two large paraphrase corpora (PARANMT [7] and PARABANK [19]) were built based on this idea and have been proven to have great potential in different translation-related tasks. Round-trip translation is also used for mining paraphrases [45] by translating sentences into another language and then translating the result back into the original language. As in machine translation, back-translation has been used to improve the performance of neural SS methods [38, 39, 46]. [7] trained a paraphrasing model on a paraphrase corpus generated via back-translation, which is used to preprocess source sentences of low-resource language pairs before feeding them into the NMT system.
The above work on building large paraphrase corpora serves NMT and other tasks and is not tailored to the SS task. The difference in sentence complexity between the original sentence and the translated sentence of each pair has not been taken into consideration, which is vitally important for the SS task. Therefore, we focus on how to build a sentence simplification corpus, instead of a paraphrase corpus.
## III Method
In this section, we will present how to utilize paraphrase corpus to initialize neural SS models.
### _Relation between Paraphrase Corpus and SS Corpus_
Some work has focused on generating paraphrase corpora for neural machine translation (NMT) systems using the back-translation technique, where back-translation [47] is a technique widely used in NMT to enhance the target monolingual data during the training process. Specifically, the back-translation technique is applied by translating the non-English side of bitexts back to English [48] and pairing the translations with the references. We can see that the two sentences of each pair in a paraphrase corpus should preserve the same meaning.
**Hypothesis 1: An SS corpus can be regarded as a subset of a paraphrase corpus.** Based on the definition of the SS task, an SS corpus should satisfy the following two requirements: (1) The two sentences of each pair should convey the same meaning. (2) The two sentences of each pair should differ in text complexity level. A paraphrase corpus only needs to satisfy the first requirement, while an SS corpus needs to satisfy both.
**Hypothesis 2: A paraphrase corpus includes a large proportion of sentence pairs belonging to an SS corpus.** Neural machine translation models usually tend to generate more high-frequency tokens and fewer low-frequency tokens [20, 21]. Word frequency is one of the most popular features used in sentence simplification [26, 46]. In general, the higher the frequency, the easier the word. Many empirical observations support this hypothesis, as shown in Table 1.
### _Our Initialization Strategy_
We provide two strategies to initialize neural SS models.
**(1) First Initialization Strategy:** Based on Hypothesis 2, we directly utilize the paraphrase corpus to train an initial neural SS model. Here, we choose ParaBank as our paraphrase corpus. Due to the memory limits of our machine, we randomly choose only 2 million sentence pairs from ParaBank [19]. Finally, we train the neural SS model on the real SS corpus.
**(2) Second Initialization Strategy:** Our second initialization strategy is shown in Figure 1. Different from the first one, we only select those sentence pairs from the paraphrase corpus that differ in text complexity level. We measure the difference in text complexity using the Flesch reading ease score (FRES) [22], which is designed to indicate how difficult a sentence is to understand and is widely used to evaluate the performance of SS. FRES, proposed in 1975, is a classical formula in the field of text assessment whose coefficients were set by linguists. It is based on text features such as the average sentence length and the average number of syllables per word. FRES grades a text from 0 to 100, and higher scores indicate that the sentences are easier to read. The difference of one school grade level in FRES is 10, e.g., 5th grade (100.00-90.00) and 6th grade (90.0-80.0). The formula of FRES is
\[206.835-1.015\left(\frac{\text{\# words}}{\text{\# sentences}}\right)-84.6\left(\frac{\text{\# syllables}}{\text{\# words}}\right) \tag{1}\]
To ensure simplicity, we only keep the sentence pairs with a FRES difference higher than a threshold \(h_{\text{FRES}}\). In our experiments, we set \(h_{\text{FRES}}\) = 10.0, where \(h_{\text{FRES}}=10.0\) means that for each sentence pair, the simplified version should be at least one school level simpler than its complex counterpart.
After obtaining pseudo SS corpus, we first initialize neural SS method using pseudo SS corpus, and train neural SS method on real SS corpus.
**(3) Statistics of our choosing paraphrase corpora:**
We report the statistics of our choosing paraphrase corpora in Table 2. Here, we report the statistics of real SS corpus WikiLarge for a comparison. WikiLarge is the most popular and wildly used SS corpus. Because the SS task is a paraphrase generation task using easier words, the length of the complex sentence and the simple sentence are roughly the same, and the size of the vocabulary in the simple sentence set should be smaller than the complex sentence set. In contrast to the paraphrase corpora, the length of the complex sentence in WikiLarge is longer than the simple sentence, because it focuses on the deletion of content.
## IV Experiments
### _Experimental setup_
**Neural SS methods:** To validate that our two initialization strategies (First and Second) are effective for different neural SS methods, we apply them to the following three methods:
* **LSTM**, an RNN-based Seq2Seq model with a soft attention layer.
* **Transformer**, which is based solely on attention mechanisms.
* **Bart1**, a sequence-to-sequence model pretrained with a denoising objective.
Footnote 1: [https://dl.fbaipublicfiles.com/fairseq/models/bart.base.tar.gz](https://dl.fbaipublicfiles.com/fairseq/models/bart.base.tar.gz)
We implement the above three methods via the open-source toolkit fairseq [49]. We adopt the Adam optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.98\), \(\epsilon=10^{-8}\), and dropout is set to 0.3 for all three methods. The initial learning rates are set to \(1\times 10^{-4}\), \(1\times 10^{-4}\), and \(1\times 10^{-5}\) for the LSTM-based, Transformer-based, and BART-based models, respectively.
**Evaluation Dataset:** We select WikiLarge as the training SS corpus.
\begin{table}
\begin{tabular}{l|r|r|r} \hline \hline & **WikiLarge** & **First** & **Second** \\ \hline
**Vocab(complex)** & 169,349 & 282,279 & 96,524 \\
**Vocab(simple)** & 135,607 & 245,447 & 92,156 \\ \hline
**Avg(complex)** & 21.93 & 12.04 & 10.49 \\
**Avg(simple)** & 16.14 & 12.65 & 11.31 \\ \hline
**Total pairs** & 296,402 & 2,000,000 & 321,900 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of the paraphrase corpora selected by the two strategies compared with WikiLarge. Avg(complex) and Avg(simple) are the average numbers of words in the complex sentences and the simpler sentences, respectively.
For evaluating neural SS methods, we select TurkCorpus [9] as our evaluation benchmark dataset. The corpus consists of 2,000 validation sentences and 359 test sentences. In TurkCorpus, each complex sentence has 8 reference simplifications.
**Evaluation Metrics:** SARI [9] is the main metric for evaluating text simplification models. It calculates the arithmetic mean of the \(n\)-gram F1 scores of three operations (keeping, adding, and deleting) by comparing the generated sentences to multiple simplification references and the original sentences. A higher SARI score means better simplification performance. We use the SS evaluation toolkit Easse [50] to calculate the SARI metric.
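As a usage sketch, SARI can be computed as below, assuming the corpus_sari helper exposed by the Easse toolkit; the sentences are placeholders, and TurkCorpus would supply 8 reference lists instead of 1.

```python
from easse.sari import corpus_sari

orig_sents = ["the cat perched atop the refrigerator ."]
sys_sents = ["the cat sat on top of the fridge ."]
refs_sents = [  # one inner list per reference set
    ["the cat sat on the fridge ."],
]

print(corpus_sari(orig_sents=orig_sents, sys_sents=sys_sents, refs_sents=refs_sents))
```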
### _Experimental Results_
The final evaluation results are shown in Table 3. We can see that the three neural SS methods (LSTM, Transformer, and Bart) with our First initialization strategy outperform their counterparts without initialization. The results indicate that our first initialization strategy is effective for neural SS methods. As expected, the Second initialization strategy with a selector further improves the performance of the neural SS methods. With a selector, our paraphrase corpus becomes more suitable for the SS task. The selector improves the SARI score by 0.68 for LSTM, 0.88 for Transformer, and 0.74 for Bart compared with the models without initialization. From the simplified sentences, we also found that Second improves the readability of the simplification results to varying degrees compared with the First initialization strategy without a selector. This indicates that the noisy sentences in the paraphrase corpus harm model training under the First initialization strategy. We can conclude that Second is the more reasonable and better method.
Table 4 shows examples of the simplification results generated by Transformer without initialization and by Transformer with our second initialization method. In the first example, we can see that our proposed method replaces 'originally' with 'first', which is the same as the reference, while Transformer only repeats the original sentence. In the second example, our method replaces 'merged' with 'joined' while Transformer again merely repeats. This indicates that our initialization method performs more simplification through word replacement compared with the baseline method.
## V Conclusions
Considering the relationship between paraphrase corpora and SS corpora, we propose two strategies for initializing neural sentence simplification (SS) models using a paraphrase corpus. Experimental results verify that neural SS methods with our initialization outperform the same methods without initialization. In this paper, we use a small version of the paraphrase corpus; in future work, we can use a bigger paraphrase corpus.
\begin{table}
\begin{tabular}{p{14.2pt} p{142.3pt}} \hline \hline \multirow{2}{*}{Complex Reference} & it was **originally** thought that the debris thrown up by the collision filled in the smaller craters. it was **originally** thought that the debris thrown up by the collision-ion filled in the smaller craters. it was **originally** thought that the debris thrown up by collision-ion filled in the smaller craters. it was **first** thought that the debris thrown up by the collision-ion filled in the smaller craters. \\ \hline Complex Reference & both names became defunct in 2007 when they were **merged** into the national museum of scotland. **merged** into the national museum of scotland. **merged** into the national museum of scotland. \\ \hline \hline \end{tabular}
\end{table}
Table 4: The examples of simplified results generated by Transformer with the second initialization.
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt} p{142.3pt}} \hline \hline Model & Condition & SARI \\ \hline \multirow{3}{*}{LSTM} & - & 35.77 \\ & First & 36.25 \\ & Second & **36.45** \\ \hline \multirow{3}{*}{Transformer} & - & 37.29 \\ & First & 37.64 \\ & Second & **38.17** \\ \hline \multirow{3}{*}{Bart} & - & 38.03 \\ & First & 38.29 \\ \cline{1-1} & Second & **38.77** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation results (SARI) of the experiments.
Figure 1: Overview of our training approach. A pseudo SS corpus is synthesized by selecting complex-simple sentence pairs with a higher complexity difference. We first train the neural SS method on the pseudo SS corpus and then train it on the real SS corpus.
In addition, we could also build a new paraphrase corpus with different kinds of selectors, not just the FRES selector.
|
2309.13812 | Comparison of Lift and Drag Modulation Control for Ice Giant Aerocapture
Missions | Aerocapture is an orbit insertion technique which uses atmospheric drag from
a single pass to decelerate a spacecraft. Compared to conventional propulsive
insertion, aerocapture can impart large velocity changes to the spacecraft with
almost no propellant. At the far reaches of the outer Solar System, the ice
giants remain the last class of planets to be explored using orbiters. Their
enormous heliocentric distance presents significant mission design challenges,
particularly the large $\Delta$V required for orbit insertion. This makes
aerocapture an attractive method of orbit insertion, but also challenging due
to the comparatively large navigation and atmospheric uncertainties. The
present study performs a comparison of the lift and drag modulation control and
their implications for future missions. Lift modulation provides nearly twice
the entry corridor width as drag modulation, and can thus accommodate larger
uncertainties. Lift modulation offers continuous control throughout the flight
enabling it to adjust the trajectory in response to the actual density profile
encountered. Drag modulation offers much more benign aero-thermal conditions
compared to lift modulation. With drag modulation, there is no control
authority after the drag skirt jettison making the vehicle more susceptible to
exit state errors from density variations encountered after the jettison event. | Athul Pradeepkumar Girija | 2023-09-25T01:42:00Z | http://arxiv.org/abs/2309.13812v1 | # Comparison of Lift and Drag Modulation Control for Ice Giant Aerocapture Missions
###### Abstract
Aerocapture is an orbit insertion technique which uses atmospheric drag from a single pass to decelerate a spacecraft. Compared to conventional propulsive insertion, aerocapture can impart large velocity changes to the spacecraft with almost no propellant. At the far reaches of the outer Solar System, the ice giants remain the last class of planets to be explored using orbiters. Their enormous heliocentric distance presents significant mission design challenges, particularly the large \(\Delta\)V required for orbit insertion. This makes aerocapture an attractive method of orbit insertion, but also challenging due to the comparatively large navigation and atmospheric uncertainties. The present study performs a comparison of the lift and drag modulation control and their implications for future missions. Lift modulation provides nearly twice the entry corridor width as drag modulation, and can thus accommodate larger uncertainties. Lift modulation offers continuous control throughout the flight enabling it to adjust the trajectory in response to the actual density profile encountered. Drag modulation offers much more benign aero-thermal conditions compared to lift modulation. With drag modulation, there is no control authority after the drag skirt jettison making the vehicle more susceptible to exit state errors from density variations encountered after the jettison event.
Lift Modulation, Drag Modulation, Ice Giant, Aerocapture
## I Introduction
Aerocapture is an orbit insertion technique which uses atmospheric drag from a single pass to decelerate a spacecraft [1, 2]. Compared to conventional propulsive insertion, aerocapture can impart large velocity changes to the spacecraft with almost no propellant [3]. At the far reaches of the Solar System, the ice giants remain the last class of planets to be explored using orbiter spacecraft [4, 5, 6]. Their enormous heliocentric distance presents significant mission design challenges, particularly the large \(\Delta\)V required for orbit insertion [7]. This makes aerocapture an attractive method of orbit insertion at the ice giants, Uranus and Neptune [8, 9]. Figure 1 shows an illustration of the aerocapture maneuver, with the vehicle entering the atmosphere, reducing its energy, and then exiting the atmosphere. To accommodate the uncertainties in the navigated delivery state, the atmosphere, and the vehicle aerodynamics, and to exit the atmosphere with the desired exit state, it is necessary for the vehicle to have aerodynamic control authority during its flight [10]. If the vehicle enters too steeply and penetrates too deep into the atmosphere, it will bleed too much energy and may not exit. If the vehicle enters too shallow, it may not bleed enough energy and may exit without getting captured. Aerodynamic control allows the vehicle to autonomously control the trajectory within the corridor and hence the energy depletion. A recent NASA study has highlighted the need for a comparison of lift and drag modulation control at Uranus and Neptune [11]. The present study uses the Aerocapture Mission Analysis Tool (AMAT) to compare lift and drag modulation control and their implications for ice giant aerocapture missions [12].
Figure 1: Schematic illustration of the aerocapture maneuver.
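To put the propellant savings in perspective, the short sketch below estimates the propulsive \(\Delta\)V that a purely chemical insertion at Uranus would require for a capture orbit comparable to those considered later; the periapsis altitude and arrival v-infinity are illustrative assumptions rather than values from this study.

```python
import math

MU_URANUS = 5.794e6   # km^3/s^2, approximate GM of Uranus
R_URANUS = 25559.0    # km, approximate equatorial radius

def insertion_dv(v_inf_kms, periapsis_alt_km, apoapsis_radius_km):
    """Impulsive delta-V (km/s) at periapsis from a hyperbolic approach into an elliptical orbit."""
    r_p = R_URANUS + periapsis_alt_km
    v_hyperbolic = math.sqrt(v_inf_kms**2 + 2.0 * MU_URANUS / r_p)
    a_ellipse = 0.5 * (r_p + apoapsis_radius_km)
    v_elliptical = math.sqrt(MU_URANUS * (2.0 / r_p - 1.0 / a_ellipse))
    return v_hyperbolic - v_elliptical

# Illustrative fast arrival (v_inf ~ 10 km/s), 4000 km periapsis altitude,
# and the 500,000 km apoapsis used as the aerocapture target in this study.
print(f"{insertion_dv(10.0, 4000.0, 5.0e5):.2f} km/s")
```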
## II Lift Modulation
Bank angle modulation (a subset of lift modulation) has been successfully used on the Apollo and MSL missions and is a proven and well-understood technique. The only control variable is the bank angle, and by pointing the lift vector up or down, the vehicle can control its descent rate and energy depletion. Early studies of ice giant aerocapture at Neptune in the 2000s used a mid-L/D (L/D=0.8) aeroshell to accommodate the large uncertainties [13]. However, recent studies have shown that by using high arrival v_inf trajectories, it becomes possible to use low-L/D aeroshells such as MSL (L/D = 0.24) while also enabling shorter flight times [14, 15]. Figure 2 shows the aerocapture trajectories for an MSL-like vehicle entering Uranus at 29 km/s. The target apoapsis is 500,000 km. The aerocapture corridor is [-12, -11] deg, with a width of 1.0 deg. The peak deceleration is in the range of 4-10g, and the peak heat rate is in the range of 1400-1800 W/cm\({}^{2}\). The peak heat rate for aerocapture is considerably less than that for entry probes, which enter more steeply, and is well within the tested limits of the HEEET thermal protection system [16]. The heat load is in the range of 200-300 kJ/cm\({}^{2}\), which is substantial but also expected to be within the capability of HEEET. Based on empirical relations, the TPS mass fraction is expected to be about 25%, and the structural mass fraction is also expected to be about 25%, leaving about 50% of the arrival mass to be inserted into orbit after aerocapture [17, 18].
Figure 2: Lift modulation aerocapture trajectory at Uranus with an MSL-like aeroshell (L/D=0.24).
## III Drag Modulation
Drag modulation is a simpler control technique that avoids the need for the propellant-fed reaction control thrusters required for bank angle modulation [19]. In its simplest variant, the single-event jettison, the only control variable is the time at which the drag skirt is jettisoned. By adjusting the jettison time, the energy depletion can be controlled. Unlike lift modulation, which offers continuous control throughout the atmospheric flight, drag modulation provides no control authority after drag skirt jettison. Drag modulation uses a low ballistic coefficient entry system, which enables much lower heating rates compared to lift modulation, which uses a high ballistic coefficient rigid aeroshell. The low ballistic coefficient system decelerates much higher up in the atmosphere, keeping the heating rates low. However, the flexible TPS (such as the carbon cloth used in ADEPT) can only accommodate smaller heat rates (200-300 W/cm\({}^{2}\)), and thus cannot use high-speed arrival trajectories [20, 21]. Figure 3 shows a nominal aerocapture trajectory for a 12-m ADEPT drag modulation vehicle (beta = 30 kg/m\({}^{2}\), BC ratio = 4.14) entering Uranus at 26 km/s [22]. The target apoapsis is 500,000 km. The aerocapture corridor is [-10.71, -10.25] deg, with a width of 0.46 deg. The peak deceleration is 5g, and the peak heat rate is about 300 W/cm\({}^{2}\). The total heat load is about 77 kJ/cm\({}^{2}\). The estimated fraction of the arrival mass delivered to orbit is 50%, which is the same as with lift modulation aerocapture [23].
Figure 3: Drag modulation aerodynamic trajectory at Uranus with a 12-m diameter ADEPT.
## IV Comparison
Table 1 compares lift and drag modulation results at Uranus. The first observation is that lift modulation with the high entry speed provides a corridor width that is nearly twice that of drag modulation. This implies that lift modulation can accommodate larger navigation, delivery, and atmospheric uncertainties compared to drag modulation. In addition, lift modulation offers continuous control throughout the flight, enabling it to adjust the trajectory in response to the actual density profile encountered. With drag modulation, there is no control authority after the drag skirt jettison, making the vehicle susceptible to unexpected density pockets and other variations which may be present in the atmosphere [24].
The second difference is the peak heat rate, which is in the range of 1400-1800 W/cm\({}^{2}\) for lift modulation, compared to 200-300 W/cm\({}^{2}\) for drag modulation aerocapture. The resulting total heat load is in the range of 200-300 kJ/cm\({}^{2}\) for lift modulation and 40-75 kJ/cm\({}^{2}\) for drag modulation. Hence the low ballistic coefficient system used in drag modulation offers a much more benign aero-thermal environment compared to lift modulation [25, 26].
The third difference is that for lift modulation, even with a high arrival speed, the peak heat rate is well within the tested limits for HEEET. For drag modulation, the peak heat rate is near the upper limit of the carbon cloth TPS, which is tested to around 250 W/cm\({}^{2}\). Hence lift modulation architectures can accommodate the high arrival speeds which occur on high-energy, short-flight-time trajectories, while drag modulation architectures tend to be more limited in terms of the maximum arrival speed due to the constraints on the peak heat rate of the carbon cloth TPS [27].
## V Conclusions
The study compared lift and drag modulation control techniques and explored their implications for ice giant aerocapture. Lift modulation provides nearly twice the corridor width of drag modulation, and can thus accommodate larger delivery and atmospheric uncertainties. Lift modulation offers continuous control throughout the flight, enabling it to adjust the trajectory in response to the actual density profile encountered. Drag modulation offers much more benign aero-thermal conditions for aerocapture compared to lift modulation. With drag modulation, there is no control authority after the drag skirt jettison, making the vehicle more susceptible to off-nominal density variations.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Control Method & \begin{tabular}{c} Corridor \\ width, deg \\ \end{tabular} & TPS material & \begin{tabular}{c} Peak heat rate, \\ W/cm\({}^{2}\) \\ \end{tabular} & \begin{tabular}{c} Total heat load, \\ kJ/cm\({}^{2}\) \\ \end{tabular} &
\begin{tabular}{c} Delivered \\ mass \\ fraction, \% \\ \end{tabular} \\ \hline Lift Modulation & 1.00 & HEEET & 1400β1800 & 200β300 & 50 \\ \hline Drag Modulation & 0.46 & Carbon cloth & 200β300 & 40β75 & 50 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of lift and drag modulation aerocapture at Uranus.
## Data Availability
The results presented in the paper can be reproduced using the open-source Aerocapture Mission Analysis Tool (AMAT) v2.2.22. The data and code used to produce the study results will be made available by the author upon request.
|
2310.20363 | CAFE: Conflict-Aware Feature-wise Explanations | Feature attribution methods are widely used to explain neural models by
determining the influence of individual input features on the models' outputs.
We propose a novel feature attribution method, CAFE (Conflict-Aware
Feature-wise Explanations), that addresses three limitations of the existing
methods: their disregard for the impact of conflicting features, their lack of
consideration for the influence of bias terms, and an overly high sensitivity
to local variations in the underpinning activation functions. Unlike other
methods, CAFE provides safeguards against overestimating the effects of neuron
inputs and separately traces positive and negative influences of input features
and biases, resulting in enhanced robustness and increased ability to surface
feature conflicts. We show experimentally that CAFE is better able to identify
conflicting features on synthetic tabular data and exhibits the best overall
fidelity on several real-world tabular datasets, while being highly
computationally efficient. | Adam Dejl, Hamed Ayoobi, Matthew Williams, Francesca Toni | 2023-10-31T11:14:26Z | http://arxiv.org/abs/2310.20363v1 | # CAFE: Conflict-Aware Feature-wise Explanations
###### Abstract
Feature attribution methods are widely used to explain neural models by determining the influence of individual input features on the models' outputs. We propose a novel feature attribution method, CAFE (Conflict-Aware Feature-wise Explanations), that addresses three limitations of the existing methods: their disregard for the impact of conflicting features, their lack of consideration for the influence of bias terms, and an overly high sensitivity to local variations in the underpinning activation functions. Unlike other methods, CAFE provides safeguards against overestimating the effects of neuron inputs and separately traces positive and negative influences of input features and biases, resulting in enhanced robustness and increased ability to surface feature conflicts. We show experimentally that CAFE is better able to identify conflicting features on synthetic tabular data and exhibits the best overall fidelity on several real-world tabular datasets, while being highly computationally efficient.
## 1 Introduction
Feature attribution methods, which aim to determine the effects of individual features on the predictions of machine learning models, are popular approaches for post-hoc explainability (Guidotti et al., 2018; Zhang et al., 2021). Such methods include, amongst others, LIME (Ribeiro et al., 2016), SHAP (Lundberg and Lee, 2017), Gradient \(\cdot\) Input (Shrikumar et al., 2016), LRP (Bach et al., 2015), DeepLIFT (Shrikumar et al., 2017), Integrated Gradients (Sundararajan et al., 2017) and SmoothGrad (Smilkov et al., 2017). Feature attribution scores can surface useful insights about models, including bugs and biases (Pezeshkpour et al., 2021; Meng et al., 2022).
It is generally agreed that fidelity of explanations to models is important (Yeh et al., 2019; Sokol and Flach, 2020; Nauta et al., 2023) and, when a model is faced with unclear or partially contradictory inputs, a faithful explanation method should be capable of unearthing such conflicts (Wang and Vasconcelos, 2019). For example, consider a healthcare AI system predicting a patient's risk of death based on vital signs and previous treatments. When faced with a normal temperature reading, the system may typically predict a lower overall risk, but this line of reasoning may be suppressed when the patient was recently administered an antipyretic drug. A faithful explanation should surface this internal conflict instead of simply concluding that the body temperature reading and antipyretic drug administration had no effect on the prediction.
Existing feature attribution methods often fail to unearth conflicts. For example, consider the neural network model (NN) in Figure 1a, computing the binary XNOR function (which is 0 if exactly one of the two binary inputs is 1 and 1 otherwise), and the input \(\mathbf{x}=(1,1)\)1. By inspection of the weights and neuron activations, it is clear that the input features are in conflict, pushing the pre-activations of both hidden neurons to zero values for which the ReLU function is inactive. This causes all gradient-based feature attribution methods to return zero attribution scores for both input features, thus failing to surface that the features were considered but eventually cancelled out. Additionally, since the input features cancel each other symmetrically and since the NN output is driven by the +1 bias of the output neuron, even methods that are not gradient-based and are explicitly designed to handle conflicts (notably DeepLIFT RevealCancel) return zeros. This simple example also illustrates another common deficiency of existing feature attribution methods -- that they ignore the effect of the bias terms on predictions and may thus provide an incomplete understanding of the NN behaviour.
Footnote 1: In the figure, as in all our examples and experiments, we also consider a reference input (\(\mathbf{x}_{\text{ref}}=(0,0)\) in the figure), but disregard it for methods that do not support such an input (e.g. Gradient \(\cdot\) Input and LRP).
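The cancellation in the XNOR example can be reproduced with a few lines of PyTorch. The weights below are one possible realization consistent with the description above (a hidden layer with \(\pm 1\) weights and an output neuron with bias \(+1\)); they are not necessarily the exact values shown in Figure 1a.

```python
import torch

x = torch.tensor([1.0, 1.0], requires_grad=True)

def xnor_net(inp):
    # Hidden layer: h1 = ReLU(x1 - x2), h2 = ReLU(x2 - x1); output: y = 1 - h1 - h2.
    w_hidden = torch.tensor([[1.0, -1.0], [-1.0, 1.0]])
    h = torch.relu(w_hidden @ inp)
    return 1.0 - h.sum()

y = xnor_net(x)
y.backward()
print(y.item(), x.grad.tolist())  # 1.0 [0.0, 0.0]
```

Since both input gradients vanish at \((1,1)\), purely gradient-based scores such as Gradient \(\cdot\) Input are zero for both features even though the hidden layer actively processed and cancelled them.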
Another desirable property for feature attribution methods is robustness to local variations in the model activations and, by extension, its gradients, especially since NNs and their gradients can be highly irregular (Balduzzi et al., 2017). Instead, many existing feature attribution methods are prone to what we call "attribution score explosion", which can cause the attribution
scores to become unreasonably low or high, far beyond the model's actual output range. For illustration, consider the NN in Figure 1b, input \(\mathbf{x}=(2,2)\) and reference input \(\mathbf{x}_{\text{ref}}=(1,1)\). Figure 1c shows that all methods except for LRP and DeepLIFT RevealCancel assign large negative scores to \(x_{1}\) and large positive scores to \(x_{2}\). While these scores somewhat capture the local behaviour of the GELU activation function, they significantly overstate the effects of each feature. Indeed, since GELU flattens and tends to 0 as its input becomes increasingly negative, the highest amount by which \(x_{2}\) as a negative feature can increase the output from the reference value \(y_{\text{ref}}=-0.16\) is 0.16. The positive attribution scores returned for \(x_{2}\) are thus unreasonably high. Meanwhile, although the scores from LRP and DeepLIFT RevealCancel show that \(x_{1}\) could also have a positive effect, they only illustrate fractions of this hypothetical effect and also mask the actual small negative effect of \(x_{1}\) in the negative range of GELU. Instead, it may be useful to capture both positive and negative effects of features, and to be able to control the degree to which the hypothetical, cancelled features' effects are reflected in the attribution scores.
We introduce a new feature attribution method for NNs, _CAFE (Conflict-Aware Feature-wise Explanations)_, designed to overcome these issues while also (i) allowing users to control how much (if any) of the cancelled effects should be captured in the produced attribution scores (via a _conflict sensitivity_ hyper-parameter) and (ii) being computable in a single pass through the explained NNs. Our experimental results show that CAFE produces the most accurate attribution scores on synthetic tabular data with conflicting features and achieves the overall best fidelity when applied to four datasets and models from the OpenXAI benchmark (Agarwal et al., 2022) as well as a mortality prediction model trained on (a subset of) the MIMIC-IV medical database (Johnson et al., 2023).
## 2 Related Work
Most relevant to our work are feature attribution methods, which quantify the importance of each input feature with respect to the output of a machine learning model. Amongst these, simple gradients (Simonyan et al., 2013) or gradients multiplied with the inputs (Shrikumar et al., 2016) capture NNs' behavior in a small vicinity around the input, which may not represent the overall behavior when the models represent irregular or saturated functions. As an alternative to using raw gradients, several enhanced gradient-based attribution methods have been proposed, including LRP (Bach et al., 2015; Montavon et al., 2019), DeepLIFT Rescale (Shrikumar et al., 2017) and Integrated Gradients (Sundararajan et al., 2017). These methods may not reflect the effects of conflicting features and also exhibit other limitations, as illustrated in Section 1. CAFE borrows from DeepLIFT and Integrated Gradients the use of a specified _reference input_ to compute explanations.
Some feature attribution methods are model-agnostic. For instance, LIME (Ribeiro et al., 2016) produces feature attribution scores by approximating any model using an interpretable linear model. Meanwhile, Shapley Value Sampling (Strumbelj and Kononenko, 2010) and SHAP (Lundberg and Lee, 2017) leverage Shapley values (Shapley, 1953) from game theory to assess feature importance. These methods, however, often necessitate extensive sampling and multiple model evaluations, making them computationally demanding.
Two attribution methods are particularly relevant to our goals of accounting for conflicts and biases.
Figure 1: Illustrations of the issues addressed by CAFE (where 0.0 and 1.0 in brackets reflect _conflict sensitivity_, see Section 4), with Gradient \(\cdot\) Input (G \(\cdot\) I), LRP, DeepLIFT Rescale (DL-R), DeepLIFT RevealCancel (DL-RC) and Integrated Gradients (IG) as baselines. For the XNOR NN (Figure 1a), all baselines disregard conflicting features and biases, returning zero feature attributions. Meanwhile, CAFE (1.0) returns +1/-1 for both features and 1 for the bias. For the GELU NN (Figure 1b), Figure 1c shows that G \(\cdot\) I, DL-R and IG return erroneous scores for both features, significantly overestimating their possible effect on the output; LRP and DL-RC do not exhibit the attribution explosion issue here, but their scores still do not fully capture conflicting features.
DeepLIFT RevealCancel (Shrikumar et al., 2017) uses an approximation of Shapley values for surfacing cancelled features, to some extent, but suffers from other limitations, as illustrated in Section 1. Bias Back-propagation (Wang et al., 2019) attributes the effects of biases to input features, differing from CAFE, which computes separate attribution scores for input features and biases, thus distinguishing between the two.
CAFE aims to better understand the reasoning of NNs by unearthing conflicts between features and the role of biases. Similarly, existing work on deliberative explanations (Wang and Vasconcelos, 2019) emphasized the importance of capturing insecurities in NNs, though it focused exclusively on images and produced sets of potentially ambiguous input regions instead of attribution scores, making it orthogonal to our approach. Also related to internal model deliberations are contrastive explanations with pertinent negatives (Dhurandhar et al., 2018), which highlight the missing parts of inputs that could cause the model to predict different classes, making them closer in spirit to counterfactual explanations. Finally, SpArX (Ayoobi et al., 2023) aims at tracking the full reasoning of NNs through their sparsification. However, the produced explanations are considerably more complex than attribution scores and may not scale to larger NNs.
Evaluating AI model explanations, including feature attributions, remains an open research area, with a general lack of standardisation and a variety of discordant viewpoints on what constitutes a more desirable explanation (Zhou et al., 2021; Chen et al., 2022; Rahnama, 2023; Nauta et al., 2023; Le et al., 2023). Several properties have been proposed and studied in the literature (e.g. see (Sokol and Flach, 2020; Nauta et al., 2023) for some overviews). Amongst these, _fidelity_ (also referred to as correctness, faithfulness or descriptive accuracy (Nauta et al., 2023)) is widely regarded as crucial (Yeh et al., 2019), as it amounts to explanations being truthful to the model they aim to explain: we will use this measure for comparison between CAFE and several baselines. CAFE is also designed to satisfy the commonly enforced _completeness_ property (Sundararajan et al., 2017) (also known as "summation-to-delta" (Shrikumar et al., 2017) or "sensitivity-N" (Ancona et al., 2018)), requiring that the sum of the attribution scores should be equal to the difference between the model outputs for the reference input and for the actual input. Finally, we consider properties of _missingness_ from (Lundberg and Lee, 2017) and _linearity_ from Sundararajan et al. (2017).
## 3 Preliminaries
Our goal is to explain an NN \(\mathcal{M}:\mathbb{R}^{\dim(0)}\rightarrow\mathbb{R}^{\dim(N)}\) with \(N\) layers, taking inputs of dimension \(\dim(0)\) and returning outputs of dimension \(\dim(N)\). We view \(\mathcal{M}\) as composed of layers \(L^{(1)},\ldots,L^{(N)}\), where \(L^{(1)}\) is the _input layer_ and \(L^{(N)}\) is the _output layer_. When referring to the vector of activation values of the neurons in layer \(L^{(n)}\) (\(1\!\leq\!n\!\leq\!N\)), we use the notation \(\mathbf{a}^{(n)}\).
In order to simplify our definitions, we assume that linear transformations and applications of activation functions are performed by distinct layers.2 A _linear layer_\(L^{(n+1)}\) computes the operation \(\mathbf{a}^{(n+1)}=\mathbf{W}^{(n+1)\top}\mathbf{a}^{(n)}+\mathbf{b}^{(n+1)}\) where \(\mathbf{a}^{(n)}\in\mathbb{R}^{\dim(n)}\) is the output of the previous layer, \(\mathbf{a}^{(n+1)}\in\mathbb{R}^{\dim(n+1)}\) is the layer output, \(\mathbf{W}^{(n+1)}\in\mathbb{R}^{\dim(n)\times\dim(n+1)}\) is the weight matrix and \(\mathbf{b}^{(n+1)}\in\mathbb{R}^{\dim(n+1)}\) is the bias vector. As conventional, we refer to the individual elements of \(\mathbf{W}^{(n+1)}\) and \(\mathbf{b}^{(n+1)}\) as \(W^{(n+1)}_{i,j}\) and \(b^{(n+1)}_{j}\), respectively, where \(i\) and \(j\) are indexes to neurons in layer \(L^{(n)}\) and \(L^{(n+1)}\), respectively. An _activation layer_\(L^{(n+1)}\) computes the operation \(\mathbf{a}^{(n+1)}=\phi^{(n+1)}(\mathbf{a}^{(n)})\) for some activation function \(\phi^{(n+1)}\). The input and output dimensions, \(\dim(n)\) and \(\dim(n+1)\) respectively, of an activation layer are always identical. When dealing with classification models, we disregard the final softmax layer, as this has been argued to result in more intuitive model explanations (Shrikumar et al., 2017).
Footnote 2: For example, in a NN with input layer \(L^{(1)}\), \(L^{(2)}\) would typically be a linear layer applied to the NN inputs, while \(L^{(3)}\) would be an activation layer applied to the outputs of \(L^{(2)}\). This distinction between linear and activation layers is also employed, e.g. in PyTorch ([https://pytorch.org/](https://pytorch.org/)). We present versions of the NNs from Figure 1 using these conventions in the supplement.
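As footnote 2 notes, this separation mirrors how layers are commonly declared in PyTorch; a minimal illustration (with arbitrary layer sizes of our own choosing) is:

```python
import torch.nn as nn

# Linear transformations and activations as distinct layers, following the
# convention of Section 3 (the sizes here are arbitrary).
model = nn.Sequential(
    nn.Linear(2, 2),   # L^(2): linear layer applied to the inputs
    nn.ReLU(),         # L^(3): activation layer applied to the outputs of L^(2)
    nn.Linear(2, 1),   # L^(4): linear output layer
)
```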
Our problem of interest is computing the _positive attribution scores_\(\mathbf{S}^{+}\in\mathbb{R}^{(\dim(0)+1)\times\dim(N)}\) and the _negative attribution scores_\(\mathbf{S}^{-}\in\mathbb{R}^{(\dim(0)+1)\times\dim(N)}\) (with \(\dim(0)\)_feature_ attribution scores and one extra _bias_ attribution score for each output neuron) for \(\mathcal{M}(\mathbf{x})\), given some _input_\(\mathbf{x}\in\mathbb{R}^{\dim(0)}\) and a _reference input_\(\mathbf{x}_{\text{ref}}\in\mathbb{R}^{\dim(0)}\). The latter can be any suitable baseline in the given context, e.g. mean or median feature values, zeroes or random noise. We will use the notation \(x_{f}\) and \(x_{\text{ref},f}\) to refer to the individual _features_ of the actual input and the reference input, respectively, with \(f\in\{1,2,\ldots,\dim(0)\}\) acting as the feature index. We will also consider the joint scores \(\mathbf{S}^{*}\!=\!\mathbf{S}^{+}\!-\!\mathbf{S}^{-}\). When computing the attribution scores, we will consider a version of \(\mathcal{M}\), denoted \(\mathcal{M}_{\text{ref}}\), with all biases ablated (i.e. set to \(0\)). We will refer to \(\mathcal{M}_{\text{ref}}\)'s activation values at layer \(L^{(n)}\) when applied to \(\mathbf{x}_{\text{ref}}\) as \(\mathbf{a}_{\text{ref}}^{(n)}\).
Finally, to refer to the intermediate (positive or negative) feature attribution scores computed at layer \(L^{(n)}\) for feature \(f\) and neuron \(i\), we will use the notation \(S^{(n),+}_{f,i}\) and \(S^{(n),-}_{f,i}\), respectively. Similarly, we will refer to the intermediate bias attribution scores as \(S^{(n),+}_{\text{bias},i}\) and \(S^{(n),-}_{\text{bias},i}\). We will also use sign variables \(\sigma\) and \(\tau\) to refer to a value from \(\{+,-\}\). In a slight abuse of notation, we will sometimes employ \(\sigma\) and \(\tau\) as operators that are ignored if referring to the \(+\) sign and that flip the sign of the operand if referring to the \(-\) sign, e.g. \(\sigma(5)=-5\) if \(\sigma=-\) and \(\tau(-5)=-5\) if \(\tau=+\).
## 4 Explaining NNs With CAFE
CAFE aims to quantify how much each input feature contributes to the NN's output and (optionally) the effects these features could have on the output if they were not in conflict with each other. Thus, unlike other methods, CAFE returns two separate scores for each feature -- separately capturing its overall positive and negative effects. This is crucial for uncovering conflicts between features, including cases in which a single feature is "controversial", i.e., it affects the output both positively and negatively. In addition to scores for the inputs, CAFE also returns aggregated scores for the bias terms, indicating how much these biases affected the output compared to the input features. This may highlight cases where predictions are primarily driven by the NN biases rather than the input. We introduce the CAFE rules for the individual NN components below, while providing examples in the supplement.
### Input Layer Rule
This rule simply requires that the scores amount to the absolute difference between each actual input feature and reference input feature, as captured below.
**Definition 1** (Attribution Scores (Input Layer Rule)).: _The input layer attribution scores for input \(\mathbf{x}\) and reference input \(\mathbf{x}_{\text{ref}}\) are defined as follows (for \(f\in\{1,2,\dots,\text{dim}(0)\}\) the feature index, \(i\in\{1,2,\dots,\text{dim}(1)\}\) the index of a neuron in the input layer \(L^{(1)}\) and \(\sigma\in\{+,-\}\) as in Section 3):_
\[S_{f,i}^{(1),\sigma}=\begin{cases}\max(\sigma(x_{f}-x_{\text{ ref},f}),0)&\text{if }f=i\\ 0&\text{otherwise}\end{cases}\] \[S_{\text{bias},i}^{(1),\sigma}=0\]
Note that, as there is not yet any interaction between features, all scores are set to 0 except for the feature attribution scores of input neurons, which are set to the corresponding (positive or negative) differences between the reference inputs and the actual inputs.
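A direct NumPy transcription of this rule might look as follows; the array layout (one row per input feature plus a final row for the aggregated bias scores, one column per input neuron) is our own choice of representation, not mandated by the definition.

```python
import numpy as np

def input_layer_scores(x, x_ref):
    """CAFE input layer rule (Definition 1)."""
    d = len(x)
    S_pos = np.zeros((d + 1, d))   # rows: features 1..d plus a bias row
    S_neg = np.zeros((d + 1, d))   # columns: neurons of the input layer
    diff = np.asarray(x, dtype=float) - np.asarray(x_ref, dtype=float)
    idx = np.arange(d)
    S_pos[idx, idx] = np.maximum(diff, 0.0)    # positive parts of x - x_ref
    S_neg[idx, idx] = np.maximum(-diff, 0.0)   # negative parts of x - x_ref
    return S_pos, S_neg

S_pos, S_neg = input_layer_scores([1.0, 1.0], [0.0, 0.0])
```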
### Linear Layer Rule
The propagation of scores through a linear layer is similar to performing a standard forward pass -- we simply multiply the attribution scores by the edge weights and sum them together, adding the bias term in the end. However, to distinguish between the positive and negative effects, we consider the positive and negative edge weights and biases separately, as follows.
**Definition 2** (Attribution Scores (Linear Layer Rule)).: _The attribution scores for the \(j\)-th neuron in a linear layer \(L^{(n+1)}\) are:_
\[S_{f,j}^{(n+1),\sigma}=\sum_{i}^{|L^{(n)}|}\Big(\max(\sigma(W_{i,j}^{(n+1)}),0)\,S_{f,i}^{(n),+}+\max(\sigma(-W_{i,j}^{(n+1)}),0)\,S_{f,i}^{(n),-}\Big)\]
\[S_{\text{bias},j}^{(n+1),\sigma}=\sum_{i}^{|L^{(n)}|}\Big(\max(\sigma(W_{i,j}^{(n+1)}),0)\,S_{\text{bias},i}^{(n),+}+\max(\sigma(-W_{i,j}^{(n+1)}),0)\,S_{\text{bias},i}^{(n),-}\Big)+\max(\sigma(b_{j}^{(n+1)}),0)\]
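In matrix form the rule splits the weights into their positive and negative parts. A NumPy sketch continuing the layout of the previous snippet (weight matrix of shape \(\dim(n)\times\dim(n+1)\) as in Section 3, last score row reserved for the bias) is:

```python
import numpy as np

def linear_layer_scores(S_pos, S_neg, W, b):
    """CAFE linear layer rule (Definition 2); W has shape (dim(n), dim(n+1))."""
    W_pos, W_neg = np.maximum(W, 0.0), np.maximum(-W, 0.0)
    # sigma = +: positive weights carry positive scores, negative weights flip negative ones
    S_pos_new = S_pos @ W_pos + S_neg @ W_neg
    # sigma = -: negative weights flip positive scores, positive weights carry negative ones
    S_neg_new = S_pos @ W_neg + S_neg @ W_pos
    S_pos_new[-1] += np.maximum(b, 0.0)    # this layer's bias joins the bias row
    S_neg_new[-1] += np.maximum(-b, 0.0)
    return S_pos_new, S_neg_new
```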
### Activation Rule
Defining the rule for propagating scores through non-linear activations is a considerably greater challenge, as the effects of the non-linearities cannot be precisely captured by linear scores, forcing us to approximate. A possible approach is to compute attribution scores as for the linear layers, by considering a linear approximation of the activation function, with a slope identical to the mean slope on the interval between the reference activation (for \(\mathbf{x}_{\text{ref}}\)) and the actual activation (for \(\mathbf{x}\)). This approach, similar in spirit to DeepLIFT Rescale and Integrated Gradients, is insufficient for achieving our goals, as illustrated in Section 1.
We choose instead to additionally consider the behaviour of the activation function on the wider range spanned by the competing positive and negative effects at the given neuron, so as to estimate the hypothetical effect each feature could have if it was not cancelled as a result of the interaction with the other features. Additionally, this approach allows us to ensure that attribution scores do not become excessively high or low for extreme inputs (see Definition 2 and the associated text for details). In order to use this strategy, we first define several intermediate notions, eventually leading to Definition 8. We first define the notion of positive, negative and combined input effects on a given neuron:
**Definition 3** (Input Effects (Activation Rule)).: _The input effects at the \(j\)-th neuron in activation layer \(L^{(n+1)}\) are defined as follows:_
\[e_{j}^{(n+1),\sigma} =\sigma\left(S_{\text{bias},j}^{(n),\sigma}+\sum_{f}^{|\mathbf{x}|}S_ {f,j}^{(n),\sigma}\right)\] \[e_{j}^{(n+1),*} =e_{j}^{(n+1),+}+e_{j}^{(n+1),-}\]
_with \(e_{j}^{(n+1),\sigma}\) the positive/negative input effect for \(\sigma=+\)/-, respectively, and \(e_{j}^{(n+1),*}\) the combined effect._
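In the array layout of the earlier sketches these effects are simple column sums; a minimal continuation (assuming the score matrices produced by the previous snippets) is given below, before turning to the intuition behind the definition.

```python
import numpy as np

def input_effects(S_pos, S_neg):
    """Positive, negative and combined input effects (Definition 3)."""
    e_pos = S_pos.sum(axis=0)    # sigma = +: sum of feature and bias scores
    e_neg = -S_neg.sum(axis=0)   # sigma = -: negated sum of the negative scores
    return e_pos, e_neg, e_pos + e_neg
```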
Intuitively, the positive and negative effects capture the total positive and negative deviations from the reference pre-activation (i.e. pre-activation for \(\mathbf{x}_{\text{ref}}\) when
all biases are ablated) at the given neuron. We can then specify how much the activation function values change on intervals of interest, as follows:
**Definition 4** (Rectified Activation Deltas (Activation Rule)).: _Let \(L^{(n+1)}\) be an activation layer applying an activation function \(\phi^{(n+1)}\), and let:_
\[a_{j}^{(n+1),\sigma} =\phi^{(n+1)}(a_{\text{ref},j}^{(n)}+e_{j}^{(n+1),\sigma})\] \[a_{j}^{(n+1),*} =\phi^{(n+1)}(a_{\text{ref},j}^{(n)}+e_{j}^{(n+1),*})\] \[a_{j}^{(n+1),\text{ref}} =a_{\text{ref},j}^{(n+1)}=\phi^{(n+1)}(a_{\text{ref},j}^{(n)})\]
_Further, let the auxiliary rectified activation deltas be:_
\[\Delta_{j}^{(n+1),(\circ\to\bullet),d}\!=\!\begin{cases}\max(a_{j}^{(n+1), \bullet}\!-\!a_{j}^{(n+1),\circ},0)&\text{if }d\!=\!\nearrow\\ \max(a_{j}^{(n+1),\circ}\!-\!a_{j}^{(n+1),\bullet},0)&\text{if }d\!=\! \searrow\end{cases}\]
_where \(\circ,\bullet\in\{+,-,*,\text{ref}\}\) specify the activation delta boundary points while \(d\in\{\nearrow,\searrow\}\) denotes the positive/negative slope along which the activation delta is computed. Then, the rectified activation deltas are:_
\[\Delta_{j}^{(n+1),*,d}=\begin{cases}\Delta_{j}^{(n+1),(\text{ref}\to *),d}&\text{if }e_{j}^{(n+1),*}\geq 0\\ \Delta_{j}^{(n+1),(*\to\text{ref}),d}&\text{otherwise}\end{cases}\]
\[\Delta_{j}^{(n+1),+,d}=\begin{cases}\Delta_{j}^{(n+1),(*\to +),d}&\text{if }e_{j}^{(n+1),*}\geq 0\\ \Delta_{j}^{(n+1),(\text{ref}\to +),d}&\text{otherwise}\end{cases}\]
\[\Delta_{j}^{(n+1),-,d}=\begin{cases}\Delta_{j}^{(n+1),(-\to\text{ref}),d}&\text{if }e_{j}^{(n+1),*}\geq 0\\ \Delta_{j}^{(n+1),(-\to *),d}&\text{otherwise}\end{cases}\]
From these rectified activation deltas, CAFE derives the capped linear attribution flows \(l_{j}^{(n),(\sigma\rightarrow\tau)}\) and the peaked attribution flows \(p_{j}^{(n),(\sigma\rightarrow\tau)}\) used by the attribution multipliers below.
The last auxiliary notion we need before defining the attribution scores is that of attribution multipliers:
**Definition 7** (Attribution Multipliers (Activation Rule)).: _The attribution multipliers for the \(j\)-th neuron in activation layer \(L^{(n)}\) are:_
\[m_{j}^{(n),(\sigma\rightarrow\tau)}=\frac{(1-c^{(n)})l_{j}^{(n),(\sigma \rightarrow\tau)}+c^{(n)}p_{j}^{(n),(\sigma\rightarrow\tau)}}{|e_{j}^{(n), \sigma}|+\epsilon}\]
_where \(\epsilon\) is a small positive stabiliser enforcing \(\frac{0}{0}\!=\!0\) and \(c^{(n)}\) is the conflict sensitivity constant, \(0\leq c^{(n)}\leq 1\), specifying how much to capture the cancelled effects of conflicting features. We refer to \(m_{j}^{(n),(\sigma\rightarrow\tau)}\) as the multiplier from \(\sigma\)-signed to \(\tau\)-signed scores._
The attribution multipliers compute a weighted average of the corresponding capped linear attribution flows and the peaked attribution flows. The weights of the two components are customisable by the user-provided constant \(c^{(n)}\), which can differ for every activation layer. This enables the end-users to decide how much to reflect the hypothetical effects of conflicting features in the resulting feature attribution scores. Values of \(c^{(n)}\) closer to \(0\) typically result in more focused attribution scores while the values closer to \(1\) encourage greater sensitivity to conflicts between the individual features. The multipliers are also normalised by the corresponding positive/negative input effects. This ensures that the total attribution flows reflected in the peak and linear flows are redistributed between the individual features proportionally to their contribution to the total positive/negative input effects.
Finally, the attribution scores for activation layers are:
**Definition 8** (Attribution Scores (Activation Rule)).: _The attribution scores for \(j\)-th neuron and \(f\)-th feature from input \(\mathbf{x}\) at activation layer \(L^{(n+1)}\) are:_
\[S_{\mathit{idx},j}^{(n+1),\sigma}=m_{j}^{(n+1),(+\rightarrow\sigma)}S_{\mathit{idx},j}^{(n),+}+m_{j}^{(n+1),(-\rightarrow\sigma)}S_{\mathit{idx},j}^{(n),-}\]
_where \(\mathit{idx}\!\in\!\{1,\ldots,\mathit{dim}(0),\mathit{bias}\}\)._
## 5 Evaluation
Here, we focus on the following aspects: theoretical properties providing guarantees on CAFE's behaviour for any NN, computational complexity, ability to produce correct attribution scores on synthetic data with conflicting features and fidelity of CAFE's attribution scores for models trained on real-world datasets.
### Theoretical Analysis
We prove that CAFE satisfies (adapted variants of) three desirable properties from the literature. _Missingness_, considered by Lundberg and Lee (2017) for SHAP, requires that missing features are always assigned a zero attribution score. _Linearity_, one of the axioms for Integrated Gradients (Sundararajan et al., 2017), requires that the attribution scores preserve any linear behaviour, that is, for a model formed as a linear combination of two other models, the attribution scores should be the result of applying the same linear combination to the scores for the two constituent models. Finally, _completeness_ requires that the attribution scores exactly account for all the changes to the model output caused by the input features.
**Theorem 1**.: _CAFE satisfies missingness, linearity and completeness for any choice of conflict sensitivity constants \(c^{(n)}\) for the individual layers._
The proof as well as the precise definitions of the properties are provided in the supplement. Note that we adapted the original definitions so that they are applicable to CAFE with its unique properties.
In the supplement, we also show that the time complexity of CAFE is \(\mathcal{O}(C\cdot\mathrm{dim}(0))\), where \(C\) is the cost of the forward function associated with the explained NN. We show that, with batching, the runtimes of CAFE are comparable to gradient-based methods and significantly better than methods requiring sampling.
### Experiments with Synthetic Data
In these experiments, we aim to empirically test the ability of CAFE and various other baselines to correctly identify the effects of conflicting features. To this end, we construct several synthetic datasets using a controlled data generation process, which enables us to establish the expected "ground-truth" attribution scores for the different features. On a high level, our procedure generates samples with \(D\) continuous features and the same number of binary cancellation features, which counteract the effect of the corresponding continuous features when positive. The likelihood of each cancellation feature being positive is given by a conflict likelihood \(l\). The label and the expected feature attribution scores for each sample are then derived using a set of weights, which specify the effects of the continuous features that are not cancelled by a cancellation feature. We compute the attribution scores for CAFE and the baseline methods3 (using their implementations in Captum (Kokhlikyan et al., 2020)), and compare them with the expected attribution scores using the RMSE metric. CAFE's positive and negative scores are combined into a single joint score for a fair comparison with the baselines. We ensure that the explained NNs achieve low error on our test data, giving us confidence that their reasoning is aligned with the data generation process. The details of our experimental strategy are given in the supplement.
Footnote 3: Our experimental comparison does not include DeepLIFT RevealCancel (DL-RC) as the only available software implementation is for Tensorflow v1, which is incompatible with our software tooling.
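A rough sketch of such a generation process is given below; the specific distributions, the exact handling of the cancellation features, and the function name are assumptions made purely for illustration, with only the overall structure (continuous features, binary cancellation features drawn with conflict likelihood \(l\), weight-derived labels and expected scores) following the description above.

```python
import numpy as np

def generate_conflict_data(n_samples, D, l, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_samples, D))            # continuous features
    cancel = rng.random((n_samples, D)) < l        # binary cancellation features
    w = rng.normal(size=D)                         # effect weights
    effects = w * x * (~cancel)                    # cancelled features contribute nothing
    y = effects.sum(axis=1)                        # label
    expected_scores = effects                      # "ground-truth" attributions
    features = np.hstack([x, cancel.astype(float)])
    return features, y, expected_scores

# Attribution methods are then scored by the RMSE between their (joint)
# attribution scores for the continuous features and expected_scores.
```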
The results are summarised in Table 1. Variants of CAFE using larger cancellation sensitivity constants consistently outperform all the considered baselines. CAFE (\(c=1.00\)) achieves the best performance in most experiments, except for the two larger NNs with GELU activations, for which it is outperformed by CAFE (\(c=0.75\)). A possible explanation is that larger NNs
with more complex activation functions may not exactly match the underlying data generation process, making the comparison against "ground-truth" scores a less reliable metric. Alternatively, it is possible that the CAFE (\(c=1.00\)) scores are noisier, overestimating some of the feature conflicts. Overall, the results suggest that CAFE with higher values of the cancellation sensitivity constant is highly capable of attributing conflicting features, even when compared to time-consuming perturbation methods.
### Experiments with Real Data
We use four datasets and pre-trained NNs from the OpenXAI benchmark (Agarwal et al., 2022) -- COMPAS (Angwin et al., 2016), Home Equity Line of Credit (HELOC) (FICO, 2022), Adult Income (Yeh and hui Lien, 2009) and German Credit (Hofmann, 1994). Additionally, we train NNs for mortality prediction on a subset of the MIMIC-IV database (Johnson et al., 2023). Details of the data construction and model training for MIMIC-IV are in the supplement. For all datasets, we
| **Method** | COMPAS (S) | COMPAS (L) | HELOC (S) | HELOC (L) | Adult (S) | Adult (L) | German (S) | German (L) | MIMIC-IV (S) | MIMIC-IV (L) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Gradient \(\cdot\) Input | 11.74 | 29.93 | 14.34 | 38.10 | 27.48 | 51.67 | **0.024** | **0.051** | 2.29 | 4.97 |
| LRP | 11.74 | 29.93 | 14.34 | 38.10 | 27.48 | 51.67 | **0.024** | **0.051** | 2.29 | 4.97 |
| DeepLIFT Rescale | 10.84 | 27.51 | 13.60 | 35.89 | 27.51 | 51.75 | **0.024** | **0.051** | 2.20 | 4.80 |
| Integrated Gradients | 10.84 | 27.51 | 13.60 | 35.89 | 27.51 | 51.75 | **0.024** | **0.051** | 2.21 | 4.82 |
| SmoothGrad | 29.14 | 69.64 | 16.49 | 41.65 | 28.05 | 52.45 | 0.054 | 0.110 | 3.14 | 6.26 |
| Gradient SHAP | 10.86 | 27.55 | 13.95 | 36.73 | **27.27** | **51.80** | **0.024** | 0.052 | 2.28 | 4.92 |
| Kernel SHAP | 14.45 | 35.64 | 16.65 | 42.38 | 31.54 | 59.27 | 0.054 | 0.108 | 2.75 | 5.65 |
| Shapley Value Sampling | 10.75 | 27.20 | 12.23 | 31.34 | 27.49 | 51.76 | 0.045 | 0.094 | 2.19 | 4.76 |
| LIME | 17.95 | 42.67 | 16.18 | 41.24 | 28.52 | 53.62 | 0.060 | 0.124 | 2.64 | 5.48 |
| CAFE (\(c=0.00\)) | 11.74 | 29.93 | 14.34 | 38.10 | 27.48 | 51.67 | **0.024** | **0.051** | 2.29 | 4.97 |
| CAFE (\(c=0.25\)) | 10.99 | 27.87 | 11.27 | 32.16 | 27.48 | 51.66 | **0.024** | **0.051** | **2.08** | 4.49 |
| CAFE (\(c=0.50\)) | **10.73** | 27.01 | **11.64** | 29.62 | 27.47 | 51.66 | **0.024** | **0.051** | 2.13 | **4.44** |
| CAFE (\(c=0.75\)) | **10.73** | **26.79** | 11.68 | **29.09** | 27.47 | 51.66 | **0.024** | **0.051** | 2.30 | 4.64 |
| CAFE (\(c=1.00\)) | 10.88 | 26.94 | 11.90 | 29.25 | 27.48 | 51.66 | **0.024** | **0.051** | 2.48 | 4.91 |

Table 2: Infidelity of the different attribution methods applied to NNs trained on real data for smaller (S, with Gaussian noise standard deviation 0.5 and categorical resampling probability 0.1) and larger (L, standard deviation 0.75, categorical resampling probability 0.2) perturbations. For the evaluation on the COMPAS, HELOC, Adult and German datasets, we used the corresponding pretrained models from the OpenXAI benchmark. For MIMIC-IV, we trained five differently initialised models and averaged the results (the standard deviations are reported in the supplement).
| \(D\) | \(l\) | \(H\) | \(\phi\) | VE | TE | G \(\cdot\) I | LRP | DL-R | IG | SG | GS | KS | SVS | LIME |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2 | 0.30 | 16 | ReLU | 0.03 | 0.07 | 3.27 | 3.27 | 3.19 | 3.18 | 4.86 | 3.19 | 1.81 | 1.59 | 1.69 |
| 3 | 0.25 | 24 | ReLU | 0.02 | 0.04 | 2.66 | 2.66 | 2.57 | 2.59 | 3.96 | 2.59 | 1.60 | 1.27 | 1.49 |
| 4 | 0.20 | 32 | ReLU | 0.10 | 0.08 | 2.57 | 2.57 | 2.46 | 2.47 | 4.33 | 2.49 | 1.66 | 1.29 | 1.51 |
| 5 | 0.15 | 40 | ReLU | 0.09 | 0.11 | 2.13 | 2.13 | 2.04 | 2.06 | 4.25 | 2.07 | 1.53 | 1.12 | 1.33 |
| 2 | 0.30 | 16 | GELU | 0.02 | 0.06 | 3.28 | 3.14 | 3.07 | 3.07 | 4.89 | 3.09 | 1.81 | 1.59 | 1.69 |
| 3 | 0.25 | 24 | GELU | 0.03 | 0.04 | 2.68 | 2.56 | 2.50 | 2.52 | 3.94 | 2.53 | 1.60 | 1.27 | 1.48 |
| 4 | 0.20 | 32 | GELU | 0.09 | 0.09 | 2.55 | 2.37 | 2.24 | 2.28 | 4.27 | 2.30 | 1.66 | 1.29 | 1.51 |
| 5 | 0.15 | 40 | GELU | 0.09 | 0.12 | 2.14 | 2.05 | 2.02 | 2.03 | 4.26 | 2.04 | 1.53 | 1.13 | 1.33 |

Table 1: Performance of different attribution methods in our synthetic data experiments for different continuous data dimensions \(D\), conflict likelihoods \(l\), hidden layer dimensions \(H\) and activation functions \(\phi\). We report the RMSE errors on the validation (VE) and test (TE) sets. Results in each row are averaged over five differently initialised datasets and NNs (the standard deviations and statistical measures are reported in the supplement). The methods are Gradient \(\cdot\) Input (G \(\cdot\) I), LRP, DeepLIFT Rescale (DL-R), Integrated Gradients (IG), SmoothGrad (SG), Gradient SHAP (GS), Kernel SHAP (KS), Shapley Value Sampling (SVS), LIME, and CAFE (with different values of \(c^{(n)}\) for all \(n\)).
2309.13806 | Cohomological Arithmetic Statistics for Principally Polarized Abelian
Varieties over Finite Fields | There is a natural probability measure on the set of isomorphism classes of
principally polarized Abelian varieties of dimension $g$ over $\mathbb{F}_q$,
weighted by the number of automorphisms. The distributions of the number of
$\mathbb{F}_q$-rational points are related to the cohomology of fiber powers of
the universal family of principally polarized Abelian varieties. To that end we
compute the cohomology $H^i(\mathcal{X}^{\times n}_g,\mathbb{Q}_\ell)$ for
$g=1$ using results of Eichler-Shimura and for $g=2$ using results of
Lee-Weintraub and Petersen, and we compute the compactly supported Euler
characteristics $e_\mathrm{c}(\mathcal{X}^{\times n}_g,\mathbb{Q}_\ell)$ for
$g=3$ using results of Hain and conjectures of Bergstr\"om-Faber-van der Geer.
In each of these cases we identify the range in which the point counts
$\#\mathcal{X}^{\times n}_g(\mathbb{F}_q)$ are polynomial in $q$. Using results
of Borel and Grushevsky-Hulek-Tommasi on cohomological stability, we adapt
arguments of Achter-Erman-Kedlaya-Wood-Zureick-Brown to pose a conjecture about
the asymptotics of the point counts $\#\mathcal{X}^{\times n}_g(\mathbb{F}_q)$
in the limit $g\rightarrow\infty$. | Aleksander Shmakov | 2023-09-25T01:18:16Z | http://arxiv.org/abs/2309.13806v1 | # Cohomological Arithmetic Statistics for Principally Polarized Abelian Varieties over Finite Fields
###### Abstract
There is a natural probability measure on the set of isomorphism classes of principally polarized Abelian varieties of dimension \(g\) over \(\mathbb{F}_{q}\), weighted by the number of automorphisms. The distributions of the number of \(\mathbb{F}_{q}\)-rational points are related to the cohomology of fiber powers of the universal family of principally polarized Abelian varieties. To that end we compute the cohomology \(H^{i}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\) for \(g=1\) using results of Eichler-Shimura and for \(g=2\) using results of Lee-Weintraub and Petersen, and we compute the compactly supported Euler characteristics \(e_{c}(\mathcal{X}_{q}^{\times n},\mathbb{Q}_{\ell})\) for \(g=3\) using results of Hain and conjectures of Bergstrom-Faber-van der Geer. In each of these cases we identify the range in which the point counts \(\#\mathcal{X}_{q}^{\times n}(\mathbb{F}_{q})\) are polynomial in \(q\). Using results of Borel and Grushevsky-Hulek-Tommasi on cohomological stability, we adapt arguments of Achter-Erman-Kedlaya-Wood-Zureick-Brown to pose a conjecture about the asymptotics of the point counts \(\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})\) in the limit \(g\to\infty\).
## Introduction
Let \([\mathcal{A}_{g}(\mathbb{F}_{q})]\) be the set of isomorphism classes of principally polarized Abelian varieties of dimension \(g\) over \(\mathbb{F}_{q}\). The cardinality \(\#[\mathcal{A}_{g}(\mathbb{F}_{q})]\) is finite; of course, for each \([A,\lambda]\in[\mathcal{A}_{g}(\mathbb{F}_{q})]\) the cardinality \(\#A(\mathbb{F}_{q})\) is finite, and is constant in its isogeny class. One would like to understand how the point counts of principally polarized Abelian varieties over \(\mathbb{F}_{q}\) distribute.
Experience informs us that such point counting problems are better behaved when weighted by the number of automorphisms. To that end let \(\mathcal{A}_{g}(\mathbb{F}_{q})\) be the groupoid of principally polarized Abelian varieties of dimension \(g\) over \(\mathbb{F}_{q}\). Consider the groupoid cardinality
\[\#\mathcal{A}_{g}(\mathbb{F}_{q})=\sum_{[A,\lambda]\in[\mathcal{A}_{g}( \mathbb{F}_{q})]}\frac{1}{\#\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)}\]
For example, one has (classically for \(g=1\), by Lee-Weintraub [40, Corollary 5.2.3] for \(g=2\) and by Hain [30, Theorem 1] for \(g=3\)):
\[\#\mathcal{A}_{1}(\mathbb{F}_{q}) =q\] \[\#\mathcal{A}_{2}(\mathbb{F}_{q}) =q^{3}+q^{2}\] \[\#\mathcal{A}_{3}(\mathbb{F}_{q}) =q^{6}+q^{5}+q^{4}+q^{3}+1\]
Consider the natural probability measure \(\mu_{\mathcal{A}_{g}(\mathbb{F}_{q})}\) on \([\mathcal{A}_{g}(\mathbb{F}_{q})]\) such that \([A,\lambda]\in[\mathcal{A}_{g}(\mathbb{F}_{q})]\) has mass weighted by the number of automorphisms:
\[\mu_{\mathcal{A}_{g}(\mathbb{F}_{q})}([A,\lambda])=\frac{1}{\#\mathcal{A}_{g}( \mathbb{F}_{q})\#\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)}\]
On the discrete probability space \(([\mathcal{A}_{g}(\mathbb{F}_{q})],2^{[\mathcal{A}_{g}(\mathbb{F}_{q})]},\mu_{ \mathcal{A}_{g}(\mathbb{F}_{q})})\) consider the random variable \(\#A_{g}(\mathbb{F}_{q}):[\mathcal{A}_{g}(\mathbb{F}_{q})]\to\mathbb{Z}\) assigning to \([A,\lambda]\in[\mathcal{A}_{g}(\mathbb{F}_{q})]\) the point count \(\#A(\mathbb{F}_{q})\). Our goal is to understand, among other things, the expected values \(\mathbb{E}(\#A_{g}(\mathbb{F}_{q}))\), and more generally the higher moments \(\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})\) with respect to the natural probability measure \(\mu_{\mathcal{A}_{g}(\mathbb{F}_{q})}\).
For example, one has the expected values (classically for \(g=1\), by Lee [39, Corollary 1.4] for \(g=2\), and by 5.3 for \(g=3\)):
\[\mathbb{E}(\#A_{1}(\mathbb{F}_{q})) =q+1\] \[\mathbb{E}(\#A_{2}(\mathbb{F}_{q})) =q^{2}+q+1-\frac{1}{q^{3}+q^{2}}\] \[\mathbb{E}(\#A_{3}(\mathbb{F}_{q})) =q^{3}+q^{2}+q+1-\frac{q^{2}+q}{q^{6}+q^{5}+q^{4}+q^{3}+1}\]
and one has the expected values (classically for \(g=1\), by Lee [39, Corollary 1.5] for \(g=2\), and by 5.3 for \(g=3\)):
\[\mathbb{E}(\#A_{1}(\mathbb{F}_{q})^{2}) =q^{2}+3q+1-\frac{1}{q}\] \[\mathbb{E}(\#A_{2}(\mathbb{F}_{q})^{2}) =q^{4}+3q^{3}+6q^{2}+3q-\frac{5q^{2}+5q+3}{q^{3}+q^{2}}\] \[\mathbb{E}(\#A_{3}(\mathbb{F}_{q})^{2}) =q^{6}+3q^{5}+6q^{4}+10q^{3}+6q^{2}+2q-2-\frac{8q^{5}+14q^{4}+12q ^{3}+7q^{2}-2q-7}{q^{6}+q^{5}+q^{4}+q^{3}+1}\]
Many more expected values are computed and displayed in 3.3, 4.3, and 5.4 later in the paper.
The above expected values are obtained by applying the Grothendieck-Lefschetz trace formula to the \(\ell\)-adic cohomology of the universal family of principally polarized Abelian varieties in order to produce the required point counts over finite fields. Let \(\mathcal{A}_{g}\) be the moduli of principally polarized Abelian varieties of dimension \(g\) and let \(\pi:\mathcal{X}_{g}\to\mathcal{A}_{g}\) be the universal family of Abelian varieties over \(\mathcal{A}_{g}\). Consider the \(n\)-fold fiber product:
\[\pi^{n}:\mathcal{X}_{g}^{\times n}:=\underbrace{\mathcal{X}_{g}\times_{ \mathcal{A}_{g}}\ldots\times_{\mathcal{A}_{g}}\mathcal{X}_{g}}_{n}\to\mathcal{ A}_{g}\]
Then the expected value \(\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})\) is related to the groupoid cardinality \(\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})\):
\[\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})=\frac{\#\mathcal{X}_{g}^{\times n}( \mathbb{F}_{q})}{\#\mathcal{A}_{g}(\mathbb{F}_{q})}\]
In order to compute the groupoid cardinalities \(\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})\) it is enough to compute the compactly supported Euler characteristic \(e_{c}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell}):=\sum_{i\geq 0}(-1)^{i}H_{c} ^{i}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\) as an element of the Grothendieck group of \(\ell\)-adic Galois representations, in which case by applying the Grothendieck-Lefschetz trace formula we have:
\[\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})=\mathrm{tr}(\mathrm{Frob}_{q}|e_ {c}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})):=\sum_{i\geq 0}\mathrm{tr}( \mathrm{Frob}_{q}|H_{c}^{i}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell}))\]
Note that since \(\mathcal{X}_{g}^{\times n}\) is the complement of a normal crossings divisor of a smooth proper Deligne-Mumford stack over \(\mathbb{Z}\) (see [20, Chapter VI, Theorem 1.1]), the \(\ell\)-adic etale cohomology \(H^{i}(\mathcal{X}_{g,\overline{\mathbb{Q}}}^{\times n},\mathbb{Q}_{\ell})\)
is unramified for all primes \(p\neq\ell\) (so that the action of \(\operatorname{Frob}_{p}\) is well-defined) and is isomorphic to the \(\ell\)-adic etale cohomology \(H^{i}(\mathcal{X}_{g,\overline{\mathbb{F}}_{p}}^{\times n},\mathbb{Q}_{\ell})\) as a representation of \(\operatorname{Gal}(\overline{\mathbb{F}}_{p}/\mathbb{F}_{p})\), with the action of \(\operatorname{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})\subseteq \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) factoring through the surjection \(\operatorname{Gal}(\overline{\mathbb{Q}}_{p}/\mathbb{Q}_{p})\to\operatorname{ Gal}(\overline{\mathbb{F}}_{p}/\mathbb{F}_{p})\). Consequently we will use the cohomology over \(\overline{\mathbb{Q}}\) and the cohomology over \(\overline{\mathbb{F}}_{p}\) somewhat interchangeably, dropping either of these fields from the subscript whenever stating results which are true for both of these situations, as we have done above.
The computation requires three results: the first result 1.3, due to Deligne, involves the degeneration of the Leray spectral sequence computing \(H^{*}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\) in terms of the cohomology of the \(\ell\)-adic local systems \(\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell}\) on \(\mathcal{A}_{g}\), the second result 1.5 expresses the local systems \(\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell}\) in terms of the local systems \(\mathbb{V}_{\lambda}\) on \(\mathcal{A}_{g}\) corresponding to the irreducible representation of \(\operatorname{Sp}_{2g}\) of highest weight \(\lambda\), and the third result (3.1 for \(g=1\) due to Eichler-Shimura, 4.1 for \(g=2\) due to Lee-Weintraub and Petersen, and 5.1 for \(g=3\) due to Hain and Bergstrom-Faber-van der Geer) computes the \(\ell\)-adic cohomology of the local systems \(\mathbb{V}_{\lambda}\) on \(\mathcal{A}_{g}\). These results about the cohomology of local systems rely on the work of many people and results of the Langlands program as input.
Indeed, the expected values displayed so far might give the impression that the compactly supported Euler characteristics \(e_{\mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\) are Tate type, so that the point counts \(\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})\) are polynomial in \(q\). This is not true in general: the compactly supported Euler characteristics \(e_{\mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\) in general involve \(\ell\)-adic Galois representations attached to vector-valued Siegel modular forms for \(\operatorname{Sp}_{2g}(\mathbb{Z})\), so that the point counts \(\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})\) in general involve traces of Hecke operators on spaces of vector-valued Siegel modular forms. The relation between traces of Frobenius and traces of Hecke operators is ultimately obtained by the Langlands-Kottwitz method by comparing the Grothendieck-Lefschetz trace formula to the stabilization of the Arthur-Selberg trace formula [37]; while this strategy is overly sophisticated in the case \(g=1\), it is the strategy used in the work of Petersen [46] in the case \(g=2\) and by unpublished work of Taibi [49] in the case \(g\geq 3\).
**Summary of Results.** For \(g=1,2\) we know enough about the cohomology of local systems on \(\mathcal{A}_{g}\) to compute \(H^{i}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\) as an \(\ell\)-adic Galois representation (up to semisimplification). In the case \(g=1\) a classical result of Eichler-Shimura (see for example [8, Theorem 2.3]) implies the following result:
**Theorem**.: 3.2 The cohomology \(H^{i}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) is Tate type for all \(i\) and all \(1\leq n\leq 9\). The cohomology \(H^{i}(\mathcal{X}_{1}^{\times 10},\mathbb{Q}_{\ell})\) is Tate type for all \(i\neq 11\), whereas for \(i=11\) we have
\[H^{11}(\mathcal{X}_{1}^{\times 10},\mathbb{Q}_{\ell})=\mathbb{S}_{\Gamma(1)}[12]+ \mathbb{L}^{11}+99\mathbb{L}^{10}+1925\mathbb{L}^{9}+12375\mathbb{L}^{8}+2970 0\mathbb{L}^{7}\]
where \(\mathbb{S}_{\Gamma(1)}[12]\) is the \(2\)-dimensional \(\ell\)-adic Galois representation attached to the weight \(12\) cusp form \(\Delta\in S_{12}(\Gamma(1))\). In particular the compactly supported Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type if \(n\geq 10\).
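Concretely, the modular contribution to the point count \(\#\mathcal{X}_{1}^{\times 10}(\mathbb{F}_{p})\) enters through the trace of Frobenius on \(\mathbb{S}_{\Gamma(1)}[12]\), which for a prime \(p\) is the Hecke eigenvalue \(\tau(p)\) of \(\Delta\). A short Python sketch (our own illustration) computes \(\tau(p)\) directly from the product expansion \(\Delta=q\prod_{n\geq 1}(1-q^{n})^{24}\):

```python
def delta_coefficients(N):
    """Coefficients tau(1), ..., tau(N) of Delta = q * prod_{n>=1} (1 - q^n)^24."""
    poly = [0] * (N + 1)   # truncated coefficients of prod (1 - q^n)^24
    poly[0] = 1
    for n in range(1, N + 1):
        for _ in range(24):                    # multiply by (1 - q^n), 24 times
            for k in range(N, n - 1, -1):
                poly[k] -= poly[k - n]
    return {m: poly[m - 1] for m in range(1, N + 1)}   # shift by the leading q

tau = delta_coefficients(30)
print(tau[2], tau[3], tau[5], tau[7])   # -24, 252, 4830, -16744
```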
In the case \(g=2\) results of Lee-Weintraub [40, Corollary 5.2.3] and Petersen [46, Theorem 2.1] imply following result:
**Theorem**.: 4.2 The cohomology \(H^{i}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) is Tate type for all \(i\) and all \(1\leq n\leq 6\). The cohomology \(H^{i}(\mathcal{X}_{2}^{\times 7},\mathbb{Q}_{\ell})\) is Tate type for all \(i\neq 17\), whereas for \(i=17\) we have
\[H^{17}(\mathcal{X}_{2}^{\times 7},\mathbb{Q}_{\ell})=\mathbb{S}_{\Gamma(1)}[18]+ \mathbb{L}^{17}+1176\mathbb{L}^{15}+63700\mathbb{L}^{13}+6860\mathbb{L}^{12}+3 21048\mathbb{L}^{11}+294440\mathbb{L}^{10}+\mathbb{L}^{9}\]
where \(\mathbb{S}_{\Gamma(1)}[18]\) is the \(2\)-dimensional \(\ell\)-adic Galois representation attached to the weight \(18\) cusp form \(f_{18}=\Delta E_{6}\in S_{18}(\Gamma(1))\). In particular the compactly supported Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type if \(n\geq 7\).
The cohomology groups \(H^{i}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) for \(1\leq n\leq 10\) and \(H^{i}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) for \(1\leq n\leq 7\) are displayed in the tables 1 and 2 at the end of the paper. The Euler characteristics \(e_{\mathrm{c}}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) for \(1\leq n\leq 10\) and \(e_{\mathrm{c}}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) for \(1\leq n\leq 7\) are displayed along with these theorems later in the paper.
In the case \(g=3\) there are precise conjectures of Bergstrom-Faber-van der Geer [8, Conjecture 7.1] about the compactly supported Euler characteristics of local systems on \(\mathcal{A}_{3}\) as an element of the Grothendieck group of \(\ell\)-adic Galois representations. These conjectures are now known at least for small highest weight \(\lambda\) using dimension formulas for spaces of vector-valued Siegel modular forms for \(\mathrm{Sp}_{6}(\mathbb{Z})\) obtained by Taibi [48]. These conjectures, along with a result of Hain [30, Theorem 1] implies the following result:
**Theorem**.: 5.3 Assume conjectures 5.1 and 5.2. Then the Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) is Tate type for all \(1\leq n\leq 5\). The compactly supported Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times 6},\mathbb{Q}_{\ell})\) is given by:
\[e_{\mathrm{c}}(\mathcal{X}_{3}^{\times 6},\mathbb{Q}_{\ell}) =(\mathbb{L}^{6}+21\mathbb{L}^{5}+120\mathbb{L}^{4}+280\mathbb{L}^ {3}+309\mathbb{L}^{2}+161\mathbb{L}+32)\mathbb{S}_{\Gamma(1)}[0,10]\] \[+\mathbb{L}^{24}+22\mathbb{L}^{23}+253\mathbb{L}^{22}+2024 \mathbb{L}^{21}+11362\mathbb{L}^{20}+46613\mathbb{L}^{19}\] \[+146665\mathbb{L}^{18}+364262\mathbb{L}^{17}+720246\mathbb{L}^{16 }+1084698\mathbb{L}^{15}+1036149\mathbb{L}^{14}+38201\mathbb{L}^{13}\] \[-1876517\mathbb{L}^{12}-3672164\mathbb{L}^{11}-4024657\mathbb{L}^ {10}-2554079\mathbb{L}^{9}+101830\mathbb{L}^{8}+2028655\mathbb{L}^{7}\] \[+2921857\mathbb{L}^{6}+2536864\mathbb{L}^{5}+1553198\mathbb{L}^{4 }+687157\mathbb{L}^{3}+215631\mathbb{L}^{2}+45035\mathbb{L}+4930\]
where \(\mathbb{S}_{\Gamma(1)}[0,10]=\mathbb{S}_{\Gamma(1)}[18]+\mathbb{L}^{9}+ \mathbb{L}^{8}\) is the \(4\)-dimensional \(\ell\)-adic Galois representation attached to the Saito-Kurokawa lift \(\chi_{10}\in S_{0,10}(\Gamma(1))\) of the weight \(18\) cusp form \(f_{18}=\Delta E_{6}\in S_{18}(\Gamma(1))\). In particular the compactly supported Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type if \(n\geq 6\).
The Euler characteristics \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) for \(1\leq n\leq 6\) are displayed along with these theorems later in the paper. In view of [12, Theorem 1.9], using the classification results of Chenevier-Taibi [16], these computations are unconditional for \(1\leq n\leq 3\) on the basis of point counts.
We have continued these computations until reaching the first modular contributions: in the case \(g=1\) the contribution is through the discriminant cusp form \(\Delta\in S_{12}(\Gamma(1))\) which contributes the irreducible \(2\)-dimensional \(\ell\)-adic Galois representation \(\mathbb{S}_{\Gamma(1)}[12]\), and in the case \(g=2\) and \(g=3\) the contributions are through the Saito-Kurokawa lift \(\chi_{10}\in S_{0,10}(\Gamma(1))\) which contributes the irreducible \(2\)-dimensional \(\ell\)-adic Galois representation \(\mathbb{S}_{\Gamma(1)}[18]\). One can continue further, where for \(g=2\), in the case \(n=11\) we have contributions from the vector-valued Siegel modular forms \(\chi_{6,8}\in S_{6,8}(\Gamma(1))\) and \(\chi_{4,10}\in S_{4,10}(\Gamma(1))\) of general type (see [23, Section 25] for the relevant dimensions), which contribute the irreducible \(4\)-dimensional \(\ell\)-adic Galois representations \(\mathbb{S}_{\Gamma(1)}[6,8]\) and \(\mathbb{S}_{\Gamma(1)}[4,10]\) (see [51, Theorem I, Theorem II]). For \(g=3\), in the case \(n=9\) we have a contribution from an \(8\)-dimensional \(\ell\)-adic Galois representation \(\mathbb{S}_{\Gamma(1)}[3,3,7]\) which decomposes into a \(1\)-dimensional \(\ell\)-adic Galois representation of Tate type and an irreducible \(7\)-dimensional \(\ell\)-adic Galois representation (see [8, Example 9.1]), which is explained by a functorial lift from the exceptional group \(\mathrm{G}_{2}\) predicted by [26]. This is to say that if one continues a bit further, one encounters more complicated \(\ell\)-adic Galois representations in cohomology governing these arithmetic statistics. We end up using each of these contributions to deduce that \(e_{\mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type above a certain range.
Nevertheless, it is reasonable to conjecture that these modular contributions to arithmetic statistics are negligible. As explained in [2], random matrix heuristics plausibly apply in the limit \(g\to\infty\) to the Frobenius eigenvalues of \(e_{\mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\) not explained by the existence of algebraic cycles, and bounding the traces of these matrices with high probability leads one to the heuristic that only certain Tate classes contribute to the Grothendieck-Lefschetz trace formula asymptotically.
Following this strategy, we pose the following conjecture about the distributions of the point counts \(\#A_{g}(\mathbb{F}_{q})\) in the limit \(g\to\infty\):
**Conjecture**.: 2.1 (compare to [2, Conjecture 1]) Let \(\lambda=1+\frac{1}{q}+\frac{1}{q(q-1)}=\frac{1}{1-q^{-1}}\). For all \(n\geq 1\) we have
\[\lim_{g\to\infty}q^{-ng}\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})=\lambda^{\frac {n(n+1)}{2}}\]
We pose a second conjecture 2.4 about the negligible contribution of certain classes to these point counts (compare to [2, Heuristic 2]) and show that this implies the first conjecture. The computations done in the cases \(g=1\), \(g=2\), and \(g=3\) are consistent with this conjecture in their respective stability ranges.
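As a quick sanity check of this consistency for the second moments displayed in the introduction, one can expand both sides in powers of \(q^{-1}\); the following SymPy sketch (our own verification, not part of the paper's argument) shows that \(q^{-2g}\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{2})\) agrees with \(\lambda^{3}=1+3q^{-1}+6q^{-2}+10q^{-3}+\cdots\) to more and more terms as \(g\) grows.

```python
from sympy import symbols, series, simplify

q, t = symbols('q t', positive=True)
lam = 1 / (1 - 1/q)

# Second moments E(#A_g(F_q)^2) for g = 1, 2, 3 as displayed in the introduction.
E2 = {
    1: q**2 + 3*q + 1 - 1/q,
    2: q**4 + 3*q**3 + 6*q**2 + 3*q - (5*q**2 + 5*q + 3)/(q**3 + q**2),
    3: (q**6 + 3*q**5 + 6*q**4 + 10*q**3 + 6*q**2 + 2*q - 2
        - (8*q**5 + 14*q**4 + 12*q**3 + 7*q**2 - 2*q - 7)/(q**6 + q**5 + q**4 + q**3 + 1)),
}

expand = lambda f: series(simplify(f).subs(q, 1/t), t, 0, 5)   # expansion in q^(-1)

print(expand(lam**3))                  # 1 + 3*t + 6*t**2 + 10*t**3 + 15*t**4 + O(t**5)
for g, m2 in E2.items():
    print(g, expand(m2 / q**(2*g)))    # agrees with lam**3 to higher order as g grows
```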
**Relation to Other Work.** Much work has been done regarding the cohomology of local systems on \(\mathcal{M}_{g,n}\) and its compactification (see [45] for a survey, and for example [4], [5], [6], [7], [11], [12], [13], [14], [25], [41]), and likewise for \(\mathcal{A}_{g}\) and its compactifications (see [32] for a survey, and for example [8], [15], [27], [28], [30], [33], [34], [40], [46]).
The method we have used to investigate arithmetic statistics for varieties over finite fields is hardly new: it is explained very clearly by Lee [39] in the case \(g=2\), where the computations of \(H^{i}(\mathcal{X}_{2},\mathbb{Q}_{\ell})\) and \(H^{i}(\mathcal{X}_{2}^{\times 2},\mathbb{Q}_{\ell})\) appear. The computations in the case \(g=3\) are new, but use the same method. The theme of identifying in which range modular contributions appear in the cohomology of fiber powers of the universal Abelian variety represents a departure from this previous work.
The work of Achter-Erman-Kedlaya-Wood-Zureick-Brown [2] concerns the point counts \(\#\mathcal{M}_{g,n}(\mathbb{F}_{q})\) in the limit \(g\to\infty\), and uses results of Madsen-Weiss [41] on cohomological stability for \(\mathcal{M}_{g,n}\) to show that the distributions of the point counts \(\#C_{g}(\mathbb{F}_{q})\) are asymptotically Poisson with mean \(q\lambda=q+1+\frac{1}{q-1}\), assuming a conjecture on the negligible contribution of non-tautological classes to point counts. We have used the same method to study the point counts \(\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})\) in the limit \(g\to\infty\), using results of Borel [9], [10] and Grushevsky-Hulek-Tommasi [28] on cohomological stability for \(\mathcal{X}_{g}^{\times n}\) to study the asymptotics of the distributions of the point counts \(\#A_{g}(\mathbb{F}_{q})\), assuming an analogous conjecture on the negligible contribution of unstable classes to point counts.
The work of Achter-Altrug-Garcia-Gordon [1] takes a rather different approach to the study arithmetic statistics for principally polarized Abelian varieties over \(\mathbb{F}_{q}\), starting from a theorem of Kottwitz relating masses of isogeny classes to volumes of tori and twisted orbital integrals, and then relating these to a product of local factors \(\nu_{v}([A,\lambda],\mathbb{F}_{q})\) over all places \(v\) of \(\mathbb{Q}\). By contrast, almost every result we have used about the Galois action on the \(\ell\)-adic cohomology of local systems on \(\mathcal{A}_{g}\) relies on the Langlands-Kottwitz method relating traces of Frobenius to traces of Hecke operators, starting from the same theorem of Kottwitz and ultimately relating this to the stabilization of the Arthur-Selberg trace formula. It may be interesting to relate these two approaches, for instance by reexamining the computations in this paper in terms of explicit computations of twisted orbital integrals.
**Acknowledgments.** My deepest gratitude goes to Seraphina Lee for providing an early draft of her paper [39] and a Sage program on which these computations are based, and for her continued interest and discussions relevant to this work, in particular for catching some errors in earlier drafts. I also thank Jonas Bergstrom for helpful discussions regarding the range in which the conjectures on the cohomology of local systems on \(\mathcal{A}_{3}\) are unconditional.
I would also like to thank Jim Arthur for his support, and Julia Gordon for giving a talk at the Fields Institute Beyond Endoscopy Mini-Conference which so clearly emphasized to me the connection between arithmetic statistics for Abelian varieties and results of Langlands and Kottwitz.
Finally I would like to thank Benson Farb and Dan Petersen for encouraging this work in the beginning, and Daniel Litt for encouraging me to finally finish it.
## 1 Arithmetic Statistics and Cohomology of Moduli Stacks
We now explain the method we use to study point counts of Abelian varieties over finite fields in terms of the \(\ell\)-adic cohomology of their moduli stacks, following Lee [39].
**Moduli of Abelian Varieties.** Let \(\mathcal{A}_{g}\) be the moduli stack of principally polarized Abelian varieties of dimension \(g\) which is a smooth Deligne-Mumford stack of dimension \(\dim(\mathcal{A}_{g})=\frac{g(g+1)}{2}\) over \(\mathbb{Z}\) (and hence over any \(\mathbb{F}_{q}\) by base change) and let \(\mathcal{A}_{g}(\mathbb{F}_{q})\) be the groupoid of principally polarized Abelian varieties of dimension \(g\) over \(\mathbb{F}_{q}\). Let \(\pi:\mathcal{X}_{g}\to\mathcal{A}_{g}\) be the universal family of Abelian varieties over \(\mathcal{A}_{g}\). For \(n\geq 1\) consider the \(n\)-th fiber power of the universal family
\[\pi^{n}:\mathcal{X}_{g}^{\times n}:=\underbrace{\mathcal{X}_{g}\times_{ \mathcal{A}_{g}}\ldots\times_{\mathcal{A}_{g}}\mathcal{X}_{g}}_{n}\to\mathcal{ A}_{g}\]
which is a smooth Deligne-Mumford stack of dimension \(\dim(\mathcal{X}_{g}^{\times n})=\frac{g(g+1)}{2}+ng\) over \(\mathbb{Z}\) (and hence over any \(\mathbb{F}_{q}\) by base change). The fiber of \(\pi^{n}:\mathcal{X}_{g}^{\times n}\to\mathcal{A}_{g}\) over a point \([A,\lambda]\in\mathcal{A}_{g}\) is the product \(A^{n}\), so the point counts \(\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})\) encode the point counts \(\#A(\mathbb{F}_{q})^{n}\) averaged over their moduli and weighted by the number of automorphisms.
By definition the expected value \(\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})\) of the random variable \(\#A_{g}(\mathbb{F}_{q})^{n}\) with respect to the probability measure \(\mu_{\mathcal{A}_{g}(\mathbb{F}_{q})}\) defined in the introduction is given by
\[\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})=\sum_{[A,\lambda]\in[\mathcal{A}_{g}( \mathbb{F}_{q})]}\frac{\#A(\mathbb{F}_{q})^{n}}{\#\mathcal{A}_{g}(\mathbb{F}_ {q})\#\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)}\]
which is related to the groupoid cardinality \(\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})\) as follows:
**Proposition 1.1**.: (Compare to [39, Lemma 6.8]) The expected value \(\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})\) is given
\[\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n}):=\frac{\#\mathcal{X}_{g}^{\times n}( \mathbb{F}_{q})}{\#\mathcal{A}_{g}(\mathbb{F}_{q})}\]
Proof.: Let \([A,\lambda]\in[\mathcal{A}_{g}(\mathbb{F}_{q})]\) and consider the action of \(\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)\) on \(A^{n}\). Consider the action groupoid \([A(\mathbb{F}_{q})^{n}]:=A(\mathbb{F}_{q})^{n}/\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)\). For \(\underline{x}\in A(\mathbb{F}_{q})^{n}\) let \(\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda;\underline{x})\subseteq\mathrm{Aut}_{ \mathbb{F}_{q}}(A,\lambda)\) be the
subgroup stabilizing \(\underline{x}\), and let \(\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)\cdot\underline{x}\) be the \(\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)\)-orbit of \(\underline{x}\). By the orbit-stabilizer theorem we have
\[\sum_{[\underline{x}]\in[A(\mathbb{F}_{q})^{n}]}\frac{1}{\#\mathrm{Aut}(A, \lambda;\underline{x})}=\sum_{[\underline{x}]\in[A(\mathbb{F}_{q})^{n}]} \frac{\#(\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)\cdot\underline{x})}{\# \mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)}=\frac{\#A(\mathbb{F}_{q})^{n}}{\# \mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)}\]
It follows that
\[\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n}) =\sum_{[A,\lambda]\in[\mathcal{A}_{g}(\mathbb{F}_{q})]}\frac{\#A(\mathbb{F}_{q})^{n}}{\#\mathcal{A}_{g}(\mathbb{F}_{q})\#\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda)}\] \[=\frac{1}{\#\mathcal{A}_{g}(\mathbb{F}_{q})}\sum_{[A,\lambda;\underline{x}]\in[\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})]}\frac{1}{\#\mathrm{Aut}_{\mathbb{F}_{q}}(A,\lambda;\underline{x})}=\frac{\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})}{\#\mathcal{A}_{g}(\mathbb{F}_{q})}\qed\]
We will consider the moment generating function
\[M_{\#A_{g}(\mathbb{F}_{q})}(t):=\sum_{n\geq 0}\mathbb{E}(\#A_{g}(\mathbb{F}_{q} )^{n})\frac{t^{n}}{n!}=\sum_{n\geq 0}\frac{\#\mathcal{X}_{g}^{\times n}( \mathbb{F}_{q})}{\#\mathcal{A}_{g}(\mathbb{F}_{q})}\frac{t^{n}}{n!}\]
and we will consider the following normalization of the moment generating function
\[\widetilde{M}_{\#A_{g}(\mathbb{F}_{q})}(t):=M_{\#A_{g}(\mathbb{F}_{q})}(q^{-g} t)=\sum_{n\geq 0}q^{-ng}\frac{\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})}{\# \mathcal{A}_{g}(\mathbb{F}_{q})}\frac{t^{n}}{n!}\]
which behaves better in the limit \(g\to\infty\).
**Grothendieck-Lefschetz Trace Formula.** Now let \(\mathcal{X}\) be a Deligne-Mumford stack of finite type over \(\mathbb{F}_{q}\), and fix a prime \(\ell\) not dividing \(q\). For \(\mathbb{V}\) an etale \(\mathbb{Q}_{\ell}\)-sheaf on \(\mathcal{X}\) along with a choice of \(\mathbb{Z}_{\ell}\)-lattice \(\mathbb{V}_{0}\) write \(H^{i}(\mathcal{X},\mathbb{V})\) for the \(\ell\)-adic etale cohomology \(H^{i}_{\mathrm{et}}(\mathcal{X}_{\overline{\mathbb{F}}_{q}},\mathbb{V})=\varprojlim_{n}H^{i}_{\mathrm{et}}(\mathcal{X}_{\overline{\mathbb{F}}_{q}},\mathbb{V}_{0}/\ell^{n})\otimes_{\mathbb{Z}_{\ell}}\mathbb{Q}_{\ell}\) and write \(\phi_{q}:H^{i}(\mathcal{X},\mathbb{V})\to H^{i}(\mathcal{X},\mathbb{V})\) for the arithmetic Frobenius. Similarly, write \(H^{i}_{\mathrm{c}}(\mathcal{X},\mathbb{V})\) for the compactly supported \(\ell\)-adic etale cohomology \(H^{i}_{\mathrm{c}}(\mathcal{X}_{\overline{\mathbb{F}}_{q}},\mathbb{V})=\varprojlim_{n}H^{i}_{\mathrm{c,et}}(\mathcal{X}_{\overline{\mathbb{F}}_{q}},\mathbb{V}_{0}/\ell^{n})\otimes_{\mathbb{Z}_{\ell}}\mathbb{Q}_{\ell}\) and write \(\mathrm{Frob}_{q}:H^{i}_{\mathrm{c}}(\mathcal{X},\mathbb{V})\to H^{i}_{\mathrm{c}}(\mathcal{X},\mathbb{V})\) for the geometric Frobenius.
When \(\mathcal{X}\) is smooth and has constant dimension the groupoid cardinality \(\#\mathcal{X}(\mathbb{F}_{q})\) can be computed by a Grothendieck-Lefschetz trace formula as the alternating sum of traces of arithmetic (geometric) Frobenius on the (compactly supported) \(\ell\)-adic cohomology of \(\mathcal{X}\):
**Proposition 1.2**.: Let \(\mathcal{X}\) be a smooth Deligne-Mumford stack of finite type and constant dimension \(d\) over \(\mathbb{F}_{q}\). Then we have
\[\#\mathcal{X}(\mathbb{F}_{q})=q^{d}\sum_{i\geq 0}(-1)^{i}\mathrm{tr}(\phi_{q}|H^ {i}(\mathcal{X},\mathbb{Q}_{\ell}))=\sum_{i\geq 0}(-1)^{i}\mathrm{tr}( \mathrm{Frob}_{q}|H^{i}_{\mathrm{c}}(\mathcal{X},\mathbb{Q}_{\ell}))\]
Proof.: The first equality follows by [3, Theorem 2.4.5], noting that the etale cohomology of Deligne-Mumford stacks agrees with the smooth cohomology used in this theorem. The second equality follows by Poincare duality (see [52, Proposition 2.30] for the case of Deligne-Mumford stacks), noting that \(q^{d}\mathrm{tr}(\phi_{q}|H^{i}(\mathcal{X},\mathbb{Q}_{\ell}))=\mathrm{tr}( \mathrm{Frob}_{q}|H^{2d-i}_{\mathrm{c}}(\mathcal{X},\mathbb{Q}_{\ell}))\).
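For instance, for the moduli stack of elliptic curves one has \(e_{\mathrm{c}}(\mathcal{A}_{1},\mathbb{Q}_{\ell})=\mathbb{L}\) (see Section 3 below), so the formula recovers the classical weighted point count \(\#\mathcal{A}_{1}(\mathbb{F}_{q})=\mathrm{tr}(\mathrm{Frob}_{q}|\mathbb{L})=q\).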
It follows that we have
\[\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})=\frac{\mathrm{tr}(\mathrm{Frob}_{q}|e_{ \mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell}))}{\mathrm{tr}( \mathrm{Frob}_{q}|e_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{Q}_{\ell}))}:=\frac{ \sum_{i\geq 0}(-1)^{i}\mathrm{tr}(\mathrm{Frob}_{q}|H^{i}_{\mathrm{c}}( \mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell}))}{\sum_{i\geq 0}(-1)^{i} \mathrm{tr}(\mathrm{Frob}_{q}|H^{i}_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{Q}_{ \ell}))}\]
It remains to compute the Euler characteristics \(e(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell}):=\sum_{i\geq 0}(-1)^{i}H^{i}( \mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\), or Poincare dually the compactly supported Euler characteristics \(e_{\mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell}):=\sum_{i\geq 0}(-1 )^{i}H^{i}_{\mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\), as elements of the Grothendieck group of \(\ell\)-adic Galois representations.
**Leray Spectral Sequence.** Now we would like to compute the cohomology of \(\mathcal{X}_{g}^{\times n}\) in terms of the cohomology of local systems on \(\mathcal{A}_{g}\). We observe that the Leray spectral sequence for the morphism \(\pi^{n}:\mathcal{X}_{g}^{\times n}\to\mathcal{A}_{g}\) degenerates at the \(E_{2}\)-page, as it does for smooth projective morphisms of schemes:
**Proposition 1.3**.: (Compare to [39, Proposition 2.8]) We have a spectral sequence
\[E_{2}^{i,j}=H^{i}(\mathcal{A}_{g},\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell}) \Rightarrow H^{i+j}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\]
which degenerates at the \(E_{2}\)-page, and we have a spectral sequence
\[E_{2}^{i,j}=H^{i}_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{R}^{j}\pi_{*}^{n} \mathbb{Q}_{\ell})\Rightarrow H^{i+j}_{\mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\]
which degenerates at the \(E_{2}\)-page.
Proof.: Let \(N\geq 3\) and let \(\mathcal{A}_{g}[N]\) be the moduli stack of principally polarized Abelian varieties of dimension \(g\) with full level \(N\) structure, which is a smooth quasi-projective scheme over \(\mathbb{Z}[\frac{1}{N}]\) (and hence over \(\mathbb{Q}\) or over any \(\mathbb{F}_{q}\) for \(q=p^{k}\) with \(p\nmid N\) by base change). Let \(\pi:\mathcal{X}_{g}[N]\to\mathcal{A}_{g}[N]\) be the universal family of Abelian varieties over \(\mathcal{A}_{g}[N]\). For \(n\geq 1\) consider the \(n\)-th fiber power of the universal family
\[\pi^{n}:\mathcal{X}_{g}[N]^{\times n}:=\underbrace{\mathcal{X}_{g}[N]\times_{ \mathcal{A}_{g}}\ldots\times_{\mathcal{A}_{g}}\mathcal{X}_{g}[N]}_{n}\to \mathcal{A}_{g}[N]\]
which is a smooth quasi-projective scheme over \(\mathbb{Z}[\frac{1}{N}]\) (and hence over \(\mathbb{Q}\) or over any \(\mathbb{F}_{q}\) for \(q=p^{k}\) with \(p\nmid N\) by base change). Since \(\pi^{n}:\mathcal{X}_{g}[N]^{\times n}\to\mathcal{A}_{g}[N]\) is a smooth projective morphism, the Leray spectral sequence
\[E_{2}^{i,j}=H^{i}(\mathcal{A}_{g}[N],\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{ \ell})\Rightarrow H^{i+j}(\mathcal{X}_{g}[N]^{\times n},\mathbb{Q}_{\ell})\]
degenerates at the \(E_{2}\)-page (see for example [18, Proposition 2.4] and [19, Theorem 4.1.1]), so we have an isomorphism
\[\bigoplus_{i+j=k}H^{i}(\mathcal{A}_{g}[N],\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_ {\ell})\simeq H^{k}(\mathcal{X}_{g}[N]^{\times n},\mathbb{Q}_{\ell})\]
of \(\ell\)-adic Galois representations up to semisimplification. Now by the Hochschild-Serre spectral sequence [42, Theorem 2.20] for the \(\mathrm{Sp}_{2g}(\mathbb{Z}/N\mathbb{Z})\)-quotient \(\mathcal{A}_{g}[N]\to\mathcal{A}_{g}\) we have
\[H^{i}(\mathcal{A}_{g}[N],\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell})^{\mathrm{ Sp}_{2g}(\mathbb{Z}/N\mathbb{Z})}\simeq H^{i}(\mathcal{A}_{g},\mathbb{R}^{j} \pi_{*}^{n}\mathbb{Q}_{\ell})\]
and by the Hochschild-Serre spectral sequence for the \(\operatorname{Sp}_{2g}(\mathbb{Z}/N\mathbb{Z})\)-quotient \(\mathcal{X}_{g}[N]^{\times n}\to\mathcal{X}_{g}^{\times n}\) (with \(\operatorname{Sp}_{2g}(\mathbb{Z}/N\mathbb{Z})\) acting diagonally) we have
\[\bigoplus_{i+j=k}H^{i}(\mathcal{A}_{g}[N],\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_ {\ell})^{\operatorname{Sp}_{2g}(\mathbb{Z}/N\mathbb{Z})}\simeq H^{k}(\mathcal{ X}_{g}[N]^{\times n},\mathbb{Q}_{\ell})^{\operatorname{Sp}_{2g}(\mathbb{Z}/N \mathbb{Z})}\simeq H^{k}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\]
so by naturality of the Leray spectral sequence we can take \(\operatorname{Sp}_{2g}(\mathbb{Z}/N\mathbb{Z})\)-invariants and it follows that the Leray spectral sequence
\[E_{2}^{i,j}=H^{i}(\mathcal{A}_{g},\mathbb{R}^{j}\pi_{*}^{n} \mathbb{Q}_{\ell})\Rightarrow H^{i+j}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_ {\ell})\]
degenerates at the \(E_{2}\)-page. The proof for the Leray spectral sequence for compactly supported cohomology is similar, and follows by Poincare duality, noting that \(\mathbb{R}^{j}\pi_{!}^{n}\mathbb{Q}_{\ell}\simeq\mathbb{R}^{j}\pi_{*}^{n} \mathbb{Q}_{\ell}\) since \(\pi^{n}\) is proper.
**Corollary 1.4**.: We have
\[e(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})=\sum_{j\geq 0}(-1)^{j}e( \mathcal{A}_{g},\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell})\]
and we have
\[e_{\mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})= \sum_{j\geq 0}(-1)^{j}e_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{R}^{j}\pi_{*}^{n} \mathbb{Q}_{\ell})\]
as an element of the Grothendieck group of \(\ell\)-adic Galois representations.
**Kunneth Formula.** We can make one further simplification by using the Kunneth formula to express the \(\ell\)-adic sheaves \(\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell}\) in terms of the \(\ell\)-adic sheaves \(\mathbb{R}^{j}\pi_{*}\mathbb{Q}_{\ell}\):
**Proposition 1.5**.: We have an isomorphism
\[\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell}\simeq\bigoplus_{ \begin{subarray}{c}\lambda\vdash j\\ \lambda=(1^{j_{1}}\ldots n^{j_{n}})\end{subarray}}\bigotimes_{1\leq i\leq n} \wedge^{j_{i}}\mathbb{V}\]
where \(\mathbb{V}=\mathbb{R}^{1}\pi_{*}\mathbb{Q}_{\ell}\) is the \(\ell\)-adic local system on \(\mathcal{A}_{g}\) whose fiber over \([A,\lambda]\in\mathcal{A}_{g}\) is \(H^{1}(A,\mathbb{Q}_{\ell})\) corresponding to the standard representation of \(\operatorname{Sp}_{2g}\).
Proof.: By the Kunneth formula (see [52, Corollary 2.20] for the case of Deligne-Mumford stacks) we have an isomorphism \(\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell}\simeq\bigoplus_{j_{1}+j_{2}=j}(\mathbb{R}^{j_{1}}\pi_{*}^{n-1}\mathbb{Q}_{\ell})\otimes(\mathbb{R}^{j_{2}}\pi_{*}\mathbb{Q}_{\ell})\), so by induction on \(n\) it follows that
\[\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell}\simeq\bigoplus_{ \begin{subarray}{c}\lambda\vdash j\\ \lambda=(1^{j_{1}}\ldots n^{j_{n}})\end{subarray}}\bigotimes_{1\leq i\leq n} \mathbb{R}^{j_{i}}\pi_{*}\mathbb{Q}_{\ell}\]
Now the result follows since \(\mathbb{R}^{j}\pi_{*}\mathbb{Q}_{\ell}\simeq\wedge^{j}\mathbb{V}\) is the \(\ell\)-adic local system on \(\mathcal{A}_{g}\) whose fiber over \([A,\lambda]\in\mathcal{A}_{g}\) is \(H^{j}(A,\mathbb{Q}_{\ell})\simeq\wedge^{j}H^{1}(A,\mathbb{Q}_{\ell})\).
For \(\lambda=(\lambda_{1}\geq\ldots\geq\lambda_{g}\geq 0)\) a highest weight for \(\mathrm{Sp}_{2g}\) let \(\mathbb{V}_{\lambda}\) be the \(\ell\)-adic local system on \(\mathcal{A}_{g}\) occurring in \(\mathrm{Sym}^{\lambda_{1}-\lambda_{2}}(\mathbb{V})\otimes\ldots\otimes\mathrm{ Sym}^{\lambda_{g-1}-\lambda_{g}}(\wedge^{g-1}\mathbb{V})\otimes\mathrm{Sym}^{ \lambda_{g}}(\wedge^{g}\mathbb{V})\) corresponding to the irreducible highest weight representation \(V_{\lambda}\) of \(\mathrm{Sp}_{2g}\). The tensor product of highest weight representations decomposes as a direct sum of highest weight representations with multiplicities
\[\mathbb{V}_{\lambda}\otimes\mathbb{V}_{\lambda^{\prime}}\simeq\bigoplus_{ \lambda^{\prime\prime}}m_{\lambda,\lambda^{\prime},\lambda^{\prime\prime}} \mathbb{V}_{\lambda^{\prime\prime}}\]
where the multiplicities \(m_{\lambda,\lambda^{\prime},\lambda^{\prime\prime}}\) can be computed in terms of Littlewood-Richardson coefficients and the image of the specialization morphism from the universal character ring (see [35, Theorem 3.1] and [36, Section 2.2], though we will not use this description in later computations).
It follows that we have a decomposition
\[\mathbb{R}^{j}\pi_{*}^{n}\mathbb{Q}_{\ell}\simeq\bigoplus_{\lambda}\mathbb{V }_{\lambda}(\tfrac{|\lambda|-j}{2})^{\oplus m_{\lambda}^{j,n}}\]
where the \(\mathbb{V}_{\lambda}\) are irreducible \(\ell\)-adic local systems on \(\mathcal{A}_{g}\) with multiplicity \(m_{\lambda}^{j,n}\geq 0\) determined by Newell-Littlewood numbers, and where \(|\lambda|=\lambda_{1}+\ldots+\lambda_{g}\). Then we have
\[e_{\mathrm{c}}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})=\sum_{j\geq 0}(-1 )^{j}\sum_{\lambda}m_{\lambda}^{j,n}e_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{V}_ {\lambda})(\tfrac{|\lambda|-j}{2})=\sum_{\lambda}f_{\lambda}^{n}(\mathbb{L})e_ {\mathrm{c}}(\mathcal{A}_{g},\mathbb{V}_{\lambda})\]
as elements of the Grothendieck group of \(\ell\)-adic Galois representations, where \(f_{\lambda}^{n}(\mathbb{L})=\sum_{j\geq 0}(-1)^{j}m_{\lambda}^{j,n}\mathbb{L}^{\frac{j-|\lambda|}{2}}\) is a polynomial in the Lefschetz motive \(\mathbb{L}=\mathbb{Q}_{\ell}(-1)\), in which case by applying the Grothendieck-Lefschetz trace formula we obtain
\[\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})=\frac{\sum_{\lambda}\mathrm{tr}(\mathrm{Frob}_{q}|f_{\lambda}^{n}(\mathbb{L})e_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{V}_{\lambda}))}{\mathrm{tr}(\mathrm{Frob}_{q}|e_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{Q}_{\ell}))}=\frac{\sum_{\lambda}f_{\lambda}^{n}(q)\mathrm{tr}(\mathrm{Frob}_{q}|e_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{V}_{\lambda}))}{\mathrm{tr}(\mathrm{Frob}_{q}|e_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{Q}_{\ell}))}\]
We have reduced the problem of computing the moments \(\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})\) to the problem of computing the multiplicities \(m_{\lambda}^{j,n}\), and to the problem of computing the Euler characteristics \(e_{\mathrm{c}}(\mathcal{A}_{g},\mathbb{V}_{\lambda})\) as elements of the Grothendieck group of \(\ell\)-adic Galois representations. The first problem is straightforward, although it is perhaps not so easy to produce clean expressions for these multiplicities except for small \(g\). The second problem is more difficult: explicit computations are only known for \(g=1\) by results of Eichler-Shimura, for \(g=2\) by results of Lee-Weintraub [40] and Petersen [46], and for \(g=3\) by results of Hain [30] and conjectures of Bergstrom-Faber-van der Geer [8]. We will summarize these computations at the end of the paper.
## 2 Conjectures on Point Counts as \(g\to\infty\)
We now consider the asymptotics of the distributions of the point counts \(\#A_{g}(\mathbb{F}_{q})\) in the limit \(g\to\infty\). Following the strategy of [2], we pose the following conjecture:
**Conjecture 2.1**.: (compare to [2, Conjecture 1]) Let \(\lambda=1+\frac{1}{q}+\frac{1}{q(q-1)}=\frac{1}{1-q^{-1}}\). For all \(n\geq 1\) we have
\[\lim_{g\to\infty}q^{-ng}\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{n})=\lim_{g\to \infty}q^{-ng}\frac{\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})}{\#\mathcal{A} _{g}(\mathbb{F}_{q})}=\lambda^{\frac{n(n+1)}{2}}\]
In other words, for \(P(\lambda)\) the distribution with moment generating function \(M_{P(\lambda)}(t)=\sum_{n\geq 0}\lambda^{\frac{n(n+1)}{2}}\frac{t^{n}}{n!}\), the conjecture predicts
\[\lim_{g\to\infty}\widetilde{M}_{\#A_{g}(\mathbb{F}_{q})}(t)=M_{P( \lambda)}(t)\]
so that the distributions of the normalized point counts \(q^{-g}\#A_{g}(\mathbb{F}_{q})\) converge to the distribution \(P(\lambda)\) in the limit \(g\to\infty\).
**Remark 2.2**.: Let \(\mathcal{M}_{g}\) be the moduli stack of genus \(g\) curves and let \(\mathcal{M}_{g,n}\) be the moduli stack of genus \(g\) curves with \(n\) marked points, which are smooth Deligne-Mumford stacks over \(\mathbb{Z}\) (and hence over any \(\mathbb{F}_{q}\) by base change). On the discrete probability space \(([\mathcal{M}_{g}(\mathbb{F}_{q})],2^{[\mathcal{M}_{g}(\mathbb{F}_{q})]},\mu_ {\mathcal{M}_{g}(\mathbb{F}_{q})})\) consider the random variable \(\#C_{g}:[\mathcal{M}_{g}(\mathbb{F}_{q})]\to\mathbb{Z}\) assigning to \([C]\in[\mathcal{M}_{g}(\mathbb{F}_{q})]\) the point count \(\#C(\mathbb{F}_{q})\). With the above normalization and with the same \(\lambda\) as above, [2, Conjecture 1] reads
\[\lim_{g\to\infty}q^{-n}\mathbb{E}(\#C_{g}(\mathbb{F}_{q})_{n})= \lim_{g\to\infty}q^{-n}\frac{\#\mathcal{M}_{g,n}(\mathbb{F}_{q})}{\#\mathcal{M }_{g}(\mathbb{F}_{q})}=\lambda^{n}\]
where \(X_{n}=X(X-1)\dots(X-n+1)\) is the falling factorial. In other words, for \(\mathrm{Pois}(\lambda)\) the Poisson distribution with mean \(\lambda\) and with falling moment generating function \(\underline{M}_{\mathrm{Pois}(\lambda)}(t)=\sum_{n\geq 0}\lambda^{n}\frac{t^{n}}{n!}\), the conjecture predicts
\[\lim_{g\to\infty}\underline{\widetilde{M}}_{\#C_{g}(\mathbb{F}_{q})}(t)=\underline{M}_{\mathrm{Pois}(\lambda)}(t)\]
where \(\underline{\widetilde{M}}_{\#C_{g}(\mathbb{F}_{q})}(t):=\underline{M}_{\#C_{ g}(\mathbb{F}_{q})}(q^{-1}t)=\sum_{n\geq 0}q^{-n}\frac{\#\mathcal{M}_{g,n}( \mathbb{F}_{q})}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\frac{t^{n}}{n!}\) is the normalization of the falling moment generating function \(\underline{M}_{\#C_{g}(\mathbb{F}_{q})}(t):=\sum_{n\geq 0}\mathbb{E}(\#C_{g}( \mathbb{F}_{q})_{n})\frac{t^{n}}{n!}=\sum_{n\geq 0}\frac{\#\mathcal{M}_{g,n}( \mathbb{F}_{q})}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\frac{t^{n}}{n!}\), so that the distributions of the normalized point counts \(q^{-1}\#C_{g}(\mathbb{F}_{q})\) converge to a Poisson distribution with mean \(\lambda\) in the limit \(g\to\infty\). It would be interesting to give a conceptual explanation for why the same \(\lambda\) appears.
**Cohomological Stability.** We now review some results on cohomological stability for \(\mathcal{A}_{g}\) and \(\mathcal{X}_{g}^{\times n}\). Consider the product morphism
\[\mathcal{A}_{g_{1}}(\mathbb{C})\times\mathcal{A}_{g_{2}}(\mathbb{ C}) \to\mathcal{A}_{g_{1}+g_{2}}(\mathbb{C})\] \[([A_{1}],[A_{2}]) \mapsto[A_{1}\times A_{2}]\]
Choosing an elliptic curve \([E]\in\mathcal{A}_{1}(\mathbb{C})\) we obtain a morphism
\[\mathcal{A}_{g}(\mathbb{C}) \to\mathcal{A}_{g+1}(\mathbb{C})\] \[[A] \mapsto[A\times E]\]
such that induced morphism on cohomology \(H^{*}(\mathcal{A}_{g+1}(\mathbb{C}),\mathbb{Q})\to H^{*}(\mathcal{A}_{g}( \mathbb{C}),\mathbb{Q})\) does not depend on the choice of elliptic curve \(E\), since any two elliptic curves over \(\mathbb{C}\) are homotopy equivalent. Similarly we obtain a morphism
\[\mathcal{X}_{g}^{\times n}(\mathbb{C}) \to\mathcal{X}_{g+1}^{\times n}(\mathbb{C})\] \[[A;x_{1},\dots,x_{n}] \mapsto[A\times E;(x_{1},0),\dots,(x_{n},0)]\]
such that the induced morphism on cohomology \(H^{*}(\mathcal{X}_{g+1}^{\times n}(\mathbb{C}),\mathbb{Q})\to H^{*}(\mathcal{X}_{g }^{\times n}(\mathbb{C}),\mathbb{Q})\) does not depend on the choice of elliptic curve \(E\) for the same reason as above.
By [9, Theorem 7.5] and [10, Theorem 4.4] (and by [29, Theorem 3.2] making the stability range explicit), the cohomology \(H^{i}(\mathcal{A}_{g}(\mathbb{C}),\mathbb{Q})\) stabilizes in degrees \(0\leq i\leq g-1\), where it agrees with the inverse limit \(H^{i}(\mathcal{A}_{\infty}(\mathbb{C}),\mathbb{Q})=\varprojlim_{g}H^{i}(\mathcal{A}_{g}(\mathbb{C}),\mathbb{Q})\). The stable cohomology \(H^{*}(\mathcal{A}_{\infty}(\mathbb{C}),\mathbb{Q})\) is a free graded \(\mathbb{Q}\)-algebra, which has the following description.
Consider the graded \(\mathbb{Q}\)-algebra \(S^{*}=\mathbb{Q}[\lambda_{i}]_{i\geq 1\text{ odd}}\) where \(\deg(\lambda_{i})=2i\). We have an isomorphism of graded \(\mathbb{Q}\)-algebras
\[S^{*} \xrightarrow{\sim}H^{*}(\mathcal{A}_{\infty}(\mathbb{C}), \mathbb{Q})\] \[\lambda_{i} \mapsto\pi_{*}u_{i}\]
where \(u_{i}=c_{i}(\Omega_{\mathcal{X}_{g}/\mathcal{A}_{g}})\) is the \(i\)-th Chern class of the relative canonical bundle of the universal family \(\pi:\mathcal{X}_{g}\to\mathcal{A}_{g}\). In particular we have an isomorphism \(S^{i}\xrightarrow{\sim}H^{i}(\mathcal{A}_{g}(\mathbb{C}),\mathbb{Q})\) for all \(0\leq i\leq g-1\).
More generally by [28, Theorem 6.1] the cohomology \(H^{i}(\mathcal{X}_{g}^{\times n}(\mathbb{C}),\mathbb{Q})\) stabilizes in degrees \(0\leq i\leq g-1\), where it agrees with the inverse limit \(H^{i}(\mathcal{X}_{\infty}^{\times n}(\mathbb{C}),\mathbb{Q})=\varprojlim_{g} H^{i}(\mathcal{X}_{g}^{\times n}(\mathbb{C}),\mathbb{Q})\). The stable cohomology \(H^{*}(\mathcal{X}_{\infty}^{\times n}(\mathbb{C}),\mathbb{Q})\) is a free \(H^{*}(\mathcal{A}_{\infty}(\mathbb{C}),\mathbb{Q})\)-algebra, which has the following description.
Consider the graded \(\mathbb{Q}\)-algebra \(S^{*}_{n}=S^{*}[T_{i}]_{1\leq i\leq n}[P_{i,j}]_{1\leq i<j\leq n}\) where \(\deg(T_{i})=\deg(P_{i,j})=2\). We have an isomorphism of graded \(S^{*}\simeq H^{*}(\mathcal{A}_{\infty}(\mathbb{C}),\mathbb{Q})\)-algebras
\[S^{*}_{n} \xrightarrow{\sim}H^{*}(\mathcal{X}_{\infty}^{\times n}(\mathbb{ C}),\mathbb{Q})\] \[\lambda_{i} \mapsto\pi_{*}u_{i}\] \[T_{i} \mapsto\pi_{i}^{*}\Theta\] \[P_{i,j} \mapsto\pi_{i,j}^{*}P\]
where \(\Theta\in H^{2}(\mathcal{X}_{g}(\mathbb{C}),\mathbb{Q})\) is the class of the universal theta divisor trivialized along the zero section and \(\pi_{i}:\mathcal{X}_{g}^{\times n}\to\mathcal{X}_{g}\) is the \(i\)-th projection, and where \(P\in H^{2}(\mathcal{X}_{g}^{\times 2}(\mathbb{C}),\mathbb{Q})\) is the class of the universal Poincare divisor trivialized along the zero section and \(\pi_{i,j}:\mathcal{X}_{g}^{\times n}\to\mathcal{X}_{g}^{\times 2}\) is the \((i,j)\)-th projection. In particular we have an isomorphism \(S^{i}_{n}\xrightarrow{\sim}H^{i}(\mathcal{X}_{g}^{\times n}(\mathbb{C}), \mathbb{Q})\) for all \(0\leq i\leq g-1\).
We now consider the action of Frobenius on \(\ell\)-adic cohomology. Consider the graded \(\mathbb{Q}_{\ell}\)-algebra \(S^{*}_{n,\ell}=S^{*}_{n}\otimes_{\mathbb{Q}}\mathbb{Q}_{\ell}\) with endomorphism \(\operatorname{Frob}_{q}\) given by \(\operatorname{Frob}_{q}(\lambda_{i})=q^{i}\lambda_{i}\), \(\operatorname{Frob}_{q}(T_{i})=qT_{i}\), and \(\operatorname{Frob}_{q}(P_{i,j})=qP_{i,j}\). We have a morphism of graded \(\mathbb{Q}_{\ell}\)-algebras \(S^{*}_{n,\ell}\to H^{*}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})\) defined in the same way as the morphism of graded \(\mathbb{Q}\)-algebras \(S^{*}_{n}\to H^{*}(\mathcal{X}_{g}^{\times n}(\mathbb{C}),\mathbb{Q})\) obtained from the above construction. The stable classes \(\pi_{*}u_{i}\), \(\pi_{i}^{*}\Theta\), and \(\pi_{i,j}^{*}P\) are of Tate type since they are formed through pullbacks and pushforwards of Chern classes; in particular, the above morphism is \(\operatorname{Frob}_{q}\)-equivariant.
**Proposition 2.3**.: A choice of embedding \(\overline{\mathbb{Q}}_{p}\hookrightarrow\mathbb{C}\) induces a sequence of functorial isomorphisms
\[H^{i}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell}) \xrightarrow{\sim}H^{i}(\mathcal{X}_{g,\mathbb{C}}^{\times n},\mathbb{Q}_{ \ell})\xrightarrow{\sim}H^{i}(\mathcal{X}_{g}^{\times n}(\mathbb{C}),\mathbb{Q} _{\ell})\]
under which the classes \(\pi_{*}u_{i}\), \(\pi_{i}^{*}\Theta\), and \(\pi_{i,j}^{*}P\) map to the same classes by functoriality.
Proof.: We employ [2, Lemma 8]: Let \(\overline{X}\) be a smooth proper scheme over \(\mathbb{Z}_{p}\), let \(D\) be a relative normal crossings divisor on \(\overline{X}\), let \(G\) be a finite group acting on \(\overline{X}\) and on \(D\), let \(X=\overline{X}-D\), and
let \(\mathcal{X}=[X/G]\) be the corresponding stack quotient. Then a choice of embedding \(\overline{\mathbb{Q}}_{p}\hookrightarrow\mathbb{C}\) induces a sequence of functorial isomorphisms \(H^{i}(\mathcal{X}_{\overline{\mathbb{F}}_{q}},\mathbb{Q}_{\ell})\xrightarrow{ \sim}H^{i}(\mathcal{X}_{\mathbb{C}},\mathbb{Q}_{\ell})\xrightarrow{\sim}H^{i} (\mathcal{X}(\mathbb{C}),\mathbb{Q}_{\ell})\).
Now let \(N\geq 3\) and for \(n\geq 1\) consider the \(n\)-th fiber power of the universal family \(X=\mathcal{X}_{g}[N]^{\times n}\) over \(\mathcal{A}_{g}[N]\), which is a smooth quasi-projective scheme over \(\mathbb{Z}_{p}\) for \(p\nmid N\). Consider the toroidal compactification \(\overline{X}=(\mathcal{X}_{g}[N]^{\times n})^{\mathrm{tor}}\): by [20, Chapter VI, Theorem 1.1] (or more generally by [38, Theorem 2.15(1)]) this is a smooth projective algebraic space over \(\mathbb{Z}_{p}\) for \(p\nmid N\) such that the complement \(D=\overline{X}-X\) is a relative (simple) normal crossings divisor. The natural action of the finite group \(G=\mathrm{Sp}_{2g}(\mathbb{Z}/N\mathbb{Z})\) on \(X\) extends to an action on \(\overline{X}\) and on \(D\), and the corresponding stack quotient is given by \(\mathcal{X}=[X/G]=\mathcal{X}_{g}^{\times n}\). Now the result follows, noting that [2, Lemma 8] still applies for algebraic spaces (when \(G\) is trivial the first isomorphism in the lemma follows from [43, Proposition 4.3] and the second isomorphism in the lemma follows from the comparison isomorphism [22, Theorem I.11.6], and in general the lemma follows from the Hochschild-Serre spectral sequence [42, Theorem 2.20], and all of these still apply for algebraic spaces).
Tensoring the isomorphism \(S^{i}_{n}\xrightarrow{\sim}H^{i}(\mathcal{X}_{g}^{\times n}(\mathbb{C}),\mathbb{Q})\) over \(\mathbb{Q}\) with \(\mathbb{Q}_{\ell}\) we obtain an isomorphism \(S^{i}_{n,\ell}\xrightarrow{\sim}H^{i}(\mathcal{X}_{g}^{\times n}(\mathbb{C}),\mathbb{Q}_{\ell})\) for all \(0\leq i\leq g-1\), which in particular does not depend on the choice of embedding \(\overline{\mathbb{Q}}_{p}\hookrightarrow\mathbb{C}\). Composing with the isomorphisms of 2.3, it follows that the morphism of graded \(\mathbb{Q}_{\ell}\)-algebras \(S^{*}_{n,\ell}\to H^{*}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})\) restricts to an isomorphism \(S^{i}_{n,\ell}\xrightarrow{\sim}H^{i}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})\) for all \(0\leq i\leq g-1\). In particular for \(0\leq i\leq g-1\) odd we have \(H^{2\dim(\mathcal{X}_{g}^{\times n})-i}_{\mathrm{c}}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})=0\), and for \(0\leq i\leq g-1\) even we have \(H^{2\dim(\mathcal{X}_{g}^{\times n})-i}_{\mathrm{c}}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})=\dim_{\mathbb{Q}_{\ell}}(S^{i}_{n,\ell})\,\mathbb{L}^{\dim(\mathcal{X}_{g}^{\times n})-\frac{i}{2}}\), by Poincare duality.
**Negligible Contributions to Point Counts as \(g\to\infty\).** Let \(R^{*}_{\mathrm{c}}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})\) be the subring of \(H^{*}_{\mathrm{c}}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})\) generated by the image of \(S^{*}_{n,\ell}\), and let \(B^{*}_{\mathrm{c}}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})=H^{*}_{\mathrm{c}}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})/R^{*}_{\mathrm{c}}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})\). We conjecture that the traces of Frobenius on the classes not in the image of \(S^{*}_{n,\ell}\) should be negligible in the limit \(g\to\infty\):
**Conjecture 2.4**.: (compare to [2, Heuristic 2]) For all \(n\geq 0\) we have
\[\lim_{g\to\infty}q^{-\dim(\mathcal{X}_{g}^{\times n})}\sum_{0\leq i\leq 2\dim( \mathcal{X}_{g}^{\times n})-g}(-1)^{i}\mathrm{tr}(\mathrm{Frob}_{q}|B^{i}_{ \mathrm{c}}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{ \ell}))=0\]
We now show that 2.4 implies 2.1, following the same strategy as in [2, Theorem 3] and using the results on cohomological stability reviewed above (following [32, Section 7]).
Now we break up the point count \(\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})\) into stable, unstable, and negligible contributions:
\[T^{\mathrm{stable}}_{g,n,q} :=\sum_{0\leq i\leq g-1}(-1)^{i}\mathrm{tr}(\mathrm{Frob}_{q}|H^{ 2\dim(\mathcal{X}_{g}^{\times n})-i}_{\mathrm{c}}(\mathcal{X}_{g,\overline{ \mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell}))\] \[=\sum_{0\leq i\leq g-1}(-1)^{i}\mathrm{tr}(\mathrm{Frob}_{q}|R^{ 2\dim(\mathcal{X}_{g}^{\times n})-i}_{\mathrm{c}}(\mathcal{X}_{g,\overline{ \mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell}))\] \[T^{\mathrm{unstable}}_{g,n,q} :=\sum_{g\leq i\leq 2\dim(\mathcal{X}_{g}^{\times n})}(-1)^{i} \mathrm{tr}(\mathrm{Frob}_{q}|R^{2\dim(\mathcal{X}_{g}^{\times n})-i}_{\mathrm{c} }(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell}))\] \[N_{g,n,q} :=\sum_{g\leq i\leq 2\dim(\mathcal{X}_{g}^{\times n})}(-1)^{i} \mathrm{tr}(\mathrm{Frob}_{q}|B^{2\dim(\mathcal{X}_{g}^{\times n})-i}_{\mathrm{c} }(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell}))\]
Then by definition we have
\[\#\mathcal{X}_{g}^{\times n}(\mathbb{F}_{q})=T_{g,n,q}^{\rm stable}+T_{g,n,q}^{ \rm unstable}+N_{g,n,q}\]
and the second conjecture is equivalent to the assertion that
\[\lim_{g\to\infty}q^{-\dim(\mathcal{X}_{g}^{\times n})}N_{g,n,q}=0\]
for all \(n\geq 0\). Consider the Hilbert-Poincare series
\[{\rm HS}_{S_{n}^{*}}(z):=\sum_{i\geq 0}\dim_{\mathbb{Q}}(S_{n}^{i})z^{i}=\prod_{1 \leq i\leq n}\frac{1}{1-z^{2}}\prod_{1\leq i<j\leq n}\frac{1}{1-z^{2}}\prod_{i \geq 1\text{ odd}}\frac{1}{1-z^{2i}}\]
Now since in the stable range \(0\leq i\leq g-1\) we have \(R_{\rm c}^{2\dim(\mathcal{X}_{g}^{\times n})-i}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell})\simeq S_{n,\ell}^{i}\otimes\mathbb{L}^{\dim(\mathcal{X}_{g}^{\times n})-\frac{i}{2}}\) for \(i\) even, and this group vanishes for \(i\) odd, we have
\[\lim_{g\to\infty}q^{-\dim(\mathcal{X}_{g}^{\times n})}T_{g,n,q}^{\rm stable} =\lim_{g\to\infty}q^{-\dim(\mathcal{X}_{g}^{\times n})}\sum_{0\leq i\leq g-1}(-1)^{i}{\rm tr}({\rm Frob}_{q}|R_{\rm c}^{2\dim(\mathcal{X}_{g}^{\times n})-i}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell}))\] \[=\lim_{g\to\infty}q^{-\dim(\mathcal{X}_{g}^{\times n})}\sum_{\begin{subarray}{c}0\leq i\leq g-1\\ i\text{ even}\end{subarray}}q^{\dim(\mathcal{X}_{g}^{\times n})-\frac{i}{2}}\dim_{\mathbb{Q}}(S_{n}^{i})\] \[=\lim_{g\to\infty}\sum_{\begin{subarray}{c}0\leq i\leq g-1\\ i\text{ even}\end{subarray}}q^{-\frac{i}{2}}\dim_{\mathbb{Q}}(S_{n}^{i})=\sum_{i\geq 0}q^{-i}\dim_{\mathbb{Q}}(S_{n}^{2i})={\rm HS}_{S_{n}^{*}}(q^{-\frac{1}{2}})\]
Let \(P_{\rm odd}(z)=\sum_{i\geq 0}p_{\rm odd}(i)z^{i}\) be the generating function for the odd partition numbers \(p_{\rm odd}(i)\) (the number of partitions of the integer \(i\) into odd parts), and let \(Q_{n}(z)=\sum_{i\geq 0}{n+i-1\choose i}z^{i}\) be the generating function for the binomial coefficients \({n+i-1\choose i}\) (the number of multisets of cardinality \(i\) on a set of \(n\) elements). Then we have \({\rm HS}_{S_{n}^{*}}(z)=Q_{\frac{n(n+1)}{2}}(z^{2})P_{\rm odd}(z^{2})\).
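As a quick sanity check on this identity, the dimensions \(\dim_{\mathbb{Q}}(S_{n}^{2i})\) can be tabulated directly from the generating function; the following short Python sketch (the helper name is ours, not from the paper) does this.

```python
from math import comb

def dims_Sn(n, max_i):
    """Return [dim S_n^0, dim S_n^2, ..., dim S_n^{2*max_i}] as the coefficients
    of z^{2i} in Q_{n(n+1)/2}(z^2) * P_odd(z^2)."""
    m = n * (n + 1) // 2
    # p_odd[i] = number of partitions of the integer i into odd parts.
    p_odd = [1] + [0] * max_i
    for part in range(1, max_i + 1, 2):
        for i in range(part, max_i + 1):
            p_odd[i] += p_odd[i - part]
    # coefficient of z^j in Q_m(z) is binomial(m + j - 1, j)
    return [sum(comb(m + j - 1, j) * p_odd[i - j] for j in range(i + 1))
            for i in range(max_i + 1)]

# dim S_1^{2i} for i = 0..6 (generators T_1 of degree 2 and lambda_1, lambda_3, lambda_5, ...):
print(dims_Sn(1, 6))   # [1, 2, 3, 5, 7, 10, 14]
```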
For the partition numbers \(p(i)\) (the number of partitions of the integer \(i\)) one has the exponential bound \(p_{\rm odd}(i)\leq p(i)\leq\exp(c\sqrt{i})\) for some constant \(c\) not depending on \(i\). In particular we have
\[\dim_{\mathbb{Q}}(S_{n}^{2i})=\sum_{0\leq j\leq i}{\frac{n(n+1)}{2}+j-1\choose j}p_{\rm odd}(i-j)\leq\exp(c_{n}\sqrt{i})\]
for some constant \(c_{n}\) not depending on \(i\). Since \(R_{\rm c}^{*}(\mathcal{X}_{g}^{\times n},\mathbb{Q}_{\ell})\) is defined in terms of the image of a morphism from \(S_{n}^{*}\) to cohomology we have \(\dim_{\mathbb{Q}_{\ell}}(R_{\rm c}^{2\dim(\mathcal{X}_{g}^{\times n})-2i}( \mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell}))\leq \dim_{\mathbb{Q}}(S_{n}^{2i})\), in particular we have \(\dim_{\mathbb{Q}_{\ell}}(R_{\rm c}^{2\dim(\mathcal{X}_{g}^{\times n})-2i}( \mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell}))\leq \exp(c_{n}\sqrt{i})\). Now we have
\[\lim_{g\to\infty}q^{-\dim(\mathcal{X}_{g}^{\times n})}|T_{g,n,q}^{\rm unstable}| =\lim_{g\to\infty}q^{-\dim(\mathcal{X}_{g}^{\times n})}\Big|\sum_{g\leq i\leq 2\dim(\mathcal{X}_{g}^{\times n})}(-1)^{i}{\rm tr}({\rm Frob}_{q}|R_{\rm c}^{2\dim(\mathcal{X}_{g}^{\times n})-i}(\mathcal{X}_{g,\overline{\mathbb{F}}_{q}}^{\times n},\mathbb{Q}_{\ell}))\Big|\] \[\leq\lim_{g\to\infty}\sum_{g\leq i\leq 2\dim(\mathcal{X}_{g}^{\times n})}q^{-\frac{i}{2}}\dim_{\mathbb{Q}}(S_{n}^{i})\] \[\leq\lim_{g\to\infty}\sum_{g\leq i\leq 2\dim(\mathcal{X}_{g}^{\times n})}q^{-\frac{i}{2}}\exp(c_{n}\sqrt{i})=0\]
Now suppose that \(\lim_{g\to\infty}q^{-\dim(\mathcal{X}_{g}^{\times n})}N_{g,n,q}=0\). Then we have
\[\lim_{g\to\infty}q^{-ng}\frac{\#\mathcal{X}_{g}^{\times n}(\mathbb{F }_{q})}{\#\mathcal{A}_{g}(\mathbb{F}_{q})} =\lim_{g\to\infty}q^{-ng}\frac{T^{\text{stable}}_{g,n,q}+T^{ \text{unstable}}_{g,n,q}+N_{g,n,q}}{T^{\text{stable}}_{g,0,q}+T^{\text{ unstable}}_{g,0,q}+N_{g,0,q}}\] \[=\frac{\operatorname{HS}_{S_{n}^{*}}(q^{-\frac{1}{2}})}{ \operatorname{HS}_{S^{*}}(q^{-\frac{1}{2}})}=\prod_{1\leq i\leq n}\frac{1}{1-q ^{-1}}\prod_{1\leq i<j\leq n}\frac{1}{1-q^{-1}}=\lambda^{\frac{n(n+1)}{2}}\]
so it follows that the second conjecture 2.4 on the negligible contribution of non-Tate classes to point counts implies the first conjecture 2.1 on the asymptotics of the distribution.
Expanded around \(q=\infty\), the conjecture 2.1 predicts the following leading terms for the expected values \(\mathbb{E}(\#A(\mathbb{F}_{q})^{n})\) in the limit \(g\to\infty\):
\[\lim_{g\to\infty}q^{-g}\mathbb{E}(\#A_{g}(\mathbb{F}_{q})) =1+q^{-1}+q^{-2}+q^{-3}+q^{-4}+\ldots\] \[\lim_{g\to\infty}q^{-2g}\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{2}) =1+3q^{-1}+6q^{-2}+10q^{-3}+15q^{-4}+\ldots\] \[\lim_{g\to\infty}q^{-3g}\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{3}) =1+6q^{-1}+21q^{-2}+56q^{-3}+126q^{-4}+\ldots\] \[\lim_{g\to\infty}q^{-4g}\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{4}) =1+10q^{-1}+55q^{-2}+220q^{-3}+715q^{-4}+\ldots\] \[\lim_{g\to\infty}q^{-5g}\mathbb{E}(\#A_{g}(\mathbb{F}_{q})^{5}) =1+15q^{-1}+120q^{-2}+680q^{-3}+3060q^{-4}+\ldots\]
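These expansions are simply binomial series: the coefficient of \(q^{-k}\) in \(\lambda^{\frac{n(n+1)}{2}}=(1-q^{-1})^{-\frac{n(n+1)}{2}}\) is \({\frac{n(n+1)}{2}+k-1\choose k}\). A few lines of Python (a sketch, not part of the paper's computations) reproduce the coefficients listed above.

```python
from math import comb

# coefficient of q^{-k} in (1 - q^{-1})^{-n(n+1)/2} is binomial(n(n+1)/2 + k - 1, k)
for n in range(1, 6):
    m = n * (n + 1) // 2
    print(n, [comb(m + k - 1, k) for k in range(5)])
# 1 [1, 1, 1, 1, 1]
# 2 [1, 3, 6, 10, 15]
# 3 [1, 6, 21, 56, 126]
# 4 [1, 10, 55, 220, 715]
# 5 [1, 15, 120, 680, 3060]
```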
## 3 Computations for \(g=1\)
Let \(\mathcal{A}_{1}\) be the moduli stack of elliptic curves, which is a smooth Deligne-Mumford stack of dimension \(1\) over \(\mathbb{Z}\). Let \(\pi:\mathcal{X}_{1}\to\mathcal{A}_{1}\) be the universal elliptic curve over \(\mathcal{A}_{1}\) and let \(\mathbb{V}=\mathbb{R}^{1}\pi_{*}\mathbb{Q}_{\ell}\) be the \(\ell\)-adic local system on \(\mathcal{A}_{1}\) corresponding to the standard representation of \(\operatorname{SL}_{2}\). For \(\lambda\geq 0\) an integer let \(\mathbb{V}_{\lambda}=\operatorname{Sym}^{\lambda}(\mathbb{V})\) be the \(\ell\)-adic local system on \(\mathcal{A}_{1}\) corresponding to the irreducible \(\lambda+1\)-dimensional representation of \(\operatorname{SL}_{2}\). For \(\lambda\) odd we have \(H^{*}(\mathcal{A}_{1},\mathbb{V}_{\lambda})=0\) since \(-\mathrm{id}\in\operatorname{SL}_{2}(\mathbb{Z})\) acts by multiplication by \((-1)^{\lambda}\) on the stalks of \(\mathbb{V}_{\lambda}\).
Let \(\mathbb{S}_{\Gamma(1)}[\lambda+2]=\bigoplus_{f}\rho_{f}\) be the \(\ell\)-adic Galois representation corresponding to cusp forms of weight \(\lambda+2\) for \(\Gamma(1)=\operatorname{SL}_{2}(\mathbb{Z})\): for each eigenform \(f\in S_{\lambda+2}(\Gamma(1))\) we have a \(2\)-dimensional \(\ell\)-adic Galois representation \(\rho_{f}\), and we have
\[\operatorname{tr}(\operatorname{Frob}_{p}|\mathbb{S}_{\Gamma(1)}[\lambda+2])= \operatorname{tr}(T_{p}|S_{\lambda+2}(\Gamma(1)))\]
for every prime \(p\), which determines \(\mathbb{S}_{\Gamma(1)}[\lambda+2]\) as an element of the Grothendieck group of \(\ell\)-adic Galois representations. The \(\ell\)-adic Galois representation \(\rho_{f}\) is irreducible as a representation of \(\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) and of \(\operatorname{Gal}(\overline{\mathbb{F}}_{p}/\mathbb{F}_{p})\).
By work of Eichler-Shimura and Deligne we have the following:
**Proposition 3.1**.: [8, Theorem 2.3] For \(\lambda>0\) even we have
\[e_{c}(\mathcal{A}_{1},\mathbb{V}_{\lambda})=-H_{c}^{1}(\mathcal{A}_{1}, \mathbb{V}_{\lambda})=-\mathbb{S}_{\Gamma(1)}[\lambda+2]-1\]
as an element of the Grothendieck group of \(\ell\)-adic Galois representations.
This remains true for \(\lambda=0\) if we set \(\mathbb{S}_{\Gamma(1)}[2]:=-\mathbb{L}-1\): we have
\[e_{\mathrm{c}}(\mathcal{A}_{1},\mathbb{Q}_{\ell})=H_{\mathrm{c}}^{2}(\mathcal{A }_{1},\mathbb{Q}_{\ell})=\mathbb{L}\]
We will use the following values for the Euler characteristics \(e_{\mathrm{c}}(\mathcal{A}_{1},\mathbb{V}_{\lambda})\), which are obtained by combining 3.1 with the vanishing of the spaces \(S_{\lambda+2}(\Gamma(1))\) for all \(0\leq\lambda\leq 9\):
\[\begin{array}{|c|c|}\hline\lambda&e_{\mathrm{c}}(\mathcal{A}_{1},\mathbb{V} _{\lambda})\\ \hline\hline 0&\mathbb{L}\\ \hline 2&-1\\ \hline 4&-1\\ \hline\end{array}\quad\begin{array}{|c|c|}\hline\lambda&e_{\mathrm{c}}( \mathcal{A}_{1},\mathbb{V}_{\lambda})\\ \hline\hline 6&-1\\ \hline 8&-1\\ \hline 10&-\mathbb{S}_{\Gamma(1)}[12]-1\\ \hline\end{array}\]
The space \(S_{12}(\Gamma(1))\) is spanned by the discriminant cusp form
\[\Delta=\sum_{n\geq 1}\tau(n)q^{n}=q-24q^{2}+252q^{3}-1472q^{4}+\ldots\]
which contributes an irreducible \(2\)-dimensional \(\ell\)-adic Galois representation \(\mathbb{S}_{\Gamma(1)}[12]\) to \(H^{1}(\mathcal{A}_{1},\mathbb{V}_{10})\), with the property that \(\mathrm{tr}(\mathrm{Frob}_{p}|\mathbb{S}_{\Gamma(1)}[12])=\tau(p)\), which is not polynomial in \(p\).
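For concreteness, the coefficients \(\tau(n)\) above can be recomputed from the standard product formula \(\Delta=q\prod_{n\geq 1}(1-q^{n})^{24}\); the short Python sketch below (not part of the Sage program mentioned in the acknowledgments) does this by truncated polynomial multiplication.

```python
def tau_values(num):
    """Return [tau(1), ..., tau(num)] from Delta = q * prod_{n>=1} (1 - q^n)^24."""
    prec = num                      # we need prod_{n>=1} (1 - q^n)^24 up to degree num - 1
    poly = [1] + [0] * (prec - 1)   # coefficients of q^0, ..., q^{prec-1}
    for n in range(1, prec):
        for _ in range(24):
            for i in range(prec - 1, n - 1, -1):   # multiply by (1 - q^n) in place
                poly[i] -= poly[i - n]
    return poly                     # after shifting by q, poly[k] is tau(k + 1)

print(tau_values(4))   # [1, -24, 252, -1472]
```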
We obtain the following result (compare to the tables at the end of [25]):
**Theorem 3.2**.: The cohomology \(H^{i}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) is Tate type for all \(i\) and all \(1\leq n\leq 9\) (see table 1). In this range the compactly supported Euler characteristics are given by:
\[e_{\mathrm{c}}(\mathcal{X}_{1},\mathbb{Q}_{\ell}) =\mathbb{L}^{2}+\mathbb{L}\] \[e_{\mathrm{c}}(\mathcal{X}_{1}^{\times 2},\mathbb{Q}_{\ell}) =\mathbb{L}^{3}+3\mathbb{L}^{2}+\mathbb{L}-1\] \[e_{\mathrm{c}}(\mathcal{X}_{1}^{\times 3},\mathbb{Q}_{\ell}) =\mathbb{L}^{4}+6\mathbb{L}^{3}+6\mathbb{L}^{2}-2\mathbb{L}-3\] \[e_{\mathrm{c}}(\mathcal{X}_{1}^{\times 4},\mathbb{Q}_{\ell}) =\mathbb{L}^{5}+10\mathbb{L}^{4}+20\mathbb{L}^{3}+4\mathbb{L}^{2} -14\mathbb{L}-7\] \[e_{\mathrm{c}}(\mathcal{X}_{1}^{\times 5},\mathbb{Q}_{\ell}) =\mathbb{L}^{6}+15\mathbb{L}^{5}+50\mathbb{L}^{4}+40\mathbb{L}^{3 }-30\mathbb{L}^{2}-49\mathbb{L}-15\] \[e_{\mathrm{c}}(\mathcal{X}_{1}^{\times 6},\mathbb{Q}_{\ell}) =\mathbb{L}^{7}+21\mathbb{L}^{6}+105\mathbb{L}^{5}+160\mathbb{L}^ {4}-183\mathbb{L}^{2}-139\mathbb{L}-31\] \[e_{\mathrm{c}}(\mathcal{X}_{1}^{\times 7},\mathbb{Q}_{\ell}) =\mathbb{L}^{8}+28\mathbb{L}^{7}+196\mathbb{L}^{6}+469\mathbb{L}^ {5}+280\mathbb{L}^{4}-427\mathbb{L}^{3}-700\mathbb{L}^{2}-356\mathbb{L}-63\] \[e_{\mathrm{c}}(\mathcal{X}_{1}^{\times 8},\mathbb{Q}_{\ell}) =\mathbb{L}^{9}+36\mathbb{L}^{8}+336\mathbb{L}^{7}+1148\mathbb{L }^{6}+1386\mathbb{L}^{5}-406\mathbb{L}^{4}-2436\mathbb{L}^{3}-2224\mathbb{L}^{ 2}-860\mathbb{L}-127\] \[e_{\mathrm{c}}(\mathcal{X}_{1}^{\times 9},\mathbb{Q}_{\ell}) =\mathbb{L}^{10}+45\mathbb{L}^{9}+540\mathbb{L}^{8}+2484 \mathbb{L}^{7}+4662\mathbb{L}^{6}+1764\mathbb{L}^{5}-6090\mathbb{L}^{4}-9804 \mathbb{L}^{3}-6372\mathbb{L}^{2}-2003\mathbb{L}-255\]
The cohomology \(H^{i}(\mathcal{X}_{1}^{\times 10},\mathbb{Q}_{\ell})\) is Tate type for all \(i\neq 11\) (see table 1), whereas for \(i=11\) we have
\[H^{11}(\mathcal{X}_{1}^{\times 10},\mathbb{Q}_{\ell})=\mathbb{S}_{\Gamma(1)}[12]+ \mathbb{L}^{11}+99\mathbb{L}^{10}+1925\mathbb{L}^{9}+12375\mathbb{L}^{8}+2970 \mathbb{L}^{7}\]
where \(\mathbb{S}_{\Gamma(1)}[12]\) is the \(2\)-dimensional Galois representation attached to the weight \(12\) cusp form \(\Delta\in S_{12}(\Gamma(1))\). In this case the compactly supported Euler characteristic is given by:
\[e_{\mathrm{c}}(\mathcal{X}_{1}^{\times 10},\mathbb{Q}_{\ell}) =-\mathbb{S}_{\Gamma(1)}[12]\] \[+\mathbb{L}^{11}+55\mathbb{L}^{10}+825\mathbb{L}^{9}+4905\mathbb{L }^{8}+12870\mathbb{L}^{7}+12264\mathbb{L}^{6}\] \[-9240\mathbb{L}^{5}-33210\mathbb{L}^{4}-33495\mathbb{L}^{3}-17095 \mathbb{L}^{2}-4553\mathbb{L}-511\]
In particular the compactly supported Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type if \(n\geq 10\).
Proof.: Follows by combining 1.3 and 1.5 with 3.1. In this case the multiplicities \(m_{\lambda}^{j,n}\) are easily computed using the fact that
\[\mathbb{V}_{\lambda_{1}}\otimes\mathbb{V}_{\lambda_{2}}=\mathbb{V}_{\lambda_{1}+ \lambda_{2}}\oplus\mathbb{V}_{\lambda_{1}+\lambda_{2}-2}\oplus\ldots\oplus \mathbb{V}_{|\lambda_{1}-\lambda_{2}|}\]
To argue that \(e_{\mathrm{c}}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type if \(n\geq 10\) note that \(H^{11}(\mathcal{X}_{1}^{\times 10},\mathbb{Q}_{\ell})\) (which is not Tate type, owing to the irreducible \(2\)-dimensional contribution \(\mathbb{S}_{\Gamma(1)}[12]\) to \(H^{1}(\mathcal{A}_{1},\mathbb{V}_{10})\)) appears as a summand in \(H^{11}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) for all \(n\geq 10\) by the Kunneth formula. This contribution cannot be cancelled in the Euler characteristic: since the contribution occurs in \(H^{i}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) for \(i\) odd, any contribution leading to cancellation would have to occur in \(H^{i}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) for \(i\) even. Since \(H^{*}(\mathcal{A}_{1},\mathbb{V}_{\lambda})=0\) for \(\lambda>0\) odd, any contribution to \(H^{i}(\mathcal{X}_{1}^{\times n},\mathbb{Q}_{\ell})\) for \(i\) even would have to come from a contribution to \(H^{0}(\mathcal{A}_{1},\mathbb{V}_{\lambda})\) (since \(H^{2}(\mathcal{A}_{1},\mathbb{V}_{\lambda})=0\) for all \(\lambda\geq 0\)), but there are no irreducible \(2\)-dimensional contributions in this case: the only irreducible \(2\)-dimensional contributions come from the contribution \(\mathbb{S}_{\Gamma(1)}[\lambda+2]\) to \(H^{1}(\mathcal{A}_{1},\mathbb{V}_{\lambda})\).
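The computation in the proof above is small enough to script directly. The following is a minimal sketch in plain Python (it is not the Sage program referred to elsewhere; all function names are ours) which expands the class \((\mathbb{V}_{0}-\mathbb{V}_{1}+\mathbb{V}_{0}\otimes\mathbb{L})^{\otimes n}\) in the Grothendieck group of \(\ell\)-adic local systems on \(\mathcal{A}_{1}\) (with Tate twists) using the Clebsch-Gordan rule above, and then applies the table of Euler characteristics \(e_{\mathrm{c}}(\mathcal{A}_{1},\mathbb{V}_{\lambda})\) from this section.

```python
from collections import defaultdict

def tensor(a, b):
    """SL_2 Clebsch-Gordan rule: V_i (x) V_j = sum_{k=0}^{min(i,j)} V_{i+j-2k} (x) L^k.
    Classes are stored as dicts {(lam, k): multiplicity}, meaning V_lam (x) L^k."""
    out = defaultdict(int)
    for (i, s), m in a.items():
        for (j, t), c in b.items():
            for k in range(min(i, j) + 1):
                out[(i + j - 2 * k, s + t + k)] += m * c
    return dict(out)

# e_c(A_1, V_lam) from the table above: a polynomial in L and a multiple of S_{Gamma(1)}[12].
EC_A1 = {0: ({1: 1}, 0), 2: ({0: -1}, 0), 4: ({0: -1}, 0),
         6: ({0: -1}, 0), 8: ({0: -1}, 0), 10: ({0: -1}, -1)}

def euler_char_X1(n):
    """e_c(X_1^{x n}, Q_l) for 1 <= n <= 10, as ({power of L: coeff}, coeff of S[12])."""
    assert 1 <= n <= 10, "the table above only covers lambda <= 10"
    factor = {(0, 0): 1, (1, 0): -1, (0, 1): 1}     # V_0 - V_1 + V_0 (x) L per elliptic factor
    total = {(0, 0): 1}
    for _ in range(n):
        total = tensor(total, factor)
    poly, cusp = defaultdict(int), 0
    for (lam, k), mult in total.items():
        if lam % 2 == 1:                            # e_c(A_1, V_lam) = 0 for lam odd
            continue
        ec_poly, ec_cusp = EC_A1[lam]
        for d, c in ec_poly.items():
            poly[d + k] += c * mult
        cusp += ec_cusp * mult                      # only lam = 10 contributes, with k = 0
    return dict(poly), cusp

# e_c(X_1^{x 2}) = L^3 + 3L^2 + L - 1 and no cusp form contribution:
print(euler_char_X1(2))
```

If the sketch is right, running it for \(1\leq n\leq 10\) should reproduce the Euler characteristics listed in Theorem 3.2, including the \(-\mathbb{S}_{\Gamma(1)}[12]\) term for \(n=10\).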
We obtain the following corollary:
**Corollary 3.3**.: The first \(9\) terms of the moment generating function \(M_{\#A_{1}(\mathbb{F}_{q})}(t)\) are rational functions in \(q\):
\[1 +(\mathbf{q}+\mathbf{1})t\] \[+(\mathbf{q}^{2}+\mathbf{3}\mathbf{q}+1-\tfrac{1}{q})\tfrac{t^{2}}{2!}\] \[+(\mathbf{q}^{3}+\mathbf{6}\mathbf{q}^{2}+6q-2-\tfrac{3}{q})\tfrac{t^{3}}{3!}\] \[+(\mathbf{q}^{4}+\mathbf{10}\mathbf{q}^{3}+20q^{2}+4q-14-\tfrac{7}{q})\tfrac{t^{4}}{4!}\] \[+(\mathbf{q}^{5}+\mathbf{15}\mathbf{q}^{4}+50q^{3}+40q^{2}-30q-49-\tfrac{15}{q})\tfrac{t^{5}}{5!}\] \[+(\mathbf{q}^{6}+\mathbf{21}\mathbf{q}^{5}+105q^{4}+160q^{3}-183q-139-\tfrac{31}{q})\tfrac{t^{6}}{6!}\] \[+(\mathbf{q}^{7}+\mathbf{28}\mathbf{q}^{6}+196q^{5}+469q^{4}+280q^{3}-427q^{2}-700q-356-\tfrac{63}{q})\tfrac{t^{7}}{7!}\] \[+(\mathbf{q}^{8}+\mathbf{36}\mathbf{q}^{7}+336q^{6}+1148q^{5}+1386q^{4}-406q^{3}-2436q^{2}-2224q-860-\tfrac{127}{q})\tfrac{t^{8}}{8!}\] \[+(\mathbf{q}^{9}+\mathbf{45}\mathbf{q}^{8}+540q^{7}+2484q^{6}+4662q^{5}+1764q^{4}-6090q^{3}-9804q^{2}-6372q-2003-\tfrac{255}{q})\tfrac{t^{9}}{9!}\]
Note that the first \(2\) coefficients in each of these terms (in bold) are consistent with 2.1.
## 4 Computations for \(g=2\)
Let \(\mathcal{A}_{2}\) be the moduli stack of principally polarized Abelian surfaces, which is a smooth Deligne-Mumford stack of dimension \(3\) over \(\mathbb{Z}\). Let \(\pi:\mathcal{X}_{2}\to\mathcal{A}_{2}\) be the universal Abelian surface over \(\mathcal{A}_{2}\) and let \(\mathbb{V}=\mathbb{R}^{1}\pi_{*}\mathbb{Q}_{\ell}\) be the \(\ell\)-adic local system on \(\mathcal{A}_{2}\) corresponding to the standard representation of \(\mathrm{Sp}_{4}\). For \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq 0)\) a dominant integral highest weight for \(\mathrm{Sp}_{4}\) let \(\mathbb{V}_{\lambda}\) be the \(\ell\)-adic local system on \(\mathcal{A}_{2}\) corresponding to the irreducible representation of \(\mathrm{Sp}_{4}\) of highest weight \(\lambda\), occurring in \(\mathrm{Sym}^{\lambda_{1}-\lambda_{2}}(\mathbb{V})\otimes\mathrm{Sym}^{ \lambda_{2}}(\wedge^{2}\mathbb{V})\). For \(\lambda_{1}+\lambda_{2}\) odd we have \(H^{*}(\mathcal{A}_{2},\mathbb{V}_{\lambda})=0\) since \(-\mathrm{id}\in\mathrm{Sp}_{4}(\mathbb{Z})\) acts by multiplication by \((-1)^{\lambda_{1}+\lambda_{2}}\) on the stalks of \(\mathbb{V}_{\lambda_{1},\lambda_{2}}\).
Let \(\mathbb{S}_{\Gamma(1)}[\lambda_{1}-\lambda_{2},\lambda_{2}+3]=\bigoplus_{F}\rho_{F}\) be the \(\ell\)-adic Galois representation corresponding to vector-valued Siegel cusp forms of weight \((\lambda_{1}-\lambda_{2},\lambda_{2}+3)\) for \(\Gamma(1)=\mathrm{Sp}_{4}(\mathbb{Z})\): for each eigenform \(F\in S_{\lambda_{1}-\lambda_{2},\lambda_{2}+3}(\Gamma(1))\) we have a \(4\)-dimensional \(\ell\)-adic Galois representation \(\rho_{F}\), and we have
\[\mathrm{tr}(\mathrm{Frob}_{p}|\mathbb{S}_{\Gamma(1)}[\lambda_{1}-\lambda_{2}, \lambda_{2}+3])=\mathrm{tr}(T_{p}|S_{\lambda_{1}-\lambda_{2},\lambda_{2}+3}( \Gamma(1)))\]
for every prime \(p\), which determines \(\mathbb{S}_{\Gamma(1)}[\lambda_{1}-\lambda_{2},\lambda_{2}+3]\) as an element of the Grothendieck group of \(\ell\)-adic Galois representations.
As a representation of \(\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) the \(\ell\)-adic Galois representation \(\rho_{F}\) need not be irreducible: it is reducible for instance when \(F\in S_{0,k}(\Gamma(1))\) is the Saito-Kurokawa lift of a cusp form \(f\in S_{2k-2}(\Gamma(1))\) (see [23, Theorem 21.1] for a description of the Saito-Kurokawa lift), in which case \(\rho_{F}\simeq\rho_{f}+\mathbb{L}^{k-1}+\mathbb{L}^{k-2}\) up to semisimplification. On the other hand if \(F\in S_{\lambda_{1}-\lambda_{2},\lambda_{2}+3}(\Gamma(1))\) is a vector-valued Siegel modular form of general type, the \(\ell\)-adic Galois representation \(\rho_{F}\) is irreducible as a representation of \(\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) and of \(\mathrm{Gal}(\overline{\mathbb{F}}_{p}/\mathbb{F}_{p})\) (see [51, Theorem I, Theorem III]). Write \(\mathbb{S}_{\Gamma(1)}^{\mathrm{gen}}[\lambda_{1}-\lambda_{2},\lambda_{2}+3]\) for the \(\ell\)-adic Galois representation corresponding to vector-valued Siegel cusp forms of general type.
By work of Petersen, using work of Harder [31] and Flicker [21] as input, we have the following:
**Proposition 4.1**.: [46, Theorem 2.1] (compare to [8, Conjecture 6.3]) for \(\lambda_{1}\geq\lambda_{2}\geq 0\) with \(\lambda_{1}+\lambda_{2}>0\) even we have
\[e_{\mathrm{c}}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})=- \mathbb{S}_{\Gamma(1)}[\lambda_{1}-\lambda_{2},\lambda_{2}+3]+e_{\mathrm{c}, \mathrm{extr}}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\]
as an element of the Grothendieck group of \(\ell\)-adic Galois representations, where \(e_{\mathrm{c},\mathrm{extr}}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_ {2}})\) is given by
\[e_{\mathrm{c},\mathrm{extr}}(\mathcal{A}_{2},\mathbb{V}_{\lambda _{1},\lambda_{2}}) =-s_{\Gamma(1)}[\lambda_{1}+\lambda_{2}+4]\mathbb{S}_{\Gamma(1)}[ \lambda_{1}-\lambda_{2}+2]\mathbb{L}^{\lambda_{2}+1}\] \[+s_{\Gamma(1)}[\lambda_{1}-\lambda_{2}+2]-s_{\Gamma(1)}[\lambda_{ 1}+\lambda_{2}+4]\mathbb{L}^{\lambda_{2}+1}\] \[+\begin{cases}\mathbb{S}_{\Gamma(1)}[\lambda_{2}+2]+1&\lambda_{1} \text{ even}\\ -\mathbb{S}_{\Gamma(1)}[\lambda_{1}+3]&\lambda_{1}\text{ odd}\end{cases}\]
where \(s_{\Gamma(1)}[k]\) is the dimension of the space of cusp forms of weight \(k\) for \(\Gamma(1)=\mathrm{SL}_{2}(\mathbb{Z})\) (where we set \(\mathbb{S}_{\Gamma(1)}[2]:=-\mathbb{L}-1\) and \(s_{\Gamma(1)}[2]:=-1\)).
This remains true for \((\lambda_{1},\lambda_{2})=(0,0)\) if we set \(\mathbb{S}_{\Gamma(1)}[0,3]:=-\mathbb{L}^{3}-\mathbb{L}^{2}-\mathbb{L}-1\): by [40, Corollary 5.2.3] we have
\[e_{\mathrm{c}}(\mathcal{A}_{2},\mathbb{Q}_{\ell})=\mathbb{L}^{3}+\mathbb{L}^{2}\]
We will use the following values for the Euler characteristics \(e_{\mathrm{c}}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\), which are obtained by combining 4.1 with the vanishing of the spaces \(S_{\lambda_{1}-\lambda_{2},\lambda_{2}+3}(\Gamma(1))\) for all \(\lambda_{1}\geq\lambda_{2}\geq 0\) with \(\lambda_{1},\lambda_{2}\leq 7\) except for \(\lambda_{1}=\lambda_{2}=7\):
\begin{tabular}{|c|c|} \hline \((\lambda_{1},\lambda_{2})\) & \(e_{\rm c}({\cal A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\) \\ \hline \hline \((0,0)\) & \(\mathbb{L}^{3}+\mathbb{L}^{2}\) \\ \hline \((2,0)\) & \(-\mathbb{L}\) \\ \((1,1)\) & \(-1\) \\ \hline \((4,0)\) & \(-\mathbb{L}\) \\ \((3,1)\) & \(0\) \\ \((2,2)\) & \(0\) \\ \hline \((6,0)\) & \(-\mathbb{L}\) \\ \((5,1)\) & \(0\) \\ \((4,2)\) & \(1\) \\ \((3,3)\) & \(-1\) \\ \hline \end{tabular}
\begin{tabular}{|c|c|} \hline \((\lambda_{1},\lambda_{2})\) & \(e_{\rm c}({\cal A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\) \\ \hline \hline \((7,1)\) & \(-\mathbb{L}^{2}\) \\ \((6,2)\) & \(-\mathbb{L}^{3}+1\) \\ \((5,3)\) & \(-\mathbb{L}^{4}\) \\ \((4,4)\) & \(\mathbb{L}^{6}\) \\ \hline \((7,3)\) & \(0\) \\ \((6,4)\) & \(1\) \\ \((5,5)\) & \(-1\) \\ \hline \((7,5)\) & \(-\mathbb{L}^{6}\) \\ \((6,6)\) & \(\mathbb{L}^{8}\) \\ \hline \((7,7)\) & \(-\mathbb{S}_{\Gamma(1)}[18]-\mathbb{L}^{8}-1\) \\ \hline \end{tabular}
The space \(S_{0,10}(\Gamma(1))\) is spanned by the Igusa cusp form (see [44]):
\[\chi_{10} =(q^{-1}-2+q)q_{1}q_{2}-(2q^{-2}+16q^{-1}-36+16q+2q^{2})(q_{1}^{2} q_{2}+q_{1}q_{2}^{2})\] \[+(q^{-3}+36q^{-2}+99q^{-1}-272+99q+36q^{2}+q^{3})(q_{1}^{3}q_{2}+ q_{1}q_{2}^{3})\] \[+(4q^{-3}+72q^{-2}+252q^{-1}-656+252q+72q^{2}+4q^{3})q_{1}^{2}q_{2 }^{2}+\ldots\]
which is a Saito-Kurokawa lift of the weight \(18\) cusp form \(f_{18}=\Delta E_{6}\in S_{18}(\Gamma(1))\) and contributes an irreducible \(2\)-dimensional \(\ell\)-adic Galois representation \(\mathbb{S}_{\Gamma(1)}[18]\) to \(H^{3}({\cal A}_{2},\mathbb{V}_{7,7})\) (see for example [46, 4.3.5]) with the property that \(\operatorname{tr}(\operatorname{Frob}_{p}|\mathbb{S}_{\Gamma(1)}[18])=\lambda _{p}(f_{18})\) (the eigenvalue of the Hecke operator \(T_{p}\) on \(f_{18}\)), which is not polynomial in \(p\); the remaining summands \(\mathbb{L}^{9}\) and \(\mathbb{L}^{8}\) of the \(4\)-dimensional \(\ell\)-adic Galois representation \(\mathbb{S}_{\Gamma(1)}[0,10]=\mathbb{S}_{\Gamma(1)}[18]+\mathbb{L}^{9}+ \mathbb{L}^{8}\) do not contribute to \(H^{3}({\cal A}_{2},\mathbb{V}_{7,7})\).
We will use another contribution which does not appear in the above table but which was mentioned in the introduction. The space \(S_{6,8}(\Gamma(1))\) is spanned by the vector-valued cusp form (see [17, Section 8])
\[\chi_{6,8} =\begin{pmatrix}0\\ 0\\ q^{-1}-2+q\\ 2(q-q^{-1})\\ q^{-1}-2+q\\ 0\end{pmatrix}q_{1}q_{2}+\begin{pmatrix}0\\ -2(q^{-2}+8q^{-1}-18+8q+q^{2})\\ 8(q^{-2}+4q^{-1}-4q^{2})\\ -2(7q^{-2}-4q^{-1}-6-4q+7q^{2})\\ 12(q^{-2}-2q^{-1}+2q^{-2})\\ -4(q^{-2}-2q^{-1}+6-4q+q^{2})\end{pmatrix}q_{1}q_{2}^{2}\] \[+\begin{pmatrix}-4(q^{-2}-4q^{-1}+6-4q+q^{2})\\ 12(q^{-2}-2q^{-1}+2q-q^{2})\\ -2(7q^{-2}-4q^{-1}-6-4q+q^{2})\\ -4(q^{-2}-4q^{-1}+6-4q+q^{2})\end{pmatrix}q_{1}^{2}q_{2}+\begin{pmatrix}16(q^{-3}- 9q^{-1}+16-9q+q^{3})\\ -72(q^{-3}-3q^{-1}+3q-q^{3})\\ 128(q^{-3}-2+q^{3})\\ -144(q^{-3}+5q^{-1}-5q-q^{3})\\ 128(q^{-3}-2+q^{3})\\ -72(q^{-3}-3q^{-1}+3q-q^{3})\\ 16(q^{-3}-9q^{-1}+16-9q+q^{3})\end{pmatrix}q_{1}^{2}q_{2}^{2}+\ldots\]
which is of general type and contributes an irreducible \(4\)-dimensional \(\ell\)-adic Galois representation \(\mathbb{S}_{\Gamma(1)}[6,8]\) to \(H^{3}_{\rm c}({\cal A}_{2},\mathbb{V}_{11,5})\) (see for example [46, 4.3.1]) with the property that \(\operatorname{tr}(\operatorname{Frob}_{p}|\mathbb{S}_{\Gamma(1)}[6,8])=\lambda _{p}(\chi_{6,8})\) (the eigenvalue of the Hecke operator \(T_{p}\) acting on \(\chi_{6,8}\)) which is not polynomial in \(p\).
We obtain the following result:
**Theorem 4.2**.: The cohomology \(H^{i}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) is Tate type for all \(i\) and all \(1\leq n\leq 6\) (see table 2). In this range the compactly supported Euler characteristics are given by:
\[e_{\mathrm{c}}(\mathcal{X}_{2},\mathbb{Q}_{\ell}) =\mathbb{L}^{5}+2\mathbb{L}^{4}+2\mathbb{L}^{3}+\mathbb{L}^{2}-1\] \[e_{\mathrm{c}}(\mathcal{X}_{2}^{\times 2},\mathbb{Q}_{\ell}) =\mathbb{L}^{7}+4\mathbb{L}^{6}+9\mathbb{L}^{5}+9\mathbb{L}^{4}+3 \mathbb{L}^{3}-5\mathbb{L}^{2}-5\mathbb{L}-3\] \[e_{\mathrm{c}}(\mathcal{X}_{2}^{\times 3},\mathbb{Q}_{\ell}) =\mathbb{L}^{9}+7\mathbb{L}^{8}+27\mathbb{L}^{7}+49\mathbb{L}^{6} +46\mathbb{L}^{5}+3\mathbb{L}^{4}-42\mathbb{L}^{3}-53\mathbb{L}^{2}-24 \mathbb{L}-7\] \[e_{\mathrm{c}}(\mathcal{X}_{2}^{\times 4},\mathbb{Q}_{\ell}) =\mathbb{L}^{11}+11\mathbb{L}^{10}+65\mathbb{L}^{9}+191\mathbb{L }^{8}+320\mathbb{L}^{7}+257\mathbb{L}^{6}\] \[-65\mathbb{L}^{5}-425\mathbb{L}^{4}-474\mathbb{L}^{3}-273 \mathbb{L}^{2}-73\mathbb{L}-14\] \[e_{\mathrm{c}}(\mathcal{X}_{2}^{\times 5},\mathbb{Q}_{\ell}) =\mathbb{L}^{13}+16\mathbb{L}^{12}+135\mathbb{L}^{11}+590\mathbb{ L}^{10}+1525\mathbb{L}^{9}+2292\mathbb{L}^{8}+1527\mathbb{L}^{7}\] \[-1285\mathbb{L}^{6}-4219\mathbb{L}^{5}-4730\mathbb{L}^{4}-2814 \mathbb{L}^{3}-923\mathbb{L}^{2}-135\mathbb{L}-21\] \[e_{\mathrm{c}}(\mathcal{X}_{2}^{\times 6},\mathbb{Q}_{\ell}) =\mathbb{L}^{15}+22\mathbb{L}^{14}+252\mathbb{L}^{13}+1540 \mathbb{L}^{12}+5683\mathbb{L}^{11}+13035\mathbb{L}^{10}+17779\mathbb{L}^{9}+8 660\mathbb{L}^{8}\] \[-17614\mathbb{L}^{7}-44408\mathbb{L}^{6}-48770\mathbb{L}^{5}-3066 7\mathbb{L}^{4}-10437\mathbb{L}^{3}-1391\mathbb{L}^{2}+142\mathbb{L}+2\]
The cohomology \(H^{i}(\mathcal{X}_{2}^{\times 7},\mathbb{Q}_{\ell})\) is Tate type for all \(i\neq 17\) (see table 2), whereas for \(i=17\) we have
\[H^{17}(\mathcal{X}_{2}^{\times 7},\mathbb{Q}_{\ell}) =\mathbb{S}_{\Gamma(1)}[18]+\mathbb{L}^{17}+1176\mathbb{L}^{15}+63 700\mathbb{L}^{13}+6860\mathbb{L}^{12}+321048\mathbb{L}^{11}+294440\mathbb{L }^{10}+\mathbb{L}^{9}\]
where \(\mathbb{S}_{\Gamma(1)}[18]\) is the \(2\)-dimensional \(\ell\)-adic Galois representation attached to the weight \(18\) cusp form \(f_{18}=\Delta E_{6}\in S_{18}(\Gamma(1))\). In this case the compactly supported Euler characteristic is given by:
\[e_{\mathrm{c}}(\mathcal{X}_{2}^{\times 7},\mathbb{Q}_{\ell}) =-\mathbb{S}_{\Gamma(1)}[18]\] \[+\mathbb{L}^{17}+29\mathbb{L}^{16}+434\mathbb{L}^{15}+3542 \mathbb{L}^{14}+17717\mathbb{L}^{13}+56924\mathbb{L}^{12}+118692\mathbb{L}^{1 1}+145567\mathbb{L}^{10}+37850\mathbb{L}^{9}\] \[-226570\mathbb{L}^{8}-487150\mathbb{L}^{7}-529851\mathbb{L}^{6}-3 42930\mathbb{L}^{5}-121324\mathbb{L}^{4}-9491\mathbb{L}^{3}+9018\mathbb{L}^{ 2}+3164\mathbb{L}+223\]
In particular the compactly supported Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type if \(n\geq 7\).
Proof.: Follows by combining 1.3 and 1.5 with 4.1. In this case we computed the multiplicities \(m_{\lambda}^{j,n}\) with a SAGE program (available on request).
To argue that \(e_{\mathrm{c}}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type if \(n\geq 7\) note that \(H^{17}(\mathcal{X}_{2}^{\times 7},\mathbb{Q}_{\ell})\) (which is not Tate type, owing to the irreducible \(2\)-dimensional contribution \(\mathbb{S}_{\Gamma(1)}[18]\) to \(H^{3}(\mathcal{A}_{2},\mathbb{V}_{7,7})\)) appears as a summand in \(H^{17}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) for all \(n\geq 7\) by the Kunneth formula. This contribution cannot be cancelled in the Euler characteristic, at least for \(7\leq n\leq 15\): since the contribution occurs in \(H^{i}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) for \(i\) odd, any contribution leading to cancellation would have to occur in \(H^{i}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) for \(i\) even. Since \(H^{*}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})=0\) for \(\lambda_{1}+\lambda_{2}>0\) odd, any contribution to \(H^{i}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) for \(i\) even would have to come from a contribution to \(H^{j}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\) for \(j=0,2,4\) (since \(H^{6}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})=0\) for all \(\lambda_{1}\geq\lambda_{2}\geq 0\)). The only irreducible \(2\)-dimensional contributions that occur in this way come from the contribution \(\mathbb{S}_{\Gamma(1)}[\lambda_{2}+2]\mathbb{L}^{\lambda_{1}+2}\) to \(H^{4}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\) (Poincare dual to the contribution \(\mathbb{S}_{\Gamma(1)}[\lambda_{2}+2]\) to \(H^{2}_{\mathrm{c}}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\) in [46, Theorem 2.1]), which would require \(\lambda_{2}=16\) for cancellation.
Now note that \(H^{19}(\mathcal{X}_{2}^{\times 11},\mathbb{Q}_{\ell})\) (which is not Tate type, owing to the irreducible \(4\)-dimensional contribution \(\mathbb{S}_{\Gamma(1)}[6,8]\) to \(H^{3}(\mathcal{A}_{2},\mathbb{V}_{11,5})\)) appears as a summand in \(H^{19}(\mathcal{X}_{2}^{\times n},\mathbb{Q}_{\ell})\) for all \(n\geq 11\) by the Kunneth formula. This contribution cannot be cancelled in the Euler characteristic: by the same reasoning as above any contribution leading to cancellation would have to come from a contribution to \(H^{j}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\) for \(j=0,2,4\), but there are no irreducible \(4\)-dimensional contributions in this case: the only irreducible \(4\)-dimensional contributions come from the contribution \(\mathbb{S}_{\Gamma(1)}^{\mathrm{gen}}[\lambda_{1}-\lambda_{2},\lambda_{2}+3]\) to \(H^{3}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\) (Poincare dual to the contribution \(\mathbb{S}_{\Gamma(1)}^{\mathrm{gen}}[\lambda_{1}-\lambda_{2},\lambda_{2}+3]\) to \(H^{3}_{\mathrm{c}}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1},\lambda_{2}})\) in [46, Theorem 2.1]).
Note that the contribution \(\mathbb{S}_{\Gamma(1)}[18]\) should always persist, but we cannot argue this without estimates on the multiplicities \(m_{\lambda}^{j,n}\).
We obtain the following corollary:
**Corollary 4.3**.: The first \(6\) terms of the moment generating function \(M_{\#A_{2}(\mathbb{F}_{q})}(t)\) are rational functions in \(q\):
\[1+(\mathbf{q^{2}}+\mathbf{q}+\mathbf{1}-\frac{1}{q^{3}+q^{2}})t\]
\[+(\mathbf{q^{4}}+\mathbf{3q^{3}}+\mathbf{6q^{2}}+3q-\frac{5q^{2}+5q+3}{q^{3}+q^{2}})\frac{t^{2}}{2!}\]
\[+(\mathbf{q^{6}}+\mathbf{6q^{5}}+\mathbf{21q^{4}}+28q^{3}-\frac{26q^{2}+24q+7}{q^{3}+q^{2}})\frac{t^{3}}{3!}\]
\[+(\mathbf{q^{8}}+\mathbf{10q^{7}}+\mathbf{55q^{6}}+136q^{5}+184q^{4}-\frac{86q^{2}+73q+14}{q^{3}+q^{2}})\frac{t^{4}}{4!}\]
\[+(\mathbf{q^{10}}+\mathbf{15q^{9}}+\mathbf{120q^{8}}+470q^{7}+1055q^{6}+1237q^{5}-\frac{195q^{2}+135q+21}{q^{3}+q^{2}})\frac{t^{5}}{5!}\]
\[+(\mathbf{q^{12}}+\mathbf{21q^{11}}+\mathbf{231q^{10}}+1309q^{9}+4374q^{8}+8661q^{7}+9118q^{6}-458q^{5}-17156q^{4}-27252q^{3}-21518q^{2}-9149q-1288-\frac{103q^{2}-142q-2}{q^{3}+q^{2}})\frac{t^{6}}{6!}\]
Note that the first \(3\) coefficients in each of these terms (in bold) are consistent with 2.1.
## 5 Computations for \(g=3\)
Let \(\mathcal{A}_{3}\) be the moduli stack of principally polarized Abelian threefolds, which is a smooth Deligne-Mumford stack of dimension \(6\) over \(\mathbb{Z}\). Let \(\pi:\mathcal{X}_{3}\to\mathcal{A}_{3}\) be the universal Abelian threefold over \(\mathcal{A}_{3}\) and let \(\mathbb{V}=\mathbb{R}^{1}\pi_{*}\mathbb{Q}_{\ell}\) be the \(\ell\)-adic local system on \(\mathcal{A}_{3}\) corresponding to the standard representation of \(\mathrm{Sp}_{6}\). For \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq 0)\) a dominant integral highest weight for \(\mathrm{Sp}_{6}\) let \(\mathbb{V}_{\lambda}\) be the \(\ell\)-adic local system on \(\mathcal{A}_{3}\) corresponding to the irreducible representation of \(\mathrm{Sp}_{6}\) of highest weight \(\lambda\), occurring in \(\mathrm{Sym}^{\lambda_{1}-\lambda_{2}}(\mathbb{V})\otimes\mathrm{Sym}^{ \lambda_{2}-\lambda_{3}}(\wedge^{2}\mathbb{V})\otimes\mathrm{Sym}^{\lambda_{3 }}(\wedge^{3}\mathbb{V})\). For \(\lambda_{1}+\lambda_{2}+\lambda_{3}\) odd we have \(H^{*}(\mathcal{A}_{3},\mathbb{V}_{\lambda})=0\) since \(-\mathrm{id}\in\mathrm{Sp}_{6}(\mathbb{Z})\) acts by multiplication by \((-1)^{\lambda_{1}+\lambda_{2}+\lambda_{3}}\) on the stalks of \(\mathbb{V}_{\lambda_{1},\lambda_{2},\lambda_{3}}\).
Let \(\mathbb{S}_{\Gamma(1)}[\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3}, \lambda_{3}+4]=\bigoplus_{F}\rho_{F}\) be the \(\ell\)-adic Galois representation corresponding to vector-valued Siegel cusp forms of weight \((\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3},\lambda_{3}+4)\) for \(\Gamma(1)=\mathrm{Sp}_{6}(\mathbb{Z})\): for each eigenform \(F\in S_{\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3},\lambda_{3}+4}(\Gamma (1))\) we have an \(8\)-dimensional \(\ell\)-adic Galois representation \(\rho_{F}\), and we have
\[\mathrm{tr}(\mathrm{Frob}_{p}|\mathbb{S}_{\Gamma(1)}[\lambda_{1}-\lambda_{2}, \lambda_{2}-\lambda_{3},\lambda_{3}+4])=\mathrm{tr}(T_{p}|S_{\lambda_{1}- \lambda_{2},\lambda_{2}-\lambda_{3},\lambda_{3}+4}(\Gamma(1)))\]
for every prime \(p\), which determines \(\mathbb{S}_{\Gamma(1)}[\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3}, \lambda_{3}+4]\) as an element of the Grothendieck group of \(\ell\)-adic Galois representations.
As a representation of \(\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) the \(\ell\)-adic Galois representation \(\rho_{F}\) need not be irreducible, for example if \(F\) is one of the lifts predicted by [8, Conjecture 7.7]. On the other hand if \(F\in S_{\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3},\lambda_{3}+4}(\Gamma (1))\) is a vector-valued Siegel cusp form of general type, the \(\ell\)-adic Galois representation \(\rho_{F}\) is predicted to be irreducible as a representation of \(\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\) and of \(\mathrm{Gal}(\overline{\mathbb{F}}_{p}/\mathbb{F}_{p})\).
Write \(\mathbb{S}^{\rm gen}_{\Gamma(1)}[\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3}, \lambda_{3}+4]\) for the \(\ell\)-adic Galois representation corresponding to vector-valued Siegel cusp forms of general type.
By work of Bergstrom-Faber-van der Geer, one conjectures the following:
**Conjecture 5.1**.: [8, Conjecture 7.1] For \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\) with \(\lambda_{1}+\lambda_{2}+\lambda_{3}>0\) even we have
\[e_{\rm c}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_{2},\lambda_{3}})= \mathbb{S}_{\Gamma(1)}[\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3}, \lambda_{3}+4]+e_{\rm c,extr}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_ {2},\lambda_{3}})\]
as an element of the Grothendieck group of \(\ell\)-adic Galois representations where \(e_{\rm c,extr}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_{2},\lambda_{3}})\) is given by
\[e_{\rm c,extr}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_ {2},\lambda_{3}}) =-e_{\rm c}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1}+1,\lambda_{2} +1})-e_{\rm c,extr}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1}+1,\lambda_{2}+1}) \otimes\mathbb{S}_{\Gamma(1)}[\lambda_{3}+2]\] \[+e_{\rm c}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1}+1,\lambda_{3} })+e_{\rm c,extr}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{1}+1,\lambda_{3}}) \otimes\mathbb{S}_{\Gamma(1)}[\lambda_{2}+3]\] \[-e_{\rm c}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{2},\lambda_{3}}) -e_{\rm c,extr}(\mathcal{A}_{2},\mathbb{V}_{\lambda_{2},\lambda_{3}})\otimes \mathbb{S}_{\Gamma(1)}[\lambda_{1}+4]\]
This remains true for \((\lambda_{1},\lambda_{2},\lambda_{3})=(0,0,0)\) if we set \(\mathbb{S}_{\Gamma(1)}[0,0,4]:=\mathbb{L}^{6}+\mathbb{L}^{5}+\mathbb{L}^{4}+2 \mathbb{L}^{3}+\mathbb{L}^{2}+\mathbb{L}+1\): by [30, Theorem 1] we have
\[e_{\rm c}(\mathcal{A}_{3},\mathbb{Q}_{\ell})=\mathbb{L}^{6}+\mathbb{L}^{5}+ \mathbb{L}^{4}+\mathbb{L}^{3}+1\]
As explained in [8, Section 8] this conjecture was made after extensive point counts for curves up to genus \(3\) over finite fields. In particular by [8, Remark 8.2] the conjecture is true for all \((\lambda_{1},\lambda_{2},\lambda_{3})\) with \(\lambda_{1}+\lambda_{2}+\lambda_{3}\leq 6\) on the basis of these point counts since \(S_{\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3},\lambda_{3}+4}(\Gamma(1))\) has dimension \(0\) in these cases by [48]. In view of [12, Theorem 1.9], using the classification results of Chenevier-Taibi [16], the conjecture is true for all \((\lambda_{1},\lambda_{2},\lambda_{3})\) with \(\lambda_{1}+\lambda_{2}+\lambda_{3}\leq 10\) on the basis of these point counts. The conjecture is claimed to be proven unconditionally by unpublished work of Taibi [49].
We will use the following values for the Euler characteristics \(e_{\rm c}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_{2},\lambda_{3}})\), which are obtained by combining 5.1 with the vanishing \(S_{\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3},\lambda_{3}+4}(\Gamma(1))\) for all \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq 0\) with \(\lambda_{1},\lambda_{2},\lambda_{3}\leq 6\) obtained by [48] (compare to the tables at the end of [8]):
\begin{tabular}{|c|c|}
\hline
\((\lambda_{1},\lambda_{2},\lambda_{3})\) & \(e_{\rm c}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_{2},\lambda_{3}})\) \\
\hline \hline
\((0,0,0)\) & \(\mathbb{L}^{6}+\mathbb{L}^{5}+\mathbb{L}^{4}+\mathbb{L}^{3}+1\) \\
\hline
\((2,0,0)\) & \(-\mathbb{L}^{3}-\mathbb{L}^{2}\) \\
\((1,1,0)\) & \(-\mathbb{L}\) \\
\hline
\((4,0,0)\) & \(-\mathbb{L}^{3}-\mathbb{L}^{2}\) \\
\((3,1,0)\) & \(0\) \\
\((2,2,0)\) & \(0\) \\
\((2,1,1)\) & \(1\) \\
\hline
\((6,0,0)\) & \(-2\mathbb{L}^{3}-\mathbb{L}^{2}\) \\
\((5,1,0)\) & \(-\mathbb{L}^{4}\) \\
\((4,2,0)\) & \(-\mathbb{L}^{5}+\mathbb{L}\) \\
\((4,1,1)\) & \(1\) \\
\((3,3,0)\) & \(\mathbb{L}^{7}-\mathbb{L}\) \\
\((3,2,1)\) & \(0\) \\
\((2,2,2)\) & \(1\) \\
\hline
\((6,2,0)\) & \(\mathbb{L}\) \\
\((6,1,1)\) & \(-\mathbb{L}^{2}+1\) \\
\((5,3,0)\) & \(0\) \\
\((5,2,1)\) & \(0\) \\
\((4,4,0)\) & \(0\) \\
\((4,3,1)\) & \(0\) \\
\((4,2,2)\) & \(\mathbb{L}^{4}\) \\
\((3,3,2)\) & \(-\mathbb{L}^{6}+1\) \\
\hline
\end{tabular}
We will use another contribution which does not appear in the above table. For \(\lambda=(9,6,3)\) we have a contribution from an \(8\)-dimensional \(\ell\)-adic Galois representation \({\mathbb{S}}_{\Gamma(1)}[3,3,7]\) which decomposes into a \(1\)-dimensional \(\ell\)-adic Galois representation of Tate type and an irreducible \(7\)-dimensional \(\ell\)-adic Galois representation (see [8, Example 9.1]), which is explained by a functorial lift from the exceptional group \({\rm G}_{2}\) predicted by [26].
The Langlands correspondence predicts in this case that an irreducible \(8\)-dimensional Galois representation \(\rho:{\rm Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\to{\rm GL}_{8}(\overline{ \mathbb{Q}}_{\ell})\) (which is the composition of a \({\rm Spin}_{7}\) Galois representation \(\rho^{\prime}:{\rm Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\to{\rm Spin}_{7}( \overline{\mathbb{Q}}_{\ell})=\overline{\rm PGSp}_{6}\) with the \(8\)-dimensional spin representation \({\rm spin}:{\rm Spin}_{7}(\overline{\mathbb{Q}}_{\ell})\to{\rm GL}_{8}(\overline{ \mathbb{Q}}_{\ell})\)) contributing to the cohomology \(H^{*}({\cal A}_{3},{\mathbb{V}}_{\lambda})\) must come from a packet of cuspidal automorphic representations \(\pi\) of \({\rm PGSp}_{6}(\mathbb{A}_{\mathbb{Q}})\) with \(\pi_{\infty}|_{{\rm Sp}_{6}(\mathbb{R})}\) varying over all members of a discrete series L-packet. As the \((\mathfrak{sp}_{6},{\rm U}(3))\)-cohomology of such discrete series representations is concentrated in degree \(3\) by [50], such a contribution can only occur in \(H^{6}({\cal A}_{3},{\mathbb{V}}_{\lambda})\).
As explained in [26], any such \(\rho^{\prime}:{\rm Gal}(\overline{\mathbb{Q}}/\mathbb{Q})\to{\rm Spin}_{7}( \overline{\mathbb{Q}}_{\ell})\) factoring through the inclusion \(\widehat{\rm G}_{2}={\rm G}_{2}(\overline{\mathbb{Q}}_{\ell})\hookrightarrow{ \rm Spin}_{7}(\overline{\mathbb{Q}}_{\ell})=\overline{\rm PGSp}_{6}\) of the stabilizer of a non-isotropic vector in the \(8\)-dimensional spin representation must come from a packet of cuspidal automorphic representations \(\pi\) of \({\rm G}_{2}(\mathbb{A}_{\mathbb{Q}})\) which lifts to a packet of cuspidal automorphic representations \(\pi^{\prime}\) of \({\rm PGSp}_{6}(\mathbb{A}_{\mathbb{Q}})\) with \(\pi^{\prime}_{\infty}|_{{\rm Sp}_{6}(\mathbb{R})}\) varying over all but one member of a discrete series L-packet, and again such a contribution can only occur in \(H^{6}({\cal A}_{3},{\mathbb{V}}_{\lambda})\); the remaining \(1\)-dimensional Tate-type contribution comes from the cycle class of a Hilbert modular threefold in this Siegel modular \(6\)-fold.
We record these predictions as the following conjecture:
**Conjecture 5.2**.: Any irreducible \(\ell\)-adic Galois representation of dimension \(7\) or \(8\) occurring in \(H^{*}(\mathcal{A}_{3},\mathbb{V}_{\lambda})\) can only occur in \(H^{6}(\mathcal{A}_{3},\mathbb{V}_{\lambda})\).
We obtain the following result, which is unconditional for \(1\leq n\leq 3\) on the basis of point counts (but is very much conditional on the above conjectures in the case \(n\geq 4\)):
**Theorem 5.3**.: Assume conjectures 5.1 and 5.2. Then the compactly supported Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) is Tate type for all \(1\leq n\leq 5\), and these Euler characteristics are given by:
\[e_{\mathrm{c}}(\mathcal{X}_{3},\mathbb{Q}_{\ell})=\mathbb{L}^{9}+2\mathbb{L}^{8}+3\mathbb{L}^{7}+4\mathbb{L}^{6}+3\mathbb{L}^{5}+2\mathbb{L}^{4}+2\mathbb{L}^{3}+1\]
\[e_{\mathrm{c}}(\mathcal{X}_{3}^{\times 2},\mathbb{Q}_{\ell})=\mathbb{L}^{12}+4\mathbb{L}^{11}+10\mathbb{L}^{10}+20\mathbb{L}^{9}+25\mathbb{L}^{8}+24\mathbb{L}^{7}+17\mathbb{L}^{6}+\mathbb{L}^{5}-8\mathbb{L}^{4}-4\mathbb{L}^{3}-\mathbb{L}^{2}+4\mathbb{L}+5\]
\[e_{\mathrm{c}}(\mathcal{X}_{3}^{\times 3},\mathbb{Q}_{\ell})=\mathbb{L}^{15}+7\mathbb{L}^{14}+28\mathbb{L}^{13}+84\mathbb{L}^{12}+164\mathbb{L}^{11}+237\mathbb{L}^{10}+260\mathbb{L}^{9}+164\mathbb{L}^{8}-21\mathbb{L}^{7}-171\mathbb{L}^{6}-212\mathbb{L}^{5}-107\mathbb{L}^{4}+47\mathbb{L}^{3}+99\mathbb{L}^{2}+75\mathbb{L}+29\]
\[e_{\mathrm{c}}(\mathcal{X}_{3}^{\times 4},\mathbb{Q}_{\ell})=\mathbb{L}^{18}+11\mathbb{L}^{17}+66\mathbb{L}^{16}+286\mathbb{L}^{15}+835\mathbb{L}^{14}+1775\mathbb{L}^{13}+2906\mathbb{L}^{12}+3480\mathbb{L}^{11}+2476\mathbb{L}^{10}-415\mathbb{L}^{9}-3846\mathbb{L}^{8}-5322\mathbb{L}^{7}-3781\mathbb{L}^{6}-597\mathbb{L}^{5}+2146\mathbb{L}^{4}+2877\mathbb{L}^{3}+1887\mathbb{L}^{2}+757\mathbb{L}+162\]
\[e_{\mathrm{c}}(\mathcal{X}_{3}^{\times 5},\mathbb{Q}_{\ell})=\mathbb{L}^{21}+16\mathbb{L}^{20}+136\mathbb{L}^{19}+816\mathbb{L}^{18}+3380\mathbb{L}^{17}+10182\mathbb{L}^{16}+23578\mathbb{L}^{15}+42433\mathbb{L}^{14}+57157\mathbb{L}^{13}+47250\mathbb{L}^{12}-5213\mathbb{L}^{11}-84003\mathbb{L}^{10}-137082\mathbb{L}^{9}-124223\mathbb{L}^{8}-52325\mathbb{L}^{7}+33070\mathbb{L}^{6}+83756\mathbb{L}^{5}+83816\mathbb{L}^{4}+53066\mathbb{L}^{3}+22340\mathbb{L}^{2}+6134\mathbb{L}+891\]
The compactly supported Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times 6},\mathbb{Q}_{\ell})\) is given by:
\[e_{\mathrm{c}}(\mathcal{X}_{3}^{\times 6},\mathbb{Q}_{\ell})=(\mathbb{L}^{6}+21\mathbb{L}^{5}+120\mathbb{L}^{4}+280\mathbb{L}^{3}+309\mathbb{L}^{2}+161\mathbb{L}+32)\mathbb{S}_{\Gamma(1)}[0,10]\]
\[+\mathbb{L}^{24}+22\mathbb{L}^{23}+253\mathbb{L}^{22}+2024\mathbb{L}^{21}+11362\mathbb{L}^{20}+46613\mathbb{L}^{19}+146665\mathbb{L}^{18}+364262\mathbb{L}^{17}+720246\mathbb{L}^{16}+1084698\mathbb{L}^{15}+1036149\mathbb{L}^{14}+38201\mathbb{L}^{13}\]
\[-1876517\mathbb{L}^{12}-3672164\mathbb{L}^{11}-4024657\mathbb{L}^{10}-2554079\mathbb{L}^{9}+101830\mathbb{L}^{8}+2028655\mathbb{L}^{7}+2921857\mathbb{L}^{6}+2536864\mathbb{L}^{5}+1553198\mathbb{L}^{4}+687157\mathbb{L}^{3}+215631\mathbb{L}^{2}+45035\mathbb{L}+4930\]
where \(\mathbb{S}_{\Gamma(1)}[0,10]=\mathbb{S}_{\Gamma(1)}[18]+\mathbb{L}^{9}+\mathbb{ L}^{8}\) is the \(4\)-dimensional \(\ell\)-adic Galois representation attached to the Saito-Kurokawa lift \(\chi_{10}\in S_{0,10}(\Gamma(1))\) of the weight \(18\) cusp form \(f_{18}=\Delta E_{6}\in S_{18}(\Gamma(1))\). In particular the compactly supported Euler characteristic \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type if \(n\geq 6\).
Proof.: Follows by combining 1.3 and 1.5 with 5.1. In this case we computed the multiplicities \(m_{\lambda}^{j,n}\) with a SAGE program (available on request).
To argue that \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) is not Tate type if \(n\geq 6\) note that \(H^{24}(\mathcal{X}_{3}^{\times 9},\mathbb{Q}_{\ell})\) (which is not Tate type, owing to the \(8\)-dimensional contribution \(\mathbb{S}_{\Gamma(1)}[3,3,7]\) to \(H^{6}(\mathcal{A}_{3},\mathbb{V}_{9,6,3})\), which decomposes into a \(1\)-dimensional contribution and an irreducible \(7\)-dimensional contribution) appears as a summand in \(H^{24}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) for all \(n\geq 9\) by the Kunneth formula. This contribution cannot be cancelled in the Euler characteristic: since the contribution occurs in \(H^{i}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) for \(i\) even, any contribution leading to cancellation would have to occur in \(H^{i}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) for \(i\) odd. Since \(H^{*}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_{2},\lambda_{3}})=0\) for \(\lambda_{1}+\lambda_{2}+\lambda_{3}>0\) odd, any contribution to \(H^{i}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) for \(i\) odd would have to come from a contribution to \(H^{j}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_{2},\lambda_{3}})\) for \(j=1,3,5,7,9,11\), but there are no irreducible \(7\)-dimensional contributions in this case: the only irreducible \(7\)-dimensional contributions come from the contributions to \(H^{6}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_{2},\lambda_{3}})\) predicted by [26]. The remaining cases \(n=7,8\) are checked by running the above computations further to see that the contribution \(\mathbb{S}_{\Gamma(1)}[0,10]\) persists.
Alternatively, note that \(H^{26}(\mathcal{X}_{3}^{\times 10},\mathbb{Q}_{\ell})\) (which is not Tate type, owing to the irreducible \(8\)-dimensional contributions \(\mathbb{S}_{\Gamma(1)}[2,2,6]\) and \(\mathbb{S}_{\Gamma(1)}[4,2,8]\) to \(H^{6}(\mathcal{A}_{3},\mathbb{V}_{10,8,2})\) and \(H^{6}(\mathcal{A}_{3},\mathbb{V}_{10,6,4})\) respectively, see [8, Table 1, Table 2]) appears as a summand in \(H^{26}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\) for all \(n\geq 10\) by the Kunneth formula. This contribution cannot be cancelled in the Euler characteristic by the same argument as above: the only irreducible \(8\)-dimensional contributions come from the contribution \(\mathbb{S}^{\mathrm{gen}}[\lambda_{1}-\lambda_{2},\lambda_{2}-\lambda_{3}, \lambda_{3}+4]\) to \(H^{6}(\mathcal{A}_{3},\mathbb{V}_{\lambda_{1},\lambda_{2},\lambda_{3}})\). The remaining cases \(n=7,8,9\) are checked by running the above computations further to see that the contribution \(\mathbb{S}_{\Gamma(1)}[0,10]\) persists. This makes the above argument a bit less conjectural by removing the dependence on the functorial lift from \(\mathrm{G}_{2}\). That being said, since the above computations are already conditional on conjectures 5.1 and 5.2, we do not try to further justify the predictions of the Langlands correspondence which we have used in the above argument.
The contribution \((\mathbb{L}^{6}+21\mathbb{L}^{5}+120\mathbb{L}^{4}+280\mathbb{L}^{3}+309 \mathbb{L}^{2}+161\mathbb{L}+32)\mathbb{S}_{\Gamma(1)}[0,10]\) to \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times 6},\mathbb{Q}_{\ell})\) comes from the following \(4\) contributions:
\[e_{\mathrm{c}}(\mathcal{A}_{3},\mathbb{V}_{6,6,6}) +(15\mathbb{L}^{2}+35\mathbb{L}+15)e_{\mathrm{c}}(\mathcal{A}_{3 },\mathbb{V}_{6,6,4})\] \[+(15\mathbb{L}^{4}+105\mathbb{L}^{3}+189\mathbb{L}^{2}+105 \mathbb{L}+15)e_{\mathrm{c}}(\mathcal{A}_{3},\mathbb{V}_{6,6,2})\] \[+(\mathbb{L}^{6}+21\mathbb{L}^{5}+105\mathbb{L}^{4}+175\mathbb{L }^{3}+105\mathbb{L}^{2}+21\mathbb{L}+1)e_{\mathrm{c}}(\mathcal{A}_{3},\mathbb{ V}_{6,6,0})\]
which explains why the coefficients in the polynomial \(\mathbb{L}^{6}+21\mathbb{L}^{5}+120\mathbb{L}^{4}+280\mathbb{L}^{3}+309 \mathbb{L}^{2}+161\mathbb{L}+32\) are not symmetric: it arises as the sum of \(4\) polynomials with symmetric coefficients of different degrees. Note that the contribution \(\mathbb{S}_{\Gamma(1)}[0,10]\) should always persist, but we cannot argue this without estimates on the multiplicities \(m_{\lambda}^{j,n}\).
We obtain the following corollary:
**Corollary 5.4**.: The first \(5\) terms of the moment generating function \(M_{\#A_{3}(\mathbb{F}_{q})}(t)\) are rational functions in \(q\):
\[1+(\mathbf{q}^{3}+\mathbf{q}^{2}+\mathbf{q}+\mathbf{1}+\frac{-q^{2}-q}{q^{6}+q^{5}+q^{4}+q^{3}+1})t\]
\[+(\mathbf{q}^{6}+3\mathbf{q}^{5}+6\mathbf{q}^{4}+10\mathbf{q}^{3}+6q^{2}+2q-2+\frac{-8q^{5}-14q^{4}-12q^{3}-7q^{2}+2q+7}{q^{6}+q^{5}+q^{4}+q^{3}+1})\frac{t^{2}}{2!}\]
\[+(\mathbf{q}^{9}+6\mathbf{q}^{8}+21\mathbf{q}^{7}+56\mathbf{q}^{6}+81q^{5}+79q^{4}+43q^{3}-45q^{2}-119q-106+\frac{-23q^{5}+39q^{4}+110q^{3}+144q^{2}+194q+135}{q^{6}+q^{5}+q^{4}+q^{3}+1})\frac{t^{3}}{3!}\]
\[+(\mathbf{q}^{12}+10\mathbf{q}^{11}+55\mathbf{q}^{10}+220\mathbf{q}^{9}+550q^{8}+950q^{7}+1185q^{6}+785q^{5}-499q^{4}-2106q^{3}-2576q^{2}-1091q+807+\frac{1478q^{5}+2929q^{4}+4176q^{3}+4463q^{2}+1848q-645}{q^{6}+q^{5}+q^{4}+q^{3}+1})\frac{t^{4}}{4!}\]
\[+(\mathbf{q}^{15}+15\mathbf{q}^{14}+120\mathbf{q}^{13}+680\mathbf{q}^{12}+2565q^{11}+6817q^{10}+13515q^{9}+19521q^{8}+17184q^{7}-3650q^{6}-40833q^{5}-63521q^{4}-42593q^{3}+3203q^{2}+33402q+42708+\frac{45276q^{5}+71227q^{4}+52951q^{3}+19137q^{2}-27268q-41817}{q^{6}+q^{5}+q^{4}+q^{3}+1})\frac{t^{5}}{5!}\]
Note that the first \(4\) coefficients in each of these terms (in bold) are consistent with 2.1.
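The terms above are obtained by dividing \(e_{\mathrm{c}}(\mathcal{X}_{3}^{\times n},\mathbb{Q}_{\ell})\), specialized at \(\mathbb{L}=q\), by \(\#\mathcal{A}_{3}(\mathbb{F}_{q})=q^{6}+q^{5}+q^{4}+q^{3}+1\) (the point count corresponding to \(e_{\mathrm{c}}(\mathcal{A}_{3},\mathbb{Q}_{\ell})\) above) and writing the result as a polynomial quotient plus a proper remainder. The following sketch is not part of the original text; it uses sympy in place of the SAGE program mentioned above and checks only the \(n=1\) term from the value of \(e_{\mathrm{c}}(\mathcal{X}_{3},\mathbb{Q}_{\ell})\) in 5.3.

```python
# Sketch (assumption: sympy stands in for the SAGE computation mentioned above).
# The n-th moment is e_c(X_3^{x n})(q) divided by #A_3(F_q) = q^6+q^5+q^4+q^3+1;
# here we check the n = 1 term of Corollary 5.4.
from sympy import symbols, Poly, div

q = symbols('q')
card_A3 = Poly(q**6 + q**5 + q**4 + q**3 + 1, q)
ec_X3 = Poly(q**9 + 2*q**8 + 3*q**7 + 4*q**6 + 3*q**5 + 2*q**4 + 2*q**3 + 1, q)

quotient, remainder = div(ec_X3, card_A3, q)
print(quotient.as_expr())   # expected: q**3 + q**2 + q + 1
print(remainder.as_expr())  # expected: -q**2 - q
```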
|
2305.19998 | Efficient Shapley Values Estimation by Amortization for Text
Classification | Despite the popularity of Shapley Values in explaining neural text
classification models, computing them is prohibitive for large pretrained
models due to a large number of model evaluations. In practice, Shapley Values
are often estimated with a small number of stochastic model evaluations.
However, we show that the estimated Shapley Values are sensitive to random seed
choices -- the top-ranked features often have little overlap across different
seeds, especially on examples with longer input texts. This can only be
mitigated by aggregating thousands of model evaluations, which on the other
hand, induces substantial computational overheads. To mitigate the trade-off
between stability and efficiency, we develop an amortized model that directly
predicts each input feature's Shapley Value without additional model
evaluations. It is trained on a set of examples whose Shapley Values are
estimated from a large number of model evaluations to ensure stability.
Experimental results on two text classification datasets demonstrate that our
amortized model estimates Shapley Values accurately with up to 60 times speedup
compared to traditional methods. Furthermore, the estimated values are stable
as the inference is deterministic. We release our code at
https://github.com/yangalan123/Amortized-Interpretability. | Chenghao Yang, Fan Yin, He He, Kai-Wei Chang, Xiaofei Ma, Bing Xiang | 2023-05-31T16:19:13Z | http://arxiv.org/abs/2305.19998v1 | # Efficient Shapley Values Estimation by Amortization for Text Classification
###### Abstract
Despite the popularity of Shapley Values in explaining neural text classification models, computing them is prohibitive for large pretrained models due to a large number of model evaluations. In practice, Shapley Values are often estimated with a small number of stochastic model evaluations. However, we show that the estimated Shapley Values are sensitive to random seed choices - the top-ranked features often have little overlap across different seeds, especially on examples with longer input texts. This can only be mitigated by aggregating thousands of model evaluations, which on the other hand, induces substantial computational overheads. To mitigate the trade-off between stability and efficiency, we develop an amortized model that directly predicts each input feature's Shapley Value without additional model evaluations. It is trained on a set of examples whose Shapley Values are estimated from a large number of model evaluations to ensure stability. Experimental results on two text classification datasets demonstrate that our amortized model estimates Shapley Values accurately with up to 60 times speedup compared to traditional methods. Furthermore, the estimated values are stable as the inference is deterministic. We release our code at [https://github.com/yangalan123/Amortized-Interpretability](https://github.com/yangalan123/Amortized-Interpretability).
## 1 Introduction
Many powerful natural language processing (NLP) models used in commercial systems only allow users to access model outputs. When these systems are applied in high-stakes domains, such as healthcare, finance, and law, it is essential to interpret how these models come to their decisions. To this end, post-hoc black-box explanation methods have been proposed to identify the input features that are most critical to model predictions Ribeiro et al. (2016); Lundberg and Lee (2017). A famous class of post-hoc black-box local explanation methods takes advantage of the Shapley Values Shapley (1953) to identify important input features, such as Shapley Value Sampling (SVS) Strumbelj and Kononenko (2010) and KernelSHAP (KS) Lundberg and Lee (2017). These methods typically start by sampling permutations of the input features ("_perturbation samples_") and aggregating model output changes over the perturbation samples. Then, they assign an _explanation score_ for each input feature to indicate its contribution to the prediction.
Despite the widespread usage of Shapley Values methods, we observe that when they are applied to text data, the estimated explanation score for each token varies significantly with the random seeds used for sampling. Figure 1 shows an example of interpreting a BERT-based sentiment classifier Devlin et al. (2019) on Yelp-Polarity dataset, a restaurant review dataset Zhang et al. (2015) by KS. The set of tokens with high explanation scores
Figure 1: Heatmaps of explanation scores of an example from Yelp-Polarity based on two runs of KernelSHAP (KS) using different random seeds. KS is run on a fine-tuned BERT model using \(200\) samples per instance (approx. \(3.47\)s per instance on average using a single A100 GPU, more than \(150\) times slower than one forward inference of the BERT model). The darker each token is, the higher its explanation score. Clearly, interpretation results are significantly different when using different seeds.
varies significantly when using different random seeds. They become stable only when the number of perturbation samples increases to more than 2,000. As KS requires model prediction for each perturbation sample, the inference cost can be substantial. For example, it takes about 183 seconds to interpret each instance in Yelp-Polarity using the KS Captum implementation (Kokhlikyan et al., 2020) on an A100 GPU. In addition, this issue becomes more severe when the input text gets longer, as more perturbation samples are needed for reliable estimation of Shapley Values. This sensitivity to the sampling process leads to an unreliable interpretation of the model predictions and hinders developers from understanding model behavior.
To achieve a better trade-off between efficiency and stability, we propose a simple yet effective amortization method to estimate the explanation scores. Motivated by the observation that different instances might share a similar set of important words (e.g., in sentiment classification, emotional words are strong label indicators (Taboada et al., 2011)), an amortized model can leverage similar interpretation patterns across instances when predicting the explanation scores. Specifically, we amortize the cost of computing explanation scores by precomputing them on a set of training examples and train an amortized model to predict the explanation scores given the input. At inference time, our amortized model directly outputs explanation scores for new instances. Although we need to collect a training set for every model we wish to interpret, our experiments show that with as few as 5000 training instances, the amortized model achieves high estimation accuracy. We show our proposed amortized model in Figure 2.
The experimental results demonstrate the efficiency and effectiveness of our approach. First, our model reduces the computation time from about 3.47s per instance to less than 50ms,1 which is 60 times faster than the baseline methods. Second, our model is robust to randomness in training (e.g., random initialization, random seeds used for generating reference explanation scores in the training dataset), and produces stable estimations over different random seeds. Third, we show that the amortized model can be used along with SVS to perform _local adaption_, i.e., adapting to specific instances at inference time, thus further improving performance if more computation is available (6.3). Finally, we evaluate our model from the functionality perspective (Doshi-Velez and Kim, 2017; Ye and Durrett, 2022) by examining the quality of the explanation in downstream tasks. We perform case studies on feature selection and domain calibration using the estimated explanation scores, and show that our method outperforms the computationally expensive KS method.
Footnote 1: On Yelp-Polarity dataset and using A100 GPU, we compare with typical KS running with 200 samples.
## 2 Related Works
**Post-Hoc Local Explanation Methods** Post-hoc local explanations are proposed to understand the prediction process of neural models (Simonyan et al., 2014; Ribeiro et al., 2016; Lundberg and Lee, 2017; Shrikumar et al., 2017). They work by assigning an explanation score to each feature (e.g., a token) in an instance ("local") to indicate its contribution to the model prediction. In this paper, we focus on studying KernelSHAP (KS) (Lundberg and Lee, 2017), an _additive feature attribution method_ that estimates the Shapley Value (Shapley, 1953) for each feature.
Figure 2: Illustration of our proposed Amortized Model. Black-outlined circles represent original inputs without Shapley Values, while circles with colored outlines or colored fills denote inputs with Shapley Values.
There are other interpretability methods in NLP. For example, gradient-based methods Simonyan et al. (2014); Li et al. (2016), which use the gradient w.r.t. each input dimension as a measure for its saliency. Reference-based methods Shrikumar et al. (2017); Sundararajan et al. (2017) consider the model output difference between the original input and reference input (e.g., zero embedding vectors).
**Shapley Values Estimation** Shapley Values are concepts from game theory to attribute total contribution to individual features. However, in practice estimating Shapley values requires prohibitively high cost for computation, especially when explaining the prediction on long documents in NLP. KS works as an efficient way to approximate Shapley Values. Previous work on estimating Shapley Values mainly focuses on accelerating the sampling process Jethani et al. (2021); Covert and Lee (2021); Parvez and Chang (2021); Mitchell et al. (2022) or removing redundant features Aas et al. (2021); Covert et al. (2021). In this work, we propose a new method to combat this challenge by training an amortized model.
**Robustness of Local Explanation Methods** Despite being widely adopted, there has been a long discussion on the actual quality of explanation methods. Recently, people have found that explanation methods can assign substantially different attributions to similar inputs Alvarez-Melis and Jaakkola (2018); Ghorbani et al. (2019); Kindermans et al. (2019); Yeh et al. (2019); Slack et al. (2021); Yin et al. (2022), i.e., they are not robust enough, which adds to the concerns about how faithful these explanations are Doshi-Velez and Kim (2017); Adebayo et al. (2018); Jacovi and Goldberg (2020). In addition to previous work focusing on robustness against input perturbations, we demonstrate that even just changing the random seeds can cause the estimated Shapley Values to be weakly-correlated with each other, unless a large number of perturbation samples are used (which incurs high computational cost).
**Amortized Explanation Methods** Our method is similar to recent works on amortized explanation models including CXPlain Schwab and Karlen (2019) and FastSHAP Jethani et al. (2021)), where they also aim to improve the computational efficiency of explanation methods. The key differences are: 1) We do not make causal assumptions between input features and model outputs; and 2) we focus on text domains, where each feature is a discrete token (typical optimization methods for continuous variables do not directly apply).
## 3 Background
In this section, we briefly review the basics of Shapley Values, focusing on its application to the text classification task.
**Local explanation of black-box text classification models.** In text classification tasks, inputs are usually sequences of discrete tokens \(X=[w_{1},w_{2},\dots,w_{L}]\). Here \(L\) is the length of \(X\) and may vary across examples; \(w_{j}\) is the \(j\)-th token of \(X\). The classification model \(M_{\text{CLF}}\) takes the input \(X\) and predict the label as \(\hat{y}=\arg\max_{y\in\mathcal{Y}}M_{\text{CLF}}\left(X\right)[y]\). Local explanation methods treat each data instance independently and compute an explanation score \(\phi(j,y)\), representing the contribution of \(w_{j}\) to the label \(y\). Usually, we care about the explanation scores when \(y=\hat{y}\).
**Shapley Values (SV)** are concepts from game theory originally developed to assign credits in cooperative games Shapley (1953); Strumbelj and Kononenko (2010); Lundberg and Lee (2017); Covert et al. (2021). Let \(s\in\left\{0,1\right\}^{L}\) be a masking of the input and define \(X_{s}\stackrel{{\text{def}}}{{=}}\left\{w_{i}\right\}_{i:s_{i}=1}\) as the _perturbed input_ that consists of unmasked tokens \(x_{i}\) (where the corresponding mask \(s_{i}\) has a value of 1). In this paper, we follow the common practice Ye et al. (2021); Ye and Durrett (2022); Yin et al. (2022) to replace masked tokens with [PAD] in the input before sending it to the classifier. Let \(\left|s\right|\) represent the number of non-zero terms in \(s\). Shapley Values \(\phi_{\text{SV}}(i,y)\)Shapley (1953) are computed by:
\[\phi_{\text{SV}}(i,y)=\frac{1}{L}\sum_{s:s_{i}\neq 1}\binom{L-1}{\left|s\right|}^{-1}\left(M_{\text{CLF}}\left(X_{s}\cup\left\{w_{i}\right\}\right)\left[y\right]-M_{\text{CLF}}\left(X_{s}\right)\left[y\right]\right). \tag{1}\]
Intuitively, \(\phi_{\text{SV}}(i,y)\) computes the marginal contributions of each token to the model prediction.
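For concreteness, a minimal sketch of Eq. (1) by direct enumeration is given below; it is not the implementation used in this paper, and `model_prob(tokens, y)` is a hypothetical stand-in for \(M_{\text{CLF}}(\cdot)[y]\) with masked positions replaced by [PAD]. Its cost is exponential in \(L\), which is why the estimators discussed next are used in practice.

```python
# A minimal sketch (not the authors' code) of Eq. (1): exact Shapley Values by
# enumerating all maskings s with s_i = 0. model_prob(tokens, y) is an assumed
# stand-in for M_CLF(X_s)[y]; masked positions are replaced by [PAD].
from itertools import combinations
from math import comb

def exact_shapley(tokens, i, y, model_prob, pad="[PAD]"):
    L = len(tokens)
    others = [j for j in range(L) if j != i]
    phi = 0.0
    for size in range(L):                      # |s| ranges over 0, ..., L-1
        for subset in combinations(others, size):
            keep = set(subset)
            x_s = [w if j in keep else pad for j, w in enumerate(tokens)]
            x_si = [w if (j in keep or j == i) else pad for j, w in enumerate(tokens)]
            weight = 1.0 / (L * comb(L - 1, size))
            phi += weight * (model_prob(x_si, y) - model_prob(x_s, y))
    return phi
```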
Computing SV is known to be NP-hard Deng and Papadimitriou (1994). In practice, we estimate Shapley Values approximately for efficiency. Shapley Values Sampling (SVS) Castro et al. (2009); Strumbelj and Kononenko (2010) is a widely-used Monte-Carlo estimator of SV:
\[\phi_{\text{SVS}}(i,y)=\frac{1}{m}\sum_{\begin{subarray}{c}\sigma_{j}\in\Pi(L)\\ 1\leq j\leq m\end{subarray}}\left[M_{\text{CLF}}\left(X_{\mathbb{S}\left([\sigma_{j}]_{i-1}\cup\{i\}\right)}\right)\left[y\right]-M_{\text{CLF}}\left(X_{\mathbb{S}\left([\sigma_{j}]_{i-1}\right)}\right)\left[y\right]\right]. \tag{2}\]
Here \(\sigma_{j}\in\Pi(L)\) is the sampled **ordering** and \([\sigma_{j}]\) is the non-ordered **set** of indices for \(\sigma_{j}\). \([\sigma_{j}]_{i-1}\) represents the **set** of indices ranked lower than \(i\) in \(\sigma_{j}\). \(\mathbb{S}([\sigma_{j}])\) maps the indices set \([\sigma_{j}]\) to a mask \(s\in\{0,1\}^{L}\) such that \(s_{i}=\mathbf{1}[i\in[\sigma_{j}]]\). \(m\) is the number of _perturbation samples_ used for computing SVS.
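A minimal sketch of the SVS estimator in Eq. (2) for a single token \(i\) is shown below; it is an illustration rather than the authors' code, and `model_prob` is the same hypothetical stand-in for \(M_{\text{CLF}}(\cdot)[y]\) as above.

```python
# A minimal sketch (not the authors' implementation) of Eq. (2): average the
# marginal contribution of token i over m randomly sampled orderings.
import random

def svs_estimate(tokens, i, y, model_prob, m=25, pad="[PAD]", seed=0):
    rng = random.Random(seed)
    L = len(tokens)
    total = 0.0
    for _ in range(m):
        order = list(range(L))
        rng.shuffle(order)                      # a sampled ordering sigma_j
        before = set(order[:order.index(i)])    # indices ranked lower than i
        x_before = [w if j in before else pad for j, w in enumerate(tokens)]
        x_with_i = [w if (j in before or j == i) else pad for j, w in enumerate(tokens)]
        total += model_prob(x_with_i, y) - model_prob(x_before, y)
    return total / m
```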
**KernelSHAP** Although SVS has successfully reduced the exponential time complexity to polynomial, it still requires sampling permutations and needs to do sequential updates following sampled orderings and computing the explanation scores, which is an apparent efficiency bottleneck. Lundberg and Lee (2017) introduce a more efficient estimator, KernelSHAP (KS), which allows better parallelism and computing explanation scores for all tokens at once using linear regression. That is achieved by showing that computing SV is equivalent to solving the following optimization problem:
\[\phi_{\text{KS}}(\cdot,y)\approx\operatorname*{arg\,min}_{\phi(\cdot,y)}\frac{1}{m}\sum_{\begin{subarray}{c}s(k)\sim p(s)\\ 1\leq k\leq m\end{subarray}}\left[M_{\text{CLF}}\left(X_{s(k)}\right)[y]-\vec{s}(k)^{T}\phi(\cdot,y)\right]^{2},\quad\text{s.t.}\quad\mathbf{1}^{T}\phi(\cdot,y)=M_{\text{CLF}}\left(X\right)[y]-M_{\text{CLF}}\left(\varnothing\right)[y], \tag{3}\]
where \(\vec{s}(k)\) is the one-hot vector corresponding to the mask2\(s(k)\) sampled from the Shapley Kernel \(p(s)=\frac{L-1}{\binom{L}{|s|}|s|(L-|s|)}\). \(m\) is again the number of perturbation samples. We will use "SVS-\(m\)" and "KS-\(m\)" in the rest of the paper to indicate the sample size for SVS and KS. In practice, the specific perturbation samples depend on the random seed of the sampler, and we will show that the explanation scores are highly sensitive to the random seed under a small sample size.
Footnote 2: Note, \(s(k)\) is the \(k\)-th **mask sample** while \(s_{i}\in\{0,1\}\) is the \(i\)-th dimension of the **mask sample**\(s\).
Note that the larger the number of perturbation samples, the more model evaluations are required for a single instance, which can be computationally expensive for large Transformer models. Therefore, the main performance bottleneck is the number of model evaluations.
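The following sketch illustrates one common way to realize Eq. (3); it is an illustration only, not the Captum implementation used in this paper. Mask sizes are drawn with probability proportional to the total kernel mass at each size, the model is queried once per mask, and the efficiency constraint is enforced by eliminating the last coefficient. `model_prob` is again a hypothetical stand-in for \(M_{\text{CLF}}(\cdot)[y]\).

```python
# A minimal sketch (not the authors' code) of the KernelSHAP objective in Eq. (3).
# Sampling masks from the Shapley kernel replaces explicit sample weighting in the
# regression; subtracting the all-[PAD] prediction fixes the intercept at M_CLF(empty)[y].
import numpy as np

def kernel_shap(tokens, y, model_prob, m=200, pad="[PAD]", seed=0):
    rng = np.random.default_rng(seed)
    L = len(tokens)
    sizes = np.arange(1, L)                          # avoid |s| = 0 or L
    size_probs = (L - 1) / (sizes * (L - sizes))     # kernel mass aggregated per size
    size_probs = size_probs / size_probs.sum()

    full = model_prob(tokens, y)
    empty = model_prob([pad] * L, y)
    delta = full - empty

    Z, V = [], []
    for _ in range(m):
        k = rng.choice(sizes, p=size_probs)
        mask = np.zeros(L)
        mask[rng.choice(L, size=k, replace=False)] = 1.0
        x_s = [w if mask[j] else pad for j, w in enumerate(tokens)]
        Z.append(mask)
        V.append(model_prob(x_s, y) - empty)
    Z, V = np.array(Z), np.array(V)

    # Enforce 1^T phi = delta by substituting phi_{L-1} = delta - sum(phi_{:L-1}).
    A = Z[:, :-1] - Z[:, -1:]
    b = V - Z[:, -1] * delta
    phi_head, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(phi_head, delta - phi_head.sum())
```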
## 4 Stability of Local Explanation
One of the most common applications of SV is feature selection, which selects the most important features by following the order of the explanation scores. People commonly use KS with an affordable number of perturbation samples in practice (the typical numbers of perturbation samples used in the literature are around \(25\), \(200\), \(2000\)). However, as we see in Figure 1, the ranking of the scores can be quite sensitive to random seeds when using stochastic estimation of SV. In this section, we investigate this stability issue. We demonstrate stochastic approximation of SV is unstable in text classification tasks under common settings, especially with long texts. In particular, when ranking input tokens based on explanation scores, Spearman's correlation between rankings across different runs is low.
**Measuring ranking stability.** Given explanation scores produced by different random seeds using an SV estimator, we want to measure the difference between these scores. Specifically, we are interested in the difference in the rankings of the scores as this is what we use for feature selection. To measure the ranking stability of multiple runs using different random seeds, we compute Spearman's correlation between any two of them and use the average Spearman's correlation as the measure of the ranking stability. In addition, we follow Ghorbani et al. (2019) to report Top-K intersections between two rankings, since in many applications only the top features are of explanatory interest. We measure the size of the intersection of Top-K features from two different runs.
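A minimal sketch of these two metrics is given below; it is an assumed implementation for illustration, not the paper's released evaluation code.

```python
# Average pairwise Spearman correlation and Top-K intersection across runs of the
# same explanation method with different random seeds, for one instance.
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr

def ranking_stability(score_runs, k=5):
    spearmans, topk_overlaps = [], []
    for a, b in combinations(score_runs, 2):
        spearmans.append(spearmanr(a, b)[0])            # correlation coefficient
        top_a = set(np.argsort(a)[-k:])                  # top-k feature indices
        top_b = set(np.argsort(b)[-k:])
        topk_overlaps.append(len(top_a & top_b))
    return float(np.mean(spearmans)), float(np.mean(topk_overlaps))

# Example: five runs of explanation scores for one 20-token instance.
runs = [np.random.rand(20) for _ in range(5)]
print(ranking_stability(runs, k=5))
```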
**Setup.** We conduct our experiments on the validation set of the Yelp-Polarity dataset (Zhang et al., 2015) and MNLI dataset (Williams et al., 2018). Yelp-Polarity is a binary sentiment classification task and MNLI is a three-way textual entailment classification task. We conduct experiments on \(500\) random samples with balanced labels (we refer to these datasets as "Stability Evaluation Sets" subsequently). Results are averaged over \(5\) different random seeds.3 We use the publicly available fine-tuned BERT-base-uncased checkpoints4(Morris et al., 2020) as the target models to interpret and use the implementation of Captum (Kokhlikyan et al., 2020) to compute the explanation scores for both KS and SVS. For each explanation method, we test with the recommended numbers of pertur
bation samples5 used to compute the explanation scores for every instance. For Top-K intersections, we report results with \(K=5\) and \(K=10\).
Footnote 5: For SVS, the recommended number of perturbation samples is \(25\) in Captum. For KS, to our best knowledge, the typical numbers of perturbation samples used in previous works are \(25,200,2000\). We also include KS-\(8000\) to see how stable KS can be given much longer running time.
**Trade-off between stability and computation cost.** The ranking stability results are listed in Table 1 and Table 2 for Yelp-Polarity and MNLI datasets. We observe that using 25 to 200 perturbation samples, the stability of the explanation scores is low (Spearman's correlation is only 0.16). Sampling more perturbed inputs makes the scores more stable. However, the computational cost explodes at the same time, going from one second to two minutes per instance. To reduce the sensitivity to an acceptable level (i.e., making the Spearman's correlation between two different runs above \(0.40\), which indicates moderate correlation (Akoglu, 2018)), we usually need thousands of model evaluations and spend roughly \(33.40\) seconds per instance.
**Low MSE does not imply stability.** Mean Squared Error (MSE) is commonly used to evaluate the distance between two lists of explanation scores. In Table 1, we observe that MSE only weakly correlates with ranking stability (e.g., For Yelp-Polarity, \(R=-0.41\) and \(p<0.05\), so the correlation is not significant). Even when the difference of MSE for different settings is as low as 0.01, the correlation between rankings produced by explanations can still be low. Therefore, from users' perspectives, low MSEs do not mean the explanations are reliable as they can suggest distinct rankings.
**Longer input suffers more from instability.** We also plot the Spearman's correlation decomposed at different input lengths in Figure 3. Here, we observe a clear trend that the ranking stability degrades significantly even at an input length of 20 tokens. The general trend is that the longer the input length is, the worse the ranking stability. The same trend holds across datasets. As many NLP tasks involve sentences longer than 20 tokens (e.g., SST-2 (Socher et al., 2013), MNLI (Williams et al., 2018)), obtaining stable explanations to analyze NLP models can be quite challenging.
**Discussion: why is Shapley Values estimation unstable in the text domain?** One of the most prominent characteristics of the text domain is that individual tokens/n-grams can have a large impact on the label. Thus they all need to be included in the perturbation samples for an accurate estimate. When the input length grows, the number of n-grams grows quickly. As shown in Section 3, the probability of certain n-grams getting sampled is drastically reduced since each n-gram is sampled with equal probability. Therefore, the observed model output will have a large variance as certain n-grams may not get sampled. A concurrent work (Kwon and Zou, 2022) presented a related theoretical analysis on why the uniform sampling setting in SV computation can lead to suboptimal attribution.
## 5 Amortized Inference for Shapley Values
Motivated by the above observation, we propose to train an amortized model to predict the explanation scores given an input _without any model evaluation on perturbation samples_. The inference cost is thus amortized by training on a set of pre-computed
\begin{table}
\begin{tabular}{l r r r r r} \hline Setting & Spearman & Top-5 Inter. & Top-10 Inter. & MSE & Running Time \\ \hline SVS-25 & \(0.84(\pm 0.00)\) & \(3.41(\pm 0.00)\) & \(7.02(\pm 0.00)\) & \(0.01(\pm 0.00)\) & \(183.72\)s/it \\ KS-25 & \(0.04(\pm 0.00)\) & \(0.43(\pm 0.01)\) & \(1.45(\pm 0.01)\) & \(0.00(\pm 0.00)\) & \(1.92\)s/it \\ KS-200 & \(0.16(\pm 0.00)\) & \(1.09(\pm 0.01)\) & \(2.47(\pm 0.00)\) & \(0.82(\pm 0.29)\) & \(3.47\)s/it \\ KS-2000 & \(0.37(\pm 0.00)\) & \(2.45(\pm 0.01)\) & \(4.38(\pm 0.05)\) & \(0.03(\pm 0.00)\) & \(33.40\)s/it \\ KS-8000 & \(0.63(\pm 0.00)\) & \(3.73(\pm 0.02)\) & \(6.93(\pm 0.01)\) & \(0.01(\pm 0.00)\) & \(123.29\)s/it \\ \hline \end{tabular}
\end{table}
Table 1: Ranking stability experiments on the Yelp-Polarity dataset. Each local explanation setting is evaluated across 5 runs with different random seeds. βTop-K Inter.β denotes top-K intersection. All values in this table are absolute values. Here we can see a clear trade-off between stability and computation cost.
\begin{table}
\begin{tabular}{l r r r r r} \hline Setting & Spearman & Top-5 Inter. & Top-10 Inter. & MSE & Running Time \\ \hline SVS-25 & \(0.75(\pm 0.00)\) & \(3.54(\pm 0.02)\) & \(7.46(\pm 0.02)\) & \(0.02(\pm 0.00)\) & \(128.07\)s/it \\ KS-25 & \(0.06(\pm 0.00)\) & \(0.97(\pm 0.01)\) & \(3.41(\pm 0.03)\) & \(0.01(\pm 0.00)\) & \(0.33\)s/it \\ KS-200 & \(0.24(\pm 0.00)\) & \(1.79(\pm 0.01)\) & \(4.37(\pm 0.03)\) & \(0.07(\pm 0.00)\) & \(2.04\)s/it \\ KS-2000 & \(0.52(\pm 0.00)\) & \(3.19(\pm 0.00)\) & \(6.09(\pm 0.00)\) & \(0.03(\pm 0.00)\) & \(20.39\)s/it \\ KS-8000 & \(0.76(\pm 0.00)\) & \(4.08(\pm 0.02)\) & \(7.74(\pm 0.02)\) & \(0.01(\pm 0.00)\) & \(89.48\)s/it \\ \hline \end{tabular}
\end{table}
Table 2: Ranking stability experiments on the MNLI dataset.
reliable explanation scores.
We build an amortized explanation model for text classification in two stages. In the first stage, we construct a training set for the amortized model. We compute reliable explanation scores as the reference scores for training using the existing SV estimator. As shown in Section 4, SVS-25 is the most stable SV estimator and we use it to obtain reference scores. In the second stage, we train a BERT-based amortized model that takes the text as input and outputs the explanation scores using MSE loss.
Specifically, given input tokens \(X\), we use a pretrained language model \(M_{\mathrm{LM}}\) to encode words into \(d\)-dim embeddings \(\vec{e}=M_{\mathrm{LM}}(X)=[\vec{e}_{1},\dots,\vec{e}_{L(X)}]\in\mathbb{R}^{L (X)\times d}\). Then, we use a linear layer to transform each \(\vec{e}_{i}\) to the predicted explanation score \(\phi_{AM}(i,\hat{y}_{i})=W\vec{e}_{i}+b\). To train the model, we use MSE loss to fit \(\phi_{AM}(i,\hat{y})\) to the pre-computed reference scores \(\phi(i,\hat{y})\) over the training set \(\mathbb{X}_{\mathrm{Train}}\). This is an amortized model in the sense that there are no individual sampling and model queries for each test example \(X\) as in SVS and KS. When a new sample comes in, the amortized model makes a single inference on the input tokens to predict their explanation scores.
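A minimal PyTorch sketch of this architecture is shown below; the model and training-loop details are illustrative assumptions rather than the released implementation, and the reference scores are assumed to be pre-aligned with the tokenizer output and zero-padded.

```python
# A minimal sketch: pretrained encoder + per-token linear head, trained with MSE
# against pre-computed SVS-25 reference scores (assumptions, not the released code).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class AmortizedExplainer(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.head(hidden).squeeze(-1)          # one score per token

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AmortizedExplainer()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.MSELoss()

def train_step(texts, reference_scores):
    # reference_scores: (batch, seq_len) pre-computed scores, zero-padded.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    pred = model(batch["input_ids"], batch["attention_mask"])
    loss = loss_fn(pred * batch["attention_mask"], reference_scores)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```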
```
Require: \(m\): the desired number of local adaption perturbation samples, \(M_{\mathrm{AM}}\): the trained amortized explanation model, \(X\): the target data instance that has length \(L\), \(\hat{y}\): the predicted label, \(M_{\mathrm{CLF}}\): the target model
  \(\phi\gets M_{\mathrm{AM}}(X)\)
  for \(j=1\) to \(m\) do
    sample ordering \(\sigma\) from permutation \(\Pi(L)\)
    \(\phi\leftarrow\phi+\sum_{i}\left[M_{\mathrm{CLF}}\left(X_{\mathbb{S}\left([\sigma]_{i-1}\cup\{i\}\right)}\right)[\hat{y}]-M_{\mathrm{CLF}}\left(X_{\mathbb{S}\left([\sigma]_{i-1}\right)}\right)[\hat{y}]\right]\)
  end for
  \(\phi\leftarrow\frac{\phi}{m}\)
```
**Algorithm 1** Local Adaption
### Better Fit via Local Adaption
By amortization, our model can learn to capture the shared feature attribution patterns across data to achieve a good efficiency-stability trade-off. We further show that the explanations generated by our amortized model can be used to initialize the explanation scores of SVS. This way, the evaluation of SVS can be significantly sped up compared with using random initialization. On the other hand, applying SVS upon amortized method improves the latter's performance as some important tokens might not be captured by the amortized method but can be identified by SVS through additional sampling (e.g., low-frequency tokens). The detailed algorithm is shown in Algorithm 1. Note that here we can recover the original SVS computation (Strumbelj and Kononenko, 2010) by replacing \(\phi\gets M_{\mathrm{AM}}(X)\) to be \(\phi\gets 0\). \(M_{\mathrm{AM}}\) is the amortized model trained using MSE as explained earlier.
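A compact sketch of Algorithm 1 is given below, under the same assumptions as the earlier snippets: `amortized_scores` stands for the output of \(M_{\mathrm{AM}}(X)\) and `model_prob` for \(M_{\mathrm{CLF}}(\cdot)[\hat{y}]\). Walking along each sampled ordering lets every token's marginal contribution be accumulated with \(L+1\) model calls per ordering.

```python
# A minimal sketch of Algorithm 1 (Local Adaption): start from the amortized
# model's scores and refine them with a few SVS-style orderings.
import random

def local_adaption(tokens, y, amortized_scores, model_prob, m=3, pad="[PAD]", seed=0):
    rng = random.Random(seed)
    L = len(tokens)
    phi = list(amortized_scores)                 # phi <- M_AM(X)
    for _ in range(m):
        order = list(range(L))
        rng.shuffle(order)                       # sample an ordering sigma
        prefix, prev = set(), model_prob([pad] * L, y)
        for i in order:                          # accumulate marginal contributions
            prefix.add(i)
            cur = model_prob([w if j in prefix else pad
                              for j, w in enumerate(tokens)], y)
            phi[i] += cur - prev
            prev = cur
    return [p / m for p in phi]                  # phi <- phi / m
```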
## 6 Experiments
In this section, we present experiments to demonstrate the properties of the proposed approach in terms of accuracy against reference scores (6.1) and sensitivity to training-time randomness (6.2). We also show that we achieve a better fit via a local adaption method that combines our approach with SVS (6.3). Then, we evaluate the quality of the explanations generated by our amortized model on two downstream applications (6.5).
**Setup.** We conduct experiments on the validation set of Yelp-Polarity and MNLI datasets. To generate reference explanation scores, we leverage the Thermostat (Feldhus et al., 2021) dataset, which contains 9,815 pre-computed explanation scores of SVS-25 on MNLI. We also compute explanation scores of SVS-25 for 25,000 instances on Yelp-Polarity. We use BERT-base-uncased (Devlin et al.,
Figure 3: Ranking stability over different input lengths on Yelp-Polarity and MNLI datasets. We observe that longer input suffers more from instability.
2019) for \(M_{\rm LM}\). For dataset preprocessing and other experiment details, we refer readers to Appendix C.
To our best knowledge, FastSHAP (Jethani et al., 2021) is the most relevant work to us that also takes an amortization approach to estimate SV on tabular or image data. We adapt it to explain the text classifier and use it as a baseline to compare with our approach. We find it non-trivial to adapt FastSHAP to the text domain. As pre-trained language models occupy a large amount of GPU memory, we can only use a small batch size with limited perturbation samples (i.e., \(32\) perturbation samples per instance). This is equivalent to approximate KS-32 and the corresponding reference explanation scores computed by FastSHAP are unstable. More details can be found in Appendix A.
### Shapley Values Approximation
To examine how well our model fits the pre-computed SV (SVS-25), we compute both Spearman's correlation and MSE over the test set. As it is intractable to compute exact Shapley Values for ground truth, we use SVS-25 as a proxy. We also include different settings for KS results over the same test set. KS is also an approximation to permutation-based SV computation (Lundberg and Lee, 2017). Table 3 shows the correlation and MSE of aforementioned methods against SVS-25.
First, we find that despite the simplicity of our amortized model, the proposed amortized models achieve a high correlation with the reference scores (\(0.61>0.60\)) on Yelp-Polarity. The correlation between outputs from the amortized models and references is moderate (\(0.42>0.40\)) on MNLI when data size is limited. During inference time, our amortized models output explanation scores for each instance within \(50\) milliseconds, which is about \(40\)-\(60\) times faster than KS-200 and \(400\)-\(600\) times faster than KS-2000 on Yelp-Polarity and MNLI. Although the approximation results are not as good as KS-2000/8000 (which requires far more model evaluations), our approach achieves reasonably good results with orders of magnitude less compute.
We also find that the amortized model achieves the best MSE score among all approximation methods. Note that the two metrics, Spearman's correlation and MSE, do not convey the same information. MSE measures how well the reference explanation scores are fitted while Spearman's correlation reflects how well the ranking information is learned. We advocate for reporting both metrics.
**Cost of training the amortized models** To produce the training set, we need to pre-compute the explanation scores on a set of data. Although this is a one time cost (for each model), one might wonder how time consuming this step is as we need to run the standard sample-based estimation. As the learning curve shows in Figure 4, we observe that the model achieves good performance with about \(25\%\) (\(\approx 5,000\) on Yelp-Polarity) instances. Additionally, in Section 6.4, we show this one-time training will result in a model transferable to other domains, so we may not need to train a new amortized model for each new domain.
### Sensitivity Analysis
Given a trained amortized model, there is no randomness when generating explanation scores. However, there is still some randomness in the
Figure 4: Learning curves for the amortized model over Yelp-Polarity and MNLI datasets. The Spearmanβs correlations in this figure are computed against SVS-25. We can see our amortized model can learn efficiently even if there is only 10% data used for training.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{MNLI} & \multicolumn{2}{c}{Yelp-Polarity} \\ & Spearman & MSE & Spearman & MSE \\ \hline SVS-25 & 0.75 & 1.90e-2 & 0.84 & 6.64e-3 \\ KS-25 & 0.17 & 9.95e-2 & 0.12 & 4.34e-2 \\ KS-200 & 0.35 & 7.73e-2 & 0.24 & 5.77e-2 \\ KS-2000 & 0.60 & 2.54e-2 & 0.51 & 1.86e-2 \\ KS-8000 & **0.74** & **1.25e-2** & **0.70** & **6.25e-3** \\ \hline FastSHAP & 0.23 & 1.90e-1 & 0.18 & 7.91e-3 \\ Our Amortized Model & **0.42** & **9.59e-3** & **0.61** & **4.46e-6** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Spearmanβs correlation and MSE of variants of SV methods against SVS-25, a proxy of exact SV on MNLI and Yelp-Polarity. As we show in Section 4, MSE correlates poorly with ranking stability and Spearmanβs correlation should be considered as **the main metric**. We only list MSE for reference. Bold-faced numbers are the best in each column. Results are averaged over 5 runs. Our amortized model achieves better approximation compared to KS-200 and FastSHAP baseline, but not as good as much more time-consuming methods KS-2000/8000. SVS-25 is listed as an upper bound.
training process, including the training data, the random initialization of the output layer and randomness during update such as dropout. Therefore, similar to Section 4, we study the sensitivity of the amortized model. Table 4 shows the results with different training data and random seeds. We observe that: 1) when using the same data (100%), random initialization does not affect the outputs of amortized models - the correlation between different runs is high (i.e., \(0.77\) on MNLI and \(0.76\) on Yelp-Polarity). 2) With more training samples, the model is more stable.
### Local Adaption
The experiment results for Local Adaption (Section 5.1) are shown in Table 5. Here we can see that: 1) by doing local adaption, we can further improve the approximation results using our amortized model, 2) by using our amortized model as initialization, we can improve the sample efficiency of SVS significantly (by comparing the performance of SVS-X and Adapt-X). These findings hold across datasets.
### Domain Transferability
To see how well our model performs on out-of-domain data, we train a classification model and its amortized explanation model on Yelp-Polarity and then explain its performance on SST-2 (Socher et al., 2013) validation set. Both tasks are two-way sentiment classification and have significant domain differences.
Our amortized model achieves a Spearman's correlation of approximately 0.50 with ground truth SV (SVS-25) while only requiring 0.017s per instance. In comparison, KS-100 achieves a lower Spearman's correlation of 0.46 with the ground truth and takes 1.6s per instance; KS-200 performs slightly better in Spearman's correlation but requires significantly more time. Thus, our amortized model is more than 90 times faster and more correlated with ground truth Shapley Values. This shows that, once trained, our amortized model can provide efficient and stable estimations of SV even for out-of-domain data.
In practice, we do not recommend directly explaining model predictions on out-of-domain data without verification, because it may be misaligned with user expectations for explanations, and the out-of-domain explanations may not be reliable (Hase et al., 2021; Denain and Steinhardt, 2022). More exploration on this direction is required but is orthogonal to this work.
### Evaluating the Quality of Explanation
**Feature Selection.** The first case study is feature selection, which is a straightforward application of local explanation scores. The goal is to find decision-critical features via removing input features gradually according to the rank given by the explanation methods. Following previous work (Zaidan et al., 2007; Jain and Wallace, 2019; DeYoung et al., 2020), we measure faithfulness by changes in the model output after masking tokens identified as important by the explanation method. The more faithful the explanation method is to the target model, the more performance drop will be incurred by masking important tokens.
We gradually mask Top-\(\alpha\) tokens (\(\alpha=1\%,5\%,10\%,20\%\)) and compute the accuracy over corrupted results using the stability evaluation sets for MNLI and Yelp-Polarity datasets as mentioned in Section 4. As the results show in Figure 5, the amortized model is more faithful than KS
\begin{table}
\begin{tabular}{c c c} \hline \hline Training Data Proportion & Spearman (MNLI) & Spearman (Yelp-Polarity) \\ \hline
10\% & 0.45 & 0.40 \\
30\% & 0.57 & 0.65 \\
50\% & 0.65 & 0.71 \\
70\% & 0.65 & 0.72 \\
100\% & 0.77 & 0.76 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Training time sensitivity study. To evaluate how much the amortized model will be influenced by randomness during training, we sample training data 5 times with different random seeds and then compute the averaged Spearmanβs correlation among all pairs of runs. The standard deviation is less than 1e-2. Our amortized model is stable against training time randomness with only 10% of data.
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{Method} & MNLI & Yelp-Polarity \\ & Spearman & Spearman \\ \hline SVS-2 & 0.41 & 0.52 \\ SVS-3 & 0.47 & 0.60 \\ SVS-5 & 0.55 & 0.69 \\ SVS-25 & **0.75** & **0.84** \\ \hline Our Amortized Model & 0.42 & 0.61 \\ Our Amortized Model (Adapt-2) & 0.47 & 0.64 \\ Our Amortized Model (Adapt-3) & 0.53 & 0.69 \\ Our Amortized Model (Adapt-5) & **0.57** & **0.71** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Approximation results for the Shapley explanation methods on MNLI and Yelp-Polarity datasets. Bold-faced numbers are the best in each column. Results are averaged over 5 runs. Spearman's correlations are computed against SVS-25. Adapt-\(m\) denotes the number of sampled orderings \(\sigma\) used for local adaption (\(m\) in Algorithm 1).
200 but underperforms KS-2000/8000 and SVS-25. However, the amortized model is far more efficient than these methods, so it achieves a better efficiency-faithfulness trade-off.
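The Top-\(\alpha\) masking evaluation described above is straightforward to reproduce. The following is a minimal sketch, assuming access to a per-token explanation function and a classifier; the `[MASK]` replacement strategy and the helper names are illustrative choices rather than the paper's exact implementation.

```python
import numpy as np

def mask_top_alpha(tokens, scores, alpha, mask_token="[MASK]"):
    # Mask the alpha-fraction of tokens with the highest explanation scores.
    k = max(1, int(alpha * len(tokens)))
    top = set(np.argsort(scores)[::-1][:k].tolist())
    return [mask_token if i in top else t for i, t in enumerate(tokens)]

def faithfulness_curve(examples, explain, predict, alphas=(0.01, 0.05, 0.10, 0.20)):
    """examples: list of (tokens, label); explain(tokens) -> per-token scores;
    predict(tokens) -> predicted label. Returns accuracy after masking at each alpha."""
    accs = []
    for alpha in alphas:
        correct = sum(predict(mask_top_alpha(t, explain(t), alpha)) == y
                      for t, y in examples)
        accs.append(correct / len(examples))
    return accs  # a faster drop indicates a more faithful explanation method
```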
**Explanation for Model Calibration.** Recent work suggests that good explanations should be informative enough to help users predict model behavior (Doshi-Velez and Kim, 2017; Chandrasekaran et al., 2018; Hase and Bansal, 2020; Ye et al., 2021). Ye and Durrett (2022) propose to combine local explanations with pre-defined feature templates (e.g., aggregating explanation scores for overlapping words / POS tags in NLI as features) to calibrate an existing model to new domains. The rationale is that, if the local explanation truly reflects human-understandable model behavior, then, in the same way that humans transfer knowledge to new domains, explanations organized by human heuristics (in the form of feature templates) should help calibrate the model to new domains. Inspired by this, we conduct a study using the same calibrator architecture but plugging in different local explanation scores.
We follow Ye and Durrett (2022) to calibrate a fine-tuned MNLI model6 to MRPC. The experiment results are shown in Table 6. In the table, "BOW" means the baseline that uses constant explanation scores when building the features for the calibration model. Compared with the explanation provided by KS-2000, the explanation given by the amortized model achieves better accuracy, suggesting that the amortized model learns robust explanation scores that can be generalized to out-of-domain data in downstream applications.7
Footnote 6: [https://huggingface.co/textattack/bert-base-uncased-MNLI](https://huggingface.co/textattack/bert-base-uncased-MNLI)
Footnote 7: See Section 6.4 for a domain transfer experiment that directly compares to SVS-25 and w/o calibration.
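For concreteness, the following is a rough sketch of how explanation scores can be plugged into such a calibrator, in the spirit of Ye and Durrett (2022); the word-overlap template and the aggregation choices shown here are illustrative assumptions, not their exact feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def _agg(xs):
    # Aggregate a group of explanation scores into (sum, max) features.
    return [float(np.sum(xs)), float(np.max(xs))] if xs else [0.0, 0.0]

def overlap_features(premise_tokens, hypothesis_tokens, scores):
    """scores: one explanation score per hypothesis token (from the amortized model or KS/SVS)."""
    overlap = {t.lower() for t in premise_tokens}
    in_ov = [s for t, s in zip(hypothesis_tokens, scores) if t.lower() in overlap]
    out_ov = [s for t, s in zip(hypothesis_tokens, scores) if t.lower() not in overlap]
    return np.array(_agg(in_ov) + _agg(out_ov))

# Fit on in-domain data, where y = 1 iff the base model's prediction is correct:
# X = np.stack([overlap_features(p, h, s) for p, h, s in explained_examples])
# calibrator = LogisticRegression().fit(X, y)
```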
## 7 Conclusion
In this paper, we empirically demonstrated that it is challenging to obtain stable explanation scores on long text inputs. Inspired by the fact that different instances can share similarly important features, we proposed to efficiently estimate the explanation scores through an amortized model trained to fit pre-computed reference explanation scores.
In the future, we plan to explore model architectures and training losses for developing effective amortized models. In particular, we may incorporate a sorting-based loss to learn the ranking order of features. Additionally, we could investigate the transferability of the amortized model across different domains, as well as explore other SHAP-based methods in place of the time-consuming SVS-25 in the data collection process to further improve efficiency.
### Limitations
In this paper, we mainly focus on developing an amortized model to efficiently obtain a reliable estimation of SV. Although not experimented with in this paper, our method can be widely applied to other black-box post-hoc explanation methods, including LIME (Ribeiro et al., 2016). Also, due to a limited budget, we only ran experiments on BERT-based models. However, as we do not
\begin{table}
\begin{tabular}{c c} \hline \hline Model & Acc \\ \hline BOW & 67.3 \\ ShapCal (KS-2000) & 67.4 \\ ShapCal (Amortized) & 68.0 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Calibration Experiments for Amortized Models. The explanation scores help the calibrator achieve better accuracy on out-of-domain data than KS-2000.
Figure 5: Feature selection based on interpretations on the Yelp-Polarity and MNLI datasets. The faster the curve drops, the more faithful the explanation scores are. We can see that our amortized model is more faithful to the target model than KS-200, but is not as faithful as other, more costly methods.
make any assumptions about the model, just like other black-box explanation methods, our amortized model can be easily applied to other large language models. We only need to collect the model outputs, and our model can be trained offline with just thousands of examples, as shown in our method and experiments.
**Comparison and Training with Exact Shapley Values** Computing exact SV is computationally prohibitive for large language models (LLMs) on lengthy text inputs, as it necessitates the evaluation of LLMs on an exponential (in sequence length) number of perturbation samples per instance. As a result, we resort to using SVS-25, which serves as a reliable approximation, for training our amortized models.
## Acknowledgements
We want to thank Xi Ye and Prof. Greg Durrett for their help regarding their previous work and implementation on using SV for calibration (Section 6.5). We thank the generous support from AWS AI on computational resources and external collaborations. We further thank Prof. Chenhao Tan for the high-level idea discussion on explainability stability issues at an early stage of this paper, and thank Prof. Yongchan Kwon and Prof. James Zou for their in-depth theoretical analysis of sub-optimality of uniform sampling of computing SV. We thank all anonymous reviewers and chairs at ACL'23 and ICLR'23 for their insightful and helpful comments. Yin and Chang are supported in part by a CISCO grant and a Sloan Fellowship. HH is supported in part by a Cisco grant and Samsung Research (under the project Next Generation Deep Learning: From Pattern Recognition to AI).
|
2304.00138 | Robust Tracking Control for Nonlinear Systems: Performance optimization
via extremum seeking | This paper presents a controller design and optimization framework for
nonlinear dynamic systems to track a given reference signal in the presence of
disturbances when the task is repeated over a finite-time interval. This novel
framework mainly consists of two steps. The first step is to design a robust
linear quadratic tracking controller based on the existing control structure
with a Youla-type filter $\tilde Q$. Secondly, an extra degree of freedom: a
parameterization in terms of $\tilde Q$, is added to this design framework.
This extra design parameter is tuned iteratively from measured tracking cost
function with the given disturbances and modeling uncertainties to achieve the
best transient performance. The proposed method is validated with simulation
placed on a Furuta inverted pendulum, showing significant tracking performance
improvement. | Jiapeng Xu, Ying Tan, Xiang Chen | 2023-03-31T21:23:40Z | http://arxiv.org/abs/2304.00138v1 | # Robust Tracking Control for Nonlinear Systems: Performance optimization via extremum seeking
###### Abstract
This paper presents a controller design and optimization framework for nonlinear dynamic systems to track a given reference signal in the presence of disturbances when the task is repeated over a finite-time interval. This novel framework mainly consists of two steps. The first step is to design a robust linear quadratic tracking controller based on the existing control structure with a Youla-type filter \(\tilde{Q}\). Secondly, an extra degree of freedom, a parameterization in terms of \(\tilde{Q}\), is added to this design framework. This extra design parameter is tuned iteratively from the measured tracking cost under the given disturbances and modeling uncertainties to achieve the best transient performance. The proposed method is validated in simulation on a Furuta inverted pendulum, showing significant tracking performance improvement.
## I Introduction
Robust tracking control of nonlinear systems has been extensively studied in the literature using various robust techniques, such as \(H_{\infty}\) control [1, 2] and sliding mode control [3]. These methods are, in general, worst-case designs that ensure stability under the worst-case disturbances. On the other hand, optimal performance, such as a linear quadratic cost, has been the focus of designs in industrial applications. However, it is usually hard to analyze such performance for nonlinear dynamics. One of the key reasons for this difficulty is the analysis tool used in stability analysis: the Lyapunov direct method [4] has been used to provide sufficient conditions that guarantee the stability of nonlinear dynamics. Moreover, optimal control for nonlinear dynamics requires solving the Hamilton-Jacobi-Bellman (HJB) equation, a nonlinear partial differential equation with respect to a given cost, and solving this HJB equation is computationally costly.
In contrast, both robust and optimal control designs have been extensively investigated for linear time-invariant (LTI) dynamic systems [5, 6, 7, 8]. In particular, a robust controller design with a Youla-type filter \(\tilde{Q}\)[8, 9] has been proposed recently, motivated by the generalized internal model control (GIMC) proposed in [6]. The robust controller with \(\tilde{Q}\) provides automatic robustness recovery in linear quadratic Gaussian (LQG)/\(H_{2}\) control [8]. Its key idea is to use the \(\tilde{Q}\) filter to balance the optimal performance, designed without considering disturbances, against the robust performance obtained with techniques such as \(H_{\infty}\) control. Since the filter \(\tilde{Q}\) is driven by the residual signal indicating the mismatch between the nominal model and the true system, \(\tilde{Q}\) is only activated when there exist unmodelled dynamics or external disturbances, so this kind of controller design can achieve high performance in the presence of disturbances and uncertainties. This technique is quite different from the traditional mixed \(H_{2}/H_{\infty}\) control, which is a trade-off design [10, 11, 12].
This work proposes to utilize the robust controller with \(\tilde{Q}\) in [8] to systematically design feedback control for a nonlinear dynamic system via its linearization. The proposed framework is used to generate optimal tracking performance for a class of nonlinear dynamic systems tracking a given reference trajectory. More specifically, the framework first presents a robust linear quadratic tracking (LQT) controller design based on the filter \(\tilde{Q}\). Then, by introducing an extra gain factor on \(\tilde{Q}\), which can be treated as the balance between LQT performance and robustness with respect to disturbances and uncertainties coming from linearization and other external signals, an updating law is generated to tune this gain factor so as to minimize the tracking cost in the presence of modeling uncertainties and disturbances. The choice of the gain factor does not affect the local stability properties of the closed-loop nonlinear system, while it improves the tracking performance. In this work, the data-driven extremum seeking (ES) approach [13, 14, 15, 16], a model-free optimization method, is adapted to find this optimal gain factor. Alternatively, other model-free optimization techniques such as reinforcement learning could also be considered [2, 17].
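To illustrate the data-driven tuning loop, the sketch below implements a standard sinusoidal-perturbation extremum seeking update for a scalar gain; the dither amplitude, filtering, and step size are illustrative values, and `run_experiment` stands in for one repetition of the finite-time tracking task returning the measured quadratic cost.

```python
import numpy as np

def extremum_seeking(run_experiment, theta0, iters=50, a=0.05, omega=1.0, gamma=0.5):
    """Gradient-free tuning of a scalar gain via sinusoidal-perturbation extremum seeking.
    run_experiment(theta) runs one finite-time tracking task and returns its measured cost."""
    theta_hat, J_avg, history = theta0, None, []
    for k in range(iters):
        dither = a * np.sin(omega * k)
        J = run_experiment(theta_hat + dither)                 # one repetition of the task
        J_avg = J if J_avg is None else 0.9 * J_avg + 0.1 * J  # slow average removes the cost's DC component
        grad_est = (J - J_avg) * np.sin(omega * k) / a         # demodulate the cost with the dither
        theta_hat -= gamma * grad_est                          # gradient-descent step on the measured cost
        history.append((theta_hat, J))
    return theta_hat, history
```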
The effectiveness of the proposed framework is validated in simulation on a Furuta inverted pendulum. It is shown that the optimal parameter depends on the nonlinear dynamics, the type of reference trajectory, and the type of disturbances. The obtained optimal gain achieves much better transient tracking performance than the standard LQT controller and robust controllers such as \(H_{\infty}\).
The remainder of this paper is organized as follows. Section II formulates the tracking problem for nonlinear systems of interest. Section III presents a design procedure of LQT controller with \(\tilde{Q}\). Section IV further presents a controller design for nonlinear systems where performance is further optimized via ES. Section V provides simulation results on an inverted pendulum. Finally, Section VI concludes this work. |
2305.00577 | Contextual Response Interpretation for Automated Structured Interviews:
A Case Study in Market Research | Structured interviews are used in many settings, importantly in market
research on topics such as brand perception, customer habits, or preferences,
which are critical to product development, marketing, and e-commerce at large.
Such interviews generally consist of a series of questions that are asked to a
participant. These interviews are typically conducted by skilled interviewers,
who interpret the responses from the participants and can adapt the interview
accordingly. Using automated conversational agents to conduct such interviews
would enable reaching a much larger and potentially more diverse group of
participants than currently possible. However, the technical challenges
involved in building such a conversational system are relatively unexplored. To
learn more about these challenges, we convert a market research multiple-choice
questionnaire to a conversational format and conduct a user study. We address
the key task of conducting structured interviews, namely interpreting the
participant's response, for example, by matching it to one or more predefined
options. Our findings can be applied to improve response interpretation for the
information elicitation phase of conversational recommender systems. | Harshita Sahijwani, Kaustubh Dhole, Ankur Purwar, Venugopal Vasudevan, Eugene Agichtein | 2023-04-30T21:16:53Z | http://arxiv.org/abs/2305.00577v1 | Contextual Response Interpretation for Automated Structured Interviews: A Case Study in Market Research
###### Abstract.
Structured interviews are used in many settings, importantly in market research on topics such as brand perception, customer habits, or preferences, which are critical to product development, marketing, and e-commerce at large. Such interviews generally consist of a series of questions that are asked to a participant. These interviews are typically conducted by skilled interviewers, who interpret the responses from the participants and can adapt the interview accordingly. Using automated conversational agents to conduct such interviews would enable reaching a much larger and potentially more diverse group of participants than currently possible. However, the technical challenges involved in building such a conversational system are relatively unexplored. To learn more about these challenges, we convert a market research multiple-choice questionnaire to a conversational format and conduct a user study. We address the key task of conducting structured interviews, namely interpreting the participant's response, for example, by matching it to one or more predefined options. Our findings can be applied to improve response interpretation for the information elicitation phase of conversational recommender systems.
conversational recommender systems, intent prediction, conversational preference elicitation
## 1. Introduction
Information elicitation conversations, such as when a sales agent tries to understand their customer's preferences or a medical professional asks about a patient's history, often begin with a routine set of questions. In e-commerce, market research professionals and companies conduct many such surveys each year, often multiple times, before developing, updating, or launching new products - to collect critical data on customer preferences, interests, and awareness, among other topics.
In structured interviews, an interviewer asks a predetermined set of questions conversationally, adapting them to the user's responses and behavior. While extremely informative and a de-facto standard in market research (e.g., via focus groups), these studies are limited in scale to a small number of participants and are time-consuming and expensive to conduct.
To expand the reach of such studies, online static multiple-choice questionnaires or surveys are used. However, such online questionnaires have some disadvantages. They need to be shorter than interviews to avoid "respondent fatigue" (Ballall et al., 2010). There is also a greater risk of missing data because of a lack of probing or supervision. Also, it is difficult to ask open-ended questions (Ball et al., 2010). Conversational systems that can conduct structured interviews can thus potentially be more effective tools for preference elicitation. Such a system would, given a structured interview provided by a domain expert, converse with the participant to elicit responses to a series of questions. Ideally, it should also be able to ask clarification questions,
Figure 1. The user's conversational responses should be mapped to the correct answer option(s).
prime the user with possible answers, and reorder and skip questions based on the user's responses. An essential requirement for such an agent to be effective is the ability to interpret the responses, often by matching them to a previously defined set of options.
As a first step towards building a conversational system for conducting structured interviews, we investigate the trade-offs of conducting a structured interview via an automated conversational agent vs. the traditional, static, multiple-choice web-based questionnaire. To this end, we conduct a large online user study where a questionnaire with choices for each question is presented in both a conversational interface and as a static multiple-choice questionnaire. The questionnaire was provided by a reputed Personal Care products company's marketing team. The company has a wide range of products for skin care, which target specific skin conditions. Market research and brand awareness are critical for ensuring that their products meet their consumers' needs and that they can find the right product.
We then address the response interpretation problem for this setting, i.e., given a structured interview in the form of a list of questions and the set of possible answers (options) for each question, the model needs to infer the options with which the user's response matches. For the related problem of intent classification for goal-oriented and open-domain conversational agents, prior work achieves good results by jointly training large language models on intent classification and slot-filling tasks. However, in a system-initiative conversation where the user is asked open-ended questions about their preferences, intent classification is challenging because 1) interview questions often elicit descriptive answers as opposed to names of entities of an expected type, and 2) it is expensive to collect conversational data for supervised learning. We investigate three approaches for using contextual information for response interpretation: 1) using historical probability distribution over the answer options, 2) using previous conversation context, and 3) using external knowledge.
Our research questions are RQ1) Does the change in interface, and the absence of options lead to more informative responses? RQ2) What types of questions would benefit from an open-ended conversational interface? And RQ3) How can we address the response interpretation problem (defined below) for this setting?
**Setting:** Structured interview conducted by a conversational agent with a user.
**Given:** A conversation consisting of system utterances (in the form of questions) \(s_{1},\ldots,s_{n}\), user responses \(u_{1},\ldots,u_{n}\), and a set of possible answers to \(s_{i}\) given by \(A(q=s_{i})=\{a_{i,1},\ldots,a_{i,m}\}\).
**Problem:** At conversation turn \(i\), match \(u_{i}\) to a subset \(M_{i}\) of the possible answer options \(A(q=s_{i})\) that represents the user's intent.
**Response Interpretation Problem Definition**
## 2. Related Work
There has been extensive prior work on closely related problems like intent prediction and slot-filling for conversational systems (Kumar et al., 2017; Liu et al., 2017; Liu et al., 2018), dialog representation (Kumar et al., 2017; Liu et al., 2018), knowledge grounded language models (Kumar et al., 2017), and domain-specific language models (Kumar et al., 2017).
Open-domain and domain-specific conversational agents usually have a predefined set of intents and slot values that they can identify and process. Existing intent classifiers apply a variety of approaches like transformer-based models (Kumar et al., 2017), hierarchical text classification (Liu et al., 2018), and knowledge-guided pattern matching (Liu et al., 2018) to map user utterance to the relevant intent. However, these methods rely on the availability of extensive training data and the intents and slots being limited in number. In the structured interview setting, users often give long descriptive answers to open-ended questions, which makes it hard to apply these intent classification models.
Reading comprehension tasks that require answering multiple-choice questions based on some given context are also closely related to our task. Luo et al. (Luo et al., 2018) propose a BERT-based framework for handling multiple-choice questionnaires focused on reference passages. (Liu et al., 2018; Liu et al., 2018) address the problems of history selection and dialog representation for conversational reading comprehension. However, answers in reading comprehension tasks are generally factual and precise as opposed to ones in structured interviews. The challenges involved in training models for this task are different.
Language Models pre-trained on dialog(Kumar et al., 2017; Liu et al., 2018) are also relevant to our work. TOD-BERT (Kumar et al., 2017), after being pre-trained on nine human-human and multi-turn task-oriented dialogue datasets, outperformed strong baselines like BERT on four downstream task-oriented dialogue applications. We use TOD-BERT in our experiments to study the advantages of dialog pre-training for our task.
External knowledge bases and knowledge graphs have been incorporated in many approaches for NLP and IR tasks to yield promising results (Liu et al., 2018; Liu et al., 2018; Liu et al., 2018; Liu et al., 2018; Liu et al., 2018). Most of these approaches rely on the existence of a knowledge graph with relevant information. Domain-specific models like SciBERT(Kumar et al., 2017) and BioBERT (Li et al., 2018) have shown that downstream tasks can greatly benefit from models pre-trained on in-domain data. Although our data is domain-specific, there isn't a pre-trained model or knowledge graph tailored for our setting. Therefore, we use ConceptNet neighbors of terms in conversations to experiment with the effects of incorporating external knowledge.
## 3. Data Collection
### User Study
We conducted a user study with 139 participants to compare the informativeness and other characteristics of _Conversational Interface_ responses with _Web-based Questionnaire_ responses. We used a questionnaire provided by domain experts from a reputed company, as described in §1. It contains 25 multiple-choice questions about the client's lifestyle, skin and hair care routines, and preferences. The questionnaire contains 12 single-option questions (the user can select exactly one option) and 13 multi-option questions (the user can select multiple options). The user study consists of 2 phases. In the first phase, the participants interact with a text-based conversational agent that asks a question from the questionnaire, responds to the user's free-form answer with an acknowledgment ("Ok",
"Alright" or "I see"), and then proceeds to ask the next question. The participants are then asked to fill out an online web-based survey with the same questions, but this time with options to choose from. They were shown their conversational response to the question and asked to pick the options that matched it. In addition to the responses from the questionnaire, the participants could also choose from two additional options, "None of the above" and "I don't know".
For our experiments, we only use single-option questions.
### Response Interpretation Data
We model the response interpretation task as a binary classification problem. That is, given a \(\mathtt{<}\)conversational response, answer option\(\mathtt{>}\) pair, the model predicts the probability that they are semantically equivalent. We use the data from the user study in §3.1 as a source of ground truth for \(\mathtt{<}\)conversational response, answer option\(\mathtt{>}\) pairs. We split conversations among the train, validation and test sets in a 60:20:20 ratio. We construct a labeled dataset of \(\mathtt{<}\)conversational response, answer option\(\mathtt{>}\) pairs from conversations in the train set to train our binary classification models. The \(\mathtt{<}\)conversational response, answer option\(\mathtt{>}\) pairs from §3.1 are used as positive examples. We add an equal number of randomly selected negative examples. The model is trained on 22865 samples and validated on 7724 samples. It is then evaluated on the holdout set of 20% of the conversations.
## 4. Methods
This section describes the different methods we use for response prediction.
### Using Probabilistic Models Learned from Historical Data
We use purely probabilistic models, which do not consider response text, as baselines.
#### 4.1.1. Context-Less: Using Prior Probability Distributions
In this method, we infer the prior probability distribution over the options for each question using the training data. We infer the probability of an answer option \(a_{j,k}\in A(s_{j})\) being the match for question \(s_{j}\) as follows:
\[P(M_{j}=\{a_{j,k}\})=\frac{\mathcal{N}(a_{j,k})}{\sum_{i=1}^{m}\mathcal{N}(a_{ j,i})} \tag{1}\]
where \(\mathcal{N}(a_{j,i})\) represents the number of times \(a_{j,i}\) is observed as the matching choice \(M_{j}\) for \(s_{j}\) in the training data. The model prediction is therefore \(a_{j,k}\), where \(k=\operatorname*{argmax}_{x}P(M_{j}=\{a_{j,x}\})\).
#### 4.1.2. Contextual: Probability Distribution Conditioned on One Previous Response
In this method, we use a conditional probability distribution. Given that \(a_{i}\in A(s_{i})\) was the selected option for \(s_{i}\), the probability that \(a_{j,k}\in A(s_{j})\) will be selected for \(s_{j}\), where \(i<j\) is given by
\[P(M_{j}=\{a_{j,k}\}|M_{i}=\{a_{i}\})=\frac{P(M_{j}=\{a_{j,k}\}\text{ and }M_{i}=\{a_{i}\})}{P(M_{i}=\{a_{i}\})} \tag{2}\]
Intuitively, if the answer to \(s_{i}\) provides some information about the answer to \(s_{j}\), then \(H(M_{j})>H(M_{j}|M_{i})\), where \(H(x)\) is the entropy of the probability distribution over the values of random variable x.
\[H(x)=-\sum_{i=1}^{n}p(x_{i})log_{2}p(x_{i}) \tag{3}\]
For example, we observe in our dataset that if the user's response for the question "After applying a facial moisturizer, how do you want your skin to feel?" is known, the entropy of probability distribution over the options for "What type of weather do you usually live in?" is much lower than the prior. We find the conditional probability distribution with the lowest entropy as follows:
\[\operatorname*{argmin}_{i}H(M_{j}|M_{i}) \tag{4}\]
The model prediction is therefore \(a_{j,k}\) where \(k=\operatorname*{argmax}_{x}P(M_{j}=\{a_{j,x}\}|M_{i}=\{a_{i}\})\).
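The conditional variant can be sketched in the same style: build the empirical table for Eq. (2), choose the conditioning question that minimizes the conditional entropy (Eqs. 3-4), and take the argmax option. The data layout and names below are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

def conditional_table(train_rows, qi, qj):
    """train_rows: list of dicts {question id -> selected option}, one per participant.
    Returns P(M_j | M_i) as a nested dict (Eq. 2) and the conditional entropy H(M_j | M_i)."""
    joint = defaultdict(Counter)
    for row in train_rows:
        joint[row[qi]][row[qj]] += 1
    n = sum(sum(c.values()) for c in joint.values())
    table, h_cond = {}, 0.0
    for ai, counts in joint.items():
        total_i = sum(counts.values())
        table[ai] = {aj: c / total_i for aj, c in counts.items()}
        h_i = -sum(p * math.log2(p) for p in table[ai].values())  # Eq. (3) for this conditioning value
        h_cond += (total_i / n) * h_i
    return table, h_cond

def predict_contextual(train_rows, qj, earlier_answers):
    # Pick the earlier question minimizing H(M_j | M_i) (Eq. 4), then take the argmax option.
    qi, (table, _) = min(((qi, conditional_table(train_rows, qi, qj)) for qi in earlier_answers),
                         key=lambda item: item[1][1])
    dist = table.get(earlier_answers[qi], {})
    return max(dist, key=dist.get) if dist else None
```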
### Fine-tuning Pre-Trained Language Models
In this approach, we treat response matching as a binary classification task. Given a \(\mathtt{<}\)conversational response, answer option\(\mathtt{>}\) pair, we train the model to output a score that indicates their semantic similarity. The final prediction is the option with the highest score.
#### 4.2.1. Fine-Tuned BERT Classifier
In this method, we fine-tune BERT (Bertson et al., 2017) to output a score of either 1 (when conversational response and answer option match) or 0 (when conversational response and answer option don't match) when given the conversational response and answer option as input. We employ a linear layer on top of the [CLS] token for classification.
We predict the semantic similarity score of a user response \(u_{j}\) with all the possible answer options for the question \(s_{j}\) as follows:
\[S_{j,k}=BERT([CLS]\|u_{j}\|[SEP]\|a_{j,k})\ \ \forall a_{j,k}\in A(q=s_{j}) \tag{5}\]
The model prediction is \(a_{j,k}\), where \(k=\operatorname*{argmax}_{x}S_{j,x}\).
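A minimal sketch of this pairwise scorer is shown below, assuming the HuggingFace `transformers` library; the fine-tuning loop and hyperparameters are omitted, and the two-way head shown here is illustrative.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def score(response, option):
    # Encodes "[CLS] response [SEP] option [SEP]" and returns the matching probability.
    enc = tokenizer(response, option, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def predict(response, options):
    # The predicted answer is the option with the highest score (Eq. 5).
    return max(options, key=lambda a: score(response, a))
```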
#### 4.2.2. Incorporating Conversation Context
We include conversation context in the model input in addition to the conversational response. We append each conversational utterance with either a "[SYS]" or a "[USR]" token depending on whether it is a system or a user utterance. Let \(t_{j}\) represent the concatenation of the \(j^{th}\) system and user utterances.
\[t_{j}=[SYS]\|s_{j}\|[USR]\|u_{j}\]
We experiment with three settings:
* Context of the current turn \(j\): \[S_{j,k}=BERT([CLS]\|t_{j}\|[SEP]\|a_{j,k})\ \ \forall a_{j,k}\in A(q=s_{j})\]
* Context of 1 previous turn: \[S_{j,k}=BERT([CLS]\|t_{j-1}\|t_{j}\|[SEP]\|a_{j,k})\ \ \forall a_{j,k}\in A(q=s_{j})\]
* Context of 2 previous turns: \[S_{j,k}=BERT([CLS]\|t_{j-2}\|t_{j-1}\|t_{j}\|[SEP]\|a_{j,k})\ \ \forall a_{j,k}\in A(q=s_{j})\]
The model prediction is \(a_{j,k}\), where \(k=\operatorname*{argmax}_{x}S_{j,x}\).
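Continuing the sketch above (and reusing its `tokenizer` and `model`), the context variants only change how the left segment of the input is built; since `[SYS]` and `[USR]` are new tokens, the tokenizer and the embedding matrix are assumed to be extended accordingly.

```python
def build_context_input(turns, n_prev=0):
    """turns: list of (system_utterance, user_response) pairs ending at the current turn j.
    Returns t_{j-n_prev} ... t_j, used in place of u_j as the first segment of the pair."""
    window = turns[-(n_prev + 1):]
    return " ".join(f"[SYS] {s} [USR] {u}" for s, u in window)

# Register the markers as new tokens and resize the embeddings accordingly.
tokenizer.add_tokens(["[SYS]", "[USR]"])
model.resize_token_embeddings(len(tokenizer))
```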
#### 4.2.3. Incorporating Dialog Pre-training
We hypothesize that a model pre-trained on dialog tasks would perform better than a generic pre-trained language model in our conversational setting. In this approach, we fine-tune TOD-BERT instead of BERT. TOD-BERT has the same architecture as BERT but has been pre-trained on various dialog tasks.
#### 4.2.4. Incorporating External Knowledge
BERT often does not capture the semantic relatedness of domain-specific terms. To bridge the vocabulary gap between the user responses and the questionnaire answer options, we concatenate the one-hop ConceptNet1 neighbors of all terms in the user input to the input text. We exclude infrequent neighbors to avoid adding noise to the input.
Footnote 1: [https://conceptnet.io/](https://conceptnet.io/)
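A rough sketch of this augmentation step is given below, assuming the public ConceptNet REST API (`api.conceptnet.io`) and its JSON edge format; the endpoint usage, the frequency filter, and the whitespace tokenization are simplifying assumptions.

```python
import requests

def conceptnet_neighbors(term, limit=20):
    # Query ConceptNet for edges touching the term and collect English neighbor labels.
    url = f"http://api.conceptnet.io/c/en/{term.lower()}"
    edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
    neighbors = set()
    for e in edges:
        for node in (e.get("start", {}), e.get("end", {})):
            label = node.get("label", "").lower()
            if node.get("language") == "en" and label and label != term.lower():
                neighbors.add(label)
    return neighbors

def augment(text, counts=None, min_count=2):
    # counts: corpus-level frequency of each neighbor, used to drop rare (noisy) ones.
    extra = []
    for tok in text.split():
        for nb in conceptnet_neighbors(tok):
            if counts is None or counts.get(nb, 0) >= min_count:
                extra.append(nb)
    return text + " " + " ".join(extra)
```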
## 5. Experimental Setting
We use 5-fold cross-validation for our experiments. We treat each fold as the test set one by one and use the other folds as train and validation. We report the average of results from all test folds.
### Models Compared
* Probabilistic Baseline: We use the conditional probability-based model described in §4.1.2 as the baseline.
* BERT: We fine-tuned bert-base-uncased2 on our dataset of <conversational response, answer option> pairs (§4.2.1). We experiment with different lengths of conversation context. Results are reported for the best version, which only considers the current conversation turn.
* TOD-BERT: We also tried a BERT model pre-trained on conversational data. Results are reported for TOD-BERT (described in §4.2.3) fine-tuned on our task with 2 previous turns of context.
* BERT-CNNet: Since our dataset is domain-specific and has a different vocabulary than BERT's pre-training data, we also experiment with augmenting input to BERT with domain-specific keywords. Again, results are reported for the best version that only considers the current conversation turn. (§4.2.4)
Footnote 2: [https://github.com/google-research/bert/blob/master/README.md](https://github.com/google-research/bert/blob/master/README.md)
### Evaluation Metric
For this paper, we train and evaluate our models on single-option questions. Therefore, we use accuracy as the evaluation metric, which we define as the fraction of test questions where the model assigns the highest score to the true answer option based on the ground truth data described in §3.2.
### Human Annotation
We observed that in the user study, in the _Web-based Questionnaire_, the participants often selected options that they hadn't implied in their _Conversational Interface_ responses. To measure how difficult response interpretation is for humans, we recruited annotators from MTurk who were familiar with and interested in the domain. We asked them to choose the most appropriate option for each question, given the chat responses from the original user study participant. Four different workers annotated each question for a sample of 27 conversations. We use Fleiss Kappa (Fleiss, 2016) to measure inter-annotator agreement. The average agreement is 0.46, which indicates moderate agreement. However, it varied significantly across different questions, as Table 2 shows. The average agreement between the MTurkers and original respondents is 0.44, which is also moderate.
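For reference, the per-question agreement can be computed with an off-the-shelf implementation; the sketch below assumes `statsmodels` and that each rater's choice is encoded as an option index.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def question_kappa(annotations):
    """annotations: array of shape (n_conversations, n_raters) holding the option index
    each rater chose for this question."""
    table, _ = aggregate_raters(np.asarray(annotations))  # (items x categories) count table
    return fleiss_kappa(table)
```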
## 6. Results and Discussion
We first report the main results of different methods for response interpretation, then discuss findings about user behavior, and finally, investigate the factors that make the task challenging.
### Response Interpretation Results
Table 1 shows the accuracy of all the models on single-option questions. We consider an improvement to be statistically significant if the significance test on each fold returns a p-value < 0.05. Significant results are marked in bold.
The accuracy of TOD-BERT is not significantly higher than our probabilistic baseline. This is because the conversations in our setting are different from the goal-oriented dialog that TOD-BERT is pre-trained on. The model is not able to transfer its knowledge to response interpretation in a structured interview.
Fine-tuned BERT and BERT-CNNET significantly outperform the baseline.
The highest value of accuracy we achieve is 64%, which is relatively low. As discussed in §5.3, the inter-annotator agreement is lower on some questions, indicating that intent prediction on these questions is difficult even for humans. We obtain higher accuracy values by excluding questions with low inter-annotator agreement from our test set. We set the threshold for low agreement as 0.4, which is standard for Fleiss Kappa. This leaves us with 7 single-option questions out of 12. Table 1 also shows these results.
### Tradeoff Between Effort and Information
Table 2 summarizes our findings from the user study. The average dwell time (time elapsed between the question's appearance and the user's first click/keypress) for a question was comparable for _Web-based Questionnaire_ and _Conversational Interface_. The input time was much longer for _Conversational Interface_ because participants had to type their responses instead of selecting options with clicks. On average, the _Conversational Interface_ response has more words than the _Web-based Questionnaire_ response. In some cases, the extra effort on the users' part resulted in more informative answers. For example, for the questions "When do you moisturize your face?" (Q4) and "How do you handle unexpected stress?" (Q8), the _Conversational Interface_ response is significantly more verbose than the _Web-based Questionnaire_ response. These questions elicited descriptive answers that were more informative in _Conversational Interface_.
On the other hand, for the question "What kind of hair day are you having today?" (Q5), users were more likely to give a response like "good" or "not bad". Although the longest conversational response for this question had 13 words, on average _Web-based Questionnaire_ elicited more informative responses.
\begin{table}
\begin{tabular}{l l c c c} Model & \multicolumn{2}{c}{Overall} & \multicolumn{2}{c}{On High-\(\kappa\) Questions} \\ & Accuracy & Std & Accuracy & Std \\ \hline Prob. Baseline & 0.51 & 0.02 & 0.53 & 0.02 \\ BERT & **0.64 (+24.0 \%)** & 0.04 & **0.71 (+34 \%)** & 0.04 \\ TOD-BERT & 0.55 (+7.6 \%) & 0.04 & 0.63 (+18.8 \%) & 0.03 \\ BERT-CNNET & **0.62 (+20.9 \%)** & 0.02 & **0.68 (+28.3 \%)** & 0.05 \\ \end{tabular}
\end{table}
Table 1. Main Results: Accuracy on Single-Option Questions
We also observe that 26% of the _Conversational Interface_ responses annotated by MTurkers were mapped to "None of the above", which indicates that _Conversational Interface_ often collects information that is entirely absent from _Web-based Questionnaire_ options. The highest number of "None of the above" responses were observed for questions "After applying a facial moisturizer, how do you like your skin to feel?" (Q10) and "How would you describe your natural hair?" (Q12). This might have been because these questions can be interpreted in different ways, but the options list is small and specific.
### Error Analysis
Table 3 shows the correlation of four question features with the best model's accuracy (Accuracy) and with the inter-annotator agreement (\(\kappa\)) for that question. Contrary to what we expected, a larger number of options does not make the task harder for the model or for human annotators. The number of words in the conversational response (Conv. Response Length) correlates more negatively with \(\kappa\) than with Accuracy. That might be because longer responses could partially match more than one answer option and cause disagreement. A longer dwell time indicates that the question is hard to understand or hard to answer; it correlates more negatively with Accuracy than with \(\kappa\). This might be because it is harder for the model to handle unusual responses it hasn't been trained on.
Thus, we can see that the model fails to generalize to unusual responses. Another case where we observe high error is when matching responses requires some logical reasoning. For example, for the question "Which ONE benefit are you primarily looking for, over time, from your facial moisturizer products?", the user responds by saying "The main benefit I'm looking for is smooth/healthy looking skin that isn't oily or shiny". However, the choices in the questionnaire are "Maintaining the appearance/feel of my skin", "Enhance my skin's appearance/feel", "Fix my skin's problem areas" and "Prevent future skin problems". The model would have to infer that the user's response implies that they want to enhance their skin's appearance. The domain-specific nature of the task also remains a source of error. ConceptNet does not have high enough coverage of skincare terms.
## 7. Conclusion and Future Work
In summary, we conducted a study to investigate the difference in responses between _Conversational Interface_ and _Web-based Questionnaire_. We find that _Conversational Interface_ has the advantage of eliciting an answer that might not be one of the options but is informative of the user's preferences. We also see that _Conversational Interface_ elicits descriptive, more informative answers from users for open-ended questions. On the other hand, questions that ask for specific information and have a comprehensive list of options can be answered more efficiently using _Web-based Questionnaire_.
Moreover, we investigated the problem of automated response interpretation in a conversational structured interview setting, which is more challenging than the traditional intent classification task. We compared three complementary approaches to this problem, namely incorporating historical information, conversation context, and external knowledge for more effective semantic matching, all using state-of-the-art contextual large language models to represent the conversational and structured data. Our results demonstrate that effectively incorporating contextual information in structured interviews is harder than in other types of dialog. Although responses to previous interview questions can contain clues to infer future responses, we could not capture them by concatenating previous turns with the input to our model. A possible future research direction would be to create a more effective context representation for structured interviews. Another direction of research we plan to pursue is automatically adapting the conversation to ask clarification questions if the participants' response is unclear or to even skip some questions if the participant already provided information matching one of the options. Such an adaptive system can also use a combination of open-ended conversational interaction and suggesting options when necessary. Lastly, incorporating external knowledge in the absence of an appropriate knowledge graph, possibly using unstructured text from our domain, is another direction we plan to explore.
###### Acknowledgements.
This research was supported by Procter & Gamble.
|
2310.20411 | A New Kilohertz Gravitational-Wave Feature from Rapidly Rotating
Core-Collapse Supernovae | We present self-consistent three-dimensional core-collapse supernova
simulations of a rotating $20M_\odot$ progenitor model with various initial
angular velocities from $0.0$ to $4.0$ rad s$^{-1}$ using a smoothed particle
hydrodynamics code, SPHYNX, and a grid-based hydrodynamics code, FLASH. We
identify two strong gravitational-wave features, with peak frequencies of
$\sim300$ Hz and $\sim1.3$ kHz in the first $100$ ms postbounce. We demonstrate
that these two features are associated with the $m=1$ deformation from the
proto-neutron star (PNS) modulation induced by the low-$T/|W|$ instability,
regardless of the simulation code. The $300$ Hz feature is present in models
with an initial angular velocity between $1.0$ and $4.0$ rad s$^{-1}$, while
the $1.3$ kHz feature is present only in a narrower range, from $1.5$ to $3.5$
rad s$^{-1}$. We show that the $1.3$ kHz signal originates from the
high-density inner core of the PNS, and the $m=1$ deformation triggers a strong
asymmetric distribution of electron anti-neutrinos. In addition to the $300$ Hz
and $1.3$ kHz features, we also observe one weaker but noticeable
gravitational-wave feature from higher-order modes in the range between $1.5$
and $3.5$ rad s$^{-1}$. Its peak frequency is around $800$ Hz initially and
gradually increases to $900-1000$ Hz. Therefore, in addition to the
gravitational bounce signal, the detection of the $300$ Hz, $1.3$ kHz, the
higher-order mode, and even the related asymmetric emission of neutrinos, could
provide additional diagnostics to estimate the initial angular velocity of a
collapsing core. | He-Feng Hsieh, RubΓ©n CabezΓ³n, Li-Ting Ma, Kuo-Chuan Pan | 2023-10-31T12:33:28Z | http://arxiv.org/abs/2310.20411v1 | # A New Kilohertz Gravitational-Wave Feature from Rapidly Rotating Core-Collapse Supernovae
###### Abstract
We present self-consistent three-dimensional core-collapse supernova simulations of a rotating \(20M_{\odot}\) progenitor model with various initial angular velocities from \(0.0\) to \(4.0\) rad s\({}^{-1}\) using a smoothed particle hydrodynamics code, SPHYNX, and a grid-based hydrodynamics code, FLASH. We identify two strong gravitational-wave features, with peak frequencies of \(\sim 300\) Hz and \(\sim 1.3\) kHz in the first \(100\) ms postbounce. We demonstrate that these two features are associated with the \(m=1\) deformation from the proto-neutron star (PNS) modulation induced by the low-\(T/|W|\) instability, regardless of the simulation code. The \(300\) Hz feature is present in models with an initial angular velocity between \(1.0\) and \(4.0\) rad s\({}^{-1}\), while the \(1.3\) kHz feature is present only in a narrower range, from \(1.5\) to \(3.5\) rad s\({}^{-1}\). We show that the \(1.3\) kHz signal originates from the high-density inner core of the PNS, and the \(m=1\) deformation triggers a strong asymmetric distribution of electron anti-neutrinos. In addition to the \(300\) Hz and \(1.3\) kHz features, we also observe one weaker but noticeable gravitational-wave feature from higher-order modes in the range between \(1.5\) and \(3.5\) rad s\({}^{-1}\). Its peak frequency is around \(800\) Hz initially and gradually increases to \(900-1000\) Hz. Therefore, in addition to the gravitational bounce signal, the detection of the \(300\) Hz, \(1.3\) kHz, the higher-order mode, and even the related asymmetric emission of neutrinos, could provide additional diagnostics to estimate the initial angular velocity of a collapsing core.
Core-collapse supernovae (304); Gravitational wave astronomy (675); Hydrodynamical simulations (767); Neutron stars (1108)
calculations exclusively with the s12 progenitor. Pajkos et al. (2019, 2021) conducted a similar analysis for different progenitor masses and performed longer simulations during the accretion phase. However, in their latest work, the analysis is done with approximately fifty 2D simulations and only four 3D simulations. Three-dimensional fluid instabilities, such as standing acceleration shock instability (SASI; Blondin et al., 2003) could play an important role in the GW features (Kuroda et al., 2016), and the spiral modes of SASI can only be developed in 3D simulations. Therefore, extending the previous works to full 3D calculations is crucial and necessary to investigate the realistic GW features from CCSNe.
Unlike simulations of binary neutron star mergers, it is important to highlight that gravitational waveforms produced from CCSN simulations rely on grid-based hydrodynamics codes. Additionally, the majority of these grid-based hydrodynamics codes adopt either Cartesian coordinates (O'Connor and Couch, 2018; Kuroda et al., 2020; Pan et al., 2021; Shibagaki et al., 2021) or spherical coordinates (Burrows et al., 2019; Powell et al., 2021; Takiwaki et al., 2021). Spherical grids require an inner boundary condition and/or a fixed proto-neutron star (PNS) center. Such limitations might restrict the GW emissions from regions close to the coordinate center or the inner boundary. On the other hand, Cartesian grids have no inner boundary conditions at the coordinate center but can generate \(m=4\) perturbations (e.g., Ott et al., 2007) and therefore, artificial GW emissions from these grid effects may pollute the GW features. Furthermore, sound waves might oscillate and bounce back between grid-refinement boundaries, which might also induce artificial GW emissions. An alternative to avoid these artifacts is to use a pure meshless Lagrangian method, such as smoothed particle hydrodynamics (SPH), which does not suffer from these grid effects. Furthermore, GW emissions can be recovered directly from the particle distribution and physical magnitudes carried by each particle, without time derivatives involved, producing clean and reliable waveforms (Centrella and McMillan, 1993).
An additional reason for using an SPH code in this work is its intrinsic conservation properties and, in particular, angular momentum conservation. Given that we are simulating from slow to fast rotators, angular momentum conservation and transfer are key aspects to be confident in our results. SPH codes are constructed so that their momentum and energy equations are pairwise equivalent between particles and their neighbors (Monaghan, 2005). This, theoretically, ensures conservation to machine precision. Note, nevertheless, that in real simulations, this conservation is somewhat degraded by the inclusion of self-gravity, but it is still maintained at consistently high accuracy.
Therefore, in this work, we investigate the GW features of a wide range of initial angular velocities of a \(20M_{\odot}\) progenitor, using the SPH code SPHYNX (Cabezon et al., 2017; Garcia-Senz et al., 2022). A subset of these simulations was also repeated with the grid-based adaptive mesh refinement (AMR) code, FLASH (Fryxell et al., 2000; Dubey et al., 2008), to ensure that our conclusions are code-independent, and when differences appear, we could understand them within the framework of the usage of different hydrodynamics solvers.
In the following section, we introduce the numerical methods that we employ. Namely, the hydrodynamics codes, the physics included in them, the initial setup including a list of all calculated models, and the extraction methods of the GW emissions and the spherical harmonic modes. Section 3 presents our results, focusing on the features in the GW emissions, their dependence on the initial angular velocity, and their potential observability. Finally, Section 4 offers a summary and conclusion of the results.
## 2 Numerical Methods and Models
We describe the numerical codes and the corresponding setup of our simulations in Section 2.1. In Section 2.2, we present the initial conditions of the investigated supernova progenitor and the rotation setup. Finally, we describe the analysis methods for obtaining the GW emissions and spherical harmonic coefficients in Sections 2.3 and 2.4, respectively.
### Hydrodynamics Codes
SPHYNX1 is a state-of-the-art SPH code, with accurate gradient evaluation (Garcia-Senz et al., 2012; Rosswog, 2015; Garcia-Senz et al., 2022), pairing-resistant interpolating kernels (Cabezon et al., 2008), generalized volume elements (Hopkins, 2013; Saitoh and Makino, 2013), adaptive artificial viscosity via switches (Read et al., 2010), and adaptive spatial and temporal resolution. To our knowledge, this is currently the only SPH code that can simulate CCSNe with spectral neutrino treatment.
Footnote 1: [https://astro.physik.unibas.ch/sphynx](https://astro.physik.unibas.ch/sphynx)
In the context of CCSN, SPHYNX was used for the development and comparison of a spectral neutrino leakage scheme (Perego et al., 2014). It was also used in a code-comparison work (Cabezon et al., 2018), where it showed particularly good agreement with other Eulerian codes, such as FLASH, at least for the collapse phase and the first \(50\) ms postbounce, and for a series of CCSN simulations with several physics implementations and different progenitors.
The implementation of neutrino treatment in SPHYNX is based on the Isotropic Diffusion Source Approximation (IDSA; Liebendorfer et al., 2009) and is coupled with a parametrized deleptonization (Liebendorfer, 2005) for the collapse phase. Self-gravity is calculated using a Barnes-Hut
algorithm on an octree, and it also includes an effective GR potential correction (Case A in Marek et al., 2006), adopted to replace the monopole term of the Newtonian gravitational potential. Finally, we use the LS220 equation of state from stellarcollapse.org (Lattimer and Swesty, 1991; O'Connor and Ott, 2010) in all of our simulations. For further details on the coupling of all these physics modules, we refer the reader to Section 2.4 in Cabezon et al. (2018).
For comparisons, we additionally perform two simulations with the grid-based Eulerian hydrodynamics code FLASH2 that includes the IDSA for neutrino transport (Pan et al., 2016, 2018, 2019, 2021). We also use the same effective GR potential and LS220 equation of state.
Footnote 2: [https://flash.rochester.edu](https://flash.rochester.edu)
### Initial Setup
In this work, we employ the same \(20M_{\odot}\) spherically symmetric progenitor star with solar metallicity as the initial conditions for all simulations. The initial radial profiles of density, temperature, electron fraction, and radial velocity are adopted from the 1D, s20 model provided by Woosley and Heger (2007). The initial 3D particle distribution for the SPH calculations was achieved following the original 1D density profile but with a random angular arrangement. Next, the resulting 3D distribution was relaxed, allowing the particles to move only tangentially (i.e., at a fixed radius), which smoothed out spurious clumps of particles and provided clean, smooth initial profiles. A small external pressure (\(P_{\rm ext}=1.8\times 10^{22}\) dyn cm\({}^{-2}\)) was added to all particles, similar to the pressure exerted by the outer layers of the star, which are not included in the simulation. This prevented particles in the very low-density region from experiencing an artificially high gradient of pressure and escaping from the system. Note that \(P_{\rm ext}\) is only relevant to a very thin layer of particles in the outer region of the simulated section of the star, as the pressure profile ranges between \(10^{24}-10^{33}\) dyn cm\({}^{-2}\) for the vast majority of the domain. Regarding neutrino transport, we use \(20\) energy bins, logarithmically spaced from \(3\) until \(300\) MeV for the electron-flavor neutrinos. The heavier \(\mu\) and \(\tau\) neutrinos and anti-neutrinos are considered as a single species that cools the system via a leakage scheme.
Finally, an artificial tangential velocity was imposed to induce rotation. We assigned the initial rotational profile as follows,
\[\Omega(r)=\frac{\Omega_{0}}{1+\left(\frac{r}{A}\right)^{2}}, \tag{1}\]
where \(\Omega\) is the angular velocity as a function of the spherical radius \(r\), \(A=1.03376\times 10^{8}\) cm is a characteristic length that is taken from the fitting formula provided in Pajkos et al. (2019) (see Figure 2 therein), and \(\Omega_{0}\) is the angular velocity at the center of the star, of which we investigate several values ranging from \(0.0\) to \(4.0\) rad s\({}^{-1}\).
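For concreteness, the following is a minimal sketch of how a rotation law of this form can be imposed on a particle distribution. It is an illustration under our own assumptions (a \(z\)-aligned rotation axis and hypothetical variable names), not the actual SPHYNX setup routine:

```python
import numpy as np

OMEGA0 = 3.0    # central angular velocity [rad/s] (e.g. an Omega_0 = 3 model)
A = 1.03376e8   # characteristic length [cm], from the fit of Pajkos et al. (2019)

def add_rotation(pos, vel, omega0=OMEGA0, a=A):
    """Add a purely tangential velocity about the z-axis following Equation (1).
    pos, vel: (N, 3) arrays of particle positions [cm] and velocities [cm/s]."""
    r = np.linalg.norm(pos, axis=1)            # spherical radius of each particle
    omega = omega0 / (1.0 + (r / a) ** 2)      # Equation (1)
    vel = vel.copy()
    vel[:, 0] += -omega * pos[:, 1]            # (Omega x r)_x
    vel[:, 1] += omega * pos[:, 0]             # (Omega x r)_y
    return vel
```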
We perform the SPH simulations at three different resolutions, with \(200\)k, \(675\)k, and \(1600\)k particles covering the central \(10^{4}\) km of the progenitor star. The corresponding resolutions within a radius of \(10\) km are about \(600\), \(400\), and \(300\) m after the core bounce. The primary analysis and results presented in this paper are based on the high-resolution simulations with \(1600\)k particles. Furthermore, we use the simulations with \(675\)k particles to investigate a larger parameter space of \(\Omega_{0}\).
The Cartesian grid setup in our FLASH simulations closely follows the setup described in Pan et al. (2021). Therefore, we just give here a brief review of our setup. We use a three-dimensional simulation box that covers the inner \(10^{4}\) km of the CCSN progenitor, and employ nine levels of AMR, which yield a maximum spatial resolution of \(488\) m at the highest AMR level. The central \(r<120\) km sphere has the highest spatial resolution, while we reduce the AMR level as we move farther away from the center of the progenitor to save computing time. This results in an effective angular resolution \(\sim 0^{\circ}.2-0^{\circ}.4\). For the outer boundary conditions, we use a power-law profile that depends on the spherical radius. In addition, we adopt the same \(20\) neutrino energy bins as in SPHYNX, spaced logarithmically from \(3\) to \(300\) MeV for the electron flavor neutrinos and a leakage scheme for the \(\mu\) and \(\tau\) neutrinos. We use the same s20 progenitor from Woosley and Heger (2007) and take the same rotational profile as in Equation (1) with \(\Omega_{0}=2\) and \(3\) rad s\({}^{-1}\).
Table 1 summarizes the relevant information for all our models. Letters F and S in the model name denote the code used to perform the simulation, FLASH or SPHYNX, respectively. The number in the model names shows the adopted value of the initial central angular velocity, \(\Omega_{0}\). Finally, L and H stand for low and high resolution, respectively, for SPHYNX models. The column \(f_{\rm peak}\) shows the resulting peak GW frequency around kHz in the time interval \(t_{\rm pb}=10-100\) ms. We provide a range of peak frequencies for models that exhibit noticeable variations in their peak frequencies over time. The sixth column of Table 1 indicates the ratio between rotational energy and gravitational energy, \(T/|W|\), calculated for the regions with \(\rho\geq 10^{6}\) g cm\({}^{-3}\) at the initial time. The last column \(\Delta x_{\rm min,20ms}\) shows the highest resolution for each model, which is defined as the smallest smoothing length for SPHYNX models or the smallest cell size for FLASH models, at \(20\) ms postbounce.
### Extracting the Gravitational Waves
In order to extract the GW emissions in SPHYNX, we adopt the transverse-traceless gauge to cover the far zone of the source. Hence, we have only two polarizations of the
amplitude of the GWs:
\[h_{+} = \frac{1}{D}\frac{G}{c^{4}}\left(\ddot{I}_{\theta\theta}-\ddot{I}_{\phi\phi}\right), \tag{2}\] \[h_{\times} = \frac{1}{D}\frac{G}{c^{4}}\ddot{I}_{\theta\phi}\,, \tag{3}\]
where \(G\) is the gravitational constant, \(c\) is the speed of light, and \(D\) is the distance to the source. The reduced quadrupole moment (\(I_{lm}\)) is defined in Cartesian coordinates as

\[I_{lm}=\int\rho\left(x_{l}x_{m}-\frac{1}{3}\delta_{lm}x_{k}x^{k}\right)d^{3}x\,, \tag{4}\]

where \(l\) and \(m\) label the Cartesian components of the position vector \(\mathbf{x}\), \(\delta_{lm}\) is the Kronecker delta, and \(\rho\) is the local density.
The components of the reduced quadrupole in spherical coordinates are related to the Cartesian by (Oohara et al., 1997),
\[\ddot{I}_{\theta\theta} = \left(\ddot{I}_{xx}\cos^{2}\phi+\ddot{I}_{yy}\sin^{2}\phi+\ddot{I}_{xy}\sin 2\phi\right)\cos^{2}\theta+\ddot{I}_{zz}\sin^{2}\theta-\left(\ddot{I}_{xz}\cos\phi+\ddot{I}_{yz}\sin\phi\right)\sin 2\theta\,, \tag{5}\]
\[\ddot{I}_{\phi\phi} = \ddot{I}_{xx}\sin^{2}\phi+\ddot{I}_{yy}\cos^{2}\phi-\ddot{I}_{xy}\sin 2\phi\,, \tag{6}\]
\[\ddot{I}_{\theta\phi} = -\frac{1}{2}\left(\ddot{I}_{xx}-\ddot{I}_{yy}\right)\cos\theta\sin 2\phi+\ddot{I}_{xy}\cos\theta\cos 2\phi-\left(\ddot{I}_{xz}\sin\phi-\ddot{I}_{yz}\cos\phi\right)\sin\theta\,. \tag{7}\]
Therefore, to obtain the GW waveforms we compute the six independent components of \(\ddot{I}_{lm}\) in Cartesian coordinates and then transform them into spherical coordinates using Equations (5), (6), and (7). Finally, we substitute these components into Equations (2) and (3) to obtain the final waveforms.
There are different methods for evaluating the components of \(\ddot{\mathbf{t}}_{lm}\). Nevertheless, time derivatives cause numerical difficulties due to two main reasons: the numerical noise introduced by the discretization and the magnification of the high-frequency components of the noise. To avoid these problems, we opted to use the method proposed by Centrella & McMillan (1993), which takes advantage of the Lagrangian nature of SPH and is similar to the stress formula of Finn & Evans (1990).
A discretized version of Equation (4) is
\[I_{lm}=\sum_{i}m^{i}\left[x_{l}^{i}x_{m}^{i}-\frac{1}{3}\delta_{lm}\mathbf{r}^{i}\cdot\mathbf{r}^{i}\right]\,, \tag{8}\]
where the subscripts refer to the three Cartesian components, the superscripts label the SPH particles, \(m^{i}\) is the mass of each particle, and the summation is over all the particles of the simulation. Taking the second time derivative of Equation (8) is straightforward, obtaining
\[\ddot{I}_{lm} = \sum_{i}m^{i}\left[2v_{l}^{i}v_{m}^{i}+a_{l}^{i}x_{m}^{i}+x_{l}^{i}a_{m}^{i}-\frac{2}{3}\delta_{lm}\left(\mathbf{v}^{i}\cdot\mathbf{v}^{i}+\mathbf{r}^{i}\cdot\mathbf{a}^{i}\right)\right]\,. \tag{9}\]
The \(x\), \(v\), and \(a\) terms are the components of the position, velocity, and acceleration vectors of particle \(i\), respectively. Using Equation (9) in Equations (2)\(-\)(7), we can obtain the amplitude of both polarizations of the GW emission directly from quantities calculated by the SPH code, without having to compute explicit second time derivatives.
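As an illustration of this particle-based extraction, the sketch below evaluates Equations (2)\(-\)(9) for arrays of particle masses, positions, velocities, and accelerations. The function names are hypothetical and the coefficients follow the equations exactly as written above; it is not code taken from SPHYNX:

```python
import numpy as np

G = 6.674e-8    # gravitational constant [cgs]
C = 2.998e10    # speed of light [cm/s]

def quadrupole_second_derivative(m, x, v, a):
    """Equation (9): second time derivative of the reduced quadrupole moment.
    m: (N,) particle masses; x, v, a: (N, 3) positions, velocities, accelerations."""
    ddI = np.empty((3, 3))
    aux = np.sum(m * (np.einsum('ij,ij->i', v, v) + np.einsum('ij,ij->i', x, a)))
    for l in range(3):
        for k in range(3):
            ddI[l, k] = np.sum(m * (2.0 * v[:, l] * v[:, k]
                                    + a[:, l] * x[:, k] + x[:, l] * a[:, k]))
    ddI -= (2.0 / 3.0) * aux * np.eye(3)
    return ddI

def gw_strain(ddI, theta, phi, D):
    """Equations (2)-(7): h_+ and h_x for an observer at angles (theta, phi)
    and distance D, with the coefficients exactly as written above."""
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    Itt = ((ddI[0, 0] * cp**2 + ddI[1, 1] * sp**2 + ddI[0, 1] * np.sin(2 * phi)) * ct**2
           + ddI[2, 2] * st**2
           - (ddI[0, 2] * cp + ddI[1, 2] * sp) * np.sin(2 * theta))
    Ipp = ddI[0, 0] * sp**2 + ddI[1, 1] * cp**2 - ddI[0, 1] * np.sin(2 * phi)
    Itp = (-0.5 * (ddI[0, 0] - ddI[1, 1]) * ct * np.sin(2 * phi)
           + ddI[0, 1] * ct * np.cos(2 * phi)
           - (ddI[0, 2] * sp - ddI[1, 2] * cp) * st)
    pref = G / (C**4 * D)
    return pref * (Itt - Ipp), pref * Itp
```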
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Model & Code & \(\Omega_{0}\) & \# of particles & \(f_{\rm peak}\) & \(T/|W|_{\rm int}\) & \(\Delta x_{\rm min,20ms}\) \\ & & [rad s\({}^{-1}\)] & [\(10^{3}\)] & [kHz] & [\(10^{-3}\)] & [m] \\ \hline S00 & SPHYNX & \(0.0\) & \(675\) & - & \(4.84\times 10^{-10}\) & 372.3 \\ S05 & SPHYNX & \(0.5\) & \(675\) & - & \(0.12\) & 372.4 \\ S10 & SPHYNX & \(1.0\) & \(675\) & - & \(0.48\) & 374.4 \\ S15 & SPHYNX & \(1.5\) & \(675\) & - & \(1.09\) & 378.5 \\ S20 & SPHYNX & \(2.0\) & \(675\) & \(1.35\) & \(1.94\) & 383.7 \\ S25 & SPHYNX & \(2.5\) & \(675\) & \(1.29\) & \(3.02\) & 389.3 \\ S30 & SPHYNX & \(3.0\) & \(675\) & \(1.29\) & \(4.35\) & 395.4 \\ S35 & SPHYNX & \(3.5\) & \(675\) & \(1.34\) & \(5.93\) & 403.0 \\ S40 & SPHYNX & \(4.0\) & \(675\) & - & \(7.74\) & 410.9 \\ S30L & SPHYNX & \(3.0\) & \(200\) & \(1.28\) & \(4.35\) & 591.9 \\ S20H & SPHYNX & \(2.0\) & \(1600\) & \(1.38\) & \(1.94\) & 288.1 \\ S30H & SPHYNX & \(3.0\) & \(1600\) & \(1.12-1.32\) & \(4.36\) & 296.9 \\ \hline \hline F20 & FLASH & \(2.0\) & - & \(1.01-1.48\) & \(1.89\) & 488.3 \\ F30 & FLASH & \(3.0\) & - & \(1.21-1.39\) & \(4.25\) & 488.3 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the models. Columns from left to right show the model name, numerical code, initial angular velocity (\(\Omega_{0}\)), number of particles, peak frequency of the kHz signal, the value of \(T/|W|\) at the initial time, and the highest spatial resolution at \(20\) ms postbounce (\(\Delta x_{\rm min,20ms}\)).
FLASH also uses the quadrupole formula to extract the GW emissions (Equations 2\(-\)7), but unlike SPHYNX, the GW calculations in FLASH follow the strain formulation to compute the first time derivative of the quadrupole moment. The second time derivative of the quadrupole moment is evaluated by a finite difference method via post-processing (Pan et al., 2018, 2021).
### Spherical Harmonic Mode Analysis
It is known that, for a moderately rotating core, the low-\(T/|W|\) instability could develop after the core bounce, which induces an \(m=1\) or \(m=2\) deformation that results in quasi-sinusoidal oscillations of the GW emissions (Ott et al., 2005; Shibagaki et al., 2020). To investigate the relationship between the GW signals and the deformation induced by the low-\(T/|W|\) instability, we apply a spherical harmonic decomposition to the resulting fluid distributions. Following Burrows et al. (2012), we evaluate the coefficient of each mode using
\[a_{lm}=\frac{(-1)^{|m|}}{\sqrt{4\pi(2l+1)}}\frac{\sum wR(\theta,\phi)Y_{l}^{m} (\theta,\phi)}{\sum w}, \tag{10}\]
where \(R\) is the distance from the PNS center, and \(w\) is a weighting function taken as the volume. For SPHYNX, we set the weighting function to the associated volume of the SPH particles, \(w=m^{i}/\rho^{i}\), where \(m^{i}\) and \(\rho^{i}\) are the mass and density carried by the SPH particles, respectively. The orthonormal harmonic basis functions, \(Y_{l}^{m}\), are expressed as
\[Y_{l}^{m}(\theta,\phi)=\begin{cases}\sqrt{2}N_{l}^{m}P_{l}^{m}(\cos\theta) \cos m\phi&m>0,\\ N_{l}^{0}P_{l}^{0}(\cos\theta)&m=0,\\ \sqrt{2}N_{l}^{|m|}P_{l}^{|m|}(\cos\theta)\sin|m|\phi&m<0,\end{cases} \tag{11}\]
where
\[N_{l}^{m}=\sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}} \tag{12}\]
and \(P_{l}^{m}(\cos\theta)\) is the associated Legendre polynomial.
In this work, we focus on the dipole mode (\(l=1\), \(m=1\)) and quadrupole mode (\(l=2,m=2\)), which represent the \(m=1\) and \(m=2\) deformations, respectively.
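A minimal sketch of this decomposition, assuming SciPy's associated Legendre functions (whose Condon-Shortley phase convention may require an extra sign adjustment relative to Equation (11)), is given below; the function names are ours:

```python
import numpy as np
from math import factorial
from scipy.special import lpmv

def real_Ylm(l, m, theta, phi):
    """Real orthonormal harmonics of Equations (11)-(12). Note that
    scipy.special.lpmv includes the Condon-Shortley phase, which may need
    to be compensated depending on the convention adopted for P_l^m."""
    am = abs(m)
    N = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - am) / factorial(l + am))
    P = lpmv(am, l, np.cos(theta))
    if m > 0:
        return np.sqrt(2.0) * N * P * np.cos(m * phi)
    if m == 0:
        return N * P
    return np.sqrt(2.0) * N * P * np.sin(am * phi)

def a_lm(l, m, R, theta, phi, w):
    """Equation (10): weighted decomposition of the distance R from the PNS
    center; for SPH particles the weight w is the particle volume m_i/rho_i."""
    num = np.sum(w * R * real_Ylm(l, m, theta, phi))
    return (-1) ** abs(m) / np.sqrt(4 * np.pi * (2 * l + 1)) * num / np.sum(w)
```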
## 3 Results
### Dynamics Overview
We first summarize and compare the evolution of the collapsar in our simulations performed with SPHYNX and FLASH. A broad overview of the time evolution of several PNS quantities is shown in Figure 1 for models with \(\Omega_{0}=2\) and \(3\) rad s\({}^{-1}\) (models S20H, S30H, F20, and F30). The panels in the first row of Figure 1 show the time evolution of the central density and averaged shock radius, respectively, which are roughly in agreement between both hydrodynamics codes. As pointed out in Cabezon et al. (2018), SPHYNX simulations generally produce slightly higher central densities than FLASH simulations. This is due to the different definitions of the central density. In FLASH, the central density is taken from the cell-averaged density of the single densest cell (\(\Delta x=488\) m), while for SPHYNX, it is evaluated as an average of the \(50\) densest SPH particles, with an equivalent cell width of less than \(300\) m. From the averaged shock radius evolution, we find that models with both initial angular velocities exhibit a very rapid explosion, where the averaged shock radii exceed \(500\) km within \(70-100\) ms postbounce. This is much faster than the previous work with slower initial angular velocities (Takiwaki et al., 2016; Andresen et al., 2019; Pan et al., 2021; Shibagaki et al., 2021), but comparable with the explosion times in Kuroda et al. (2014).
In the second row of Figure 1, we show the mass accretion rates measured at \(r=200\) and \(500\) km, which are in good agreement between codes in the models with \(\Omega_{0}=2\) rad s\({}^{-1}\) (solid lines). The sudden drops in the mass accretion rates occur when the shock front reaches the measured radii at \(t_{\rm pb}\sim 40\) and \(80\) ms, respectively, transitioning from infall to outflow. The \(\Omega_{0}=3\) cases (dashed lines) show behavior similar to that of the \(\Omega_{0}=2\) cases, though the deviations become larger when the shock front reaches the measured radii. The more substantial negative mass accretion rate in the SPHYNX model (S30H) implies a stronger explosion compared to the FLASH model (F30). This interpretation is also supported by the entropy distribution. Figure 2 shows slice color plots of the entropy distribution at \(50\) ms postbounce along the XZ-plane, perpendicular to the equatorial plane with the \(z\)-axis representing the rotation axis, for models S30H and F30. Both models have similar shock expansion and convective regions, but S30H has a more spherically symmetric shock expansion at around \(300\) km due to poor shock resolution (\(\Delta x\sim 25\) km) in this low-density region. In consequence, in model F30 the negative accretion rate contributed by the outflow near the equatorial plane is negated by the inflow through the pole, leading to a less negative mass accretion rate compared to model S30H. Furthermore, the drops in the mass accretion in model S30H occur later than in model F30 due to the slower averaged shock expansion.
The panels in the third row of Figure 1 show the evolution of the enclosed mass within a density contour of \(10^{11}\) g cm\({}^{-3}\) (\(M_{11}\)) and \(10^{13}\) g cm\({}^{-3}\) (\(M_{13}\)), respectively, and the corresponding isodensity radii (\(R_{11}\) and \(R_{13}\)) in the last row. The radius at \(R_{11}\) usually coincides with the neutrino sphere, and therefore, the enclosed mass \(M_{11}\) is used to represent the PNS mass. The enclosed mass \(M_{13}\) roughly describes the PNS inner core. In the models with \(\Omega_{0}=2\) rad s\({}^{-1}\), model S20H has a slightly lower enclosed mass \(M_{11}\) compared to
model F20. This is mainly due to the lower mass accretion rates in SPHYNX around core bounce, while the PNS radius \(R_{11}\) remains similar. In the cases of \(\Omega_{0}=3\) rad s\({}^{-1}\), the deviation in the PNS mass \(M_{11}\) increases further after \(t_{\rm pb}=40\) ms when the low-\(T/|W|\) instability starts to develop and generates stronger asymmetric accretion at later stages postbounce. On the other hand, the PNS inner core mass \(M_{13}\) shows the opposite behavior. In both angular velocities, SPHYNX models have more compact inner cores than their FLASH counterparts due to better core resolutions in models S20H and S30H (see Table 1).
Typically, when there is no rotation, the central density and the PNS inner core mass \(M_{13}\) are expected to increase over time due to ongoing mass accretion and PNS cooling. However, in the case of rotating progenitors, we observe that model S20H has a slight increase in the inner core mass \(M_{13}\) within \(100\) ms postbounce, whereas model F20 shows almost no increase within the first \(40\) ms postbounce, followed by a slight decrease in the inner core mass. These distinctions may be due to differences in angular momentum transport and conservation between SPHYNX and FLASH as described in Cabezon et al. (2018). In the models with \(\Omega_{0}=3\) rad s\({}^{-1}\), when angular momentum keeps propagating inward, the centrifugal force eventually overcomes the gravitational force and, therefore, induces a decrease in the PNS inner core mass \(M_{13}\). Both models S30H and F30 behave similarly but with different decreasing rates.

Figure 1: Panels from left to right and from top to bottom describe the time evolution of the central density (\(\rho_{c}\)), the averaged shock radius (\(R_{\rm sh}\)), the mass accretion rate measured at \(r=200\) km and \(500\) km (\(\dot{M}_{200}\) and \(\dot{M}_{500}\)), the enclosed mass within a density contour of \(10^{11}\) g cm\({}^{-3}\) and \(10^{13}\) g cm\({}^{-3}\) (\(M_{11}\) and \(M_{13}\)), and the averaged isodensity radii corresponding to \(M_{11}\) and \(M_{13}\) (\(R_{11}\) and \(R_{13}\)) in the models S20H, S30H, F20, and F30.
In addition, we find that the FLASH simulations show a notable neutron star kick and a modulation motion (\(v_{\rm pns}\sim 240\) and \(360\) km s\({}^{-1}\) in models F20 and F30, respectively), due to an asymmetric explosion together with numerical artifacts related to angular momentum non-conservation. These motions affect the evolution of the PNS in FLASH simulations, especially at late times. On the other hand, the neutron star kick velocities in S20H and S30H are less than \(1\) km s\({}^{-1}\). In the following sections, we show that these differences in the compactness and motion of the PNS inner core affect the GW signatures of the low-\(T/|W|\) instability.
A comparison of the radial profiles of various fluid and neutrino quantities between models S30H and F30 at different postbounce times is shown in Figure 3. The spherically-averaged profiles of density, electron fraction, and entropy are consistent as functions of radius. The lower angular momentum observed in the PNS inner core of model F30 is attributed to degraded angular momentum conservation, caused primarily by the inherent numerical dissipation of grid-based hydrodynamics codes (as discussed in Cabezon et al., 2018). Regarding the electron-type neutrino quantities, model S30H has a more compact and hotter PNS inner core than model F30 (see Figure 1), which leads to a higher production rate and energy of electron neutrinos within the PNS inner core in model S30H. Overall, the radial profiles of the neutrino quantities at different times are consistent between models S30H and F30.
### GW Features and Origin in Rapidly Rotating CCSNe
In this section, we discuss the GW features in the models with \(\Omega_{0}=2\) and \(3\) rad s\({}^{-1}\), using SPHYNX with \(1600\)k particles (models S20H and S30H) and FLASH (models F20 and F30). The top panels in Figure 4 show the plus mode of GW emissions from models S20H and S30H, seen along the equatorial plane at a distance of \(10\) kpc. The GW strain is shown at the top, and the corresponding spectrogram is displayed at the bottom in each panel. The spectrogram is computed using the wavelet analysis implemented in the PyCWT3 code, a Python package based on Torrence & Compo (1998), where the power spectrum is divided by the wavelet scales to rectify the energy bias (Liu et al., 2007). In addition, we divide the resulting amplitude by the square root of the sampling rate to ensure consistent strength between different sampling rates.
Footnote 3: [https://github.com/regeik/pycwt](https://github.com/regeik/pycwt)
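A sketch of this spectrogram construction, assuming the PyCWT interface described in its documentation, is shown below; the normalization choices follow our reading of the procedure above and may differ in detail from the authors' script:

```python
import numpy as np
import pycwt

def gw_spectrogram(h, dt):
    """Morlet wavelet spectrogram of a strain time series h sampled at step dt,
    with the power rectified by the scales (Liu et al. 2007) and the amplitude
    divided by the square root of the sampling rate, as described above."""
    mother = pycwt.Morlet(6)
    wave, scales, freqs, coi, fft, fftfreqs = pycwt.cwt(h, dt, wavelet=mother)
    power = np.abs(wave) ** 2 / scales[:, None]     # scale-rectified power
    amp = np.sqrt(power) / np.sqrt(1.0 / dt)        # divide by sqrt(sampling rate)
    return freqs, amp
```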
First, both SPHYNX models show distinctive bounce and ring-down signals in the first \(20\) ms postbounce, which have been extensively studied in previous works (e.g., Abdikamalov et al., 2014; Richers et al., 2017; Abdikamalov et al., 2022, and references therein). After the bounce and ring-down signals, we can observe that the GW emissions exhibit quasi-sinusoidal time oscillations starting at \(t_{\rm pb}=20-30\) ms in both models. We find that this GW feature is simultaneous with the so-called low-\(T/|W|\) instability (Saijo et al., 2003; Ott et al., 2005, 2007; Scheidegger et al., 2008, 2010; Kuroda et al., 2014; Shibagaki et al., 2020). The low-\(T/|W|\) instability is a non-axisymmetric rotational instability that develops in cores with a high degree of differential rotation around the corotation radius, where the pattern frequency of the induced oscillation is equal to the local angular frequency of the background flow (Centrella et al., 2001; Watts et al., 2005; Saijo & Yoshida, 2006). This low-\(T/|W|\) instability can induce \(m=1\) and/or \(m=2\) deformations that lead to a time-changing quadrupole moment, which is the ultimate source of the GW emissions.

Figure 2: Entropy distribution on the XZ-plane at \(t_{\rm pb}=50\) ms in models S30H (top) and F30 (bottom). The arrows represent the velocity field.
From the spectrograms of S20H and S30H in Figure 4, we can identify two strong GW signals in the frequency ranges from \(200\) to \(400\) Hz and from \(1100\) to \(1400\) Hz. Hereafter we refer to them as the \(300\) Hz signal and the kHz signal, respectively. In the bottom panels of Figure 4, we present the GW strains obtained using the FLASH code with \(\Omega_{0}=2\) rad s\({}^{-1}\) (F20) and \(\Omega_{0}=3\) rad s\({}^{-1}\) (F30), and their corresponding spectrograms. The GW emissions in both models exhibit similar bounce, ring-down, \(300\) Hz, and kHz signals as discussed above for the SPHYNX simulations. Although the bounce and ring-down signals in models F20 and F30 are consistent with the S20H and S30H models, we can see that the \(300\) Hz and kHz signals evolve differently, especially with respect to the occurrence time and peak frequency of the kHz signal. As discussed in Section 3.1, FLASH and SPHYNX show slightly different dynamical evolutions of the PNS after core bounce (see Figure 1). In Section 3.3, we will show that the \(300\) Hz and kHz signals are correlated with the outer and inner structures of the PNS, respectively, and therefore causes the differences between FLASH and SPHYNX models.
To investigate the origin of the \(300\) Hz and kHz signals, we calculate the contributions of GW emissions from different density regions by post-processing the simulation data, using the formulae described in Section 2.3. Figure 5 shows the GW contributions from regions within density \(\rho<10^{11}\), \(10^{11}\leq\rho<10^{13}\), and \(\rho\geq 10^{13}\) g cm\({}^{-3}\) for models S30H
and F30, seen along the pole to eliminate bounce and ringdown signals. In both models, we can see that the \(300\) Hz and kHz signals emanate from separate regions. The \(300\) Hz signal is mainly from the region of \(10^{11}\leq\rho<10^{13}\) g cm\({}^{-3}\), while the kHz signal is mainly from the PNS inner core where \(\rho\geq 10^{13}\) g cm\({}^{-3}\).

Figure 3: Spherically-averaged, radial profiles of various fluid and neutrino quantities at different postbounce times for models S30H (blue) and F30 (orange). The profiles at different times are shifted cumulatively by the offset labeled in each panel, where the black dotted lines denote the corresponding zero point.
In Figure 5, we also evaluate the spherical harmonic components \(a_{11}\) and \(a_{22}\) in the regions of \(10^{11}\leq\rho<10^{13}\) and \(\rho\geq 10^{13}\) g cm\({}^{-3}\) (defined in Section 2.4), and overlap their spectrograms in white and red lines, respectively. The components \(a_{11}\) and \(a_{22}\) represent the \(m=1\) and \(m=2\) deformations in the specific density region, and their contours indicate the mode frequency of the corresponding deformation, \(f_{\rm mode,}m\). The mode frequency is related to the pattern frequency by \(f_{\rm pat,}m=f_{\rm mode,}m/m\)(Watts et al., 2005). Note that the mode frequency of \(a_{11}\) is doubled in Figure 5 to facilitate comparison between the components \(a_{11}\) and \(a_{22}\). Comparing the spherical harmonic modes with the \(300\) Hz and kHz GW signals reveals that the \(a_{22}\) component coincides with both. This is because the dominant quadrupole component of GW emissions stems from the \(l=2,m=2\) mode, making it a natural source for both signals. On the other hand, the pattern frequencies of \(a_{11}\) and \(a_{22}\) satisfy the relation \(f_{\rm pat,1}\simeq f_{\rm pat,2}\), indicating that the \(a_{22}\) component is a daughter mode of the \(a_{11}\) component. This infers that both the \(300\) Hz and kHz signals are associated with the \(m=1\) spiral deformation induced by the low-\(T/|W|\) instability. Shibagaki et al. (2020) conducted a full-GR CCSN simulation of a rapid-rotating \(70M_{\odot}\) progenitor. They found a transient quasi-periodic time modulation at \(450\) Hz from the \(m=1\) spiral deformation in \(50-100\) km. The \(300\) Hz signal in our models is similar to the \(450\) Hz signal in Shibagaki et al. (2020). On the other hand, the kHz signal resembles the \(\sim 930\) Hz signal found by Ott et al. (2007) in CCSN simulations of a \(20M_{\odot}\) progenitor, which is correlated with the \(m=1\) mode at \(10-15\) km, but the peak frequency is higher in our cases. In addition to the \(300\) Hz and kHz signals, some higher-order modes of GW emissions at around \(800\) Hz can be seen in Figure 5 as well. In model S30H, the \(800\) Hz signal is correlated to the kHz signal and the \(a_{22}\) component but is much weaker than the \(300\) Hz and kHz signals. In model F30, similar higher-order modes also exist between
the \(300\) Hz and kHz signals, but the interactions among these GW features are more complex than those in model S30H.

Figure 4: GW strains and spectrograms of the plus mode for the models with \(\Omega_{0}=2\) and \(3\) rad s\({}^{-1}\), using SPHYNX with \(1600\)k particles (top panels) and FLASH (bottom panels), seen along the equatorial plane at a distance of \(10\) kpc. The white line represents the doubled dynamical frequency at a density of \(10^{13}\) g cm\({}^{-3}\) (see Section 3.3 for a more detailed description).

Figure 5: GW spectrograms of the plus mode emitted from regions with density \(\rho<10^{11}\) g cm\({}^{-3}\) (left), \(10^{11}\leq\rho<10^{13}\) g cm\({}^{-3}\) (middle), and \(\rho\geq 10^{13}\) g cm\({}^{-3}\) (right) for models S30H (top) and F30 (bottom), seen along the pole at a source distance of \(10\) kpc. In the middle and right panels, the white and red contours show the spectrogram of the normalized spherical harmonic coefficients \(a_{11}\) and \(a_{22}\), respectively, where the frequency of \(a_{11}\) is doubled to ease comparison.

Figure 6: Normalized mode amplitude at a radius of \(15\) km for models S20H and S30H (top), and F20 and F30 (bottom).
To investigate the potential impact of \(m=4\) perturbations induced by the Cartesian grid discretization in FLASH simulations, we also perform an analysis of the azimuthal density modes in the equatorial plane by computing the Fourier amplitude (Centrella et al., 2001)
\[C_{m}(\varpi)=\frac{1}{2\pi}\int_{0}^{2\pi}\rho(\varpi,z=0)e^{im\phi}d\phi, \tag{13}\]
where \(\varpi\) is the cylindrical radius relative to the PNS center. Figure 6 shows the normalized mode amplitude, \(|C_{m}|/C_{0}\), evaluated at a radius of \(\varpi=15\) km. We note that the relative difference in the mean density, \(C_{0}\), is below \(30\%\) before \(t_{\rm pb}\sim 35\) ms between SPHYNX and FLASH simulations and then increases to \(80\%\) due to the PNS kicks in FLASH simulations, making the normalized mode amplitudes in FLASH simulations higher than the SPHYNX counterpart simulations. Furthermore, we can see that in FLASH simulations, the \(m=4\) mode is the dominant mode in the early postbounce phase (\(t_{\rm pb}\lesssim 10\) ms) due to grid effects, while it is always subdominant in SPHYNX simulations, as expected for a meshless method. However, there is no clear relation between the \(m=4\) grid mode and the other \(m=\{1,2,3\}\) modes in FLASH simulations, and thus both F20 and F30 models remain dynamically stable to grid perturbations. This is also consistent with the earlier work done with the full GR simulations via the Whisky code in Ott et al. (2007).
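For illustration, Equation (13) can be evaluated numerically from densities sampled on a ring of fixed cylindrical radius; the sketch below is our own simplified version (uniform sampling in \(\phi\) is assumed):

```python
import numpy as np

def azimuthal_mode_ratios(rho_ring, m_max=4):
    """Equation (13) on a ring of fixed cylindrical radius in the equatorial
    plane: rho_ring contains densities at equally spaced azimuthal angles.
    Returns |C_m|/C_0 for m = 1 .. m_max."""
    nphi = len(rho_ring)
    phi = 2.0 * np.pi * np.arange(nphi) / nphi
    C0 = np.mean(rho_ring)                              # m = 0 (mean density)
    Cm = [np.mean(rho_ring * np.exp(1j * m * phi)) for m in range(1, m_max + 1)]
    return np.abs(Cm) / C0
```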
Recently, Takiwaki et al. (2021) proposed that the low-\(T/|W|\) instability in CCSN environments could be triggered by the Rossby waves growing near the convective zone. In their explanation, such instability requires having a corotation radius to coincide with the convective layer in the PNS. Therefore, it is interesting to investigate whether the \(300\) Hz and kHz low-\(T/|W|\) signals in our models are satisfied with the same criteria. Figure 7 shows the radial profiles of the rotational, Brunt-Vaisala, and Lamb frequencies, evaluated in the equatorial plane, for models F30 and S30H at \(t_{\rm pb}=30\) ms. We evaluate the Lamb frequency via
\[f_{\rm Lamb}=\frac{1}{2\pi}\frac{\sqrt{l(l+1)}c_{s}}{r}, \tag{14}\]
and the Brunt-Vaisala frequency using the Ledoux criterion (Buras et al., 2006; Ott et al., 2013)
\[f_{\rm BV}=\frac{\text{sign}(C_{L})}{2\pi}\sqrt{\left|\frac{C_{L}}{\rho}\frac{ d\Phi}{dr}\right|}, \tag{15}\]
where \(l\) is taken to be \(1\), \(c_{s}\) is the local speed of sound, and \(\Phi\) is the local gravitational potential, for which the approximation \(d\Phi/dr\sim-GM(r)/r^{2}\) is adopted. The Ledoux criterion reads (Ledoux, 1947)
\[C_{L}=-\left(\frac{\partial\rho}{\partial P}\right)_{s,Y_{l}}\left[\left( \frac{\partial P}{\partial s}\right)_{\rho,Y_{l}}\left(\frac{ds}{dr}\right)+ \left(\frac{\partial P}{\partial Y_{l}}\right)_{\rho,s}\left(\frac{dY_{l}}{dr }\right)\right], \tag{16}\]
where we approximate the lepton fraction by the electron fraction, \(Y_{l}\sim Y_{e}\), for simplicity. In model F30, two corotation radii at \(5-10\) km and \(30\) km, which are described by the intersection of the pattern frequencies of the \(300\) Hz and kHz signals (black lines) and the rotational frequency (blue line), are in convective regions with negative Brunt-Vaisala frequency. This is consistent with the statement proposed by Takiwaki et al. (2021). On the other hand, in the case of model S30H, only one corotation radius of the kHz signal at \(5-10\) km coincides with the edge of a convective layer. However, we note that the Brunt-Vaisala frequency oscillates rapidly between \(30\) and \(40\) km in S30H, suggesting that a convective zone could be developing in that region as well.
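A sketch of how Equations (14) and (15) can be evaluated from radial profiles is given below; the Ledoux discriminant \(C_{L}\) of Equation (16) is assumed to be precomputed from the EOS partial derivatives, and the names are hypothetical:

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cgs]

def lamb_frequency(cs, r, l=1):
    """Equation (14): Lamb frequency from the local sound speed cs at radius r."""
    return np.sqrt(l * (l + 1)) * cs / (2.0 * np.pi * r)

def brunt_vaisala_frequency(C_L, rho, M_enc, r):
    """Equation (15), using dPhi/dr ~ -G M(r)/r^2 as in the text; C_L is the
    Ledoux discriminant of Equation (16), precomputed from the EOS tables."""
    dPhi_dr = -G * M_enc / r**2
    return np.sign(C_L) / (2.0 * np.pi) * np.sqrt(np.abs(C_L / rho * dPhi_dr))
```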
In addition to the Rossby wave scenario, it is worth mentioning that the area where the kHz signal originates, approximately around the isodensity radius \(R_{13}\) (\(r\sim 20\) km), aligns with the region where electron anti-neutrinos are predominantly generated and remain coupled with the matter. Therefore, the density variation driven by the \(m=1\) deformation can affect the production of electron anti-neutrinos in the PNS inner core. In the IDSA neutrino treatment, electron-type neutrinos are decomposed into trapped and free-streaming neutrinos. Among those, only trapped neutrinos are coupled with the fluid. Since our implementation of the IDSA includes the \(\mathcal{O}(v/c)\) terms of the neutrino pressure (see Equation 24 in Liebendorfer et al., 2009), the neutrino pressure gradient could contribute to the development of the \(m=1\) deformation. To establish this, we apply the spherical harmonic decomposition to the mean energy of trapped electron anti-neutrinos through a variant of Equation (10):
\[a_{lm}=\frac{(-1)^{|m|}}{\sqrt{4\pi(2l+1)}}\frac{\sum w\left\langle E_{\bar{ \nu}_{e}}\right\rangle(\theta,\phi)Y_{l}^{m}(\theta,\phi)}{\sum w}, \tag{17}\]
where \(\left\langle E_{\bar{\nu}_{e}}\right\rangle=Z_{\bar{\nu}_{e}}/Y_{\bar{\nu}_{e}}\) is the mean energy of trapped electron anti-neutrinos. We use the mean energy \(\left\langle E_{\bar{\nu}_{e}}\right\rangle\), instead of \(Y_{\bar{\nu}_{e}}\) or \(Z_{\bar{\nu}_{e}}\), to avoid the sharp decline in electron anti-neutrino fractions, which could introduce strong numerical noise. We evaluate the spherical harmonic components in the shell \(R_{13}\pm 10\) km. Figure 8 shows the spectrograms of the \(a_{11}\) and \(a_{22}\) components for models S30H and F30, revealing that the distributions of trapped electron anti-neutrinos are in good agreement with not only the kHz signal but also the \(300\) Hz signal, although the asymmetric electron anti-neutrino distribution requires the \(m=1\) deformation from the low-\(T/|W|\) instability in order to grow. In the following section, we show that there are distinct phase differences between the \(m=1\)
deformation and the asymmetric neutrino distribution in the PNS inner core. This suggests that neutrino pressure could play a role in promoting the development of \(m=1\) deformation and exhibiting unique GW signatures in the kHz window. We also find that the distribution of electron neutrinos has a similar asymmetric effect but is less pronounced than electron anti-neutrinos. This is because electron neutrinos have been produced since the collapse.
### GW Dependence on Initial Angular Velocity
To investigate the dependence of the \(300\) Hz and kHz low-\(T/|W|\) signals on the initial angular velocity, we perform a series of SPHYNX simulations with \(675\)k particles (see Table 1). Figure 9 shows the time evolution of the enclosed mass (\(M_{13}\)), the corresponding isodensity radius (\(R_{13}\)), and the doubled dynamical frequency, \(2f_{\rm dyn}\sim 2\sqrt{GM_{13}/R_{13}^{3}}/2\pi\), for the domain with \(\rho\geq 10^{13}\) g cm\({}^{-3}\) for models with \(\Omega_{0}\) ranging from \(0.0\) to \(4.0\) rad s\({}^{-1}\), in steps of \(0.5\) rad s\({}^{-1}\). In addition, we also plot the same quantities for models S20H and S30H in dashed lines for comparison. First, we can see that the PNS inner core structures are consistent across different particle numbers, and the trend basically follows the description provided in Section 3.1 for models with \(\Omega_{0}=2\) and \(3\) rad s\({}^{-1}\): higher initial angular velocities result in a decrease of the enclosed mass and isodensity radius, while the doubled dynamical frequency remains in the kHz range, relatively unchanged along the simulations.
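As a rough illustration of why this characteristic frequency lands in the kHz range, the doubled dynamical frequency can be estimated as follows (the numerical values in the comment are illustrative only, not taken from Figure 9):

```python
import numpy as np

G = 6.674e-8      # gravitational constant [cgs]
MSUN = 1.989e33   # solar mass [g]

def doubled_dynamical_frequency(M13_msun, R13_km):
    """2 f_dyn ~ 2 sqrt(G M_13 / R_13^3) / (2 pi), in Hz."""
    M = M13_msun * MSUN
    R = R13_km * 1.0e5
    return 2.0 * np.sqrt(G * M / R**3) / (2.0 * np.pi)

# Illustrative values only: M13 ~ 0.6 Msun and R13 ~ 18 km give about 1.2 kHz.
print(doubled_dynamical_frequency(0.6, 18.0))
```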
Figure 10 shows the corresponding GW strain and spectrogram for these models, assuming viewing from the pole and at a distance of \(10\) kpc. We note that in Figure 10, the color ranges are fixed at the same values for all panels to facilitate comparison between different initial angular velocities. We first focus on the models S20 and S30 to ensure that our
simulations with \(675\)k particles accurately capture the main GW features observed in our simulations with \(1600\)k particles (models S20H and S30H). By comparing spectrograms in Figure 4 and Figure 10, we find that both the \(300\) Hz and kHz signals in models S20 and S30 are similar to those of models S20H and S30H. Since the main low-\(T/|W|\) features discussed in our high-resolution runs are also captured in the simulations with \(675\)k particles, we can conclude that we are able to conduct a parameter study of the initial angular velocity at a considerably lower computational cost using this resolution.

Figure 7: Radial profiles of rotational (blue), Brunt-Väisälä (orange), and Lamb (green) frequencies at \(t_{\rm pb}=30\) ms for models F30 (left) and S30H (right). The black solid lines denote the half-peak frequencies of the \(300\) Hz and kHz gravitational-wave signals.

Figure 8: Spectrograms of the normalized spherical harmonic coefficients, \(a_{11}\) (white, doubled frequency) and \(a_{22}\) (red), of the mean energy of trapped electron anti-neutrinos for models S30H (left) and F30 (right). The GW spectrograms of the plus mode, seen along the pole at \(10\) kpc, are shown in filled contour to ease comparison.
As a result, from Figure 10 we find that the \(300\) Hz signal starts to appear when the initial angular velocity \(\Omega_{0}\geq 1\) rad s\({}^{-1}\) and it persists consistently across different angular velocities. However, the kHz signal appears only in a range within \(1.5\leq\Omega_{0}\leq 3.5\) rad s\({}^{-1}\), though the kHz signal in model S35 occurred in a short duration between \(30-60\) ms postbounce. In addition, we overlay the doubled dynamical frequency on the spectrograms in Figure 10. We find that the kHz signal, once it appears, has a peak frequency described by the doubled dynamical frequency at the density of \(10^{13}\) g cm\({}^{-3}\). As we have discussed in Section 3.2, the \(m=1\) deformation associated with the kHz signal originates from the PNS inner core where \(\rho>10^{13}\) g cm\({}^{-3}\), and its pattern frequency (\(a_{11}\) in Figure 5) should be related to the characteristic frequency, the dynamical frequency here, in this density region. Therefore, it is not surprising that the kHz signal can be described by the doubled dynamical frequency at the density of \(10^{13}\) g cm\({}^{-3}\).
To visualize the \(m=1\) deformation, we follow Takiwaki et al. (2016) to show the density fluctuation in the equatorial plane, \((\rho-\bar{\rho})/\bar{\rho}\), at \(t_{\rm pb}=50\) ms in Figure 11, where \(\bar{\rho}\) is the azimuthally averaged density. Different isodensity contours are plotted as dashed lines as well. We can see that the \(m=1\) deformation can develop in several density regions. In the density region of \(10^{11}\lesssim\rho\lesssim 10^{13}\) g cm\({}^{-3}\), all models with \(\Omega_{0}\geq 1.0\) rad s\({}^{-1}\) show the spiral structure in the density fluctuation plot, but not in models with \(\Omega_{0}\leq 0.5\) rad s\({}^{-1}\). This is consistent with the presence of the \(300\) Hz signal in Figure 10 and also confirms that the \(300\) Hz signal is indeed emitted mainly from this density region in Figure 5. On the other hand, we can see that in the region of \(\rho>10^{13}\) g cm\({}^{-3}\), where the kHz signal emanated, the \(m=1\) deformation develops only in the models with \(1.5\leq\Omega_{0}\leq 3.5\) rad s\({}^{-1}\). This supports our hypothesis that the kHz signal is associated with the low-\(T/|W|\) instability. As a consequence, the strength of the kHz signal is correlated with the presence and amplitude of the \(m=1\) deformation in the high-density region of \(\rho>10^{13}\) g cm\({}^{-3}\).
Figure 12 shows the mean energy of trapped electron anti-neutrinos for models with different initial angular velocities. We can see that the mean energy of electron anti-neutrinos also exhibits prominent asymmetric distributions at \(R_{13}\sim 20\) km in the models with \(1.5\leq\Omega_{0}\leq 3.5\) rad s\({}^{-1}\), which is consistent with the models for which the \(m=1\) density deformation and the kHz signal are present. This alignment once again highlights the correlation between the \(m=1\) density deformation and the neutrino distribution. For neutrino treatments that include the \(\mathcal{O}(v/c)\) terms, the density asymmetries can lead to an asymmetric neutrino pressure, potentially contributing to the development of the \(m=1\) density deformation in the PNS inner core. Comparing Figure 11 and Figure 12, we find that the phase of \(m=1\) density deformation leads the phase of the mean energy variation around \(R_{13}\) (the system rotates counter-clockwise). This suggests that the high-density region produces more energetic neutrinos, subsequently heating the surrounding matter and facilitating
expansion, which eventually results in a low-density region later.

Figure 9: Time evolution of the PNS mass with density \(\rho\geq 10^{13}\) g cm\({}^{-3}\) (top), the corresponding averaged isodensity radius (middle), and the doubled dynamical frequency (bottom) in the SPHYNX simulations with \(675\)k particles (solid lines). Different colors represent simulations with different initial angular velocities. Dashed lines show the counterpart models (S20H and S30H) with \(1600\)k particles for comparison.

Figure 10: GW strains and spectrograms of the plus mode seen along the pole at a distance of \(10\) kpc. Different panels represent the SPHYNX \(675\)k particle simulations with different initial angular velocity. The white lines represent the doubled dynamical frequency for the domain with \(\rho\geq 10^{13}\) g cm\({}^{-3}\) in each model.
A similar effect but with a lower modulation frequency has been proposed by Takiwaki and Kotake (2018). They suggest that the neutrino emissions have a time modulation similar to the GW frequency from the low-\(T/|W|\) instability (\(\sim 100-300\) Hz) and could be detectable by the Hyper-Kamiokande and the IceCube detectors. It is also expected that the asymmetric neutrino distributions associated with the kHz GW signal should produce noticeable time modulations on the neutrino emissions. However, in our current implementation of the IDSA, the free-streaming neutrinos are averaged over angles and therefore cannot be directly evaluated without additional approximations.
Figure 11: Normalized density fluctuation in the equatorial plane at \(t_{\rm pb}=50\) ms for models using SPHYNX with \(675\)k particles and different initial angular velocities. The average density, \(\bar{\rho}\), is taken over the azimuthal direction in the plane. The black dashed curves denote the isodensity contours at \(\rho=10^{11}\), \(10^{12}\), \(10^{13}\), and \(10^{14}\) g cm\({}^{-3}\).

Figure 12: Mean energy of trapped electron anti-neutrino distribution on the XZ-plane at \(t_{\rm pb}=50\) ms for models using SPHYNX with \(675\)k particles and different initial angular velocities. The red and blue curves respectively illustrate positive and negative density fluctuations centered around zero. The white dashed curves denote the isodensity contours at \(\rho=10^{12}\), \(10^{13}\), and \(10^{14}\) g cm\({}^{-3}\).

### Detectability of GW Signals
In this section, we discuss the detectability of the GW features from rapid-rotating CCSNe using the current ground-based GW detectors. Figure 13 shows the amplitude spectral density (ASD) of the plus mode of GW emissions between \(t_{\rm pb}=-10\) and \(100\) ms in the SPHYNX simulations with \(675\)k particles, seen along the pole and assumed at a distance \(10\) kpc. Note that the bounce and ring-down signals are not present in this viewing angle which makes it easier to focus on the GW signals related to the low-\(T/|W|\) instability. The sensitivity curves of the GW detectors Advanced LIGO, Advanced Virgo, and KAGRA (Abbott et al., 2020) are plotted as black lines for reference. It is clear from Figure 13 that both the \(300\) Hz and kHz signals are detectable by the current ground-based GW detectors at our assumed source distance, and they are among the strongest signals in this time period. The peak frequencies of the \(300\) Hz and kHz signals fall within narrow ranges between \(210-300\) Hz and \(1280-1350\) Hz, respectively, and are not sensitive to the initial angular velocity whenever these signals are present.
In addition, it is worth noting that there is one weaker but noticeable GW signal associated with the higher-order mode at around \(800\) Hz, as discussed in Section 3.2. By examining Figures 10 and 13, we can see that the peak frequency of this higher-order mode signal increases gradually after \(t_{\rm pb}=70-100\) ms, which leads to a secondary peak around \(900-1000\) Hz in the ASD. Among these GW features, the \(300\) Hz signal is the most robust signal from the low-\(T/|W|\) instability and can be excited when the initial angular velocity is higher than \(1.0\) rad s\({}^{-1}\). The kHz signal will appear when the initial angular velocity is within the range \(1.5\leq\Omega_{0}\leq 3.5\) rad s\({}^{-1}\). The cases with \(\Omega_{0}=2.5\) and \(3.0\) rad s\({}^{-1}\) have the strongest emissions, suggesting a resonance frequency at around \(2.5-3.0\) rad s\({}^{-1}\). The higher-order mode signal, which peaked around \(800\) Hz, appears in a similar range as the kHz signal (\(1.5\leq\Omega_{0}\leq 3.5\) rad s\({}^{-1}\)) but does not show a clear resonance frequency. Therefore, if we could detect the higher-order mode signal in addition to the \(300\) Hz and kHz signals, these would provide additional constraints on narrowing down the initial angular velocity of a collapsar.
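For readers who wish to reproduce this kind of comparison from a strain time series, a simple one-sided periodogram estimate of the ASD can be computed as sketched below; the exact windowing and normalization used for Figure 13 are not specified in the text, so this is an assumption:

```python
import numpy as np

def amplitude_spectral_density(h, dt):
    """One-sided periodogram estimate of the ASD of a strain time series:
    ASD(f) = |h~(f)| * sqrt(2 / T), with h~ the (approximate) continuous
    Fourier transform. Windowing is omitted for simplicity."""
    n = len(h)
    duration = n * dt
    htilde = np.fft.rfft(h) * dt
    freqs = np.fft.rfftfreq(n, dt)
    return freqs, np.abs(htilde) * np.sqrt(2.0 / duration)
```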
In Figure 14, we compare the ASD of GW emissions of SPHYNX simulations with different numbers of particles, using \(\Omega_{0}=3.0\) rad s\({}^{-1}\). We also plot the ASD of model F30 for comparison. The ASD is evaluated within the time range between \(t_{\rm pb}=-10\) and \(100\) ms. We can see that the GW emissions have a qualitatively similar spectral density distribution between the SPHYNX simulations with \(200\)k (S30L), \(675\)k (S30), and \(1600\)k particles (S30H). However, model S30L shows a kHz signal that is one order of magnitude weaker in amplitude compared to models S30 and S30H, and the peak frequency is also \(30-50\) Hz lower. We consider that these differences are due to an underresolved PNS inner core in the lowest resolution model (S30L). Even with its low resolution, the S30L model displays an ASD similar to those of the S30 and S30H models, which have converged results. This is because the primary contributions to the GW emissions originate from the densest regions of the PNS, which have the highest spatial resolution in SPHYNX. In addition, as long as the overall evolution is accurately described, the GW emissions can be adequately followed with integration over the entire domain, as discussed in Section 2.3.
Comparing codes, the peak frequencies of the \(300\) Hz and kHz low-\(T/|W|\) signals in the FLASH simulation (F30) are qualitatively similar to those of the models S30 and S30H. Nevertheless, the GW emission in the F30 model is approximately one order of magnitude stronger in the signal amplitude than the results from SPHYNX. As pointed out in Andresen et al. (2021), the grid resolution in the post-shock region will affect the numerical damping of the convective cells
and the activities of the g-mode around the PNS, which could explain the variation in the magnitudes of the GW emissions between SPHYNX and FLASH.

Figure 13: Amplitude spectral density of the plus mode of GW emissions between \(t_{\rm pb}=-10\) and \(100\) ms, for models using SPHYNX with \(675\)k particles and various initial angular velocities, seen along the pole at a distance of \(10\) kpc. The black solid, dashed, and dotted lines denote the sensitivity curves of advanced LIGO (aLIGO), advanced Virgo (AdV), and KAGRA, respectively.

Figure 14: Similar to Figure 13, but for the \(\Omega_{0}=3.0\) models using different codes and resolutions. For the comparison, only the GW emissions between \(t_{\rm pb}=-10\) and \(100\) ms are used for analysis.
## 4 Summary & Conclusions
We have performed an analysis of the development of the low-\(T/|W|\) instability and associated GW emissions in the early postbounce phase of rotating CCSNe. To this end, we performed 3D hydrodynamical core-collapse simulations of a \(20M_{\odot}\) progenitor with different initial angular velocities (\(\Omega_{0}\)), which are parametrically added to the progenitor model. In this work, we computed models with \(0\leq\Omega_{0}\leq 4\) rad s\({}^{-1}\) using the smoothed particle hydrodynamics code SPHYNX, and models with \(\Omega_{0}=2\) and \(3\) rad s\({}^{-1}\) using the grid-based hydrodynamics code FLASH for comparison.
Among our rotating models, the GW emissions exhibit two strong low-\(T/|W|\) signals after \(20\) ms postbounce in both SPHYNX and FLASH simulations. Both signals are correlated with the \(m=1\) deformation induced by the low-\(T/|W|\) instability, with peak frequencies of about \(300\) Hz and \(1.3\) kHz (called kHz here). The \(300\) Hz signal is present in models with \(\Omega_{0}\geq 1.0\) rad s\({}^{-1}\), which mainly emanated from the region of \(10^{11}\leq\rho<10^{13}\) g cm\({}^{-3}\). The peak frequency of the \(300\) Hz signal is not sensitive to the initial angular velocity, the code used, or the spatial resolution. On the other hand, the kHz signal is present only in models with a narrower range of initial angular velocity, \(1.5\leq\Omega_{0}\leq 3.5\) rad s\({}^{-1}\), originates mainly in the region of \(\rho\geq 10^{13}\) g cm\({}^{-3}\), and is highly associated with the asymmetric distribution of electron anti-neutrinos. The peak frequency of the kHz signal, once present, is not sensitive to the initial angular velocity, but is moderately affected by the dynamical evolution of the PNS inner core.
In addition to the \(300\) Hz and kHz signals, there is an additional weaker higher-order mode of GW emissions at around \(800\) Hz, emanating mainly from the region of \(10^{11}\leq\rho<10^{13}\) g cm\({}^{-3}\), in those of our models that developed the low-\(T/|W|\) instability. This higher-order signal is also correlated with the \(m=1\) deformation in the PNS, but its occurrence time is different from those of the \(300\) Hz and kHz signals. The range of initial angular velocity where the higher-order signal exists is similar to that of the kHz signal, around \(1.5\leq\Omega_{0}\leq 3.5\) rad s\({}^{-1}\). However, the peak frequency of this higher-order signal gradually increases to \(900-1000\) Hz after \(70-100\) ms postbounce, which leads to a secondary peak in the amplitude spectral density. Therefore, the initial angular velocity of the CCSN progenitors can be inferred from the detection of the higher-order signal, and the \(300\) Hz and kHz low-\(T/|W|\) signals.
We note that the GW features and their peak frequencies presented in this work can depend on the numerical methods and the physical models used, especially the kHz signal. Furthermore, the kHz features associated with the low-\(T/|W|\) instability typically originate from the asymmetric density distribution in the PNS inner core, where electron anti-neutrinos are largely produced and still coupled with the matter. The density asymmetries in this region can induce an asymmetric neutrino distribution and consequently result in an asymmetric neutrino pressure. This, in turn, could facilitate the development of density deformation. A more in-depth stability analysis is necessary and will be one of our future works.
In this work, we have not considered the effects of the magnetic field, which can affect the dynamical evolution of the PNS and the explosion mechanism (e.g., Kuroda et al., 2020; Matsumoto et al., 2020; Muller & Varma, 2020; Obergaulinger & Aloy, 2020, 2021; Raynaud et al., 2022; Powell et al., 2023). In the presence of a strong magnetic field, the low-\(T/|W|\) instability could be suppressed (Fu & Lai, 2011; Muhlberger et al., 2014). The angular momentum transport driven by the magnetorotational instability can redistribute the rotational profile (Bugli et al., 2018), which in turn affects the development of the low-\(T/|W|\) instability and weakens the associated GW signals by an order of magnitude (Bugli et al., 2023). To obtain a more complete diagnostic of the angular momentum profile from the low-\(T/|W|\) signals, 3D magnetohydrodynamics simulations should be performed.
## Acknowledgments
We are grateful to the referee for a thoughtful report and useful suggestions that helped us improve the manuscript. This work is supported by the National Center for Theoretical Sciences of Taiwan, the Ministry of Education (Higher Education Sprout Project NTU-112L104022), the National Science and Technology Council of Taiwan through grant NSTC 111-2112-M-007-037 and 112-2811-M-002-113, the Center for Informatics and Computation in Astronomy (CICA) at National Tsing Hua University through a grant from the Ministry of Education of Taiwan, and the Swiss Platform for Advanced Scientific Computing (PASC) projects SPHEXA and SPH-EXA2: Optimizing Smooth Particle Hydrodynamics for Exascale Computing. KCP is supported by the NSTC grant NSTC 111-2112-M-007-037 and 112-2112-M-007-040. This work has also been carried out as part of the SKACH consortium through funding from SERI. Simulations and data analysis have been carried out on the Taiwania supercomputer at the National Center for High-Performance Computing (NCHC) in Taiwan, on the CICA Cluster at the National Tsing Hua University, and on the scientific computing core facility sciCORE ([http://scicore.unibas.ch/](http://scicore.unibas.ch/)) at the University of Basel. Analysis and visualization of the simulation data were completed using the analysis toolkit yt (Turk et al., 2011).
SPHYNX (Cabezon et al., 2017; Garcia-Senz et al., 2022), FLASH (Fryxell et al., 2000; Dubey et al., 2008), Matplotlib (Hunter, 2007), NumPy (van der Walt et al., 2011), PyCWT (Torrence & Compo, 1998), SciPy (Virtanen et al., 2019), yt (Turk et al., 2011).
|
2309.05902 | The Game of Cycles with Sources Allowed | In this paper, we introduce a variant of Francis Su's "Game of Cycles," that
we call "Cycles with Sources." The only change to the rules is permitting nodes
to be sources, while sinks are still prohibited. Despite this minor change in
the rules, we show that even on simple games, like line graphs, there is a
great change in the outcome of optimal play, which we fully analyze using
Sprague-Grundy Theory. | Vigyan Sahai, Ravi Tripathi | 2023-09-12T01:23:37Z | http://arxiv.org/abs/2309.05902v1 | # The Game of Cycles with Sources Allowed
###### Abstract
In this paper we introduce a variant of Francis Su's "Game of Cycles," that we call "Cycles with Sources." The only change to the rules is permitting nodes to be sources, while sinks are still prohibited. Despite this minor change in the rules, we show that even on simple games, like line graphs, there is a great change in the outcome of optimal play, which we fully analyze using Sprague-Grundy Theory.
###### Contents
* 1 Introduction
* 2 Simple Examples of the Game of Cycles
* 3 Cycles with Sources on a Line
* 4 Graphs Consisting of a Single Cycle
* 5 References
* 6 Code Appendix
## 1 Introduction
Francis Su's "Game of Cycles"[7] has the property that for many classes of simple graphs, the "parity conjecture" [1] is true: that is, for a graph with an even number of markable edges, the second player can always win, and for a graph with an odd number of markable edges, the first player can always win.
In this work, we ask: how sensitive is this phenomenon to small changes in the rules of the game? In particular, in the game of cycles, any move that creates a source node or a sink node is illegal. We ask: what if only moves that create sink nodes are illegal, but moves that create source nodes are allowed? We call this variant of Cycles, "Cycles with Sources". Changing this removes the symmetry that was at play with the original cycles game, which could often be won by "mirroring" your opponent's moves. By removing this symmetry, we throw the parity conjecture into question.
We analyze games on simple graphs like lines using the Sprague-Grundy Theorem, which states that any impartial combinatorial game is equivalent to a Nim heap of size \(n\). We work with this \(n\), called the Grundy number or Nimber. Player 1 loses if the Nimber is 0, and any Nimber greater than 0 results in Player 1's victory. This is because any game with a Nimber greater than 0 can be moved into a game with Nimber 0, after which the second player is effectively the "first player" of a zero game and loses. The process of finding Nimbers is explained in further detail in Section 2; however, we recommend being familiar with Sprague-Grundy theory.
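To make this machinery concrete, the sketch below computes Grundy numbers for a classical impartial game (Kayles: remove one or two adjacent tokens from a row, possibly splitting it in two). It is purely illustrative and is not the Cycles rule-set or the program from the appendix; it shows the mex rule, the XOR rule for disjoint sums, and the kind of eventually periodic Nimber sequence (Kayles is known to become periodic with period 12) that parallels the period-17 behavior reported below.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    seen = set(values)
    n = 0
    while n in seen:
        n += 1
    return n

@lru_cache(maxsize=None)
def grundy(row):
    """Grundy number (Nimber) of a row of `row` tokens in Kayles.
    A move removes 1 or 2 adjacent tokens, splitting the row into two
    independent rows; a disjoint sum of games combines by XOR."""
    options = []
    for take in (1, 2):
        for left in range(row - take + 1):
            right = row - take - left
            options.append(grundy(left) ^ grundy(right))
    return mex(options)

print([grundy(n) for n in range(15)])
# -> [0, 1, 2, 3, 1, 4, 3, 2, 1, 4, 2, 6, 4, 1, 2]
# The first player wins exactly when the Grundy number is non-zero.
```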
Ultimately for line graphs, this simple rule change turned a game that had Nimber 1 for odd sizes and 0 for even sizes into a game whose Nimbers followed a repeated cycle of length 17, starting at games of size 19, and containing Nimbers as high as 8, as calculated by the computer program shown in the Code Appendix. In proving that the observed repetition continues forever, we established a more general principle for any game whose moves always divide it into sub-games in a certain way, where these sub-games are also similarly sub-dividable: an observed repeating sequence of Nimbers can be shown to repeat indefinitely, provided the repetition has already occurred for long enough.
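As an illustration of how such Nimbers can be computed, the following is a minimal, independent sketch (not the program in the Code Appendix) that brute-forces the Grundy values of Cycles with Sources on a line of \(n\) edges. It assumes edges encoded as 0 (unmarked), +1 (directed toward the higher-numbered node) or -1 (directed toward the lower-numbered node), and normal play on a line graph, where the last player able to move wins.

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    m = 0
    while m in values:
        m += 1
    return m

def is_sink(state, v):
    """Node v of the path is a sink if every incident edge is marked and points into v.
    Edge values: 0 = unmarked, +1 = points to the higher node, -1 = points to the lower node."""
    incident = []
    if v > 0:
        incident.append(state[v - 1] == +1)   # left edge directed into v
    if v < len(state):
        incident.append(state[v] == -1)       # right edge directed into v
    return all(incident)

@lru_cache(maxsize=None)
def grundy(state):
    """Nimber of a line-graph position in Cycles with Sources (sinks forbidden, sources allowed)."""
    options = set()
    for e, mark in enumerate(state):
        if mark != 0:
            continue
        for d in (+1, -1):
            nxt = state[:e] + (d,) + state[e + 1:]
            if is_sink(nxt, e) or is_sink(nxt, e + 1):
                continue                      # the move would create a sink: illegal
            options.add(grundy(nxt))
    return mex(options)

# Nimbers of fully unmarked lines; the full-state encoding is feasible up to roughly n = 12.
print([grundy((0,) * n) for n in range(1, 11)])  # expected to start 0,1,0,1,0,3,2,0,2,3 (Theorem 3)
```

Marking an edge only changes the status of its two endpoints, so checking those two nodes suffices to detect a newly created sink; memoization over full edge states keeps the search feasible for small \(n\).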
Cycles with Sources can also simplify the outcome of certain games. In section 4, we analyze a game consisting of a single cycle, which under the regular rule-set follows the parity conjecture. In Cycles with Sources, however, the rule change turns the game into a guaranteed victory for the second player, no matter the size of \(n\). Unlike with sources disallowed, though, the winning strategy is not as simple, as Nimbers on the second move can reach as high as 9.
In Section 2 we analyze the basic game of cycles on a line, which is already understood, but it informs our approach in the next section, in which we apply the new rule-set to a line and examine how the Nimbers of the same types of sub-games are affected. Theorem 2, proved in Section 3, then informs other simple cases with the sources-allowed rule-set, demonstrated in Section 4. Further questions about the game of cycles can be found in [3, 2].
## 2 Simple Examples of the Game of Cycles
We start with a case that has already been analyzed, specifically by Mathews [5], the standard game of cycles with its normal rule-set on a straight line of \(n\) edges (this game was also analyzed by Lin [3] with respect to whether the first or second player wins, however, without Sprague-Grundy Theory), simply to gain an understanding of the way Nimbers relate to the Nimbers of sub-games of various "types."
**Theorem 1**.: _([5]) A line segment of length \(n\) has Nimber 0 if \(n\) is even, and Nimber 1 if \(n\) is odd._
Proof.: We will prove the theorem by induction. We refer to a game of type \(i\) with \(n\) unmarked segments as \(g_{i}(n)\) from now on. All games other than \(g_{1}(n)\) are sub-games where the directed segment at the end is not restricted by the no source or sink rule. Here is our induction hypothesis for all \(n>1\):
1. \(\bullet
First we must consider the special case where \(a=1\). In this case, one segment has Nimber \(0\), while the other has Nimber \(b-1=(n-1)-1=n-2\). Because \(n>2\), this Nimber will always be greater than \(0\). And of course, \((n-2)\oplus 0=n-2\), which is greater than \(0\). Next we consider the general case where \(b\geq a>1\). In this case, by the induction hypothesis, the Nimbers of the sub-games will be \(b-1\) and \(a-1\). Now, because \(n=a+b\) is odd, we know that exactly one of \(a-1\) and \(b-1\) must be odd. Therefore, \((a-1)\oplus(b-1)\) cannot be \(0\). Thus we have shown that none of the sub-games of this case can have Number \(0\). Therefore, the mex of the Nimbers of all the sub-games must be \(0\), and this establishes the induction hypothesis for this case. 2. \(n+1\) is odd. Similarly to the previous case, we can split the line into \(2\) sub-games with lengths of \(a\) and \(b\). First we can show that the Nimber \(1\) will never appear through the xor of any pair of sub-games. In order to achieve a Nimber of \(1\) through xor, the two numbers must be odd and even. However, since \(n+1\) is odd, \(n\) must be even, so \(a+b\) must add to an even number. An odd number added to an even number is odd, not even, therefore the Nimber \(1\) will never appear through xor. Next we can show that the Nimber \(0\) must appear. Since \(a+b\) will be even, the case when \(a=b\) will occur when split at the middle segment. These two sub-games are \(g_{3}\) and \(g_{2}\) of type \(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\) and \(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\) respectively, which both have the same Nimber, the xor of which is \(0\). Thus we have shown that none of the sub-games of this case can have value \(1\), and the value of \(0\) is present. Therefore, the mex of the values of all the sub-games must be \(1\), and this establishes the induction hypothesis for this case.
2. \(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\) : \(n+1\) segments. Similarly to the previous case, we can split the line into \(2\) sub-games with the length of the left sub-game being \(a\) and the length of the right sub-game being \(b\)(we sometimes will call the sub-games by their lengths \(a\) and \(b\)). However, the left sub-game will either be of the form \(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\) or \(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\), with a Nimber of \(0\) or \(1\) depending on the parity of \(a\), this pattern can be seen in the initial hypothesis. First we observe the two end cases, where \(a=0\), and \(b=1\). In the case of \(a=0\) the \(b\) sub-game turns into the \(n\) case, with a Nimber of \(n-1\). In the case of \(b=1\), the arrow can point in either direction. Depending on the parity of \(a\) we can choose the direction such that the \(a\) sub-game has a Nimber of \(0\). The \(b\) sub-game has a Nimber of \(0\), and \(0\oplus 0\) is \(0\). Next we can show that the \(b=1\) base case can be extended to include all the Nimbers up to the base case of \(a=0\). By moving the partition of the sub-games, increasing \(b\) and decreasing \(a\), we can increase the Number of the \(b\) sub-game, because it is of the form \(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\) which has a Nimber of \(b-1\). By alternating the direction of the arrow according to the parity of \(a\) we can ensure the \(a\) sub-game has a Nimber of \(0\). And the xor of any number with \(0\) is just the number itself. So we can achieve all the Nimbers from \(0\) to \(n-2\) this way. Finally we can show that any Nimbers higher than \(n-1\) are impossible. The highest Nimber that can be achieved through either the \(a\) or \(b\) sub-games is n-2, excluding the \(a=0\) case. This Nimber could be xored with at most a Nimber of \(1\), therefore only at most increasing the Nimber by \(1\) equaling \(n-1\). Any other combination of sub-games must include either a \(0\) or \(1\) and therefore must be smaller. Thus we have shown that the sub-games of this case can have value \(0\) to \(n-1\), and nothing higher than \(n-1\). Therefore, the mex of the values of all the sub-games must be \(n\), and this establishes the induction hypothesis for this case.
3. \(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\) : \(n+1\) segments. This case is exactly the same as the previous one, except the arrow is the other way and as such the left sub-games will be of the form \(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\)\(\bullet\). However, the Nimbers are still the same and at least one of them will equal \(0\) for all \(n\), the parity is simply flipped.
4. \(\
Consider any valid move \((p,q,a,b)\) on a fixed \(g_{i}(n)\), where \((p,q)\in S_{i}\) and \(a+b+1=n\). We will consider pairs of Nimbers formed where at least one of \(a\) and \(b\) is less than or equal to \(s\), which we will call the "outer \(2s\)" cases. We first suppose \(a\leqslant s\). To avoid violating condition 2, moves of the form \((p,q,a,b_{0})\) must be valid for all \(b_{0}\geqslant s-T\). In particular, \(b-T\geqslant s-T\), so the game \(g_{i}(n-T)\) can be split into \(g_{p}(a)\) and \(g_{q}(b-T)\). Since \(n>b\geqslant s+1\), the inductive hypothesis applies to \(b\), meaning \(Nim(g_{q}(b))=Nim(g_{q}(b-T))\). We therefore have \(Nim(g_{p}(a))\oplus Nim(g_{q}(b))=Nim(g_{p}(a))\oplus Nim(g_{q}(b-T))\). This same argument applies symmetrically to the case where \(b\leqslant s\), evoking condition 3. This means that the Nimbers of the "outer \(2s\)" pairs in \(g_{i}(n)\) and their xors are the same as those of the "outer \(2s\)" of \(g_{i}(n-T)\). Note that the "outer \(2s\)" of \(g_{i}(n-T)\) may not be \(2s\) distinct moves and may overlap if \(n-T\leqslant 2s\).
Now we show that the Nimbers of the "middle" pairs, the pairs for which \(a,b>s\) (which are always playable for any \((p,q)\in S_{i}\) by condition 1), are the same as the pairs of Nimbers of the "outer \(2s\)." If \(b>s\), we can repeatedly invoke the inductive hypothesis until we reach a move \((p,q,a-jT,b+jT)\), which is valid by condition 1. \(Nim(g_{p}(a-jT))\oplus Nim(g_{q}(b+jT))=Nim(g_{p}(a))\oplus Nim(g_{q}(b))\). Therefore, the "middle" pairs have duplicate Nimbers of the valid "outer \(2s\)" pairs.
We have proved that every pair of Nimbers formed by playing a valid move in \(g_{i}(n)\) is one of the pairs of Nimbers formed from an "outer \(2s\)" move of \(g_{i}(n)\), which in turn is a pair of Nimbers formed from an "outer \(2s\)" move of \(g_{i}(n-T)\). In addition, the Nimbers of every "middle" pair in \(g_{i}(n-T)\) are also found in the pairs of the "outer \(2s\)" of \(g_{i}(n-T)\). Therefore, every pair of Nimbers in either game is found in the Nimbers of the shared "outer \(2s\)" pairs, and, trivially, every Number pair in the shared "outer \(2s\)" is in both games. Therefore, \(g_{i}(n)\) and \(g_{i}(n-T)\) contain all of the same pairs of Nimbers and their xors, and so, taking the mex, \(Nim(g_{i}(n))=Nim(g_{i}(n-T))\).
The motivation for this Theorem was a pattern we noticed in the Nimbers of the Game of Cycles on a line of length \(n\) with sources allowed. In particular, the "outer \(2s\)" above was in fact an "outer \(86\)" in the various game types that emerge when examining this game.
**Theorem 3**.: _For Cycles with sources the Nimbers of line segments of successive length are given by the following sequence: The first 18 are:_
\[0,1,0,1,0,3,2,0,2,3,0,1,0,1,0,5,7,0\]
_From then on, the Nimbers form the following repeated sequence_
\[1,0,1,0,3,2,4,5,3,0,1,0,1,0,5,7,8\]
Proof.: Let \(\{g_{1}(n),\ldots,g_{6}(n)\}\) be the following types of games, where \(n\) is the number of unmarked edges.
1. \(\bullet\)\
* Placing a left arrow on the left edge of type 5, denoted \((5,4,0,n-1)\) for \(g_{5}(n)\), \(\forall n\)
* Placing a right arrow on the right edge of type 5, denoted \((4,5,n-1,0)\) for \(g_{5}(n)\), \(\forall n\)
Because any unplayable move is of this form, conditions 2 and 3 hold. Lastly, a computer program can verify the first 87 Nimbers of all 6 game types:
1. \(\
1. \(\bullet\to\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\)
2. \(\bullet\to\bullet\bullet\bullet\cdots\bullet\bullet\bullet\bullet\bullet\bullet\)
3. \(\bullet\leftarrow\bullet\bullet\bullet\cdots\bullet\bullet\bullet\bullet\bullet\bullet\)
For each game type, \(g_{i}(n)\), a move must be of the form \((p,q,a,b)\), where if \(i=1\) then \((p,q)=(1,1)\) or \((p,q)=(2,3)\), if \(i=2\) then \((p,q)=(1,2)\) or \((p,q)=(2,1)\), if \(i=3\) then \((p,q)=(1,3)\) or \((p,q)=(3,1)\). Unplayable moves only exist if \(a\) or \(b\) are \(0\), and the original cycle game of length \(n=1\) does not exist, satisfying condition 1 of Theorem 2. A list of the unplayable moves is given below:
* On \(g_{1}(n)\) marking the leftmost edge to face the left node, denoted \((2,3,0,n-1)\) for \(n>1\).
* On \(g_{1}(n)\) marking the rightmost edge to face the left node, denoted \((2,3,n-1,0)\) for \(n>1\).
* On \(g_{2}(n)\) marking the leftmost edge to face the left node, denoted \((2,1,0,n-1)\) for \(n>1\).
* On \(g_{2}(n)\) marking the rightmost edge to face the right node, denoted \((3,1,n-1,0)\) for \(n>1\).
* On \(g_{3}(n)\) marking the leftmost edge to face the right node, denoted \((3,1,0,n-1)\) for \(n>1\).
* On \(g_{3}(n)\) marking the rightmost edge to face the left node, denoted \((1,3,n-1,0)\) for \(n>1\).
These forms of unplayable moves exist for all \(n\geq 1\), satisfying conditions 2 and 3. The initial case exists when \(s=3\) and the period of the pattern is two, \(T=2\), following the parity of \(n\). The Nimbers of \(n=1\ldots 7\) can be verified by computer.
1. \(\bullet\to\bullet\bullet\bullet\bullet\cdots\bullet\bullet\bullet\to\bullet:\) \(n=1\ldots 7\) unmarked segments: Nimbers are \(1,0,1,0,1,0,1\) respectively.
2. \(\bullet\to\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\) \(n=1\ldots 7\) unmarked segments: Nimbers are \(0,1,0,1,0,1,0\) respectively.
3. \(\bullet\leftarrow\bullet\bullet\bullet\cdots\bullet\bullet\bullet\bullet\bullet\) \(n=1\ldots 7\) unmarked segments: Nimbers are \(0,1,0,1,0,1,0\) respectively.
Clearly, for all \(g_{i}(n)\) with \(i=1,2,3\) and \(n\in[4,7]\), \(Nim(g_{i}(n))=Nim(g_{i}(n-2))\). The game therefore satisfies all requirements of Theorem 2, so the pattern of its sub-games continues for all \(n>7\). Note, however, that the first player's move always turns the game into a \(g_{1}(n-1)\) position, which flips the winner relative to \(g_{1}(n-1)\) when determining the Nimber of the fully unmarked game. For example, if \(n=5\), the first player turns the unmarked game into \(g_{1}(4)\), with Nimber \(0\), which suggests a second-player victory; but the original first player now _is_ the second player of this state, so the first player is in the winning position, and therefore the original unmarked game must have Nimber \(1\).
* For \(n\) even: 1. \(\bigcirc\): \(n\) unmarked segments: Nimber is \(0\). 2. \(\bullet\to\bullet\bullet\bullet\cdots\bullet\bullet\bullet\to\bullet:\) \(n\) unmarked segments: Nimber is \(0\). 3. \(\bullet\to\bullet\bullet\bullet\bullet\bullet\bullet\bullet\bullet\) \(n\) unmarked segments: Nimber is \(1\). 4. \(\bullet\leftarrow\bullet\bullet\bullet\cdots\bullet\bullet\bullet\bullet\) \(n\) unmarked segments: Nimber is \(1\).
* For \(n\) odd: 1. \(\bigcirc\): \(n\) unmarked segments: Nimber is \(1\). 2. \(\bullet\to\bullet\bullet\bullet\cdots\bullet\bullet\bullet\to\bullet:\) \(n\) unmarked segments: Nimber is \(1\). 3. \(\bullet\to\bullet\bullet\bullet\cdots\bullet\bullet\bullet\bullet\bullet\) \(n\) unmarked segments: Nimber is \(0\). 4. \(\bullet\leftarrow\bullet\bullet\cdots\bullet\bullet\bullet\bullet\) \(n\) unmarked segments: Nimber is \(0\).
Remarkably, although allowing sources makes the game of cycles on a line far more complicated, allowing sources on a cycle produces a simple result.
**Theorem 5**.: _For Cycles with Sources, every simple cycle has Nimber 0._
Proof.: Just like the previous game, this one can also be viewed through sub-games on lines. When an edge is marked for the first time, the game is reduced to a position of game type 4, \(g_{4}(n)\), from the Theorem 2 proof. As proved earlier, this game type never has a Nimber of 0. Since reaching this position takes a move, the roles are swapped just as in the previous proof: the first player cannot reduce the unmarked game to a Nimber-0 position, so the unmarked game has Nimber 0 for all \(n\).
Player 1 is always losing when playing on a cycle with sources allowed, but the winning strategy is not as immediately obvious as winning strategies for cases of the game when sources are disallowed.
## Acknowledgements
We would like to thank Professor Amit Sahai for introducing us to "The Game of Cycles" and giving us direction on what to research, specifically the idea of changing the rules of the game, and for giving advice on how to write a research paper. We further thank Professor Sahai for helping us learn Combinatorial Game Theory and how to use LaTeX. We also thank Owen Maitzen and Gaurav Sen for creating amazing YouTube videos explaining Sprague-Grundy Theory [4, 6].
|
2301.00080 | Impact Invariant Trajectory Optimization of 5-Link Biped Robot Using
Hybrid Optimization | Bipedal robots have received much attention because of the variety of motion
maneuvers that they can produce, and the many applications they have in various
areas including rehabilitation. One of these motion maneuvers is walking. In
this study, we presented a framework for the trajectory optimization of a
5-link (planar) Biped Robot using hybrid optimization. The walking is modeled
with two phases of single-stance (support) phase and the collision phase. The
dynamic equations of the robot in each phase are extracted by the Lagrange
method. It is assumed that the robot heel strike to the ground is full plastic.
The gait is optimized with a method called hybrid optimization. The objective
function of this problem is considered to be the integral of torque-squared
along the trajectory, and also various constraints such as zero dynamics are
satisfied without any approximation. Furthermore, in a new framework, there is
presented a constraint called impact invariance, which ensures the periodicity
of the time-varying trajectories. On the other hand, other constraints provide
better and more human-like movement. | Aref Amiri, Hassan Salarieh | 2022-12-31T00:08:58Z | http://arxiv.org/abs/2301.00080v1 | # Impact Invariant Trajectory Optimization of 5-Link Biped Robot Using Hybrid Optimization
###### Abstract
Bipedal robots have received much attention because of the variety of motion maneuvers that they can produce, and the many applications they have in various areas including rehabilitation. One of these motion maneuvers is walking. In this study, we presented a framework for the trajectory optimization of a 5-link (planar) Biped Robot using hybrid optimization. The walking is modeled with two phases of single-stance (support) phase and the collision phase. The dynamic equations of the robot in each phase are extracted by the Lagrange method. It is assumed that the robot heel strike to the ground is full plastic. The gait is optimized with a method called hybrid optimization. The objective function of this problem is considered to be the integral of torque-squared along the trajectory, and also various constraints such as zero dynamics are satisfied without any approximation. Furthermore, in a new framework, there is presented a constraint called impact invariance, which ensures the periodicity of the time-varying trajectories. On the other hand, other constraints provide better and more human-like movement..
keywords: Trajectory optimization, bipedal robots, walking robots, zero dynamics
## 1 Introduction
The mechanics of locomotion and object transport have long been among the most important and active areas of research. Because wheeled locomotion has inherent limitations, replacing wheels with legs is an attractive but difficult option, which makes legged locomotion a highly active topic in robotics. With the advancement of robotics and the practical value of this problem, a large body of research has addressed the design, optimization, and control of legged robots [1-6]. As the science of bipedal robots has advanced in recent years, there have been significant efforts to improve their performance in key maneuvers such as walking and running, but research is still ongoing to find ideal solutions [7,8]. Designing reference trajectories for the human walking cycle is therefore essential, and several techniques have been adopted to define such trajectories. Many researchers have studied low-energy (or low input-torque) paths for bipedal robots [7,9]. We seek a periodic path that meets a specific speed target while minimizing the torque required to produce the gait. In general, this open and non-trivial problem is solved numerically. Various quantities can enter the optimization; for example, constraints on torques, Cartesian coordinates, or joint coordinates can be used [10-12]. Many authors have used polynomial functions for the Cartesian coordinates of the swing leg's foot, the hip, and the trunk angle [13,14]. Polynomial functions are also used for the joint coordinates to limit the number of optimization parameters [15]. The optimal path for each joint coordinate is usually written as a polynomial with unknown coefficients, which are obtained through the optimization process [15]. For all bipedal robots, it is important to define optimal periodic motions even though the number of actuators is smaller than the number of degrees of freedom of the system, and a zero-dynamics problem therefore exists which must be satisfied during optimization.
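To make this parametrisation concrete, the sketch below illustrates, in generic form and not as the authors' implementation, how joint trajectories can be written as polynomials with unknown coefficients and scored by an integral-of-torque-squared cost; the `inverse_dynamics` routine is a hypothetical placeholder standing in for the robot-specific Lagrangian model, and the degree, step duration and solver settings are illustrative choices only.

```python
import numpy as np
from scipy.optimize import minimize

N_JOINTS, DEG, T = 5, 4, 0.8            # 5-link biped, polynomial degree, step duration [s]
t_grid = np.linspace(0.0, T, 50)        # time grid for approximating the cost integral
dt = t_grid[1] - t_grid[0]

def joint_trajectories(coeffs, t):
    """Evaluate q, q_dot, q_ddot from per-joint polynomial coefficients."""
    c = coeffs.reshape(N_JOINTS, DEG + 1)
    q   = np.stack([np.polyval(row, t) for row in c])
    qd  = np.stack([np.polyval(np.polyder(row, 1), t) for row in c])
    qdd = np.stack([np.polyval(np.polyder(row, 2), t) for row in c])
    return q, qd, qdd

def inverse_dynamics(q, qd, qdd):
    """Hypothetical placeholder for the 5-link Lagrangian model,
    tau = M(q) q_ddot + C(q, q_dot) q_dot + G(q); a dummy is returned so the sketch runs."""
    return qdd

def cost(coeffs):
    """Integral of torque-squared along the trajectory (simple quadrature)."""
    q, qd, qdd = joint_trajectories(coeffs, t_grid)
    tau = inverse_dynamics(q, qd, qdd)
    return float(dt * np.sum(tau ** 2))

x0 = 0.01 * np.random.randn(N_JOINTS * (DEG + 1))
result = minimize(cost, x0, method="SLSQP")  # periodicity, impact and zero-dynamics constraints would be added here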
In this paper, a new method is presented to produce a periodic path for the walking of bipedal robots which satisfies the impact invariance constraint. Also, in order to achieve the feasible trajectory, the zero dynamics constraint is satisfied without any approximation. In addition, by considering some other kinematic and dynamic constraints, and |
2306.17573 | Parallels in the symbolism of star constellations | We answer the question whether, when forming constellations in the night sky,
people in astronomical cultures around the world and through time consistently
imagined and assigned the same symbolism to the same (type of) star group.
Evidence of semantic parallels has so far been anecdotal. We use two
complementary definitions for a star group: (1) a star group in a fixed region
of the sky (regardless of its exact star composition), and (2) a star group
with a particular shape and brightness (regardless of its location in the sky).
Over a dataset of 2003 constellations from 82 astronomical cultures, we find
many semantic parallels which are likely naturally induced by the shape and
composition of the star pattern. In certain cultural regions, geometric and
group symbols are perceived consistently over small and uniformly bright star
groups, naturalistic humanoids in large star groups with non-linear minimum
spanning tree (MST) and stars inside the convex hull, and reptiles in star
groups with low aspect ratio or linear MST. These naturally induced semantics,
seemingly endogenous to certain sky patterns, show that there are universal
(rather than learnt) patterns behind forming and naming constellations. | Doina Bucur | 2023-06-30T11:48:46Z | http://arxiv.org/abs/2306.17573v3 | # The semantics of constellation line figures
###### Abstract
We answer the question whether, when forming constellations in the night sky, people in astronomical cultures around the world consistently imagine and assign the same symbolism to the same (type of) star cluster. Evidence of semantic universality has so far been anecdotal. We use two complementary definitions for a star cluster: (1) a star group in a particular sky region (regardless of its exact shape), and (2) a star group with a particular shape and brightness (regardless of its location in the sky). Over a dataset of 1903 constellations from 75 astronomical cultures, we find semantic parallels which are likely _culturally induced_: body parts in the sky region delineated by the International Astronomical Union (IAU) as Ori, fish in Cru and Sco, geometric symbols in Cru, groups in UMa, mammals in UMa, and reptiles in Sco. Surprisingly, we find many more significant semantic parallels which can only be _naturally induced_ by the shape and composition of the star pattern underlying a constellation (or, are endogenous to the sky rather than culture-dependent): arthropods in IAU Sco, body parts in Tau, geometric and group symbols in star clusters (regardless of sky region) with a small number of bright stars comparable in magnitude, humanoids and mammals naturalistically drawn in star clusters with large spatial diameter and many stars, landscapes in IAU Eri, man-made objects of various types in many IAU regions, and reptiles consistently drawn in star clusters with low aspect ratio or low branching in the minimum spanning tree drawn over the stars. These naturally induced semantics show that there are universal (rather than only cultural) thought patterns behind forming and naming constellations.
## 1 Introduction
Constellation line figures are geometric representations at the intersection of nature and culture. The stars in a constellation can be chosen and a meaning freely assigned (a bear, a scorpion), but that same constellation is constrained to a shared background of stars in all astronomical cultures around the world. We ask where and to what extent **constellation semantics** assigned to line figures are **universal**. Semantic universality may be caused by the star pattern itself, which naturally resembles the entity named; this is a natural (or endogenous) effect of the star pattern. Semantic universality may also be caused by a common cultural influence, which imposes one common view upon a sky region; this is the cultural (or exogenous) effect of the human imagination. We quantify semantic universality, and make the distinction between the two causes (natural or cultural) when possible.
A **natural effect** of the star pattern on constellation semantics and line geometry has anecdotal support, but has not been measured systematically. The dipper symbolism for the Big Dipper asterism is present worldwide but only episodically documented across different traditions [10]--so this symbolism was ascribed to the star pattern naturally resembling this man-made tool, rather than to cultural diffusion [28]. There are also some parallels between Western and indigenous American constellations to an extent unlikely to be due to colonial influence: constellations located in the sky region of the International Astronomical Union (**IAU**) [38] Sco, with scorpion geometry and semantics, were documented not only in the Western world but also in the pre-colonial Aztec and Maya cultures of Mesoamerica [41]. A small-scale cognitive study showed that the line geometries of 30 of the classical Ptolemaic constellations [78] are predictable to Western observers from the star pattern alone [24]. Worldwide, constellations adjacent to 35% of popular stars have universal line geometry (these are located in IAU Sco, CrB, Cas, and to a lesser extent UMa and Leo) [13], but it is not known _why_ a line geometry is universal, nor whether the same universality holds for semantics.
A **cultural effect** has been widely observed. In astronomical cultures (whether ancient or modern) with a common _ancestry_, constellation star composition and names have naturally retained similarities even when these cultures are distant in time or space. This effect also may be intermediated by other variables associated with cultural phylogeny: _geolocation_ (indigenous cultures at the tropics formed different astronomical systems than those in temperate zones [6]), _astronomical literacy_ (in the region influenced by ancient China, detailed star charts were maintained, and thus not only bright, but also faint stars were systematically linked into constellations [74]), and common _cultural myths and themes_ of the sky. The latter likely have a large influence upon constellation design; for example, most line figures represent human and animal mythological characters in cultures with Mesopotamian or Greek origin [69, 70]; Northern Dene tribes in Alaska and Canada universally draw a whole-sky Traveler constellation of their principal mythological character [14]; a bird in flight dominates the skies of Polynesian cultures of the south Pacific [18]; and there are parallels in symbolism between pre-Columbian N-American and central and west Asian myth for the Big Dipper [28] and Orion's belt asterisms [29] likely due to a distant common origin.
We draw a **causal model** of potential influence upon constellation semantics in Figure 1. All variables can be measured. Cultural phylogeny (and variables associated with it) are likely common causes: cultural myths and themes affect semantics; geolocation and astronomical literacy affect the choice of stars. Between constellation semantics and the choice of star cluster, causal effects may be bidirectional--it is not possible to find which came first in the mind of the sky observer: the star cluster (perhaps naturally separable in the night sky from other star clusters) or the meaning (perhaps influencing the choice of star cluster)? The meaning or the line geometry? Constellation semantics is the outcome variable; we study the potential effects on this outcome. The statistical association between a variable of interest and the outcome can be measured after conditioning on (controlling for) common causes (here, cultural ancestry). For the question in Figure 1 (is semantic universal?), an association between star cluster and semantics would discover which _universal semantics_ are associated with certain star patterns or sky regions beyond only cultures of common ancestry (where it is expected).
The remaining effect in Figure 1, from cultural phylogeny to star cluster and line geometry, is not related to semantics. It has already been studied, as follows. Star clusters may be (to an extent) natural effects of the human perception of point and brightness patterns [42]. Also, cultures produce (also to an extent) their signature star clusters and line geometries not only following ancestral links: oral astronomies have widespread similarities and use brighter stars across continents, Chinese and Mesopotamian ancestries have opposite geometries, with Polynesian ancestry the only bridge between these opposites [13].
To answer the question of semantic universality, we use data from 75 astronomical cultures (Figure 2) for which 1903 constellation line figures were documented (with at least some degree of certainty) in existing ethnographic, anthropological, or (archeo)astronomical literature. The dataset is publicly available at [https://github.com/doinab/constellation-lines](https://github.com/doinab/constellation-lines) and was partially introduced in [13]. Only constellations or asterisms with at least one line are included. The cultures are heterogeneous in the number of line figures (between 1 and 252), and the constellations are heterogeneous in terms of their number of stars (between 2 and 67) and their angular diameter in the sky to an observer (between 0.24 and 147.74 degrees). Small oral cultures are as interesting to study as large ones: while literate astronomies (Chinese, Mesopotamian, Egyptian, Mediterranean traditions) preserved a whole-sky record, oral astronomies (with vanishing astronomical traditions, in the Americas and the Pacific) have a low number of line figures, yet some of these are whole-sky designs.
The **phylogeny** marks a _region of cultural influence or migration_. For this dataset, we delimit 11 phylogenies. All Western cultures are marked as from **Mesopotamian** ancestry. This is an oversimplification for a mixed origin: some Western constellations are Mediterranean, and some are recent European inventions. We use this name for the
Figure 1: **Causal graph.** Causal effects among constellation semantics, star cluster, and the ensuing line geometry. When studying the link between star cluster and semantics, metadata associated with culture (phylogeny, geolocation, astronomical literacy, myths, and themes) is a confounder.
phylogeny to denote the oldest source, the zodiacal signs. **Egypt** and **India** denote single ancient astronomical cultures without known external influence. **China** denotes not only cultures located in China itself, but also those in its sphere of influence. **N** and **S America**, **Polynesia**, **Austronesia**, and **Austroasia** denote sets of cultures from migration families, and thus with cultural similarities. For more information about the data, the smaller phylogenies, and the semantic annotation, see Sections 4.1-4.2.
## 2 Results
We ask whether and where a systematic association exists between constellation _star cluster_ and _semantics_. A positive association would signal universality: a region of the sky is often assigned the same semantic across cultures.
We cannot use the strongest possible definition of _star cluster_ (namely, the exact set of star identities), because (1) this set matches exactly across cultures and phylogenies only in rare instances (e.g., the stars in the Big Dipper asterism, Orion's belt, or IAU Cru), and (2) in certain cultures (with oral astronomies, or of Chinese ancestry) the exact identity of the major or minor stars in the constellation is either not important [73] or uncertain, and may vary even between villages [81]. Instead, we answer the research question for two weaker but complementary definitions of _star cluster_:
1. The star cluster is defined by the _sky region_, which is pinpointed by a chosen major star. I.e., a star cluster is a set of stars grouped into a constellation, such that this group includes a "root star" (e.g., \(\alpha\) CMa in Bayer designation). Two star clusters over a root star will overlap (but need not be identical). The root stars are selected based on popularity: they are the stars with large constellation count, and tend to be bright. This definition _fixes_ the sky region of the star cluster on the celestial sphere, but allows the star pattern to differ.
2. The star cluster is defined by the _features of its star pattern_: its size (number of stars), aspect ratio of the star cloud, spatial diameter on the celestial sphere, brightness statistics, and other properties of the convex hull and the spatial minimum spanning tree drawn over the star cluster. Two star clusters which are similar in these features need not coincide, or even be located in the same sky region. This definition _fixes_ the features of the star pattern, but allows its location on the celestial sphere to differ.
### Question (S.1): Semantic universality per sky region
We find that significant semantic parallels for certain sky regions exist, cross-phylogeny (or outside the bounds of a single cultural region). When such a parallel is found, both causes for assigning a semantic to a constellation are possible (i.e., natural or cultural). We try to make the distinction based on whether or not the two or more cultural regions where a semantic parallel is found may have had exchanges of cultural themes.
Since the culture typology is a confounder (it likely affects both semantics and the choice of stars), and a common ancestry drives this typology, we stratify the analysis by cultural phylogeny. (For an explanation about how phylogenies were determined, see Section 4.1; for the reasoning behind the semantic annotation, Section 4.2.)
Figure 2: **Astronomical cultures in the data. The format is: culture name, (the number of line figures in parentheses), the documentation date. The area of the data point is proportional to the number of constellations, and the colour follows the continent. Cultures with global reach are at the bottom left. The dataset contains 75 cultures and 1903 lined constellations.**
The semantic breakdown in Figure 3 (top) shows that the phylogenies are different in their semantic makeup, and there are dominant phylogenies which may introduce bias in the perception of certain semantics. For example, although many (10.3%) of the constellations represent mammals, nearly all mammals are in W Asian, Western, and American cultures; mammals are instead almost absent from cultures of Chinese ancestry, as well as from oceanic cultures (Polynesian, Austronesian, Austroasian, etc. ancestries). Thus, the _unstratified_ semantic counts per root star, in Figure 3 (bottom), are partially an effect of cultural ancestry rather than true semantic universality. The typical Western symbolisms appear frequent (partly because the Western phylogeny is large): the bird symbolism for star regions in IAU Aql and Cyg, the arthropod for Sco, the geometric cross for Cru, mammals for CMa, CMi, Leo, Peg, Tau, UMa, UMi, humanoids for Cas, Cen, Gem, Ori, and man-made object semantics for CrB (a crown), UMa and UMi (a dipper). However, additional semantics also appear frequent: a fish for Cru, geometric figures for CMa and CMi, body parts in Tau, a humanoid in Leo and UMa, and widespread man-made objects in Del, Ori, and Tau. Note that the IAU sky regions are variable in their number of popular stars (there is only one root star in IAU CMa, but 15 in IAU Sco). The semantic breakdown is usually consistent among the root stars from the same constellation, even for large constellations such as IAU Sco. For all sky regions, fairer conclusions are drawn from a stratified analysis.
_Stratified_ semantic counts provide a nuanced story. We illustrate with the example of the scorpion (arthropod) symbolism for **IAU Sco**. This meaning recurs, but only in cultures from three ancestries, and is entirely absent from others--Figure 4 (top) shows the breakdown in semantics of all popular root stars in IAU Sco, separately per phylogeny. This breakdown "localises" the scorpion symbolism to cultures of Mesopotamian, N American, and Austroasian origins, and also makes clear that other semantic parallels (a fish, a reptile) exist among the other phylogenies.
To quantify semantic universality, we report a _similarity score_ between any two phylogenies: the joint probability of having the same semantic for constellations of the same root star (averaged over root stars), normalised by the joint probability of that semantic between these phylogenies. Only values greater than 1 are relevant; higher values signal more certainty. We report two additional counts: the _number of stars_ and the _number of constellations_ in common for that semantic between the two phylogenies (both more significant when larger). (Section 4.3 provides more detail on the method.) The similarity scores link phylogenies into weighted undirected _similarity networks_, as in Figure 4 (bottom); the edge weight is a similarity score above 1.
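Since Section 4.3 is not reproduced here, the following sketch shows one plausible reading of this score; the record format (dictionaries with hypothetical `root_star` and `semantic` fields) and the exact normalisation are assumptions rather than the paper's definition.

```python
from collections import Counter, defaultdict

def similarity_score(phylo_a, phylo_b, semantic):
    """For one semantic: probability that both phylogenies assign `semantic` to
    constellations rooted at the same star, averaged over shared root stars,
    normalised by the phylogeny-wide joint probability of that semantic."""
    by_star_a, by_star_b = defaultdict(list), defaultdict(list)
    for c in phylo_a:
        by_star_a[c["root_star"]].append(c["semantic"])
    for c in phylo_b:
        by_star_b[c["root_star"]].append(c["semantic"])

    # phylogeny-wide frequency of the semantic (used for the normalisation)
    freq_a = Counter(c["semantic"] for c in phylo_a)
    freq_b = Counter(c["semantic"] for c in phylo_b)
    base = (freq_a[semantic] / len(phylo_a)) * (freq_b[semantic] / len(phylo_b))

    shared = set(by_star_a) & set(by_star_b)
    if not shared or base == 0.0:
        return 0.0
    joint = 0.0
    for star in shared:
        p_a = by_star_a[star].count(semantic) / len(by_star_a[star])
        p_b = by_star_b[star].count(semantic) / len(by_star_b[star])
        joint += p_a * p_b
    return (joint / len(shared)) / base   # values above 1 signal a semantic parallel
```

Counting the shared root stars and constellations that support each score would follow from the same grouping by root star.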
Three semantics for Sco stars can thus be called universal, each across different parts of the world: Sco as an arthropod (scorpion), as a reptile (snake, serpent), and as a fish (ray fish, shark):
**arthropod**: Eight Western cultures, three Mesoamerican (Aztec, Huave, and Tzotzil), and two Austroasian (Gond and Kolam) draw a scorpion figure similar to IAU Sco; only the lining of the head varies. The Aztec constellation
Figure 3: **Constellation semantics. (top)** The fraction of constellations with clear semantics, per phylogeny. The meaning of 2.05 % of constellations is unidentified. Some (1.60 %) have two alternative meanings, so some totals can be above 100 %. (bottom) Per root star worldwide, the breakdown in semantics (normalised to 100 %). Each root star has at least 24 (and up to 79) constellations.
is shown in Figure 4 (bottom) as an example. An additional Mesoamerican culture (Maya) draws a much smaller scorpion, adjacent to only four Sco stars.
**reptile**: Two N American (Maya, Pawnee) and one S American culture (Tukano, in two documented variants of their constellation) draw snake figures encompassing most of IAU Sco but with fewer head stars. The exception is the Mayan Rattlesnake, adjacent to only one Sco star.
**fish**: Two Polynesian (Kiribati, Manus) and two Austronesian (Bugis, Mandar) cultures see the IAU Sco region similarly: a ray fish inhabits most of the Northern Sco stars, and the Southern tail stars form the fin of a shark. (In two of these cultures, the shark shape is a naturalistic figure including other stars outside IAU Sco). These two fish constellations form a scene: among the Manus people, the shark is said to bite the stingray. The Bugis constellations are shown in Figure 4 (bottom) as examples.
The universality of the reptile and fish in IAU Sco (particularly given the very specific parallels in the fish species) may be a _cultural effect_ due to contact between geographical regions, which in these cases are neighbours. However, cultural contamination alone is an unlikely cause for the recurrence of the scorpion from the present tribes of Central India to pre-colonal Mesoamerica and ancient Mesopotamian traditions; this should instead be the _natural effect_ of the star pattern in IAU Sco, which is known to inspire near-universal line figures regardless of any semantics [13].
**Summary of results.** Table 1 provides a summary of all significant semantic similarities per IAU sky region. Many of these semantic overlaps hold for a star cluster with few stars. For example, \(\alpha\) CMi alone is part of spatially large, very bright, yet simple geometric figures (Western asterisms depicting: an X-shaped Egyptian Cross, the Winter Triangle, the Winter Hexagon, and the spiral of a letter G; as well as the Great Cross of the Quechua and the Celestial Hexagon of the Tukano in S America). The semantic overlaps over larger star clusters are more significant, so we only discuss these in what follows.
_Humanoid._ When the universal semantic is that of a humanoid, the composition of the star cluster, as well as the line geometry, tend to be diverse rather than uniform. The five stars in **IAU Cas** (\(\alpha\) to \(\epsilon\)) make a female figure in four Western cultures, as well as for the Navajo of N America. The star clusters do not overlap completely: three more Western cultures (Al-Sufi, Rey, Ruelle) add other, fainter stars to these five but preserve the semantic; two other N-American cultures (Lower Tanana, Ahtna) use only 3-4 of the IAU Cas stars, but add many others to create whole-sky male figures (of which the Cas stars form one hand). The same is true for the recurring humanoid semantic in **IAU Gem** stars: unlike the now standard constellation of Mesopotamian origin representing twins, native N-American cultures use 2-5 of the bright stars to draw the same whole-sky male figures (where the Gem stars form a side of the head). Stars in **IAU Leo** rarely recur as a mammal outside Western cultures (they do in two Polynesian constellations: a Hawaiian lion of likely modern borrowing, and a four-star rat in the Carolines). They do, however, more often recur
Figure 4: **Semantic universality for IAU Sco. (top)** Per root star in Sco stratified per cultural phylogeny, the breakdown in semantics (semantic categories as in Figure 3). Includes Sco stars with at least (a) 2 constellations per phylogeny, and (b) 10 constellations in the global data. **(bottom)** Three semantics (geometric, reptile, and fish) are universal for the star region of Sco, only in the phylogenies shown by the three similarity networks. The edges are weighted by the similarity score and annotated with the number of stars and constellations in common. Examples are also shown for each semantic.
in humanoid figures. There is no consensus on the shape of the figure nor the size of the star cluster formed around Leo beyond the general humanoid semantic: the Leo "hook" is part of a large emperor figure in China and Korea, but is a hunter figure for the Huave in Mesoamerica; the core five stars form a dead body in a funeral process for the Gond in Central India (using a figurative line geometry, very different than that of IAU Leo); finally, various Leo stars are incorporated into whole-sky humanoid figures in five N-American cultures. Given the variety in the design of humanoids in these sky regions, the recurring semantic can only be ascribed to a weak _natural effect_: the star patterns are complex enough to allow one to draw a human figure (arguably the most complex semantic category) in different ways and to various degree of naturalism.
_Man-made object_. On the other hand, when the recurring semantic is a man-made object, the composition of the star cluster, as well as the line geometry, are uniform--likely because the object shape is simple. The five stars of **IAU**\(\mathbf{Cas}\) have a second recurring symbolism, as utilitarian objects, figuratively represented: a plough or cutter (Babylonia, Macedonia, Seri); a lamp stand and, as a variant, a container (Inuit); a chair or throne (Romania and Sardinia). These figures are composed of 3-5 of the IAU Cas stars, occasionally adding a few more neighbouring stars. The five stars of **IAU CrA** (\(\alpha\) to \(\delta\) and \(\theta\)) are seen as arc-shaped objects across Western and Polynesian cultures: a crown (IAU,
Al-Sufi, Western, Rey), a garland (Marshall), a fishing net (Manus), or a fish hook (Kiribati). The loop is usually drawn open rather than closed, and three of these Western cultures also add other, fainter stars to the arc. The seven stars of **IAU CrB** (\(\alpha\) to \(\epsilon\), \(\iota\), \(\theta\)) are seen similarly in the same ancestries, plus China: as a crown (in seven Western cultures), a garland (Marshall), a coiled thong (China medieval and China), a fishing net (Carolines), and a round table (Macedonia). **IAU Del** also recurs as objects: a bowl, gourd, or trough of the 3-4 northern stars (in China, Carolines, Kiribati, and Marshall); or a longer tool of 5 stars: sling, bow (Pawnee, Zuni), or adze (Anuta). **IAU Sgr** (a human-mammal hybrid in Western cultures) is a mammal in two N- and two S-American cultures, but the species (fox, deer, guanaco, feline) and line figures are diverse. Instead, the Western asterism of the Milk Dipper in Sgr has parallels around China, where dippers and baskets are drawn around the same stars. The constellation lines can be considered figurative in all cases, which points to a strong _natural effect_: the star patterns resemble these objects.
_Geometric_. The cross symbolism for the four bright stars (\(\alpha\) to \(\delta\)) of **IAU Cru** recurs, with varying frequency, across six phylogenies--Figure 5 (left)--although this is the most frequent symbolism in only two of them. Only in Hawaii, Cru forms a different type of line geometry which is completely original: \(\gamma\) Cru alone is the end star in a line of stars sky-wide from the north to the south pole, a metaphor for a genealogical line. Instead, significant similarity is measured among Mesopotamian, N American, and Polynesian ancestries (Figure 5, right): nine cultures across the three phylogenies draw the same four-star non-planar cross, and label it with the cross semantic. These cultures include three present-day Mesoamerican (Huave, Kiche, Tzotzil) and one 20th-century Polynesian tradition (Manus). Additionally, this cross is also listed in late China (known to have hosted European Jesuit astronomers, who developed star charts for the southern celestial pole in the 17th century [32]), but not in medieval China nor Korea (where these four stars are integrated in more complex asterisms with different meanings). Because of this, the cross semantic may have been a recent, Western _cultural effect_.
_Fish_. In six cultures of Polynesian and Austronesian ancestry (Carolines, Kiribati, Marshall, Samoa, Madura, and Malay) the same four **IAU Cru** stars recur instead with a fish semantic--usually a triggerfish, but a stingray for the Malay, and usually lined into a diamond (Figure 5, right). The diamond line geometry appears natural for this region, but the fish semantic can be assumed to be a _cultural effect_, due to common cultural themes in these regions (related to fishing). Surprisingly, an expected fish symbolism, in **IAU Del**, is rare outside Western cultures: a single Polynesian constellation sees a fish in these stars (the Kailou Fish of the Manus).
_Landscape_. **IAU Eri**, a long chain of faint stars, surprisingly recurs as elements of a landscape. It is a single, long river in eight Western cultures. In China and Korea, two subsets of the star chain form two landscape elements: in the southern region between \(\upsilon^{1}\) and \(\iota\), a hill or orchard; in the northern region between \(\gamma\) and \(\tau^{9}\), a garden or meadow. All these constellation line geometries are simple chains of stars. This can be assumed a _natural effect_ of the star pattern: a faint, winding, expansive chain of stars. In this sky region, constellations are not drawn at all in other cultures (except in Polynesia, and then only in the northern region), likely due to the stars being unremarkable.
_Body part_. **IAU Tau** is universal not as an entire mammal, but (as in the Western asterism known as the V of Taurus) as a mammal body part: the jaws. The species varies with the geographical location. The jaw is of a bull (in Babylonia), a wolf (for the Norse in northern Europe), a tapir (for the Lokono and Tupi in S America), or a caiman (for the Tikuna in S America). In the same functional trend, it is also common as a man-made tool (similar in function to jaws, i.e., to hold or capture items): a net (for the Manus in Polynesia, and around China), and tongs or tweezers (on Anuta, Samoa, and Vanuatu in Polynesia). The large geographic range of this universal semantic points to a _natural effect_.
**Examples: IAU Ori and UMa**. We give two further examples, for the two sky regions which are anecdotally described to have recurring semantics across Eurasia and the Americas [28, 29, 9, 10]: IAU Ori and UMa. Our account provides a more complete account of semantic universalities in these regions.
The brighter stars in **IAU Ori** are popular, and have diverse semantics--Figure 6 (top) shows the breakdown per phylogeny. Orion's belt (\(\delta,\epsilon,\zeta\)) are common stars with a large number (four) of recurring semantics:
**group**: The group symbolism is widespread in six phylogenies, although is diverse in the entities forming the group: Fishermen (Norse), Three Marias (Romania), Three Fire Lords (K'iche), Three Men in a Fishing Canoe
Figure 5: **Semantic universality for IAU Cru. Legend as for Figure 4.**
(Manus), Deer (Pawnee, Pardhi), Three Stars (China, Gond, Mandar), Three Things Side by Side (Ahta), Three Together (Tzotzil), Trio (Carolines), etc. No fewer than 20 cultures have a constellation with a group semantic using all of Orion's belt. The constellations occasionally add more stars to the three belt stars, and rarely subtract from them. A group semantic for Orion's belt related specifically to hunting, universal across N America and Asia, was known [29]: the three belt stars would represent animals being hunted. We find instead that the three-group symbolism is general and extends across the world. It is likely a _natural effect_ of the star pattern, which resembles a procession.
**man-made object**: The symbolism of a plough (and related tools: axe, adze, auger, rake, pole, yoke, or stick) is widespread, reflecting a common agricultural theme in many geographical regions, but also the _natural_ T shape of Orion's belt and sword. 16 constellations using Ori stars represent such objects. Other implements (canoes, strings, traps, fires and other tools) are present in 1-2 cultures each (e.g., the Maya and Aztec marked two fire places in this region). Figure 6 (bottom) provides examples for this semantic, as well as the others.
**humanoid**: There are few human figures drawn in Ori in phylogenies other than Mesopotamia, but they exist, and most have large, naturalistic constellation lines including many stars besides Orion's belt: an Investigator (Japan moon stations), an Old Man (Tupi, a large figure expanding into IAU Tau), a One-Legged Hunter (Kari'na), a Wintermaker (Ojibwe). Surprisingly, this semantic occurs in both hemispheres and in the tropics, so even when the orientation of the star cluster changes, a human figure is still seen in this sky region, likely due to the _natural_ shape of the star pattern when including stars beyond the belt and sword.
**body part**: In N and S America, the belt stars sometimes form the basis (joint) of a hand, arm, or leg constellation (for the Sioux, Zuni, Lokono, Tikuna), with or without additional stars. This recurrence may be due to _cultural_ influence between these neighbouring continents.
For **IAU UMa**, different semantics dominate in different phylogenies (the mammal in Western cultures, but humanoid figures in N America, man-made objects in Austronesia, and groups in Austroasia), as shown in Figure 7. Significant parallels exist for three of these semantics for IAU UMa (and only the man-made object semantic for **IAU UMi**):
Figure 6: **Semantic universality for IAU Ori. Legend as for Figure 4.**
Figure 7: **Semantic universality for IAU UMa. Legend as for Figure 4.**
* A known _cultural_ effect [28], the bear semantic is mostly supported in Western cultures, but recurs also in N America for the seven major stars of UMa: the Zuni see a Great White Bear, the Mi'kmaq a Celestial Bear. Other mammals replace the bear only locally: an elk (Russia), a caribou (Inuit), and a fisher (Ojibwe).
* Cultures from five ancestries see objects in the seven Big Dipper stars: carts, chariots, or stretchers (in five Western cultures, plus Pawnee in N America). Also recurring are objects for handling liquids: a dipper or handful (in one Western, three Chinese-ancestry cultures, plus K'iche in N America), and fishing nets, canoes, boats, or ships (in four out of five Austronesian cultures, Marshall in Polynesia, and Maricopa in N America). The cart and dipper semantics can be assumed _natural_ [28]. The boat constellations are lined in original and diverse ways (also using additional stars beyond the Big Dipper), so cannot be ascribed to a natural effect of the star pattern, but rather to a _cultural_ influence or theme in the oceanic ancestries.
* A semantic (universal in myths between pre-Columbian N America and central and west-central Asia [28] and assigned to a common origin) is a seven-brother symbolism (present on both continents, but widespread in Asia), and that of a stretcher (borne by four men) or bed, followed by a train of mourners or thieves. We find the stretcher-and-train symbolism in Central India, but more often the meaning of a group of seven: an organized group of thieves (Macedonia), seven brothers (Sardinia, Blackfoot), Buddhas (Mongolia), caribou (Inuit), or unspecified entities (Hawaii, Zuni)--also present beyond the regions already known [28], in Polynesia. This semantic may be universal because of a common _cultural_ influence (possible since the Hawaiian culture was recently documented, and the others have a likely common origin). It may also be so because the pattern of seven similarly bright stars lends itself _naturally_ to this semantic. Question S.2 will lend weight to the natural hypothesis.
### Question (S.2): Semantic universality per star pattern
To study the association between constellation star pattern and semantic, the first step is to quantify the star pattern (both the spatial point pattern and the star magnitudes) into numerical features (see Section 4.4 for details). After feature selection (which eliminates the most redundant features, on the basis of correlations), eight features remain. Each describes the appearance of the star cluster from a complementary point of view:
* the **number of stars** in the star cluster;
* the **aspect ratio** of the star cluster's point pattern;
* its **spatial diameter**, in degrees on the celestial sphere;
* the **fraction of stars on the convex hull**, which shows the internal complexity of the pattern (circular and linear patterns both have values close to 1, while random patterns have low values);
* the **fraction of stars in line** on the spatial Minimum Spanning Tree (MST) drawn over the star cluster (a purely linear MST will have value 1);
* the **average MST branching** degree;
* the **minimum** and **maximum magnitude** among the stars.
The number of stars, the spatial diameter, and the magnitude features are used in their absolute values; the remaining features are normalised to \([0,1]\) (dividing by the size of the star cluster), such that they are size-independent.
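A sketch of how such features could be computed from a cluster's star coordinates (right ascension and declination, in degrees) and magnitudes is given below, using standard SciPy routines; the tangent-plane projection, the "in line" criterion (here, MST nodes of degree at most 2) and the aspect-ratio definition are assumptions, since Section 4.4 is not reproduced here.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.sparse.csgraph import minimum_spanning_tree

def star_pattern_features(ra_deg, dec_deg, mag):
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    # unit vectors on the celestial sphere; pairwise angular separations in degrees
    xyz = np.stack([np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)], axis=1)
    sep = np.degrees(np.arccos(np.clip(xyz @ xyz.T, -1.0, 1.0)))
    n = len(mag)

    # local tangent-plane projection (adequate for spatially compact clusters)
    ra0, dec0 = ra.mean(), dec.mean()
    xy = np.stack([(ra - ra0) * np.cos(dec0), dec - dec0], axis=1)

    # aspect ratio: minor/major axis of the point cloud, via covariance eigenvalues
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(xy.T)))
    aspect_ratio = float(np.sqrt(eigvals[0] / eigvals[-1])) if eigvals[-1] > 0 else 1.0

    hull_frac = len(ConvexHull(xy).vertices) / n if n >= 3 else 1.0

    # minimum spanning tree over angular separations; node degrees give the shape statistics
    mst = minimum_spanning_tree(sep).toarray()
    adj = (mst + mst.T) > 0
    deg = adj.sum(axis=1)
    return {
        "n_stars": n,
        "aspect_ratio": aspect_ratio,
        "diameter_deg": float(sep.max()),
        "hull_fraction": hull_frac,
        "mst_in_line_fraction": float(np.mean(deg <= 2)),
        "mst_mean_branching": float(deg.mean()),
        "min_magnitude": float(np.min(mag)),   # brightest star (smaller = brighter)
        "max_magnitude": float(np.max(mag)),   # faintest star
    }
```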
These features are expected to affect the choice of semantics, at least in cultures where the line geometry is naturalistic. Semantics with more complex shapes (such as humanoids and mammals) may require larger clusters. Chain-like star patterns, either straight (aspect ratio close to 0) or bent (fraction of the MST in line close to 1) may naturally lend themselves to certain semantics (reptiles, groups in procession). Patterns which are homogeneous in magnitude (the minimum magnitude is close to the maximum) may inspire group semantics, in which each star has equal standing.
We then train machine-learning classifiers to _describe the statistical patterns_ (if any exist) between the features of the constellation star cluster and the semantics assigned to that constellation. These are multi-class Support-Vector Classifiers (SVC) with balanced class weights and regularisation configured for a trade-off between high and low variance (more detail in Section 4.4). If a classifier can discriminate well among some semantic categories, then a strong association exists between star pattern and those semantics. The association is then interpreted via _classification maps_: two-dimensional plots with two important features of the star pattern on the axes, whose surface shows the regions of feature values which associate with a semantic. If, on the other hand, discrimination is not possible, then there exists no universal semantic assignment per star pattern.
For each classification task, we report three performance metrics. The _accuracy_ of discrimination among semantics is the fraction of constellations whose semantics were correctly discriminated (_balanced_ such that all semantic categories
weigh equally). The accuracy should be judged against its _random baseline_, which is 1 divided by the number of semantics. The _recall_ and _precision_ are also reported, in this case per semantic. All performance metrics are in \([0,1]\). If a semantic category has high recall but low precision, the interpretation is that the semantic is indeed assigned to a particular type of star cluster (so, the semantic has a characteristic star pattern), but also other semantics share that same type of cluster. Low precision is natural for this problem, since no popular star cluster is assigned the same semantic worldwide. The breakdown in semantics from Figure 3 (bottom) also supports this.
Worldwide, _unstratified_ by phylogeny, the classifier finds a weak association between star pattern and semantic (accuracy 0.34, baseline 0.07) for the 14 semantic categories represented in the global data. Few semantics have good recall (plant: 0.72, geometric: 0.60). This result is subject to bias, due to the unbalanced size and different semantic makeup of the phylogenies. We present stratified results below.
**Summary of results.**_Stratified_ by phylogeny (taking only the semantics represented in each phylogeny by at least five constellations), stronger associations are found by multi-class classifiers, and they differ among phylogenies. Figure 8 shows confusion matrices: each cell represents the fraction of constellations from a semantic which are indeed predicted to be from that semantic. A diagonal matrix would mean a perfect association between star pattern and semantic. China and Mesopotamia (accuracies 0.46-0.47, baseline 0.09) have semantics with high recall, with Chinese birds (recall 1.00) and Mesopotamian landscapes (recall 0.92) easiest to match to their specific star patterns. For Polynesia (accuracy 0.42, baseline 0.12), only body parts, groups, and humanoids can be matched to star patterns. N American semantics are particularly hard to discriminate. For the smallest phylogenies, there is too little data to be confident in the results, so they are rarely discussed in what follows. In all cases, precision is variable (0.13-0.89). In all large phylogenies, high recall for a semantic class can only be achieved with low precision, meaning that there are overlaps in semantics on the same types of star clusters.
To interpret why certain semantics are assigned to certain star patterns, we isolate each phylogeny-semantic combination and build for each a focused, binary classifier (accuracy baseline 0.50), trained to discriminate that semantic from _all others_ in that phylogeny. Binary classification is a simpler problem which requires less data in training (so can also be done for the smallest phylogenies, such as Egypt), and performance may be better than in the multi-class setting. When such a classifier is successful, an association between semantic and star pattern was found _within_ the cultures of that phylogeny, which is not surprising due to a common origin. But, if the same association is found for more than one phylogeny, we can speak of semantic universality _across_ phylogenies. If there is no known cultural transmission cross-phylogeny, this can then be assigned to a _natural effect_: astronomical cultures from both phylogenies interpret star patterns similarly. We focus on this cross-phylogeny semantic universality, which we discuss below.
Only four semantics have _strong commonalities in star pattern across phylogenies_ (summarised in Table 2), all likely _natural effects_: humanoid, reptile, group, and geometric. The other semantics either match to star clusters in different ways across phylogenies, or do so in only one phylogeny, or not at all.
Figure 8: **Semantic universality by star pattern.** (Per phylogeny:) Normalised confusion matrix for semantics, given the features of the star pattern. Only semantics with at least 5 constellations, and phylogenies with 3 or more such semantics, are included.
_Humanoid._ The humanoid symbolism does have particular types of star clusters in certain cultures (Figure 9 shows classification maps for interpretation). Recall is relatively high, but precision is low in all large phylogenies, so some non-humanoid constellations also occupy star clusters of the same types. The number of stars and spatial diameter of the cluster are the most important discriminants in Mesopotamian and Egyptian ancestries: relatively large clusters with relatively many stars are made into humanoids--with exceptions for the former, in which a fraction of small humanoid constellations break the pattern. The interpretation is different for Chinese humanoids. The same features are important, but the association is the opposite: with few exceptions, the smallest clusters with few stars are made into humanoids. These are simple chains or coils of 2-6 stars undistinguished in brightness (with a broad range of aspect ratio, 0-0.75, and a high MST branching), fundamentally different from the complex line figures in Western cultures. Polynesia resembles China: spatially small clusters of 3-5 (except now relatively bright) stars are made into humanoids. There are thus not one, but _two_ universal ways to imagine humans in the sky: either naturalistically onto expansive star clusters of many stars, or abstractly onto small clusters of few stars of variable magnitude.
_Reptile._ For this semantic, the aspect ratio and MST branching of the star pattern are the most important discriminants (Figure 10), and the association is consistent across four phylogenies. There are two types of reptile constellations, both similar in appearance. They are drawn over star patterns which are either linear (have low aspect ratio), or circular but mostly unbranched (any value of aspect ratio, as long as the average MST branching is low). This allows to draw snakes (either straight or coiled) by largely following the MST. A secondary, less frequent, type has both high aspect ratio and high branching: these are usually spiral snakes, or turtles with near-circular shapes. We conclude that drawing reptiles is universal, and can be called naturalistic in all cases.
_Group._ In China, group constellations are not distinct from humanoids in star pattern. However, group constellations are well discriminated in Western and Polynesian cultures: only relatively bright star clusters of at most seven stars are made into group constellations. (This is also true, but less consistently, in N American and Austroasian cultures.) Figure 11 shows classification maps for two pairs of important features. The pair min.-max. magnitude shows a distinct property of group constellations in the West and Polynesia: the stars per cluster are all comparable in magnitude. There are a few exceptions: groups with a dominant bright star and a trail of faint stars, such as the Romanian constellation She-Goat with Three Kids (identical to IAU Lyr). An additional property holds: the star cluster often has a low (long) aspect ratio (below 0.11, e.g., Orion's belt), also with a few exceptions: seven-people groups in the Big Dipper, and circular seven-people groups in IAU CrB (such as the Romanian Ring Dance constellation). Western and Polynesian cultures thus share the design of group constellations: a small number of uniformly bright stars, often in a long pattern.
\begin{table}
\begin{tabular}{|l|l|c c c|} \hline & & & & **phylogenies with similarities** \\ \hline type of **star pattern** & **semantic** & accuracy & recall & precision \\ \hline large spatial diameter, many stars & humanoid (naturalistic) & -75-98 & -78-1 & -37-83 \\ spatially small clusters, few stars & humanoid (abstract) & 66-77 & -78-1 & 11-21 \\ low aspect ratio or low MST branching & reptile & -73-34 & -83-1 & 06-17 \\ few stars, bright, comparable in magnitude; often low aspect ratio & group & 80-84 & -77-89 & 13-34 \\ relatively high aspect ratio and MST branching, few stars, bright, comparable in magnitude & geometric & 83-84 & 85-90 & 21-24 \\ \hline \end{tabular}
\end{table}
Table 2: **Semantic universality per star pattern.** For each pair of **type of star pattern** and **semantic**, phylogenies which measure as semantically similar are marked (on the right). Since the similarity is determined statistically, only the larger phylogenies with sufficient data per semantic are included. Performance metrics for binary classifiers are shown as ranges across phylogenies.
Figure 9: **Semantic universality of the humanoid semantic per star pattern.** Probabilistic, binary classification maps show the decision boundaries between humanoids and all others, against two important features of the star pattern. Purple areas denote humanoids, and green areas all other semantics. Annotated performance metrics for binary classifiers: balanced accuracy (acc.), recall (r.), precision (p.). Two constellations are also shown as examples.
Figure 10: **Semantic universality of the reptile semantic per star pattern.** Legend as for Figure 9.
_Geometric._ Constellations named after lines (straight or bent), triangles, quadrilaterals, crosses, hexagons, letters of the Latin alphabet with simple geometries (G, W, Y, V), zigzags, or circles are well represented only in Western and N American cultures, where they do share characteristic features. As expected by the semantic, these star patterns tend to have a relatively high aspect ratio (around 0.5) and a relatively high degree of MST branching, so are rarely linear in shape. However, in all other respects they share the properties of group constellations: their clusters contain up to ten stars, which are relatively and uniformly bright.
_Other semantics (without commonalities across phylogenies)._ Four of the seven Chinese _bird_ constellations are among the most naturalistic constellations in this phylogeny: spatially expansive star clusters, with often the most numerous stars, low aspect ratio (below 0.5), and in particular low MST branching for this phylogeny (below 0.15)--as shown in Figure 12. The bird line figure is drawn naturalistically roughly following the MST, with expansive wings or tails. In N and S America, only weak patterns exist. In Western cultures, the pattern is the opposite: birds are drawn over star clusters with relatively high MST branching (0.38 for IAU Crv) and/or high aspect ratio (0.73 for IAU Aql). _Body parts_ are weakly recognisable (accuracy 0.70-0.75) in the Chinese, Mesopotamian and Polynesian phylogenies, but their characteristic star patterns don't have commonalities. Mesopotamian body parts are mostly heads (or heads of hair) drawn somewhat naturalistically, on star clusters with relatively high aspect ratio (around 0.5), with often all stars on the convex hull of the cluster. Polynesian body parts are all linear with low aspect ratio (below 0.2) and represent a variety of parts in the same linear style (a wing, tentacle, backbone, tail, eyes, etc.).
_Other semantics (within one phylogeny)._ Landscapes are matched to types of star patterns only in Western cultures, where the constellations represent rivers (such as IAU Eri) and fields or enclosures, naturalistically. The former is drawn onto characteristic star patterns with numerous (25+) faint stars. Elements of _architecture_, prevalent in China, do not have characteristic star patterns there, but do instead in N America (with only six constellations in the data): these constructions (a den, lodges, a marquee, stairs) are star patterns with a linear MST (fraction of stars in line on the MST equal to 1). Their line figures also follow the MST, so draw the most natural, minimal line through all stars (sometimes also closing it into a loop). The Chinese _plant_ constellations correspond to star clusters with some of the highest aspect ratios, sometimes drawing recognisable (stacks of) stems or leaves. _Mammals_ don't have characteristic star clusters anywhere except somewhat in China (faint, close-to-linear star chains with low aspect ratio and an abstract look). Instead, in Western cultures, mammals tend to use similar star clusters as the humanoids. The Austronesian _fish_ constellations are characteristic, but this is only because they are rare (six constellations in that phylogeny) and universally drawn in the same locations: IAU Sco and Cru (see Table 1 on semantic universality); the classifier thus learned those two star clusters as characteristic for fish figures. Unlike the reptiles, _arthropods_ don't have characteristic star patterns (consistent low recall, as per Figure 8).
Figure 11: **Semantic universality of the group semantic per star pattern.** Legend as for Figure 9.
Figure 12: **Semantic universality of the bird semantic per star pattern.** Legend as for Figure 9.
## 3 Discussion
**Summary of findings and potential impact.** We found numerous instances of semantic universality (some by sky region, and some by the properties of the star pattern). This provides a systematic account of semantic universality--unlike prior work, which only emphasised certain examples in selected sky regions (IAU UMa, Ori) [28, 29, 9, 10]. Importantly, we can not only measure the semantic parallels, but also find the characteristic star patterns that associate to a semantic, by delineating them quantitatively, e.g., star patterns with homogeneous star brightness, or low aspect ratio. This method of measuring universality by quantifying the star pattern (our Question S.2) is new.
_Culturally induced_ semantic parallels are expected, particularly within a cultural region with common ancestry, and across regions with a common cultural origin or some degree of influence. We emphasise the most salient ones from Table 1: body parts in IAU Ori, fish in IAU Cru and Sco, geometric symbols (the widespread cross) in IAU Cru, groups in IAU UMa, mammals in IAU UMa, and reptiles in IAU Sco--all with consistent line figures.
Surprisingly, we find far more significant semantic parallels (Tables 1-2) across two or more cultural phylogenies, which can only be _naturally induced_ by the shape and composition of the star pattern underlying a constellation. These semantics can be called endogenous to the sky:
* **arthropods** in IAU Sco (with universal line geometry);
* **body parts** (specifically jaws, naturalistically drawn) in IAU Tau (with universal line geometry);
* **geometric** symbols, naturalistically drawn, in star clusters (regardless of sky region) with few stars, bright, comparable in magnitude, and relatively high aspect ratio and MST branching;
* **groups** in the belt of IAU Ori, but also generally in other star clusters with few stars, bright, comparable in magnitude, and (unlike the geometric symbols) often low aspect ratio;
* and **mammals**:
* naturalistically drawn in IAU Aur, Boo, Leo, Ori, Sgr, but also generally in other star clusters with large spatial diameter and many stars (with very diverse line geometry);
* drawn in spatially small star clusters with few stars;
* **landscapes** in IAU Eri (with simple, universal line geometry);
* **man-made** objects of different types in IAU Cas, CrA, CrB, Del, Ori, Sgr, Tau, UMa, UMi (with somewhat consistent, simple line geometry);
* **reptiles** naturalistically drawn in star clusters with low aspect ratio or low MST branching.
Associations between star pattern and semantics often have high recall, but low precision, because no star cluster is consistently associated to the same meaning. For example, Orion's belt (\(\delta\), \(\epsilon\), and \(\zeta\) Ori in Figure 6) has four widespread semantics, plus some additional regional ones. Also, each semantic parallel we emphasise is across cultural regions: between two and six such regions, so is variable in geographic extent.
Our results are evidence of universality in constellation formation which is complementary to that found in our previous measurements considering only the line geometry (not also semantics) [13]. They possibly shed light on what is universal or deterministic (and in which cultural regions) in the thought patterns behind forming and naming constellations.
**Limitations.** Our measurements hold in the context of: this dataset of constellations, this choice of features of the star pattern, and this choice of semantic categories. Acquiring more data may uncover more (or stronger) semantic similarities between cultures--data availability is a crucial aspect for any research question, particularly since data in cultural astronomy is scarce, and folk traditions are fading fast. Some semantic categories may be too broadly defined and encompass objects or entities with diverse shapes. For example, snakes look different than turtles, although both are reptiles--we have observed the two categories as two types of star patterns in the results of Question S.2, but may have missed this difference in other semantic categories. Some of the best populated semantic categories (man-made objects, architecture, landscape) could also be split into subcategories, which would mean that stronger requirements would be placed on the type of object that is matched between cultures.
**Future work.** We have measured the extent to which semantics are universal and thus predictable for a given star cluster--without also precisely quantifying the extent to which the geometry of the line figure is itself predictable from the star cluster alone; this remains as future work.
## 4 Data and method
### Data
The dataset consists of constellation line figures from astronomical cultures worldwide. Just under half of the cultures had been contributed by members of the public to the astronomy software Stellarium [88]; we use this Stellarium data after validating it against existing scholarly sources, and verifying that the license allows research use. The remaining half of the cultures were digitised by the author from scholarly sources; of these, some supplement existing Stellarium cultures. The data is publicly available at [https://github.com/doinab/constellation-lines](https://github.com/doinab/constellation-lines) and forms a living dataset1. It was partially introduced in a prior analysis on the network signature of constellation line figures [13]. 19 small cultures were added to the dataset since the publication of [13], and minor corrections were made to the other data. The table below presents an overview; the scholarly references are in the last column.
Footnote 1: Publishing the data is work in progress, due to a change in format to JSON, such that it is compatible with the newest Stellarium format. The data is also contributed back to Stellarium.
\begin{table}
\begin{tabular}{|c c c c c c|} \hline
**location** & **phylogeny** & **sky culture** & **timestamp** & **source** & **\#constellations** & **references** \\ \hline Global & M & **IAU** & 105 AD-20th c. & standard & 86 & [38, 88] \\ Global & M & **Rey** & 1952 & book & 80 & [68, 88] \\ Global & M & **Western** & present & dataset & 88 & [88] \\ Global & M & **Western asterisms** & present & dataset & 53 & [88] \\ N Africa & E & Egypt & 1470, 50 BC & carving, paper & 26 & [49, 88] \\ W Asia & M & **Babylonia** & 1100-700 BC & tablet, papers & 50 & [35, 36, 38, 36] \\ W Asia & M & **A-Sufi** & 964 AD & book, dataset & 51 & [48, 88] \\ W Asia & In & **Arabia moon st.** & 9th c. & book, paper & 21 & [45, 88] \\ S Asia & Aa & **Banjara** & present & paper & 1 & [82] \\ S Asia & Aa & **Gond** & present & paper & 10 & [81] \\ S Asia & Aa & **Kolam** & present & paper & 6 & [82] \\ S Asia & Aa & **Korku** & present & paper & 6 & [84] \\ S Asia & Aa & **Nicobars** & present & paper & 3 & [83] \\ S Asia & Aa & **Pardhi** & present & paper & 3 & [31] \\ S Asia & An & **Bugis** & present & papers & 12 & [66, 59] \\ S Asia & An & **Java** & 19th c. & book, paper & 3 & [85, 43] \\ S Asia & An & **Madura** & present & paper & 6 & [25] \\ S Asia & An & **Malay** & present & paper & 3 & [39] \\ S Asia & An & **Mandar** & present & paper & 6 & [66] \\ S Asia & In & **India moon st.** & \(<\) 500 BC & book, dataset & 21 & [8, 88] \\ S Asia & mseA & **Thai** & present & paper & 9 & [59] \\ \hline E Asia & **C** & **China medieval** & 1092 AD & chart, book & 245 & [62, 88] \\ E Asia & **C** & **China** & 1756-1950 & chart, book & 252 & [62, 88] \\ E Asia & C & **Japan moon st.** & 8th c. & chart, paper & 27 & [67, 88] \\ E Asia & C & **Korea** & 1395 & chart, dataset & 218 & [88] \\ E Asia & M & Mongolia & present & dataset & 4 & [88] \\ \hline Eurasia & M & Russia & present & book & 4 & [75] \\ Europe & M & **Belarus** & 19th-21st c. & paper & 12 & [7, 88] \\ Europe & M & **Dien** & 1831 & chart & 100 & [23] \\ Europe & M & **Macedonia** & present & paper & 16 & [17, 88] \\ Europe & M & **Norse** & 13th c. & verse, book, dataset & 6 & [40, 88] \\ Europe & M & **Romania** & 1907 & book, exhibition & 37 & [61, 1, 88] \\ Europe & M & **Ruelle** & 1786 & chart & 74 & [71] \\ Europe & M & **Sardinia** & present & dataset & 11 & [88] \\ Europe & S & **Sami** & 19th c. & book & 3 & [50, 88] \\ \hline N America & nA & **Ahta** & present & thesis & 2 & [15] \\ N America & nA & **Aztec** & 16th c. & codices, book & 4 & [21, 22, 5, 88] \\ N America & nA & **Blackfoot** & 20th c. & book & 4 & [57] \\ N America & nA & **Gwich'in** & present & thesis & 2 & [15] \\ N America & nA & **Huave** & 1981 & paper & 15 & [51] \\ N America & nA & **Inuit** & 20th c. & book, dataset & 9 & [52, 88] \\ N America & nA & **Koyukon** & 20th c., present & book, thesis & 2 & [57, 15] \\ N America & nA & **Lower Tanana** & present & thesis & 1 & [15] \\ N America & nA & **Maricopa** & 20th c. & book & 4 & [57] \\ \hline \end{tabular}
\end{table}
Table 3: **Sky cultures. Phylogeny is marked: M (Mesopotamia), E (Egypt), In (India), S (Sami), nA and sA (North and South America), C (China), P (Polynesia), An (Austronesia), Aa (Austroasia), mseA (Mainland SE Asia).**
**Inclusion criteria and limitations.** Each culture contains a number of asterisms or constellations (here, uniformly called constellations). We use all documented constellations for which a line figure was also concretely documented (to support the symbolism assigned to the constellation). In some cases, the line figure was inferred (with a variable degree of certainty) from sources which described it in words or imprecise drawings. Asterisms consisting of single stars or tight star clusters such as the Pleiades are excluded from this study (column **#constellations** does not count them). See [13] (the Data section) for a detailed description of the _inclusion criteria_, _limitations_ of the data collection (approximate star identification, unknown culture size due to lost oral knowledge, a potential bias towards recalling bright stars), _types_ of cultures (astronomical literacy, practical use for the constellations), and a regional _timeline_ of constellation records.
**Location and phylogeny.** In the table, **phylogeny** marks _regions of cultural influence or migration_ with an abbreviation: M (Mesopotamia), E (Egypt), In (India), S (Sami), nA and sA (North and South America), C (China), P (Polynesia), An (Austronesia), Aa (Austroasia), mseA (Mainland SE Asia). In summary:
**Mesopotamia**: The Western zodiac originates in Babylonia (as early as \(\sim\)3200 BC), where it represented gods and associated animals. These were borrowed by the Greeks (\(\sim\)500 BC) and transmitted to the West [69]. We mark all astronomical cultures with Western influence (such as the standard set from the International Astronomical Union, IAU) as having Mesopotamian ancestry. This naming, however, is an _oversimplification_; Western cultures are a mix of the two traditions: additional constellations of probable Mediterranean origin were also assembled into the Greek tradition [70], and gaps in the sky were also filled with minor constellations in modern times. We use this name for the phylogeny to show the oldest root origin, even though some of the constellations in this set have different, more recent origins. This phylogeny contains most European folk astronomies, including now obsolete attempts, such as the earliest modern, lined star charts by French astronomers Ruelle and Dien [71, 23] (dated 1786 and 1831). The only exception is the **Sami** culture, which has no known continental influence, so forms a phylogeny of its own.
\begin{table}
\begin{tabular}{|c c c c c c|} \hline
**location** & **phylogeny** & **sky culture** & **timestamp** & **source** & **\#constellations** & **references** \\ \hline N America & nA & **Maya** & 15th c. & codex, books & 14 & [27, 87, 88] \\ N America & nA & **Mi'kmaq** & late 19th c. & book & 4 & [57] \\ N America & nA & **Navajo** & 20th c. & book & 5 & [57] \\ N America & nA & **Ojibwe** & present & book & 9 & [47, 88] \\ N America & nA & **Pawnee** & 20th c. & book & 11 & [57] \\ N America & nA & **Sahtúot'įne** & present & thesis & 1 & [15] \\ N America & nA & **Seri** & present & book, dataset & 12 & [12, 88] \\ N America & nA & **Sioux** & present & books & 13 & [46, 57, 88] \\ N America & nA & **Tutchone** & 20th c. & book & 1 & [57] \\ N America & nA & **Tzotzil** & 20th c. & paper, book & 9 & [86, 56] \\ N America & nA & **Yellowknives** & present & thesis & 2 & [15] \\ N America & nA & **Zuni** & 20th c. & book & 9 & [57] \\ C America & nA & **K'iche'** & 20th c. & paper & 7 & [76] \\ \hline S America & sA & **Inca** & 1613 & book & 8 & [48] \\ S America & sA & **Kari'na** & 1980 & paper & 8 & [54] \\ S America & sA & **Lokono** & present & dataset & 10 & [72, 88] \\ S America & sA & **Mapuche** & 19th c. & book & 7 & [55] \\ S America & sA & **Quechua** & 20th c. & paper & 4 & [80] \\ S America & sA & **Tikuna** & present & dataset & 4 & [88] \\ S America & sA & **Tukano** & 1905, 2007 & book, thesis & 21 & [44, 16, 88] \\ S America & sA & **Tupi** & 1614 & book, papers & 8 & [20, 53, 38, 88] \\ \hline Pacific & P & **Anuta** & 1998 & book & 11 & [26, 88] \\ Pacific & P & **Carolines** & 1951 & paper & 12 & [30, 37] \\ Pacific & P & **Hawaii** & present & website, dataset & 13 & [77, 88] \\ Pacific & P & **Kiribati** & present & dictionary & 16 & [79] \\ Pacific & P & **Manus** & 20th c. & paper & 12 & [33] \\ Pacific & P & **Maori** & 19th c. & paper & 4 & [60, 88] \\ Pacific & P & **Marshall** & present & dictionary & 41 & [2] \\ Pacific & P & **Samoa** & present & dataset & 14 & [88] \\ Pacific & P & **Tonga** & late 19th c. & paper & 11 & [19, 88] \\ Pacific & P & **Vanuatu** & present & website, dataset & 6 & [65, 88] \\ \hline \end{tabular}
\end{table}
Table 3: **Sky cultures.** Phylogeny is marked: M (Mesopotamia), E (Egypt), In (India), S (Sami), nA and sA (North and South America), C (China), P (Polynesia), An (Austronesia), Aa (Austroasia), mseA (Mainland SE Asia).
* **Egypt** is a single, ancient culture, from which we use only the older, native Egyptian constellations (which were combined with the Mesopotamian in the classical era [69]). These were reconstructed from pictographic sources: the astronomical ceiling of the tomb of Senenmut at Deir el Bahari in Luxor (\(\sim\)1470 BC), and the Egyptian figures on the Dendera zodiac [49].
* **India**: The Indian moon stations (27-28 in number, including single stars which are not used here) were documented during the Vedic period (before 500 BC). No external influence in this early period is known [8].
* **China**: Ancient Chinese astronomy (developed before 1000 BC) is literate, so well documented. It likely developed without external influences until the 17th c. and its influence spread to Korea and Japan.
* **N America** groups regional cultures from the Arctic to the south west, the Great Plains [57] and Mesoamerica. With the exception of the reconstructed Aztec and Maya astronomies, they are recently documented.
* **S America** groups cultures located on both coasts and in the Amazon. Since some tribes have migrated across this continent [53], they are grouped into a single region of cultural influence. They are also relatively recently documented, with the exception of the reconstructed Inca tradition.
* **Polynesia** groups Polynesian islands, as well as other (culturally Polynesian) islands, with ancestry in a common seafaring culture.
* **Austronesia** groups cultures on or around the islands of Sulawesi and Java in Indonesia, and Peninsular Malaysia in Malaysia. They (including the Austronesian Malay subgroup) are connected by an Austronesian migratory tradition and were recently documented. Only the Thai culture is marked as having a separate phylogeny (**Mainland SE Asia**), due to unclear or mixed influence: it is documented as indigenous [59] and empirically shows commonalities to the Malay asterisms, despite known early influence from China and India.
* **Austroasia** groups populations from the Austroasiatic family, a migration which is distinct from the Austronesian. These are tribes from Central India (a mix of genetically Austroasians and ancestral Indo-Europeans) and the Nicobarese tribe of the Nicobar Islands (speaking a language from the Austroasiatic family). The level of contamination or modification by interaction with other tribes seems to be low [81].
See [13] (the Data section therein) for a detailed reasoning of the phylogeny annotations. These are not always clear due to lack of historical evidence; we use the best knowledge available.
### Semantic annotation
We annotate by hand the semantic of each constellation in the dataset. This is a judgement done by the _meaning_ (the main object named), not the _shape_ of the constellation (for example, the Chinese asterism Chief is not figurative in shape, but we mark it as humanoid). Rarely, there are exceptions which pose difficulties: a constellation has _alternative_ names (in which case, we mark the two best alternative semantics), has _mixed_ symbolism (Sagittarius is a centaur, half human and half horse, drawing a bow, and is marked as both humanoid and mammal), is _ambiguous_ (in which case, we make a choice: e.g., the Korean constellation Four Spirit of River is marked as a landscape, not a group), or is _unnamed_ or unintelligible (in which case, it is not annotated with any semantic).
Note that the same constellation has occasionally _changed semantics_ in time and across cultures. We mark semantics separately by culture, so are able to capture these changes. The Greek Capricorn (a goat) was formerly the Mesopotamian Goat-Fish (which has a fish lower half, and a goat upper half, so is assigned to both fish and mammal semantics). Ten Mesopotamian constellations have preserved the same stars but gained different names upon borrowing by the Greeks: the former Agricultural Worker (a humanoid) became Aries (a mammal, renamed by a scribe's mistake), and the former Swallow (a fish and a swallow bird, touching tails) became Pisces (two fish) [64].
The semantic categories follow biological taxonomy (for constellations naming living organisms), or otherwise a taxonomy denoting the type of object. We settled on the following semantic categories:
**humanoid**: (usually one, rarely two) individual human(s), each represented with some detail, on multiple stars (regardless of line geometry);
**animal**: by biological group (usually one, rarely two, each represented on multiple stars): **mammal**, **reptile** (including amphibian), **bird**, **fish** (including mollusc), **arthropod** (usually scorpion, crab, insect, spider);
**body part**: may be a head, leg, arm, jaw, back, tentacle, eyes, claws, hand, hair, etc.;
**group**: two or more entities of any type, each represented without detail, so stars represent individuals; also when part of the constellation is a group, and part an object, it is marked as group;
**geometric**: may be a (bent, zigzag) line, triangle, cross, quadrilateral, hexagon, letters (G, W, Y, V), or circle;
**man-made object**: a mobile object: a tool, grill, table, chair, boat, adze, cart, plough, bowl, spear, net, dipper, basket, bed, tomb, cup, flag, fire place, stove, etc.;
**architecture**: an immobile man-made construction: a house, gate, room, office, kitchen, steps, passageway, tower, lodge, wall, fence, well, pillars, poles, bridges, doors, etc.;
**landscape**: outdoors scenery: a river, mountain, garden, farm, field, territory, hill, yard, enclosure, encampment, market, orchard, cloud, rain, thunder, lightning, sea, river, pool, street, road, path, etc.;
**abstract**: a concept without a shape (e.g., an emptiness, force, death, bond, voice, amount), or an under-described, non-humanoid entity (a demon, ogre, a spirit), or a geographic pointer (northern pole, west, mark, star).
In particular, the semantic categories of **man-made object**, **architecture**, and **landscape** can be considered subjective. Some elements of architecture are figuratively similar to some elements of landscape (a wall is like a path), but both categories are sufficiently numerous (9.9% of the constellations denote architecture, and 5.5% landscape) to stand on their own. Man-made objects are numerous (22.8% of the constellations denote one), but were not split into subcategories because many objects of different scales but similar function look similar, and the notion of scale does not apply in the sky (a fish hook is like a fishing net, or a garland; a boat is like a dipper for fluids, or a bowl).
### Computational analyses for Question (S.1): Semantic universality per sky region
Semantic universality in a sky region is said to be present if the semantic _similarity score_ between pairs of phylogenies is high. Given a semantic \(s\) and two phylogenies \(i\) and \(j\), denote by \(f_{i}^{s}\) and \(f_{j}^{s}\) the fractions of constellations from each phylogeny which are from that semantic. The joint probability of that semantic between these phylogenies is \(f_{i}^{s}\cdot f_{j}^{s}\).
Then, take a root star \(r\) which is represented in constellations from both phylogenies. As criteria for representation, we ask that a root star has at least (a) 2 constellations per phylogeny, and (b) 10 constellations in the global data. Among the constellations incident on \(r\), the same fractions from above are instead denoted \(f_{i}^{s}(r)\) and \(f_{j}^{s}(r)\), and the joint probability of having that semantic for constellations incident on \(r\) is \(f_{i}^{s}(r)\cdot f_{j}^{s}(r)\). Then, the semantic similarity score averages this among all root stars \(r\) in common between the two phylogenies:
\[\text{similarity score}^{s}_{ij}=\frac{\text{average}_{r}\ f_{i}^{s}(r) \cdot f_{j}^{s}(r)}{f_{i}^{s}\cdot f_{j}^{s}}\]
A similarity score is positive, but unbounded. Scores below 1 are not considered (they indicate no more similarity than expected from the natural semantic makeup of the phylogenies alone). High scores can be considered unexpected, so are discussed as results.
Two additional counts are reported: the _number of root stars_ in that sky region, and the _number of constellations_ in common for that semantic between the two phylogenies. There are _caveats_ to interpreting these numbers. If there are \(n\) root stars in common, it is possible that one phylogeny consistently draws a constellation such that it uses all \(n\) stars, but the other splits the \(n\) stars into two groups and consistently draws two smaller constellations, all with the same semantic category. Also, it is likely that different cultures from a single phylogeny sometimes use different subsets of the \(n\) stars to draw constellations with that semantic; the union of these subsets is taken per phylogeny, and the number of root stars in common between two phylogenies is reported as the intersection of two such unions. This design decision allows internal variation for how the sky region is grouped into constellations of the semantic of interest, so does not require perfect matches to be made (which would produce very few positive results).
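To make the computation concrete, the following sketch implements the similarity score above in Python. It is a minimal illustration rather than the original analysis code: the input format (per-phylogeny lists of constellations, each with a semantic label and a set of member stars) and the function names are assumptions made here for illustration, and the root stars are taken to be pre-filtered by the representation criteria described above.

```python
import numpy as np

def similarity_score(constellations_i, constellations_j, semantic, root_stars):
    """Semantic similarity score between two phylogenies (minimal sketch).

    Each constellation is assumed to be a dict with a 'semantic' label and a
    'stars' set; `root_stars` is the set of root stars shared by both
    phylogenies, already filtered by the representation criteria in the text.
    """
    def fraction(constellations, s):
        # Fraction of a phylogeny's constellations carrying semantic s.
        return np.mean([c["semantic"] == s for c in constellations])

    def fraction_at(constellations, s, r):
        # Same fraction, restricted to constellations incident on root star r.
        incident = [c for c in constellations if r in c["stars"]]
        return np.mean([c["semantic"] == s for c in incident]) if incident else 0.0

    f_i = fraction(constellations_i, semantic)
    f_j = fraction(constellations_j, semantic)
    if f_i == 0 or f_j == 0:
        return 0.0  # the semantic is absent from one of the phylogenies

    # Average over root stars of the joint probability, normalised by the
    # joint probability expected from the phylogenies' overall makeup.
    joint_at_r = [fraction_at(constellations_i, semantic, r) *
                  fraction_at(constellations_j, semantic, r) for r in root_stars]
    return np.mean(joint_at_r) / (f_i * f_j)
```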
### Computational analyses for Question (S.2): Semantic universality per star pattern
The **star-pattern features** were listed in Section 2. We describe how the more complex ones are computed.
The **aspect ratio** of the star cluster's point pattern is estimated by first translating the star coordinates from polar to 3D Cartesian, then computing the eigenvalues of the covariance matrix for the point cloud (functions for this are available in linear-algebra libraries, such as numpy). Intuitively, an eigenvalue is the factor by which a characteristic vector (a direction) is stretched. The aspect ratio is the ratio of the second largest to the largest eigenvalue, and ranges in \([0,1]\).
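As an illustration of this step, the sketch below computes the aspect ratio with numpy. It assumes the stars of one constellation cluster are given as right ascension and declination in degrees; this input convention, and the function name, are assumptions made here for concreteness rather than a description of the original implementation.

```python
import numpy as np

def aspect_ratio(ra_deg, dec_deg):
    """Aspect ratio of a star cluster's point pattern, in [0, 1]."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    # Translate polar coordinates to 3D Cartesian unit vectors on the sphere.
    xyz = np.column_stack([np.cos(dec) * np.cos(ra),
                           np.cos(dec) * np.sin(ra),
                           np.sin(dec)])
    # Eigenvalues of the covariance matrix of the point cloud.
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(xyz.T)))[::-1]
    # Ratio of the second-largest to the largest eigenvalue.
    return eigvals[1] / eigvals[0]
```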
For the remaining features, the convex hull and the spatial Minimum Spanning Tree (MST) are first calculated. Both are subsets of the Delaunay triangulation [11] on a sphere over the point cloud, which is computed with the stripy triangulation package [58]. The MST is the "backbone" subset of this triangulation, such that all stars are linked into a line figure, and this line figure has minimal global length (i.e., sum of line lengths on the celestial sphere). The MST is computed with the networkx package. The convex hull is a subset of the triangulation, such that only the edges which are part of exactly one triangle are kept. Some stars of the constellation star cluster may lie inside this hull.
The **fraction of stars on the convex hull** is the ratio of stars on the convex hull, out of all stars in the cluster. The **fraction of stars in line on the MST** is the normalised hop diameter of the MST (the fraction of stars on the MST's longest branch). The **average MST branching** takes only the stars which are not MST "leaves" (are "inner" nodes, i.e., have a degree strictly higher than 1). The feature is the sum of their degree (each degree minus 1, to discount the node's parent in the tree). This is divided by the number of stars (minus 1, to discard the root of the tree). A low value corresponds to a many-star, linear MST. A high value (which cannot reach exactly 1) corresponds to a branched MST. A _caveat_ to interpreting these values is that MSTs over three stars are peculiar, since a line of three can be considered both linear and branched (its average MST branching value is 0.5, and this can be seen in the classification maps from the Results Section 2).
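The following sketch illustrates one of these features, the fraction of stars in line on the MST. For brevity it builds the MST directly from pairwise great-circle separations with networkx, rather than from the spherical Delaunay triangulation used in the text, and it omits the convex-hull and branching features; the function name and input format are assumptions made here for illustration.

```python
import numpy as np
import networkx as nx

def mst_line_fraction(ra_deg, dec_deg):
    """Fraction of stars in line on the MST (normalised hop diameter)."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    xyz = np.column_stack([np.cos(dec) * np.cos(ra),
                           np.cos(dec) * np.sin(ra),
                           np.sin(dec)])
    # Great-circle separation between every pair of stars.
    sep = np.arccos(np.clip(xyz @ xyz.T, -1.0, 1.0))
    n = len(xyz)
    g = nx.Graph()
    g.add_weighted_edges_from((i, j, sep[i, j])
                              for i in range(n) for j in range(i + 1, n))
    mst = nx.minimum_spanning_tree(g)
    hop_diameter = nx.diameter(mst)   # longest branch, counted in edges
    return (hop_diameter + 1) / n     # equals 1 for a purely linear MST
```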
**Machine-learning classifiers** are used here only to describe statistical patterns, not to predict the semantic of a new constellation (in other words, there is no test data). The classifiers are multi-class Support-Vector Classifiers (SVC) implemented in the package scikit-learn[63]. They are configured to train with a strength of the regularisation \(C\) depending on the number of semantic categories to classify (between 2 and 14) and the amount of data. (The strength of regularisation is inversely proportional to \(C\).) \(C=1\) is sufficient regularisation for a binary classifier; we ascertain that the model neither over- nor underfits by inspecting the classification maps, which should not build decisions around single constellations (points). \(C\) is raised to 20 for the much harder problem of multi-class classification; this is necessary to obtain comparable performance in this setting to the simpler binary setting.
The _accuracy_ of discrimination among semantic categories is the fraction of constellations whose semantics were correctly discriminated (_balanced_ such that all semantic categories weigh equally). The _baseline_ (1 divided by the number of semantic categories) would be achieved by a random classifier, or, alternatively, by a classifier which always predicts one class. The _recall_ is the fraction of constellations from that semantic which were correctly predicted. The _precision_ is the fraction of constellations predicted to have a semantic which are indeed from that semantic. We configure the classifiers to train with balanced class weights, which tends to lead to low precision but high recall; this is to make the classifiers learn equally about the smaller class (the constellations from the semantic of interest) as about the dominant class (the constellations _not_ from the semantic of interest).
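A minimal sketch of this classification setup using scikit-learn is given below. The feature matrix `X` (eight features per constellation) and the label vector `y` (semantic categories) are assumed inputs, the kernel is left at the library default, and the value of \(C\) follows the text (1 for binary, 20 for multi-class classifiers); since the classifiers are purely descriptive, the metrics are computed on the training data itself.

```python
from sklearn.svm import SVC
from sklearn.metrics import balanced_accuracy_score, precision_score, recall_score

def fit_and_describe(X, y, multiclass=True):
    """Fit a descriptive SVC (no held-out test set) and report the metrics used in the text."""
    clf = SVC(C=20 if multiclass else 1, class_weight="balanced")
    clf.fit(X, y)
    y_pred = clf.predict(X)
    metrics = {
        "balanced_accuracy": balanced_accuracy_score(y, y_pred),
        "recall_per_class": recall_score(y, y_pred, average=None, zero_division=0),
        "precision_per_class": precision_score(y, y_pred, average=None, zero_division=0),
    }
    return clf, metrics
```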
## Acknowledgments
The author wishes to thank the Stellarium team, particularly Fabien Chereau and Dr. Susanne M. Hoffmann (also the chair of Working Group Star Names at the International Astronomical Union), the contributors to the Stellarium repositories for sky cultures across the world, retired Prof. Dr. Mayank Vahia (Tata Institute of Fundamental Research, India), Charles Ennis (president of the Royal Astronomical Society of Canada, maintainer of the World Asterisms Project [https://rasc.ca/world-asterism-project](https://rasc.ca/world-asterism-project)), and Dr. Chris M. Cannon (faculty in Indigenous Studies, Center for Cross Cultural Studies, University of Alaska, Fairbanks).
|
2309.10499 | Gravitational redshift revisited: inertia, geometry, and charge | Gravitational redshift effects undoubtedly exist; moreover, the experimental
setups which confirm the existence of these effects - the most famous of which
being the Pound-Rebka experiment - are well-known. Nonetheless - and perhaps
surprisingly - there remains a great deal of confusion in the literature
regarding what these experiments really establish. Our goal in the present
article is to clarify these issues, in three concrete ways. First, although (i)
Brown and Read (2016) are correct to point out that, given their sensitivity,
the outcomes of experimental setups such as the original Pound-Rebka
configuration can be accounted for using solely the machinery of accelerating
frames in special relativity (barring some subtleties due to the Rindler
spacetime necessary to model the effects rigorously), nevertheless (ii) an
explanation of the results of more sensitive gravitational redshift outcomes
does in fact require more. Second, although typically this 'more' is understood
as the invocation of spacetime curvature within the framework of general
relativity, in light of the so-called 'geometric trinity' of gravitational
theories, in fact curvature is not necessary to explain even these results.
Thus (a) one can explain the results of these experiments using only the
resources of special relativity, and (b) even when one cannot, one need not
invoke spacetime curvature. And third: while one might think that the absence
of gravitational redshift effects would imply that spacetime is flat, this can
be called into question given the possibility of the 'shielding' of
gravitational effects by charge. This argument is shown to be valid and both
attractive forces as well as redshift effects can be effectively shielded (and
even be repulsive or blueshifted) in the charged setting. Thus, it is not the
case that the absence of gravitational effects implies a Minkowskian spacetime
setting. | Johannes Fankhauser, James Read | 2023-09-19T10:20:50Z | http://arxiv.org/abs/2309.10499v1 | # Gravitational redshift revisited: inertia, geometry, and charge
###### Abstract
Gravitational redshift effects undoubtedly exist; moreover, the experimental setups which confirm the existence of these effects--the most famous of which being the Pound-Rebka experiment--are extremely well-known. Nonetheless--and perhaps surprisingly--there remains a great deal of confusion in the literature regarding what these experiments really establish. Our goal in the present article is to clarify these issues, in three concrete ways. First, although (i) [Brown and Read, 2016] are correct to point out that, given their sensitivity, the outcomes of experimental setups such as the original Pound-Rebka configuration can be accounted for using solely the machinery of accelerating frames in special relativity (barring some subtleties due to the Rindler spacetime necessary to model the effects rigorously), nevertheless (ii) an explanation of the results of more sensitive gravitational redshift outcomes _does_ in fact require more. Second, although typically this'more' is understood as the invocation of spacetime curvature within the framework of general relativity, in light of the so-called 'geometric trinity' of gravitational theories, in fact curvature is not _necessary_ to explain even these results. Thus (a) one can often explain the results of these experiments using only the resources of special relativity, and (b) even when one cannot, one need not invoke spacetime curvature. And third: while one might think that the absence of gravitational redshift effects would imply that spacetime is flat (indeed, Minkowskian), this can be called into question given the possibility of the'shielding' of gravitational effects by charge in the context of the Reissner-Nordstrom metric. This argument is shown to be valid and both attractive forces as well as redshift effects can be effectively shielded (and even be repulsive or blueshifted, respectively) in the charged setting. Thus, it is not the case that the absence of gravitational effects implies a Minkowskian spacetime setting.
###### Contents
* 1 Introduction
* 2 Gravitational redshift
* 3 Uniformly accelerated frames and the equivalence principle
* 4 Equivalence and gravitational redshift
* 5 Redshift and torsion
* 5.1 The geometric trinity
* 5.2 Gravitational redshift as evidence for spacetime torsion?
* 6 Redshift due to charge
* 6.1 The weight of photons
* 6.1.1 A thought experiment
* 6.1.2 The inertia of energy
* 6.2 Reissner-Nordstrom metric
* 6.3 Shielding gravity
* 7 Conclusion
## 1 Introduction
In 1911, Einstein foresaw a phenomenon thereafter known as 'gravitational redshift' [Einstein, 1911]. His thought experiment initiated the revolutionary idea that mass 'warps' space and time. There does, however, remain--even after over a century of study--some confusion in the literature regarding what can be inferred legitimately about the nature of space and time on the basis of the results of gravitational redshift experiments. Our goal in this article is to clarify this issue, in three ways. First, although (i) [Brown and Read, 2016] are correct to point out that, given their limited sensitivity,
the outcomes of experimental setups such as the original configuration of [Pound and Rebka Jr, 1960] can be accounted for using solely the machinery of accelerating frames in special relativity (barring some subtleties due to the Rindler spacetime necessary to model the effects rigorously), nevertheless (ii) an explanation of the results of more sensitive gravitational redshift outcomes _does_ in fact require more. Second, although typically this 'more' is understood as the invocation of spacetime curvature within the framework of general relativity, in light of the so-called 'geometric trinity' of gravitational theories, in fact curvature is not _necessary_ to explain even these results. Thus (a) one can often explain the results of these experiments using only the resources of special relativity, and (b) even when one cannot, one need not invoke spacetime curvature. And third: while one might think that the absence of gravitational redshift effects implies that spacetime is flat, this can be called into question given the possibility of the 'shielding' of gravitational effects by charge in the context of the Reissner-Nordstrom metric. This argument is shown to be valid and both attractive forces as well as redshift effects can be effectively shielded (and even be repulsive or blueshifted, respectively) in the charged setting. Thus, it is not the case that the absence of gravitational effects implies a Minkowskian spacetime setting.
The structure of the article is this. In §§2-4, we derive and discuss the gravitational redshift effect in three ways: (i) from the framework of general relativity (GR), (ii) using the equivalence principle, and (iii) from energy conservation principles. We then compare the results, and find them to be different; this allows us to be explicit about when one can account for the outcomes of gravitational redshift experiments using only the resources of special relativity, and when one cannot, thereby making good on our first self-declared goal as presented above. In §5, we introduce the geometric trinity of gravitational theories--which trade the spacetime curvature of GR for either torsion (in the case of the theory known as 'teleparallel gravity') or spacetime non-metricity (in the case of the theory known as 'symmetric teleparallel gravity')--and show that by invoking this trinity of theories, one need _not_ appeal to spacetime curvature in order to explain even the exact results of gravitational redshift experiments beyond first order. Along the way, we demonstrate the falsity of some recent claims in the literature that gravitational redshift experiments provide _direct_ evidence for spacetime torsion; together, all this allows us to make good on our second self-declared goal as presented above. In §6, we examine effects on the redshift due to charge, with some remarks on the relationship between GR and electromagnetism and the possibility of locally shielding gravity with charge; ultimately we find that one can shield both effective gravitational forces and redshift effects in the Reissner-Nordstrom metric; so, the absence of gravitational redshift effects does not imply that spacetime is Minkowskian; this makes good on our third self-declared goal as presented above. We close in §7.
## 2 Gravitational redshift
It is a straightforward exercise to derive the relative shift in coordinate time of two clocks in a given gravitational field with metric \(g_{ab}\). Since we will employ in the following sections some alternative approximate approaches to deriving the gravitational redshift result, we first present the exact and most general derivation from general relativity, variants of which are standard fare (see for example, [Wald, 2010, p. 136]).
An emitter \(O_{1}\) on the surface of the Earth sends a train of electromagnetic pulses from point \(P_{1}\) with energy momentum 4-vector \(k^{a}\) to a receiver \(O_{2}\), placed at point \(P_{2}\), at height \(h\) above \(P_{1}\). We assume the two observers \(O_{1}\) and \(O_{2}\) to be static, which is to say that their 4-velocities \(u_{1}^{a}\) and \(u_{2}^{a}\) are tangential to the static Killing field \(\xi^{a}=\left(\frac{\partial}{\partial t}\right)^{a}\). Since the 4-velocities of the two observers are unit vectors pointing in the direction of \(\xi^{a}\), we have \(u_{1}^{a}=\left.\frac{\xi^{a}}{\sqrt{-\xi^{b}\xi_{b}}}\right|_{P_{1}}\) and \(u_{2}^{a}=\left.\frac{\xi^{a}}{\sqrt{-\xi^{b}\xi_{b}}}\right|_{P_{2}}\). The lengths \(\sqrt{-\xi^{b}\xi_{b}}=\sqrt{-g_{bc}\xi^{b}\xi^{c}}\) are obtained by contraction with the metric. We let the observers \(O_{1}\) and \(O_{2}\), whose clock rates we wish to compare, describe their world-lines. The difference in the world-lines' lengths in spacetime consequently determines the amount of gravitational redshift. Figure 1 illustrates the thought experiment.
Figure 1: Two observers at different heights experience a time dilation effect in Earth's gravitational field. Emitter \(O_{1}\) on the surface of the Earth sends a train of electromagnetic pulses from point \(P_{1}\) with energy momentum 4-vector \(k^{a}\) to a receiver \(O_{2}\), placed at point \(P_{2}\), at height \(h\) above \(P_{1}\). We assume \(O_{1}\) and \(O_{2}\) are static, i.e. their 4-velocities \(u_{1}^{a}\) and \(u_{2}^{a}\) are tangential to the Killing field \(\xi^{a}=\left(\frac{\partial}{\partial t}\right)^{a}\).
Recall that for a given energy-momentum 4-vector \(p^{a}=mu^{a}\) of a particle, with respect to a local inertial frame, the energy observed by an observer that moves with 4-velocity \(v^{a}\) is
\[E=-p^{a}v_{a}. \tag{2.1}\]
Footnote 1: In particular, if \(u^{a}=v^{a}\), i.e. the particle's 4-velocity aligns with the observer's, then \(E=-mv^{a}v_{a}=mc^{2}\).
Therefore, for the frequency \(\nu_{i}\) of the photon observed by \(O_{i}\), which moves with 4-velocity \(u_{i}^{a}\), we find the relation \(h\nu_{i}=E_{k}=-k_{a}u_{i}^{a}\big{|}_{P_{i}}\) (cf. (2.1)), where \(E_{k}\) is the energy of the photon. By definition of the vector field \(\xi^{a}\), we have \(\left.\xi_{a}\xi^{a}\right|_{P_{2}}=g_{00}\big{|}_{P_{2}}\) since \(\xi^{a}\) has vanishing spatial components. It would involve a fair amount of work to derive the gravitational redshift by finding the geodesic equation. However, this can be avoided by taking advantage of a useful proposition. Light travels on null geodesics (in the geometrical optics approximation, i.e. the spacetime scale of variation of the electromagnetic field is much smaller than that of the curvature: see e.g. [Misner et al., 1973, p. 571]), from which it follows that the inner product \(k_{a}\xi^{a}\) is constant along geodesics, that is \(\left.k_{a}\xi^{a}\right|_{P_{1}}=\left.k_{a}\xi^{a}\right|_{P_{2}}\).2
Footnote 2: For a detailed proof see for instance, [Wald, 2010, p. 442]
Spacetime around Earth (if considered as generated by a point mass \(M\) at \(r=0\)) can be modelled by the Schwarzschild metric
\[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=-\left(1-\frac{r_{S}}{r}\right)c^{2}dt^{2}+\left(1-\frac{r_{S}}{r}\right)^{-1}dr^{2}+r^{2}(d\vartheta^{2}+\sin^{2}\vartheta d\varphi^{2}), \tag{2.2}\]
where
\[r_{S}=\frac{2GM}{c^{2}} \tag{2.3}\]
is the so-called Schwarzschild radius, \(r\) the distance from the Earth's centre, \(G\) the gravitational constant, \(c\) the speed of light, and \(M\) the mass of the Earth. This yields
\[\frac{\nu_{1}}{\nu_{2}}=\frac{\left.\sqrt{-\xi^{b}\xi_{b}}\right|_{P_{2}}}{\left.\sqrt{-\xi^{b}\xi_{b}}\right|_{P_{1}}}=\frac{\sqrt{1-\frac{2GM}{c^{2}r_{2}}}}{\sqrt{1-\frac{2GM}{c^{2}r_{1}}}}\approx 1+\frac{GM}{c^{2}}\left(\frac{1}{r_{1}}-\frac{1}{r_{2}}\right)\approx 1+\frac{gh}{c^{2}}, \tag{2.4}\]
or
\[\frac{\Delta\nu}{\nu}\approx\frac{GM}{c^{2}}\left(\frac{1}{r_{1}}-\frac{1}{r_ {2}}\right), \tag{2.5}\]
with \(g:=\frac{GM}{r_{1}^{2}}\) the gravitational acceleration at \(r_{1}\), \(\nu=\nu_{1}\), \(\Delta\nu=\nu_{1}-\nu_{2}\), and \(r_{2}-r_{1}=h\). For the last approximation in (2.4) we have used \(\frac{1}{r_{1}}-\frac{1}{r_{2}}=\frac{r_{2}-r_{1}}{r_{2}r_{1}}\approx\frac{h}{r_{1}^{2}}\) if \(r_{1}\approx r_{2}\) and \(r_{1},r_{2}\gg h\).
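For a rough numerical sense of the size of the effect, the sketch below (not part of the original derivation) evaluates both the exact Schwarzschild ratio (2.4) and the \(gh/c^{2}\) approximation for a height difference of about 22.5 m, comparable to the Pound-Rebka setup; the Earth parameters are standard textbook values supplied here for illustration.

```python
import numpy as np

# Standard values, assumed here for illustration.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
c = 2.998e8          # speed of light, m/s
r1 = 6.371e6         # radius of the Earth (emitter), m
h = 22.5             # approximate height of the Pound-Rebka tower, m
r2 = r1 + h          # receiver

# Exact Schwarzschild frequency ratio, eq. (2.4).
nu1_over_nu2 = np.sqrt(1 - 2*G*M/(c**2 * r2)) / np.sqrt(1 - 2*G*M/(c**2 * r1))

# First-order approximations from eqs. (2.4)-(2.5).
approx_potential = 1 + (G*M/c**2) * (1/r1 - 1/r2)
g = G*M / r1**2
approx_gh = 1 + g*h / c**2

print(nu1_over_nu2 - 1)                    # roughly 2.5e-15
print(approx_potential - 1, approx_gh - 1) # agree to leading order
```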
Experimental tests of the gravitational redshift were first conducted by Cranshaw, Schiffer and Whitehead in the UK in 1960 [Cranshaw et al., 1960]. It was not clear whether significant conclusions could be drawn from their results. In the same year, the experiments by Pound and Rebka in Harvard successfully verified the gravitational redshift effect [Pound and Rebka Jr, 1960].
## 3 Uniformly accelerated frames and the equivalence principle
Einstein's equivalence principle (also called the weak equivalence principle) assumes that any experiment in a uniform gravitational field yields the same results as the analogous experiment performed in a frame removed from any source of gravitational field but moving in uniform accelerated motion with respect to an inertial frame [Norton, 1985].3
Footnote 3: Note that [Brown and Read, 2016] use βEinstein equivalence principleβ to refer to what is often called the βstrong equivalence principleβ. For further recent discussion of equivalence principles, see [Lehmkuhl, 2021].
However, it is clear that Einstein was well aware of the mere linearly approximate validity of the equivalence principle when he wrote that
we arrive at a principle [the equivalence principle] which, if it is really true, has great heuristic importance. For by theoretical consideration of processes which take place relative to a system of reference with uniform acceleration, we obtain information as to the behaviour of processes in a homogeneous gravitational field.... It will
be shown in a subsequent paper that the gravitational field considered here is homogeneous only to a first approximation. [Einstein, 1911, p. 900]
The principle, thus, only holds in a 'small neighbourhood' of a point-like observer. Nonetheless, a treatment of the redshift effect in a uniform static gravitational field proves instructive, insofar as it shows that certain consequences of GR can be explained without resorting to effects such as spacetime curvature (this, indeed, is the central lesson of [Brown and Read, 2016]). Dealing with uniform accelerations in order to derive the gravitational redshift, however, is a delicate business, and we shall see that the field, resulting from uniform (proper) acceleration, is not _uniform_ if we demand a constant (proper) distance between emitter and observer! This, of course, is a familiar lesson regarding Rindler frames (i.e., uniformly accelerating frames) in special relativity: see e.g. [Read, 2023, ch. 9] for further discussion.
We consider a spaceship that is uniformly accelerated. An emitter \(E\) and receiver \(R\) inside the spaceship, separated by a height \(h\), compare frequencies of signals ascending the spaceship. For an illustration, see Figure 2. As in the derivation of the gravitational redshift from the Schwarzschild metric, we let the observers describe their world-lines. It suffices to consider only one spatial dimension \(x\). Acceleration \(a\) is measured in an inertial frame \(S\) with momentary velocity \(v\) relative to the inertial frame \(S^{\prime}\) outside the spaceship, inside of which the acceleration is measured to be \(a^{\prime}\).4 Relativistic transformation of 3-acceleration gives
Footnote 4: It is implicitly assumed that the proper time of co-moving clocks depends only on velocity and is independent of acceleration. This assumption is often called the βclock hypothesisβ (see for example, [Brown and Read, 2016, Section 3]).
\[a=\gamma^{2}a^{\prime}, \tag{3.1}\]
where \(\gamma=\frac{1}{\sqrt{1-\frac{v^{2}}{c^{2}}}}\) is the Lorentz factor.5
Footnote 5: To find the transformation of acceleration, one has to differentiate the spatial coordinates of the Lorentz transformation with respect to the time coordinates to first find the 3-velocity transformation (velocity-addition formula). Another differentiation of the velocities yields the transformation law for 3-acceleration.
Note that the acceleration of the spaceship needs to be measured in the (momentary) inertial frame with instantaneous velocity \(v\) such that \(a^{\prime}=\frac{dv}{dt}\) (proper acceleration). With respect to the accelerated frame, sure enough, the ship's acceleration is zero. However, the principle of relativity--the requirement according to which the laws of physics take the same form in any inertial frame--no longer holds in accelerated, hence non-inertial, frames. Therefore, as expected, the two observers in the spaceship are going to feel a (pseudo)force \(F=m_{0}a\), where \(m_{0}\) is the rest mass (invariant mass) of an object in the spaceship.
We want the (proper) acceleration \(a\) of the spaceship to be constant. The right hand side of (3.1) is equal to \(\frac{d}{dt}\left(\gamma v\right)\). Since \(a\) is constant we integrate (3.1) twice to find the trajectory--a so-called Rindler hyperboloid--of a uniformly accelerated point body as observed in the inertial frame \(S^{\prime}\):
\[x(t)=\frac{c^{2}}{a}\sqrt{1+\left(\frac{at}{c}\right)^{2}}+C, \tag{3.2}\]
with \(C\) a constant from integration. The second constant from the first integration was set to zero such that \(v(0)=0\). Without loss of generality we can also set \(C=0\). The result represents a hyperbolic path in Minkowski space, i.e.
Figure 2: The gravitational redshift experiment in a uniformly accelerated spaceship. The redshift effect can be explained by the equivalence principle--_to first order_.
\[x^{2}-c^{2}t^{2}=\frac{c^{4}}{a^{2}}, \tag{3.3}\]
from which the term 'hyperbolic motion' is derived. We assume the back of the spaceship to be subject to this motion. Note that \(\dot{x}\overset{t\rightarrow\infty}{\rightarrow}c\), as expected.
We recover uniform acceleration in the Newtonian sense for \(at/c\ll 1\). That is,
\[x(t)=x_{0}+\frac{at^{2}}{2}, \tag{3.4}\]
with \(x_{0}=c^{2}/a\) the position at \(t=0\).
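As a quick numerical sanity check on (3.2)-(3.4) (not part of the original derivation; the acceleration value is an assumption), one can compare the Rindler hyperbola with its Newtonian limit:

```python
import math

c = 2.998e8   # speed of light [m/s] (assumed)
a = 9.81      # constant proper acceleration [m/s^2] (assumed)

def x_rindler(t):
    """Hyperbolic worldline of Eq. (3.2), with the integration constant C = 0."""
    return (c**2 / a) * math.sqrt(1 + (a*t/c)**2)

def x_newton(t):
    """Newtonian limit, Eq. (3.4), with x0 = c^2/a."""
    return c**2 / a + 0.5 * a * t**2

for t in (1.0, 1e3, 1e6):   # seconds; at/c ranges from ~3e-8 to ~3e-2
    rel_diff = (x_rindler(t) - x_newton(t)) / (0.5 * a * t**2)
    print(t, rel_diff)      # negligible until at/c becomes appreciable
```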
For an exact derivation, it would lead to inconsistencies to assume that the emitter and receiver traverse the same Rindler hyperboloid with only an additional spatial distance \(h\) in the coordinate \(x\) between them. For if we maintain a constant height between \(E\) and \(R\) relative to the inertial observing frame \(S^{\prime}\), then length contraction, as predicted in special relativity, will stretch the spaceship and eventually tear it apart (cf. the spaceship paradox in [4] and [13, ch. 9]). This is key. As was also pointed out by [1], assuming the gravitational acceleration to be the same for the top and bottom observers leads to all kinds of paradoxes. Most notably, it is not possible in this case to define a globally freely falling inertial frame because the corresponding metric would lead to a non-vanishing Riemann tensor, and hence curvature! The receiver \(R\) in the bow lying higher by height \(h\) with respect to the emitter \(E\) must follow the hyperboloid
\[x^{2}-c^{2}t^{2}=\left(\frac{c^{2}}{a}+h\right)^{2}, \tag{3.5}\]
for the proper height (relative to \(S\)) to be constant. These are the two desiderata to simulate reasonably the gravitational redshift by uniform acceleration: first, the ship must have a constant acceleration; and second, the ship must have a constant proper height. The worldlines of emitter and receiver are denoted in Figure 3.
Due to relativistic length contraction, the receiver's proper acceleration needs to be slightly greater. By comparing the two hyperbolae it immediately follows that the acceleration \(g_{R}\) of the receiver is related to the emitter's acceleration \(g_{E}\) by
\[g_{R}=\frac{g_{E}}{1+\frac{g_{E}h}{c^{2}}}. \tag{3.6}\]
(Compare also the treatment and related paradoxes in [1].) Therefore, the gravitational field is not constant over the extended region of the spaceship. That is, however, not a surprise, for we would not expect the equivalence principle to hold globally in the first place. Further, it follows that proper time intervals along two different Rindler hyperbolae between two events having the same coordinate velocity are in a fixed proportion,
\[\frac{\tau_{R}}{\tau_{E}}=\frac{g_{E}}{g_{R}}=1+\frac{g_{E}h}{c^{2}}, \tag{3.7}\]
yielding the exact gravitational redshift formula for uniform acceleration. Alternatively, we can write
\[\nu_{R}=\frac{\nu_{E}}{1+\frac{g_{E}h}{c^{2}}}=\nu_{E}\left(1-\frac{g_{R}h}{ c^{2}}\right) \tag{3.8}\]
for the corresponding observed frequencies, to highlight the dependence on the two different proper accelerations of emitter and receiver (cf. also the results in [1]). From the preceding derivations we readily find for the (Rindler) metric of an accelerated frame
\[ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=\left(1+\frac{g_{E}x}{c^{2}}\right)^{2}c^{ 2}dt^{2}-dx^{2}. \tag{3.9}\]
Figure 3: The world-lines of emitter \(E\) and receiver \(R\) are Rindler hyperbolae when experiencing constant proper acceleration.
Thus, the gravitational redshift according to this metric reads
\[\frac{\nu_{E}}{\nu_{R}}=\frac{\Delta t_{x=h}}{\Delta t_{x=0}}=\frac{\sqrt{g_{00}}| _{x=h}}{\sqrt{g_{00}}|_{x=0}}=1+\frac{g_{E}h}{c^{2}}, \tag{3.10}\]
which is consistent with the first order approximation of the gravitational redshift from the Schwarzschild metric in (2.4). Clocks at \(E\) and \(R\), whose rates one wishes to compare, are permitted to describe their world-lines, i.e. Rindler hyperbolae, with respect to the inertial frame, and the value for the redshift is obtained by comparing the lengths of their world-lines in spacetime. Therefore, the treatment here is exact. The Rindler metric is, in fact, a solution to the vacuum Einstein field equations and has vanishing curvature (this should be obvious, since it is simply Minkowski spacetime in an accelerating frame). Note that since the redshift effect according to the Rindler metric depends on the absolute height \(x\), it only coincides with the classical Doppler shift formula--which exactly equals (3.10)--at the time when the space ship launches, i.e. the emitter is at \(x=0\), and the receiver at \(x=h\). In this case the proper acceleration is equal to the gravitational acceleration on the surface of the earth, i.e. \(g_{E}=g\).
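The fixed proportion in (3.7) can also be checked directly: along a Rindler hyperbola of proper acceleration \(a\), the proper time elapsed from \(t=0\) up to coordinate time \(t\) is \(\tau=(c/a)\,\operatorname{arcsinh}(at/c)\), and events of equal coordinate velocity on the two worldlines satisfy \(g_{E}t_{E}=g_{R}t_{R}\). The short Python sketch below (the acceleration and height are assumed, illustrative values) verifies that the resulting ratio reproduces \(1+g_{E}h/c^{2}\):

```python
import math

c = 2.998e8   # speed of light [m/s] (assumed)
g_E = 9.81    # emitter's proper acceleration [m/s^2] (assumed)
h = 22.5      # constant proper height [m] (assumed)

# Receiver's proper acceleration, Eq. (3.6)
g_R = g_E / (1 + g_E*h/c**2)

def proper_time(a, t):
    """Proper time along a Rindler hyperbola of proper acceleration a,
    accumulated between coordinate times 0 and t."""
    return (c/a) * math.asinh(a*t/c)

t_E = 3600.0               # some coordinate time for the emitter [s]
t_R = (g_E / g_R) * t_E    # event of equal coordinate velocity: g_E t_E = g_R t_R

ratio = proper_time(g_R, t_R) / proper_time(g_E, t_E)
print(ratio - 1, g_E*h/c**2)   # both of order 2.5e-15, as in Eq. (3.7)
```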
It is worth mentioning that the relativistic Doppler shift is yet another way to arrive at the gravitational redshift to first order. There, we have
\[\frac{\nu_{E}}{\nu_{R}}=\sqrt{\frac{1+\frac{v}{c}}{1-\frac{v}{c}}}\approx 1+\frac{g_{E}h}{c^{2}}, \tag{3.11}\]
where \(v=\frac{g_{E}h}{c}\) is the velocity of the receiver at the time when the photon reaches it.
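A minimal numerical comparison of this Doppler factor with the first-order redshift (assumed, Earth-like values; purely illustrative):

```python
import math

c = 2.998e8   # speed of light [m/s] (assumed)
g_E = 9.81    # proper acceleration [m/s^2] (assumed)
h = 22.5      # height climbed by the photon [m] (assumed)

v = g_E * h / c   # receiver velocity when the photon arrives, cf. Eq. (3.11)

doppler = math.sqrt((1 + v/c) / (1 - v/c))   # relativistic Doppler factor
first_order = 1 + g_E * h / c**2

print(doppler - 1, first_order - 1)   # agree at the 1e-15 level
```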
In experiments such as those of Pound and Rebka which were used to confirm gravitational redshift, the emitter sends a signal at equal intervals on a clock at the surface of the Earth. The receiver measures the time interval between receipt of the signals on an identical clock at height \(h\) (see Figure 4).
Only when the experiment is taken to be at rest in a Rindler frame does the equivalence principle imply that the relation between the clock times of emitter and receiver must be the same as if a spaceship were to accelerate vertically upwards in free space, as shown in Figure 2. The signals emitted at the back are received at longer intervals than they are emitted, because they have to catch up with the accelerating bow of the spaceship and thus exhibit a Doppler shift. Note that the equivalence principle is local. Thus, in a field like that of the Earth it holds only approximately (to first order) for a small spacetime region.
## 4 Equivalence and gravitational redshift
Although GR is a well-established framework, its application can yield analyses whose conclusions are equivocal. This happens to be the case, in particular, for gravitational redshift. For instance, Brown and Read comment on the gravitational redshift effect as follows:
The second possible misconception [regarding general relativity] relates to the notion that gravitational redshift experiments provide evidence for spacetime curvature. They do, but contrary to what is claimed in some important modern textbooks on GR, a single gravitational redshift experiment does not require an explanation in terms of curvature. Rather, it is only multiple such experiments, performed at appropriately different locations
Figure 4: The Pound-Rebka experiment. Receiver \(R\) measures a lower frequency of the photon than what it was when emitted at \(E\).
in spacetime, that suggest curvature, via the notion that inertial frames are only defined locally... This "redshift" effect follows directly from the claim that the emitter and absorber are accelerating vertically at a rate of \(g\ m/s^{2}\) relative to the (freely falling) inertial frames. [Brown and Read, 2016, pp. 327, 329]
Here, Brown and Read assume the 'redshift' effect to be independent of 'tidal effects' (which is what they refer to as curvature). We have in fact already shown that such a derivation is limited and does not fully account for gravitational redshift. There are indeed tidal effects in a single redshift experiment as outlined above in the most general derivation. Moreover, as we have seen, assuming both emitter and absorber to accelerate at the same rate is impossible given the two desiderata mentioned. However, they acknowledge there is nonetheless a connection between spacetime curvature and redshift experiments. This connection, for Brown and Read, amounts to the fact that redshift experiments carried out at different places on the surface of the Earth reveal 'geodesic deviation' due to the spherical shape of the planet. That is, relative to a global freely falling frame at the site of one redshift experiment, a freely falling frame at another site is not moving inertially. Multiple gravitational redshift experiments thus require for their joint explanation the rejection of the global nature of inertial frames. Brown and Read maintain it is only geodesic deviation that reveals curvature. However, we have now seen that one (sufficiently sensitive!) experiment is in fact sufficient to detect tidal effects of Earth's gravitational field, and therefore curvature, after all (at least modulo the issues to be discussed in the following section).
What Brown and Read deem to be a misconception, that is that
[a]n explanation for the results of a single gravitational redshift experiment of Pound-Rebka type will appeal to a notion of spacetime curvature [Brown and Read, 2016, p. 330],
is indeed one. However, this results not from an absence of curvature. Rather, since the Pound-Rebka experiment was solely designed to verify the first order effects predicted by GR, in this case a derivation via accelerated frames gives the desired result.
Brown and Read's proposal holds if the gravitational field of the Earth is assumed to be uniform--that is, independent of the radial distance from the centre of the Earth--and also if \(\frac{gh}{c^{2}}\ll 1\). In experiments involving larger spatial separations or stronger gravitational field variations, it is necessary to use the exact Schwarzschild solution of GR. By means of fully formed GR, of course, all approximations are bound to disappear. Incidentally, the ratio between the exact gravitational redshift and the first order approximation amounts to about 0.7%--which is below the measurement accuracy of the Pound-Rebka experiment (typically around 1% [Pound and Rebka Jr, 1960]). However, more accurate experiments performed after that of Pound and Rebka are indeed able to measure gravitational redshift to a precision beyond the first order effect (see, for instance, the hydrogen maser clock tests with a height difference of about \(10,000\) km by [Vessot et al., 1980]--the experiment tested gravitational redshift to 0.007% accuracy). Thus, for the high precision measurements, Brown and Read's account is insufficient to explain the effects of gravitational redshift in terrestrial experiments by appealing to the equivalence principle only.
So: if the equivalence principle is to be used to explain the gravitational redshift, then it is important to realise that this can only be done to first order. In addition, the quantitative results of Pound-Rebka can indeed be justified without appealing to spacetime curvature, but one should be aware that a complete theoretic description has to take into account the inhomogeneous gravitational field of Earth. After all, more sophisticated experiments with higher accuracy than those used by Pound and Rebka do measure effects due to curvature in a single redshift experiment.6 Although our considerations do not inhibit the successful comparison of the results of the Pound-Rebka experiment with first order calculation because higher order effects are beyond their measurement accuracy, they show that the qualitative explanation of the result does require one to invoke spacetime curvature and an exact treatment of accelerations in special relativity to model gravitational redshift with the equivalence principle.
Footnote 6: Note that for the Schwarzschild metric \(R=0\) and \(R_{\mu\nu}=0\), but not all entries of the Riemann curvature tensor \(R_{\mu\nu\rho\sigma}\) vanish.
## 5 Redshift and torsion
Having established that first-order gravitational redshift effects do not require an explanation in terms of geometrical properties of spacetime such as curvature, let us turn now to the question of whether one needs spacetime curvature to explain the results of gravitational redshift experiments even _beyond_ first order.
### The geometric trinity
What we have established up to this point is this: although [Brown and Read, 2016] are correct that one can account for the experimental results such as that of Pound and Rebka using only the resources of an accelerating frame in special relativity, the full explanation of the results of experiments of this kind beyond first order requires further resources, e.g. recourse to spacetime curvature. Even granting this, however, it is important to recognise that although appeals to curvature might be _sufficient_ to explain such effects, they are not _necessary_. The reason for this is that general relativity forms but one corner of a 'geometric trinity' of gravitational theories, all of which are dynamically equivalent (in the sense that their Lagrangians are equivalent up to boundary terms7), but in each of which gravity is a manifestation of a different geometric property of spacetime: curvature in the case of general relativity, torsion in the case of 'teleparallel gravity' (TPG), and non-metricity in the case of'symmetric teleparallel gravity' (STGR). For a review of the geometric trinity, see [Beltran Jimenez et al., 2019]; in what follows we focus on the case of torsion and TPG.
Footnote 7: Whether this means that the theories are _empirically_ equivalent is a subtle business, and depends upon how the boundary terms by which the theories differ are treatedβsee [Wolf and Read, 2023] for discussion.
We begin by recalling some details regarding spacetime torsion. The torsion tensor \(T^{a}_{\ bc}\), defined through \(T^{a}_{\ bc}\,X^{b}Y^{c}=X^{b}\nabla_{b}Y^{a}-Y^{b}\nabla_{b}X^{a}-\left[X,Y\right]^{a}\), is a measure of the antisymmetry of a connection: in a coordinate basis, it reads \(T^{\mu}_{\ \nu\lambda}=\Gamma^{\mu}_{\ \nu\lambda}-\Gamma^{\mu}_{\ \lambda\nu}\), where \(\Gamma^{\mu}_{\ \nu\lambda}\) are the connection coefficients associated to the derivative operator \(\nabla\) in this basis. In GR, the connection is metric compatible, in the sense that \(\nabla_{a}g_{bc}=0\) (failure of this condition implies non-metricity, which is the geometric property upon which STGR is built), and torsion-free, in the sense that the associated torsion tensor vanishes. In TPG, by contrast, one uses an alternative so-called 'Weitzenbock connection', with torsion but no curvature: see [Aldrovandi and Pereira, 2012].
Spacetime curvature constitutes a measure of the extent to which a single vector fails to come back to itself when parallel transported around a loop. Similarly, spacetime torsion constitutes a measure of the extent to which two vectors may fail to form a parallelogram when parallel transported along one another. To see this, take two vectors \(\chi^{a}\) and \(\zeta^{a}\) in the tangent space at some point \(p\in M\) where \(M\) is the spacetime manifold; first parallel transport \(\chi^{a}\) along \(\zeta^{a}\), and then transport \(\zeta^{a}\) along \(\chi^{a}\). In a torsion-free spacetime, the result of these two processes will be the same, and a parallelogram is formed. However, if the connection has torsion, then the 'parallelogram' will not close--with this non-closure proportional to torsion. Given any parallelogram which does not close, one may define therefrom a torsion tensor, and so a connection with torsion.
The Einstein-Hilbert action of GR,
\[S_{\text{EH}}=\int_{M}\sqrt{-g}R, \tag{5.1}\]
where \(R\) is the Ricci scalar, is equivalent up to a boundary term to the TPG action,
\[S_{\text{TPG}}=\int_{M}\sqrt{-g}T, \tag{5.2}\]
where \(T\) is the 'torsion scalar', which is obtained from the torsion tensor via suitable index contraction.8 Since GR and TPG are therefore dynamically equivalent, any empirical phenomenon which one can account for using the resources of one theory can likewise be accounted for using the resources of the other theory. Therefore, insofar as one can account for the full results of a gravitational redshift experiment beyond first order using spacetime curvature in GR, one can likewise account for the full results of such experiments using torsion in TPG. In this sense, curvature is--as already stated above--sufficient but not _necessary_ to account for these experimental results.9 This point is not widely known, but deserves to be stressed.
Footnote 8: See e.g. [Aldrovandi and Pereira, 2012] for the explicit definition of the torsion scalar, which won't matter for our purposes.
Footnote 9: For further discussion of the fact that TPG can pass many--in fact, all!--of the 'classic tests' of GR, see [Wolf et al., 2023].
### Gravitational redshift as evidence for spacetime torsion?
The conclusion presented above is the correct verdict _vis-a-vis_ other possible geometric explanations of the gravitational redshift results beyond first order. Drawing on work of [Schucking, 2008], however, [Maluf et al., 2009] go further, by arguing that gravitational redshift experiments of the Pound-Rebka type provide _direct_ evidence for spacetime torsion. This claim cannot be correct; in this subsection, we first present the argument, before diagnosing what is wrong with it.
The argument of [Maluf et al., 2009] proceeds as follows. In a frame comoving with the observers at either end of the Pound-Rebka experimental setup, parallelograms of light rays close--this much is evident from e.g. Figure 1. But (the reasoning goes) in an inertial frame of reference--_accelerating_ with respect to
the experimental setup, as already discussed above--such parallelograms do _not_ close; therefore, there is direct experimental evidence for spacetime torsion.
This claim is not correct, for several reasons. First, it neglects the above-noted fact that, in an accelerating frame, the two observers will in fact follow the trajectories of _Rindler_ observers--recall again Figure 3. In Rindler spacetime, parallelograms formed by the photons emitted in the experiment _do_ close, thereby rendering moot the argument expounded by [10].
Second--and relatedly--one may always define a collection of vectors which form a 'parallelogram' that does not close. However, absent some prior grounding of such a 'parallelogram' in the properties of spacetime (e.g. via the above account regarding the parallel transport of two vectors), to do so is arbitrary, and tells one nothing regarding the nature of spacetime. This, however, is precisely the form of the above argument: a certain 'parallelogram' is shown not to close, and an inference regarding spacetime torsion is drawn therefrom. However, no connection exists--or at least, has been shown to exist--between this 'parallelogram' and the nature of spacetime: the decision to focus on such a 'parallelogram' is arbitrary, with this geometrical construction bearing no relation to e.g. the parallel transport of vectors about two sides of a loop. Thus (to reiterate), the failure of a parallelogram to close _per se_ tells one nothing regarding spacetime torsion.
Third, whether the relevant 'parallelogram' closes in the case of gravitational redshift experiments is a manifestly frame-dependent phenomenon. However, whether a spacetime has torsion is a frame-independent matter. The fact that one would construct from this 'parallelogram' a vanishing torsion tensor in one frame but not another indicates that one's doing so reveals nothing about the nature of spacetime torsion itself--on the assumption that all facts about spacetime must be frame-independent in nature.
Fourth, at [10, §5] it is suggested that the non-closure of the 'parallelogram' in gravitational redshift experiments constitutes evidence for TPG (in which the derivative operator has torsion) over GR (in which the derivative operator is torsion-free). However, as already mentioned, the form of TPG under consideration is dynamically equivalent to GR. Thus, it cannot be that _any_ empirical results--including those of gravitational redshift experiments--constitute evidence for one theory over the other; and so it cannot be that gravitational redshift results constitute evidence for spacetime torsion. Put another way, even granting that in TPG an explanation for Pound-Rebka type results can be given in terms of spacetime torsion, it is not the case that such results themselves _favour_ TPG torsion-based explanations over alternative, torsion-free explanations available from GR.
## 6 Redshift due to charge
To recap: we've now seen that (a) one needn't invoke geometrical properties of spacetime such as curvature in order to explain first-order gravitational redshift results--here, consideration of accelerating frames in special relativity suffices. Moreover, (b) even beyond first order, one can appeal to other geometric properties of spacetime--_viz._, torsion or non-metricity--in order to account for the results of gravitational redshift experiments. In this section, we consider what would be implied by the _absence_ of gravitational redshift results: naively, one might think that this would imply that spacetime is Minkowskian; in fact, however, charge in the Reissner-Nordstrom metric can shield redshift effects (one might, indeed, be motivated to think this on the grounds that shielding of forces in Reissner-Nordstrom spacetimes is already a known phenomenon: see e.g. [11]). Therefore, null results of gravitational redshift experiments do not imply that spacetime is Minkowskian.10
Footnote 10: There is also the possibility of gravitational redshift in non-relativistic spacetimes--see e.g. [12]--but we'll set this aside here.
### The weight of photons
What Pound and Rebka call the 'weight of photons' in their experiments, in fact, aptly describes how Einstein originally had thought of gravitational redshift and what he had termed the inertia of energy.
#### 6.1.1 A thought experiment
Let us go back to the thought experiment alluded to in the introduction. Einstein foresaw the gravitational redshift on the basis of a thought experiment using the 'inertia of energy' he had discovered in 1905 [14], six years before his famous paper on relativity [14]. Here, we'll spell out a variant of this thought experiment (we don't make any historical claim to be reconstructing the argument as Einstein himself presented it).
Consider a test body of mass \(m_{0}\) at rest at a height \(h\), with a total energy \(m_{0}c^{2}+m_{0}gh\)--i.e., the sum of
its rest energy and gravitational potential energy. Subsequently, the mass is dropped; when it reaches the ground the total energy \(\gamma m_{0}c^{2}\) is obtained, where \(\gamma=\frac{1}{\sqrt{1-\frac{v^{2}}{c^{2}}}}\), and \(v\) is the velocity of the mass at the ground (such that \(m_{0}c^{2}+m_{0}gh=\gamma m_{0}c^{2}\)). The mass is then transformed into a packet of radiation of energy \(\hbar\omega_{1}\), which is then sent from the ground back to height \(h\), where the mass \(m_{0}\) had been situated initially. There, the packet is transformed back into a mass \(m\). By energy conservation, \(m\) must equal the mass \(m_{0}\) (note that we assume here that the energy of the radiation is transformed entirely into the rest mass of the test body, and not into a sum of rest mass and potential energy), which amounts to saying that \(\hbar\omega_{2}=m_{0}c^{2}\), where \(\omega_{2}\) is the frequency of the packet at height \(h\). See steps (1)-(4) in Figure 5.
From this we regain the first order approximation in (2.4):
\[\frac{\nu_{1}}{\nu_{2}}=\frac{m_{0}c^{2}+m_{0}gh}{m_{0}c^{2}}=1+\frac{gh}{c^{2 }}. \tag{6.1}\]
(6.1) again involves an approximation of the exact redshift formula, for we assume a uniform gravitational field. Hence we use \(m_{0}gh\) for the energy of the test body. If we were to take into account the \(\frac{1}{r}\)-dependence of the gravitational potential, then we would obtain
\[\frac{\nu_{1}}{\nu_{2}} =\frac{m_{0}c^{2}+\int\limits_{r_{2}}^{r_{1}}F_{N}dr}{m_{0}c^{2}}\] \[=1+\frac{GM}{c^{2}}\left(\frac{1}{r_{1}}-\frac{1}{r_{2}}\right)\] \[\approx 1+\frac{gh}{c^{2}}, \tag{6.2}\]
with \(F_{N}\) being Newton's gravitational force of a massive central body.
Bear in mind that neither the derivation by means of uniformly accelerated frames nor the derivation by means of energy conservation yields the correct value for the gravitational redshift in the first line of (2.4). The former holds in virtue of the inhomogeneity of Earth's gravitational field and the merely local validity of the equivalence principle. The latter is true because the Newtonian central body force law is an approximate limit of GR.
#### 6.1.2 The inertia of energy
The approach of describing the redshift effect as a result of energy conservation suggests the following idea:
Any 'source' of energy causes clocks at different distances from the 'source' to exhibit time dilation effects.
As one example, charged particles attracted by a charged source should likewise be expected to give rise to redshift effects. We can, however, now follow the procedure from above and play the same game with charged bodies, replacing the Newtonian potential with the Coulomb potential. Consider a charged source
Figure 5: Gravitational redshift as a consequence of energy conservation. A test body of mass \(m_{0}\) at rest at a height \(h\) is dropped. When it reaches the ground the total energy \(\gamma m_{0}c^{2}\) is obtained. The mass subsequently is transformed into a photon of energy \(\hbar\omega_{1}\), which is then sent from the ground back to height \(h\). There, the photon is transformed back into a mass \(m\). By energy conservation, \(m\) must equal the mass \(m_{0}\), from which it follows that the photonβs frequency must have decreased at its ascent.
and a test particle of charge \(q\) and mass \(m_{0}\). We assume the mass of the source to be negligible. The charged particle falls under the attraction of the source according to the Coulomb force. When it reaches height \(r_{1}\), a photon is created out of it and sent back to the particle's initial position, where it is transformed back into a mass \(m\) with charge \(q\). For this process to happen, we can imagine annihilating the descending charge by an anti-charge \(-q\) to create a photon (or actually at least two photons, which we can think of as a single photon for the discussion). The photon is sent back, and when it reaches the top, the initial charge \(q\) plus its anti-charge \(-q\) is created via pair-production. We assume the two particles have the same mass \(m\). The anti-charge \(-q\) subsequently is brought back to the bottom to restore the initial situation. It is precisely the energy contribution of this last step that cancels a redshift effect in the calculations, which is not further analysed here.
### Reissner-Nordstrom metric
In fact, charge does give rise to redshift effects--and consequently time dilation--in the standard formalism of GR, albeit not in a way analogous to how mass curves spacetime.
From the Einstein equations, we obtain the Reissner-Nordstrom metric (cf. [11])
\[ds^{2}= -\left(1-\frac{2GM}{c^{2}r}+\frac{GQ^{2}}{4\pi\epsilon_{0}c^{4}r^ {2}}\right)c^{2}dt^{2}\] \[+\left(1-\frac{2GM}{c^{2}r}+\frac{GQ^{2}}{4\pi\epsilon_{0}c^{4}r^ {2}}\right)^{-1}dr^{2}\] \[+r^{2}(d\vartheta^{2}+\sin^{2}\vartheta d\varphi^{2}), \tag{6.3}\]
from which we recover the Schwarzschild metric when \(Q=0\). It is worth mentioning that the charge term in the Reissner-Nordstrom metric affects geodesics of particles even though they may be uncharged. For \(Q\neq 0\), this metric gives rise to an additional gravitational redshift. In analogy with the derivation of gravitational redshift due to mass, we obtain
\[\frac{\nu_{1}}{\nu_{2}} =\frac{\sqrt{1-\frac{2GM}{c^{2}r_{2}}+\frac{GQ^{2}}{4\pi\epsilon_{0}c^{4}r_{2}^{2}}}}{\sqrt{1-\frac{2GM}{c^{2}r_{1}}+\frac{GQ^{2}}{4\pi\epsilon_{0}c^{4}r_{1}^{2}}}}\approx 1+\frac{gh}{c^{2}}-\frac{g_{Q}h}{c^{2}}, \tag{6.4}\]
where \(g\) is defined as before, and \(g_{Q}:=\frac{GQ^{2}}{4\pi\epsilon_{0}c^{2}r^{3}}\). The approximations are as in the case without charge (first order terms in \(h\) and large radii \(r_{1},r_{2}\)).
The effect is quadratic in the charge \(Q\), and, in fact, leads to a _blueshift_ of the photon. Thus, it partly compensates the gravitational redshift due to mass. Note that gravity is fully 'geometrised' by GR: geodesics of the metric fully describe the motion of test particles, whereas the motion of charged test particles requires, in addition, the usual force terms from electrodynamics.
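The following sketch checks (6.4) numerically for an assumed, deliberately over-charged solar-mass source; all parameter values are illustrative choices and are not drawn from the text:

```python
import math

# Assumed constants and parameters (purely illustrative)
G = 6.674e-11      # [m^3 kg^-1 s^-2]
c = 2.998e8        # [m/s]
eps0 = 8.854e-12   # vacuum permittivity [F/m]
M = 1.989e30       # a solar-mass source [kg]
Q = 1.0e22         # an unrealistically large charge [C], to make the effect visible
r1 = 1.0e7         # emitter radius [m]
h = 1.0e3          # emitter-receiver separation [m]
r2 = r1 + h

def factor(r):
    """The factor multiplying c^2 dt^2 in Eq. (6.3), up to the overall sign."""
    return 1 - 2*G*M/(c**2*r) + G*Q**2/(4*math.pi*eps0*c**4*r**2)

exact = math.sqrt(factor(r2)) / math.sqrt(factor(r1))

g   = G*M/r1**2                              # Newtonian acceleration at r1
g_Q = G*Q**2/(4*math.pi*eps0*c**2*r1**3)     # charge term defined below Eq. (6.4)
approx = 1 + g*h/c**2 - g_Q*h/c**2

print(exact, approx)   # agree to first order in h/r and in the metric corrections
```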
### Shielding gravity
It is often considered to be a feature of gravity that shielding an object from the influence of a gravitational field is impossible--unlike in e.g. electromagnetism, where both positive and negative charges exist. But the Reissner-Nordstrom metric complicates this picture, for in this spacetime one finds that the charge \(Q\)_can_ shield an attractive force towards the black hole.11 This result and its derivation are already known in the literature as electro-gravitic repulsion--see e.g. [12]. What is not known is that one can likewise shield gravitational redshift effects due to charge.
Footnote 11: Of course, this is subtle, since (a) all particles still move on geodesics, and (b) it really depends on what one means by 'gravity'.
Let us elaborate on this point. Recall the Reissner-Nordstrom metric (6.3). There, the two terms in \(g_{00}\)--one proportional to \(M\), the other to \(Q\)--come with opposite signs. This makes it possible to tune the parameters such that Schwarzschildian gravitational redshift effects can be compensated for by the charge of the black hole \(Q\), at least locally. Indeed, if we choose the mass and charge such that
\[\frac{2GM}{c^{2}r}=\frac{GQ^{2}}{4\pi\epsilon_{0}c^{4}r^{2}}, \tag{6.5}\]
then we recover the Minkowski metric for flat space. This equality, obviously, can only be met on a sphere with one fixed radius \(r\). But it might be taken to mean that gravity, in this sense, can at least be 'shielded' locally.
However, we must be careful to identify correctly the physical significance of the parameter \(M\) in the Reissner-Nordstrom metric. Recall first that typically in the classical limit of a general relativistic spacetime, one writes \(g_{00}\cong 1+2\phi/c^{2}\), for an effective Newtonian gravitational potential \(\phi\).12
Due to the relativistic equivalence of mass and energy, the electric field energy contributes to the total mass. Taking this into account, the effective total mass \(M\) that features in the Reissner-Nordstrom metric is then found to be
\[M=M_{b}+\frac{Q^{2}}{16\pi\epsilon_{0}GM_{b}}, \tag{6.6}\]
where \(M_{b}\) is the irreducible bare mass of the black hole (see for instance [Christodoulou and Ruffini, 1971], [Damour, 2012] and [Qadir, 1983]).
Thus, the total source mass \(M\) in the \(g_{00}\) component of the Reissner-Nordstrom metric is composed of a term due to the 'bare mass' of the black hole, plus a term due to the electric field density. Although the mass term depends on the charge, one can still obtain a cancellation of the redshift effects by charge whenever
\[\left(2M_{b}+\frac{Q^{2}}{8\pi\epsilon_{0}GM_{b}}\right)r=\frac{Q^{2}}{4\pi \epsilon_{0}c^{2}} \tag{6.7}\]
in which case one sees that one can shield the gravitational field via the charge \(Q\)--so, one invariably expects a gravitational _blueshift_ effect for small enough radii in the context of the Reissner-Nordstrom metric. This fits the existence of a repulsive force, since we have already seen that effective forces on test bodies _can_ be shielded using the charge \(Q\). The conclusion, then, is that the absence of gravitational redshift effects does not imply a Minkowskian spacetime structure.
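A small sketch of this cancellation: solving (6.7) for \(r\) with assumed, illustrative values of \(M_{b}\) and \(Q\), and checking that the mass and charge terms of (6.3) indeed balance there. (Whether this radius is physically accessible, e.g. outside any horizon, depends on the chosen parameters.)

```python
import math

G = 6.674e-11; c = 2.998e8; eps0 = 8.854e-12   # assumed constants
M_b = 1.989e30   # bare mass [kg] (assumed, roughly one solar mass)
Q = 1.0e21       # charge [C] (assumed, purely illustrative)

# Effective total mass, Eq. (6.6)
M = M_b + Q**2 / (16*math.pi*eps0*G*M_b)

# Radius at which the redshift cancels, obtained by solving Eq. (6.7) for r
r = Q**2 / (4*math.pi*eps0*c**2 * (2*M_b + Q**2/(8*math.pi*eps0*G*M_b)))

mass_term   = 2*G*M/(c**2*r)
charge_term = G*Q**2/(4*math.pi*eps0*c**4*r**2)
# The two terms are equal, so the metric reduces locally to the Minkowski
# form at this radius, cf. Eq. (6.5).
print(mass_term, charge_term)
```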
## 7 Conclusion
In light of this work, what can really be inferred from the results of gravitational redshift experiments? First, if one's experiments (like those of Pound and Rebka) are insufficiently sensitive, then one may be warranted in inferring only the Minkowski spacetime structure of special relativity--for, as [Brown and Read, 2016] point out, special relativity in accelerating frames is then sufficient to account for these results. Beyond first order, however, special relativity will not suffice; one might think that in such contexts one must appeal to spacetime curvature, but in light of the geometric trinity, this is also incorrect: one could alternatively infer to the existence of spacetime torsion or non-metricity. (_Pace_ [Maluf et al., 2009], however, one cannot infer from these experiments to spacetime torsion uniquely.) Finally, one cannot infer from the absence of gravitational redshift effects to Minkowski spacetime structure, given the possibility of shielding such effects using charge in Reissner-Nordstrom spacetimes, and the existence of gravitational blueshift due to charged sources. Together, we hope that these conclusions will prove definite and final regarding what gravitational redshift experiments really establish.
|
2309.11787 | Dependence of Solar supergranular lifetime on surface magnetic activity
and rotation | The lifetimes and length-scales for supergranular cells in active and
quiescent regions of the Solar chromosphere, and the relation between the two,
were studied using a time series of Ca II K filtergrams. The lifetimes, in
contrast to supergranular length scale and fractal dimension, show no
significant dependence on Solar latitude, suggesting that cell lifetimes are
independent of the differential rotation and a possible supergranular
super-rotation. The functional form of the relation was obtained guided by a
comparison of the distributions of the two supergranular parameters. We infer a
linear dependence of cell lifetime on area, which can be understood by the
assumption of the network's evolution via a diffusion of the magnetic field.
Our analysis suggests that the diffusion rate in quiet regions is about 10%
greater than in active regions. | Sowmya G. M., Rajani G., U. Paniveni, R. Srikanth | 2023-09-21T05:20:29Z | http://arxiv.org/abs/2309.11787v1 | # Dependence of Solar supergranular lifetime on surface magnetic activity and rotation
###### Abstract
The lifetimes and length-scales for supergranular cells in active and quiescent regions of the Solar chromosphere, and the relation between the two, were studied using a time series of Ca II K filtergrams. The lifetimes, in contrast to supergranular length scale and fractal dimension, show no significant dependence on Solar latitude, suggesting that cell lifetimes are independent of the differential rotation and a possible supergranular super-rotation. The functional form of the relation was obtained guided by a comparison of the distributions of the two supergranular parameters. We infer a linear dependence of cell lifetime on area, which can be understood by the assumption of the network's evolution via a diffusion of the magnetic field. Our analysis suggests that the diffusion rate in quiet regions is about 10% greater than in active regions.
\({}^{1}\) GSSS Institute of Engineering and Technology for Women, KRS Road, Metagalli Mysuru-570016, Karnataka, India
\({}^{2}\) PES College of Engineering, Mandya - 571401, Karnataka, India.
\({}^{3}\) Poornaprajna Institute of Scientific Research,Devanahalli, Bangalore-562110, Karnataka, India
## 1 Introduction
The supergranular network is the surface manifestation of Solar convection and is important for solar flux transport. The existence of a strong correlation between the chromospheric networks and the supergranulation structure was pointed out first by Leighton on the basis of Dopplergrams (Leighton et al., 1962; Leighton, 1963). Subsequent studies made use of Ca II K spectroheliograms and then filtergrams, an important tool to probe Solar convection and also magnetism (Chatzistergos et al., 2022). Supergranular network cells (called "supergranules") are characterized by distributions centered around a lifetime \(T\approx 25\) hours and length scale \(L\approx 35\) Mm. Since the early work of Simon and Leighton (1964), these two parameters along with the velocity and magnetic fields associated with supergranules have
been studied and reported in a wide range of values by various workers, cf. McIntosh et al. (2011); Mandal et al. (2017); Chatterjee et al. (2017); Rajani et al. (2022) and references therein. More recently, space-borne instruments such as the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) and the Michelson Doppler Imager (MDI) on board the Solar and Heliospheric Observatory (SOHO) (Williams et al., 2014) have been used to study supergranulation.
The derived lifetimes of supergranulation show a considerable dependence on the choice of method and region. Janssens (1970), using H\(\alpha\) filtergrams, estimated that \(T\approx\) 21 hours. Livingston and Orrall (1974) obtained a similar value of 22 hours, and this was also reported by Singh et al. (1994) based on the observation of the appearance and disappearance of the cell. On the other hand, the same authors observed that certain exceptional supergranules found in the vicinity of active regions can survive for several days. By visual examination of individual supergranules Wang and Zirin (1988) detected \(T>\) 50 hours, and pointed out that this can be larger than that obtained by the cross-correlation (CC) method (Rogers, 1970), owing to the latter's sensitivity to shape changes. Simon and Leighton (1964) estimated a CC lifetime of 20 hours using a time series of Ca K spectroheliograms. Worden and Simon (1976) employed the CC technique to estimate a lifetime of 36 hours using magnetogram data. Observing the Fe I 8988 Å and Ca II K networks, Duvall (1980) and Raju et al. (1998b) derived CC lifetimes of 42 hr and 25 hr, respectively.
Visual inspection techniques possess the advantage of being able to directly follow intricate morphological changes such as merging, splitting, migration, disappearance, and appearance of magnetic fluxes that make up the evolution of the network (Harvey and Martin, 1973; Wang et al., 1995). Therefore, lifetime estimated using this would better reflect the processes underlying supergranular dynamics. By contrast, correlation techniques fail to distinguish between true aspects of cell evolution such as the disappearance or appearance of certain features and shape changes arising from the relocation of magnetic elements.
On the question of whether a supergranule survives beyond its correlation lifetime, there has been conflicting evidence: (Wang and Zirin, 1988) reports in the affirmative, but comparable results found by Rogers (1970) and Janssens (1970), using the correlation and morphological techniques for H\(\alpha\) data, respectively, report in the negative. Similarly, lifetimes estimated by Raju et al. (1998a) and Singh et al. (1994); Paniveni et al. (2010) on Ca II K using correlation and visual inspection techniques show comparable results for both active and quiescent network regions. In the case of extended supergranular networks, correlation lifetimes can frequently assume values as large as 45-60 hours. However, in contexts involving intricate morphological changes, as for example the long-lived features such as magnetic _pukas_ or plages (Livingston and Orrall, 1974), it is more advantageous to study
lifetimes visually.
In this work, we investigate and contrast the relations between lifetimes and length-scales for supergranular cells in active and quiescent regions of the Solar chromosphere, using a time series of Ca II K filtergrams, extending work done by Singh et al. (1994) and Srikanth et al. (1999). Here it may be noted that the Ca II K network traces out magnetic flux concentrations at the supergranular boundary thanks to the enhanced network brightness they produce (Spruit et al., 1990; Hagenaar et al., 1997; Raju and Singh, 2002). Based on a data analytic method proposed by the latter, we deduce the functional form of the relation, guided by the distributions of the two supergranular parameters. Our results are found to support the expected picture of the network's evolution through a diffusion of the magnetic fields, and the influence of the fields on cell properties.
The paper is structured as follows. In Section 2, we introduce the data used and the method of its analysis. The basic results for cell lifetime and length scale are presented in Section 3.
## 2 Data and Analysis
Supergranular size \(L\) was estimated as the square root of the area enclosed within the cell boundaries traced out on the Ca II K filtergrams. For lifetime estimation, Kodaikanal Solar Observatory (KSO) data for the years 1998, 2002, 2004 and 2007 were used, covering the descending, minimum and active phases. The analysis used data consisting of approximately 1200 Ca II K filtergrams of the 23rd Solar cycle. Time averaging over 10 min is used in order to eliminate noise due to 5-min oscillations. This method yields approximately six data-frames per hour. To estimate lifetime at a given epoch, approximately 72 hours of data are considered, which span about 432 frames at 10-minute inter-frame intervals. A specific supergranular cell is tracked across frames sequentially, with lifetime being estimated as the time interval between the frame of its initial appearance and that of its final disappearance (Paniveni et al., 2004). Supergranular lifetime has been estimated for quiescent, active and semi-active regions. By quiet region cells, we mean those found far from the magnetically active regions (see Figure 1). Active region cells are found in close proximity to active regions (see Figure 2), whilst the semi-active region supergranules are found in regions of intermediate magnetic activity (see Figure 3).
In previous studies, supergranular lifetime was obtained via cross-correlation applied to time series data (Srikanth et al., 1999). The behaviour was analysed assuming a diffusion of the magnetic network elements, with the observed lifetime identified as a diffusion time-scale. By contrast, lifetime estimates based on visual inspection, as done here, are implicitly related to the crossing time of the plasma from the cell center to its edge (Krishan, 1999). Therefore, visual inspection is expected to yield the eddy turnover time. While lifetime estimation by visual inspection is rather tedious, it is fairly reliable (Paniveni et al., 2010). Our sample size is small, but it brings out characteristic features contrasting the different activity epochs and regions.
Our work here has been focused on studying the behaviour of supergranules in quiet, intermediate and active regions. The contrast in the cell lifetime across regions of different activity levels may be theoretically modelled as arising out of differences in diffusion rates of the magnetic flux transport (Schrijver et al., 1989).
## 3 Results
### Cell Lifetime dependence on Solar latitude
Our estimates of lifetime for quiet region cells are comparable to those previously reported based on the KSO data (Chatterjee et al., 2017; Mandal et al., 2017; Sowmya et al., 2022; Rajani et al., 2022). For example, the estimates for quiet and semi-active regions in our data match those obtained by Singh et al. (1994), who find \(T\in[15,40]\) hours, with the most likely lifetime being 22 hours. The active region lifetime is estimated by those authors to be almost double the quiescent value, in agreement with our result as indicated in Table 1.
A plot of cell lifetimes versus latitude for the data points is shown in
Figure 1: Quiescent region cells: Selection of supergranules in quiet regions of the Solar chromosphere; from the KSO archive of the 23rd cycle.
Figure 3: Semi-active region cells: Selection of supergranules in semi-active regions of the Solar chromosphere; from the KSO archive of the 23rd cycle.
Figure 2: Active region cells: Selection of supergranules in active regions of the Solar chromosphere; from the KSO archive of the 23rd cycle.
Figure 4. This stands in contrast to length scale and fractal dimension, which do show a latitude dependence, potentially linked to the Sun's differential rotation (Sowmya et al., 2022). Supergranular scale shows a vertical-horizontal asymmetry at higher latitudes (Raju, 2020). Thus, our observation here suggests that cell lifetime is unaffected by Solar rotation, whereas spatial properties of cells do manifest an influence.
We may consequently also rule out any dependence of supergranulation lifetime on superrotation, the possible faster rotation of supergranules with respect to magnetic structures and plasma. However, it may be noted that superrotation may well be an artefact of projection effects in Dopplergrams (Meunier and Roudier, 2007).
### Deducing the functional dependence of cell lifetime on the size
Parameters such as the cell lifetime \(T\) and length scale \(L\) are evidently interdependent. One can directly estimate the functional dependence of \(T\) on \(L\) with a curve-fitting algorithm. We also expect this dependence to be reflected in the distributions of these two parameters, given in Figures 5 and 6. This information can also be used to help with the estimation of the functional
\begin{table}
\begin{tabular}{|c|c|} \hline Region & Lifetime (hours) \\ \hline Quiet & 23.58 \(\pm\) 1.3 \\ \hline Semi-active & 34 \(\pm\) 1.7 \\ \hline Active & 54.4 \(\pm\) 1.6 \\ \hline \end{tabular}
\end{table}
Table 1: Solar supergranular lifetimes in the quiet, semi-active and active regions.
Figure 4: Plot of supergranular lifetime (in hours) with respect to Solar latitude (in degrees) using data of the 23rd Solar cycle.
relation between \(L\) and \(T\) for the quiet or active region.
Here we employ an indirect method to estimate the functional relationship between \(L\) and \(T\), by estimating the transformation that maps the distribution of the former onto that of the latter. Somewhat simplistically, we shall assume that their respective distributions are sufficiently represented by two parameters: (1) the skewness \(\varsigma\), which is a measure of asymmetry of the distribution about the mean \(\mu\), and (2) the kurtosis \(\kappa\), which is a measure of the "tailedness" of the distribution.
In statistics, given a random variable described by probability distribution \(f(x)\), skewness is a quantification of asymmetry of \(f(x)\). It is given by the third standardized moment, defined as follows:
\[\varsigma=\frac{1}{\alpha^{3}}\int_{-\infty}^{\infty}(x-\mu)^{3}f(x)dx, \tag{1}\]
where \(\alpha\) is the standard deviation. A distribution may be right-skewed (resp., left-skewed) when it has a more prominent tail on the positive (resp., negative) side of the mean. A zero-skew distribution is perfectly symmetric on both wings about the mean. As basic examples, a normal distribution has zero skewness; an exponential distribution has skewness \(\varsigma=2\); and for a lognormal distribution describing a random variable \(X\) whose logarithm \(\ln(X)\) is described by a normal distribution with variance \(\beta\), we have \(\varsigma=(e^{\beta}+2)\sqrt{e^{\beta}-1}\).
Given a random variable described by a probability distribution \(f(x)\), kurtosis is a quantification of how heavy-tailed \(f(x)\) is, i.e., the extent to which the distribution features extreme outliers rather than data concentrated close to the mean. It is given by the fourth standardized moment, defined as follows
\[\kappa=\frac{1}{\alpha^{4}}\int_{-\infty}^{\infty}(x-\mu)^{4}f(x)dx. \tag{2}\]
Figure 5: Histogram of lifetime of the Ca II K network cells in the active region. The curve shows a left-hand side tail. The skewness and kurtosis derived for this distribution are given in Table 2.
As basic examples, a normal distribution has kurtosis \(\kappa=3\); a Laplace distribution has \(\kappa=6\); and the uniform distribution has \(\kappa=1.8\). A distribution may be platykurtic (resp., leptokurtic) when it has lesser (resp., greater) kurtosis than the normal distribution. The skewness and kurtosis for our data are summarized in Table 2.
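For readers who wish to reproduce such statistics, the skewness and kurtosis of Eqs. (1)-(2) can be computed with `scipy.stats`, for example; the sample below is an arbitrary stand-in, not the KSO data:

```python
import numpy as np
from scipy import stats

# Illustrative stand-in values for measured cell lifetimes (hours); not the KSO data.
lifetimes = np.array([18.0, 21.5, 22.0, 24.0, 25.5, 27.0, 30.0, 34.5, 41.0, 52.0])

skew = stats.skew(lifetimes)                     # third standardized moment, Eq. (1)
kurt = stats.kurtosis(lifetimes, fisher=False)   # fourth standardized moment, Eq. (2);
                                                 # fisher=False keeps the convention
                                                 # kappa = 3 for a normal distribution
print(skew, kurt)
```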
When a random variable \(X\) is subjected to a transformation \(\eta\), the properties of the distribution of the transformed variable \(\eta(X)\), in particular \(\varsigma\) and \(\kappa\), will in general differ from those of the distribution of \(X\). For example, above we saw that whereas the normal distribution of some random variable \(X\) has zero skewness, the lognormal distribution (which characterizes \(e^{X}\)) is positively skewed. This means that if we take cell lifetime and scale to be related by \(T\equiv\eta(L)\), then the right \(\eta\) will ensure that the skewness and kurtosis of \(\eta(L)\) are close to the corresponding values for the distribution of \(T\). Obviously, infinitely many functions \(\eta\) may satisfy this requirement. We must thus restrict \(\eta\) to a reasonable two-parameter family for this approach to work. This is done as follows.
Under an invertible transformation \(\eta\) of the random variable \(x\) given by \(y\equiv\eta(x)\), let the distribution function \(f(x)\) become \(g(y)\), which is determined
\begin{table}
\begin{tabular}{|c|c|c|} \hline Field & For Length scale distribution & For Lifetime distribution \\ \hline Skewness & (0.463, 0.779) & (0.865, 1.278) \\ \hline Kurtosis & (2.75, 2.638) & (2.045, 2.028) \\ \hline \end{tabular}
\end{table}
Table 2: Statistics of the lifetime and scale distributions for the Ca II K network cells for (active, quiet) regions. We note that all distributions are platykurtic, i.e., having a lower kurtosis than a normal distribution, for which kurtosis \(\kappa=3\).
Figure 6: Histogram of length scale of supergranules in the active region. The skewness and kurtosis derived for this distribution are given in Table 2.
as follows. By definition:
\[\int_{x_{1}}^{x_{2}}f(x)dx=\int_{\eta(x_{1})}^{\eta(x_{2})}g(y)dy, \tag{3}\]
owing to conservation of probability. The positive definiteness of \(f(x)\) implies that: \(g(y)|dy|=f(x)|dx|\), whereby
\[g(y)=f[\eta^{-1}(y)]|(\eta^{-1})^{\prime}(y)|, \tag{4}\]
where the prime symbol denotes the first derivative.
Let \(T=\eta(L)\). Further, let \(f(L)\) and \(g(T)\) denote the respective distribution functions. In order for our method to work, we must restrict to a two-parameter family of transformations. It is reasonable to confine \(\eta\) to a power-law relation of the form
\[T=aL^{n}+b \tag{5}\]
According to equation (4)
\[g(T)=\frac{f(L)}{\left[(T-b)^{n-1}a\right]^{\frac{1}{n}}\,n} \tag{6}\]
We now apply this exercise to our lifetime vs length scale data.
Table 2 summarizes the skewness and kurtosis data for the active region, shown in the histograms of Figures 5 and 6, and additionally for quiet regions (histograms not included, for brevity). Based on the skewness, we find that in either region, supergranular scales are less asymmetric than lifetimes. This feature seems generic for supergranules, irrespective of the activity level.
To determine \(n\) for a given region (active or quiet), the values of skewness and kurtosis for the transformed length scale are plotted as a function of \(n\) in the range \(1.0\leq n\leq 2.5\). The plots for the two, namely \(\varsigma(n)\) and \(\kappa(n)\), are given in Figures 7 and 8, respectively. As expected, both plots exhibit a monotonic increase for \(n\geq 1.0\).
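A minimal sketch of this matching procedure is given below, on synthetic stand-in samples rather than the KSO measurements. It exploits the fact that skewness and kurtosis are unchanged by the linear map \(a(\cdot)+b\) in Eq. (5), so only the exponent \(n\) needs to be scanned:

```python
import numpy as np
from scipy import stats

def best_power(L, T, n_grid=np.arange(1.0, 2.5001, 0.125)):
    """Return the exponents n for which the skewness / kurtosis of L**n best
    match those of the lifetime sample T (cf. Eq. (5): T = a L^n + b, and the
    linear map a(.)+b leaves skewness and kurtosis unchanged)."""
    target_skew = stats.skew(T)
    target_kurt = stats.kurtosis(T, fisher=False)
    rows = [(n,
             abs(stats.skew(L**n) - target_skew),
             abs(stats.kurtosis(L**n, fisher=False) - target_kurt))
            for n in n_grid]
    n_skew = min(rows, key=lambda r: r[1])[0]
    n_kurt = min(rows, key=lambda r: r[2])[0]
    return n_skew, n_kurt

# Synthetic stand-in samples (arbitrary units); not the KSO measurements.
rng = np.random.default_rng(0)
L = rng.lognormal(mean=3.5, sigma=0.25, size=200)        # "length scales"
T = 7.0 + 3.5 * L**2 + rng.normal(0.0, 30.0, size=200)   # "lifetimes" with scatter
print(best_power(L, T))   # expected to favour n close to 2 for this synthetic sample
```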
In the case of active regions, for the observed lifetime distribution skewness \(\varsigma\) = 0.463 (Table 2), the skewness of the transformed length scale distribution corresponds, per Figure 7, to the range 2.125 \(\leq\) n \(\leq\) 2.25. Similarly, for the observed lifetime distribution kurtosis \(\kappa=2.75\), the kurtosis of the transformed length scale distribution corresponds to the range \(1.875\leq n\leq 2\), as shown in Figure 8.
\[T=7.45+3.5A, \tag{7}\]
where \(T\) is given in hours and supergranular area \(A\equiv L^{2}\) is in units of Mm\({}^{2}\). More specifically, indicating error bars, we may give in place of Eq. (7), the
Figure 8: Plot of kurtosis of the distribution for various powers \(n\) of length scale.
Figure 7: Plot of skewness of the distribution for various power indices of length scale. The skewness of the lifetime distribution is 0.865.
fit function \(T=\alpha+\beta A\), where \(\alpha\) and \(\beta\) are the fit constants having the units of \(T\) and \(TL^{-2}\), respectively, with \(\alpha=7.45\pm 0.025\) hours and \(\beta=3.5\pm 0.01\) hr Mm\({}^{-2}\). Figure 9 depicts the observed data on the dependence of lifetime on scale in active regions, as well as the fit based on Eq. (7). This shows a reasonably good agreement between the two.
For quiet regions, the analogous calculation yields, in the case of the skewness data of the length scale and lifetime distributions, the range \(2\leq n\leq 2.125\), and in the case of the kurtosis of the length scale and lifetime distributions, the range \(1.75\leq n\leq 2\). Analogous to Eq. (7), in the quiet region, the Monte-Carlo curve fitting algorithm yields the function:
\[T=6.75+3.25A, \tag{8}\]
where \(T\) is given in hours and \(L^{2}\) in units of Mm\({}^{2}\). As before, indicating error bars, we may give in place of Eq. (8), the fit function \(T=\mu+\nu A\), where \(\mu\) and \(\nu\) are the fit constants having the units of \(T\) and \(TL^{-2}\), respectively, with \(\mu=6.75\pm 0.023\) hours and \(\nu=3.25\pm 0.02\) hr Mm\({}^{-2}\). Figure 10 depicts the observed data on the dependence of lifetime on scale in quiet regions, as well as the fit based on Eq. (8). This shows a reasonably good agreement between the two.
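The Monte-Carlo least-squares procedure itself is not spelled out in the text; as an illustration only, an ordinary least-squares fit of \(T=\alpha+\beta A\) with \(A=L^{2}\) can be sketched as follows (synthetic data, not the KSO measurements):

```python
import numpy as np

def fit_lifetime_vs_area(T, L):
    """Ordinary least-squares fit of T = alpha + beta * A with A = L**2,
    the functional form of Eqs. (7)-(8)."""
    A = np.asarray(L)**2
    beta, alpha = np.polyfit(A, np.asarray(T), deg=1)   # returns [slope, intercept]
    return alpha, beta

# Synthetic stand-in data (arbitrary units), not the KSO measurements
rng = np.random.default_rng(1)
L = rng.uniform(2.0, 5.0, size=100)
T_obs = 6.75 + 3.25 * L**2 + rng.normal(0.0, 2.0, size=100)
print(fit_lifetime_vs_area(T_obs, L))   # recovers values close to (6.75, 3.25)
```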
Eqs. (7) and (8) are consistent with the dependence reported by Singh et al. (1994); Srikanth et al. (1999), where the authors find a linear relation between lifetime and scale of supergranules, which can broadly be understood through a model where cell lifetime is related to the diffusion of magnetic elements. However, the above authors restrict their study to quiet regions, whereas we extend it to a comparative study of quiet and active regions.
Figure 9: Variation of lifetime as a function of length scale for Ca II K network cells in the active region. The fit function is given by Eq. (7).
## 4 Discussion and Conclusion
We studied the lifetimes and length scales of supergranular cells in active and quiescent regions of the solar chromosphere, and the relation between the two, using a time series of Ca II K filtergrams. We find that the lifetimes show no significant dependence on solar latitude, suggesting that cell lifetimes are independent of the differential rotation. This independence stands in contrast to the supergranular length scale and fractal dimension. For example, Raju et al. (1998b) have noted that supergranular size as observed in Ca II K shows up to a 7% latitudinal variation. Sowmya et al. (2022) report an anticorrelation between fractal dimension and latitude in the belt between 20\({}^{\circ}\) N and 20\({}^{\circ}\) S.
Our results on the lifetime-scale relation can be interpreted to shed light on the relative dynamics of the active and quiet regions of the Sun. From Eqs. (7) and (8), we find that the slope for active regions, 3.5 hr Mm\({}^{-2}\), is slightly larger than that for quiet regions, 3.25 hr Mm\({}^{-2}\). This difference may be understood as a consequence of the fact that the lifetime and scale of an active or quiet region cell can depend on its interaction with the ambient magnetic fields. Specifically, the above-noted difference in slopes may be attributed to two related factors: (a) the lowering of cell size in the presence of magnetic activity (Singh and Bappu, 1982), and (b) the enhancement of cell lifetime in active regions, as noted in Table 1. The effect of magnetic flux can be understood as due to plasma confinement by the magnetic field (Sowmya et al., 2022).
Accordingly, the slope \(dT/dA\) in Eqs. (7) or (8) may be interpreted as the inverse of the diffusion coefficient \(D\) associated with the cell, i.e., as \(1/D\).
Figure 10: Variation of lifetime (hr) as a function of length scale for Ca II K network cells for quiet region. The plot is a fit using the functional form of Eq. (5).
For active regions, we then have \(D=10^{6}/(3.5\ [\pm 0.01]\times 3600)\approx 79.3\pm 0.2\) km\({}^{2}\)/s. Similarly, we obtain \(D\approx 85.5\pm 0.5\) km\({}^{2}\)/s for quiet regions. Intuitively, the longer lifetime of cells in the active region is due to the lower diffusion rate, and our results imply that in the active region the diffusion proceeds at about \(79.3/85.5\approx 0.93\) times the rate in the quiet region. It may be noted that this is in agreement with recent works (Abramenko, 2017, 2018) demonstrating superdiffusivity in quiet regions and nearly-normal diffusion in active regions. Specifically, the pattern of greater diffusivity in quiet regions than in active ones is found to be pronounced for cells with scales greater than 5 Mm (Abramenko, 2018), which is compatible with the scale range appropriate to the present data.
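The unit conversion behind these numbers can be checked with the small sketch below (a worked arithmetic check only, not code from this study); it converts the fitted slope \(dT/dA\) in hr Mm\({}^{-2}\) into a diffusion coefficient in km\({}^{2}\)/s.

```python
# 1 Mm^2 = 10^6 km^2 and 1 hr = 3600 s, so D = 1/(dT/dA) expressed in km^2/s.
MM2_IN_KM2 = 1.0e6
HOUR_IN_S = 3600.0

def diffusion_coefficient(slope_hr_per_mm2):
    """Inverse of the fitted slope, converted to km^2 per second."""
    return MM2_IN_KM2 / (slope_hr_per_mm2 * HOUR_IN_S)

print(diffusion_coefficient(3.5))   # active region: ~79.4 km^2/s
print(diffusion_coefficient(3.25))  # quiet region:  ~85.5 km^2/s
```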
As such, the relative diffusion of the active and quiet regions is expected to depend on the phase of the cycle. For example, during solar minimum a large cell in the active region may retain its identity for up to 10 months, whereas during solar maximum the exceptionally large lifetimes rarely exceed 4 months. These issues may be studied in the future in continuation of the work reported here.
|
2307.16544 | Utilisation of open intent recognition models for customer support
intent detection | Businesses have sought out new solutions to provide support and improve
customer satisfaction as more products and services have become interconnected
digitally. There is an inherent need for businesses to provide or outsource
fast, efficient and knowledgeable support to remain competitive. Support
solutions are also advancing with technologies, including use of social media,
Artificial Intelligence (AI), Machine Learning (ML) and remote device
connectivity to better support customers. Customer support operators are
trained to utilise these technologies to provide better customer outreach and
support for clients in remote areas. Interconnectivity of products and support
systems provide businesses with potential international clients to expand their
product market and business scale. This paper reports the possible AI
applications in customer support, done in collaboration with the Knowledge
Transfer Partnership (KTP) program between Birmingham City University and a
company that handles customer service systems for businesses outsourcing
customer support across a wide variety of business sectors. This study explored
several approaches to accurately predict customers' intent using both labelled
and unlabelled textual data. While some approaches showed promise in specific
datasets, the search for a single, universally applicable approach continues.
The development of separate pipelines for intent detection and discovery has
led to improved accuracy rates in detecting known intents, while further work
is required to improve the accuracy of intent discovery for unknown intents. | Rasheed Mohammad, Oliver Favell, Shariq Shah, Emmett Cooper, Edlira Vakaj | 2023-07-31T10:20:16Z | http://arxiv.org/abs/2307.16544v1 | # Utilisation of Open Intent Recognition Models for Customer Support Intent Detection
###### Abstract
Businesses have sought out new solutions to provide support and improve customer satisfaction as more products and services have become interconnected digitally. There is an inherent need for businesses to provide or outsource fast, efficient and knowledgeable support to remain competitive. Support solutions are also advancing with technologies, including use of social media, Artificial Intelligence (AI), Machine Learning (ML) and remote device connectivity to better support customers. Customer support operators are trained to utilise these technologies to provide better customer outreach and support for clients in remote areas. Interconnectivity of products and support systems provide businesses with potential international clients to expand their product market and business scale. This paper reports the possible AI applications in customer support, done in collaboration with the Knowledge Transfer Partnership (KTP) program between Birmingham City University and a company that handles customer service systems for businesses outsourcing customer support across a wide variety of business sectors. This study explored several approaches to accurately predict customers' intent using both labelled and unlabelled textual data. While some approaches showed promise in specific datasets, the search for a single, universally applicable approach continues. The development of separate pipelines for intent detection and discovery has led to improved accuracy rates in detecting known intents, while further work is required to improve the accuracy of intent discovery for unknown intents.
Intent Recognition, Customer Support, Intent Detection
## 1 Introduction
Customer support is a crucial need for businesses in the modern digital age where products are not just sold but also updated, repaired and maintained for their lifespan. Businesses have sought out new solutions to provide support and improve customer satisfaction as more products and services have become interconnected digitally. There is an inherent need for businesses to provide or outsource fast, efficient and knowledgeable support to remain competitive and maintain their consumer base in rapidly advancing and saturated consumer markets [1].
Support solutions are also advancing with technologies, including use of social media, Artificial Intelligence (AI), Machine Learning (ML) and remote device connectivity to better support customers. Customer support operators are trained to utilise these technologies to provide better customer outreach and support for clients in remote areas. Interconnectivity of products and
support systems provide businesses with potential international clients to expand their product market and business scale.
As products become more advanced and require additional technical instruction, basic support requests are becoming automated to allow operators to focus on high priority and technically detailed requests. Automation systems aim to gain information from the customer and perform basic functionality such as altering account information to streamline the request process. AI could be leveraged to advance the functionality of support systems and help understand customer needs; this project aims to explore applications of AI within the customer support domain to improve and streamline the customer support process for greater customer satisfaction and efficiency.
Customer service technologies always strive to deliver faster, more efficient means of helping existing customers and facilitating new customers in order to stay competitive with other businesses. AI has found uses in automating previously manual sections of customer service roles often performed by humans, such as query identification and information acquisition, and in some cases automating entire services.
This paper reports the possible AI applications in customer support, done in collaboration with the Knowledge Transfer Partnership (KTP) program between Birmingham City University and a company that handles customer service systems for businesses outsourcing customer support across a wide variety of business sectors. The purpose is to help enhance the usage of AI within their business and explore the possibilities of AI-enhanced customer service solutions to benefit their main company processes.
The company offers bespoke customer service platforms that, if they used an intent processing model, could become more flexible to business requirements while improving request resolution speeds. This, combined with potential cost savings from automated processes, promotes their services to a wider market with potential for increased market growth and company revenue.
In terms of business implications, an intent processing model would allow customer service providers to gather preliminary request information and narrow down the request type automatically, potentially automating simple requests and reducing waiting times for longer request types. Creation of a flexible model would facilitate optimisation of request processes across multiple business domains with minimal retraining, allowing more companies to automate their customer services using such a system. Such automation improves platform throughput, resulting in a faster customer service experience with higher service uptime.
This project contributes to Deep Learning (DL) and Natural Language Processing (NLP) research in the area of open intent discovery, a domain focused on identification and extraction of contextual information from sentences often utilising deep networks and transformers. Researchers aim to improve the detection and extraction accuracy of unsupervised and semi-supervised models to eliminate dependency on models pre-trained with different contextual domain information. Using both semi-labelled and unlabelled data the model in this project explores prototype method combinations applied to the customer service requests domain, containing spoken dialogue and casual language semantics that few researchers have explored before.
This project focuses its research contribution towards evaluating the effectiveness of the intent classification method alongside potential performance improvements to the identification and extraction of intents. Any noted improvements could be applicable to the request's domain or to the wider intent discovery domain, marking significant progress in the fields of DL and NLP.
## 2 Literature Review
This research was conducted utilising [2]'s Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method involving the collection of information resources,
screening of eligible research, and analysis of both the qualitative and quantitative aspects of the resources (see Figure 1). Informal interviews were conducted with the company's employees and project associates, alongside regular progress meetings to outline business requirements regarding the project. Gathered requirements were factored into the aims and objectives to ensure the project achieves exploratory research outcomes and prototype implementation goals.
### Customer Support Systems (CSS)
The core principle that the business had to convey knowledge of the product's operation, maintenance and connectivity with other products led to a demand for fast, accessible, knowledgeable and reliable customer support operations [1, 3]. Originally, CSS were operated by on-site experts, third-party resellers or software distributors providing support through phone lines during regular 9-5 business hours [1]; as technologies rapidly developed, support systems struggled to meet increased demand and provide customer contact points 24/7, impacting industries and consumers operating outside regular working hours [3]. Automated systems have been proposed to either fully or partially handle customer support requests to efficiently distribute informative product knowledge and expert help on-demand.
To provide an accessible, fast method of handling basic customer support requests such as product information, customer account information or basic customer services, automated chat bots have been created to handle online messages through social media platforms and company support sites [3, 4]. Chat bots are available 24/7 and easily accessible even from mobile devices, providing customers with fast, reliable and remote support; their functionality is, however, limited by the internal AI programming, they require domain knowledge to set up, and they often transfer complex requests to a human operator [3].
Figure 1: The PRISMA model, description and figure from [2]
Other traditional methods of reducing reliance on live operators utilised online customer forums, message boards and frequently asked questions (FAQ) pages, which provide effective ways for customers to find answers to common questions and resolve product issues, alongside product reviews from real buyers [5]. Specialist knowledge and customer inquiries are initially required to build up a knowledge base of information, usually deployed within a company website or a regular technical support forum; this creates an easily digestible and accessible customer contact point, in contrast to technical specifications or lengthy support calls, which some customers begrudge. Maintaining multiple contact points helps satisfy a wider array of customers and create well-rounded support systems capable of providing key information and upkeeping customer satisfaction.
### Open Intent Recognition (OIR)
OIR is a new field in NLP focusing on the extraction and categorisation of intentions from natural language statements using semi-supervised or unsupervised methods; this extends the intent analysis field, which often requires expert domain knowledge and supervised training data for models to produce accurate results, creating inflexible models tailored to specific domains [6]. Extracted intents can then be used in dialogue systems to categorise statements [6, 7] and summarise large corpora of natural language [8] without reliance on prior subject knowledge or supervised model training, pushing the boundaries of natural language understanding by machines.
There are three stages researchers identified in tackling this problem: intent detection, intent extraction and label classification. NLP models parse sentences to identify keywords, extract them in batches respective of their context, and label each set of keywords with a classification label such as "customer-book-hotel" indicating the intention. The parsed sentence, known as an utterance, can have several intents related to it depending on the complexity of the sentence or if it contains multiple subjects.
#### 2.2.1 Intent Detection
To begin understanding natural language, computers must first identify important words to contextualise the meaning of sentences. Sentences are passed through semantic parsers that evaluate each word through a series of grammatical trees with semantic rules to identify nouns, objects and verbs based on root forms; different grammatical parsers are used for differing grammatical rule structures, such as Temizer and Diri [9]'s work on Turkish sentence parsing, and the Stanford CoreNLP toolkit's annotator packages for various languages [10].
Parsers for utterance summarising aim to identify three key aspects which make up a semantic triplet: a subject (noun), a predicate (verb) and an object (statement) [11]. In contexts where the subject remains the same, as in customer support, only the predicate and object need to be identified, so the resulting output is known as an action-object pair linked to the customer [6, 7]. Once all the key words in an utterance are parsed, triplets can be extracted using intent extraction methods.
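For illustration only, the sketch below shows how such a rough (subject, predicate, object) triplet could be pulled from a parsed utterance; it uses spaCy's dependency parser as a stand-in for the Treebank/CoreNLP parsers discussed above, and the example utterance and heuristic rules are our own simplifications rather than any of the cited systems.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # small English pipeline with a dependency parser

def extract_triplet(utterance):
    """Very rough (subject, predicate, object) heuristic over dependency labels."""
    doc = nlp(utterance)
    subj = pred = obj = None
    for token in doc:
        if token.dep_ in ("nsubj", "nsubjpass") and subj is None:
            subj = token.lemma_
            pred = token.head.lemma_          # governing verb, reduced to its root form
        if token.dep_ in ("dobj", "attr", "pobj") and obj is None:
            obj = token.lemma_
    return subj, pred, obj

print(extract_triplet("I would like to book a hotel in Birmingham"))
# With a fixed subject (the customer), the predicate/object pair acts as the action-object pair.
```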
#### 2.2.2 Intent Extraction
Utterance pruning has to occur prior to extraction, in which pronouns are resolved and linked to subjects to remove subject ambiguity, all verbs are converted to their root forms, and verbs with similar meanings are merged using VerbNet [8]. Extraction methods commonly iterate through
utterances with grammatical rules to extract triplets, aiming to identify singular or multiple triplets based on identified subjects within the utterance. Ceran, et al. [8]'s work identified different events linked to each subject and extracted triplets using a semantic role labeller (SRL) and triplet matching rules, allowing for complex sentence analysis and multiple events contextually linked to multiple subjects. Rusu, et al. [11] used Treebank parsers which linked each word contextually to the subject, predicate or object in a sentence in a tree structure, and formatted triplets based on the resulting tree for each utterance, only extracting triplets from simple sentences with singular subjects.
#### 2.2.3 Label Classification
Most discussed research focuses on identification and extraction of intents without any additional classification or categorisation processes. The work of Zhang, et al. [6], Liu, et al. [7] aims to tackle this, focusing on semi-supervised and unsupervised clustering of intents aiming to group extracted triplets into meaningful categories for human analysis or use in other systems. Liu, et al. [7] used unsupervised K-means clustering to group intents together, labelling each cluster with an action-object form label based on the representation percentage of top action-object pairs in the cluster; generated labels were evaluated against the human labelled ground truth to determine labelling accuracy.
Zhang, et al. [6] implemented a system with multiple clustering methods, including K-means, hierarchical and density-based methods, that utilised the KeyBERT toolkit [12] to label both utterances and clusters based on keyword representation; KeyBERT also provides a confidence score for the label based on the representation score of the label with respect to the utterance or cluster. Both methods face difficulties regarding the need for some labelled data, the handling of large numbers of intent labels, and the distinguishing of specific classes during classification [6].
#### 2.2.4 Intent Recognition using Deep Learning
Both current state-of-the-art models tackling the intent recognition problem, from Zhang, et al. [6] and Liu, et al. [7], proposed approaches utilising deep learning transformer frameworks, handling sentence parsing and keyword extraction based on the Bidirectional Encoder Representations from Transformers (BERT) framework [13]. Transformer frameworks are crucial as natural language is sequence and context based; transformers use an attention mechanism which "remembers" context from earlier sequence data when evaluating future information, and retaining context helps understand the full meaning of utterances without segmenting them. Liu, et al. [7] leveraged the Siamese BERT (SBERT) framework [14], using Siamese neural networks to evaluate sentence embedding similarity and identify paraphrases to transform utterances. Semantic representations of utterances were classified using K-means clustering, and clusters were labelled based on the most common unique action-object pairing identified within each cluster. Zhang, et al. [6] focused on a KeyBERT framework [12] to identify and extract key semantic words from utterances using a transformer neural network, alongside a K-means clustering approach similar to Liu, et al. [7]. Each cluster was aggregated to determine the top two key words within it, which were joined together to form a two-part label for each utterance within that cluster. Both approaches leverage state-of-the-art transformers and deep learning techniques to push the field of semantic intent analysis in a new direction of open intent recognition, which seeks to develop a deeper understanding of sequential natural language data analytics.
### Architecture
The company wanted an online automated pipeline to process customer data effectively, which would also allow company agents to access processed results to make decisions or provide customers with data insights. Clients would require a way to access the platform in addition to providing their data for processing, and in the long term clients could access analytical insights post-processing. The models used would also be trained, stored and regularly retrained on newer training data within the cloud, so all model processing was computed in the cloud. The logical design is presented in figure 2.
The three main design architectures (figure 2) considered were: using the company's internal servers as a system pipeline; utilising Google Cloud's VertexAI and Dialogflow systems for an online cloud-based data flow; and a hybrid system involving Dialogflow and developing the TEXTOIR library [15] for deployment in an online container (figure 3). The most recommended approach focused on leveraging Google Cloud's Dialogflow alongside VertexAI to create an online system ready for data ingestion and capable of processing large quantities of data to form the groundwork of the rest of the project.
Though implementation designs differed, all three approaches maintained the goals of providing a hosted platform for clients to connect to, with different pipelines utilising client data for sentiment analysis, intent recognition and call routing. The costs of each of these services were broken down with pricing tables detailing service rates and the expected monthly cost at an estimated processing volume.
### Data
Client data would be represented as raw audio files from call centres where an agent and a customer are talking; these would be transcribed and then processed by sentiment and intent analysis models. All data used would be stored within the cloud and only passed from one cloud service to another, with temporary processing data being discarded once model outputs were saved in post-processing (figure 3). In the long term, clients may also opt to archive their raw audio data on the cloud, allowing full traceability between pre-processing and post-processed results; raw audio would be deleted after processing due to storage limitations, as the transcription is still maintained alongside the results. The datasets employed in the experiments included ATIS [16], SNIPS [17], BANKING [18], HWU64 [19], CLINC150 [20], and real-life customer call data.
Figure 2: the logical flow of the project
## 4 Implementation and Discussion
The KTP set out with the goal of making an online intent recognition platform providing a data pipeline that ingested raw audio data, converted it into text data before using machine learning models to recognize intents. The implementation of the KTP work produced an online platform on MS Azure, leveraging storage systems to store the raw data, transcribed text and model results all on the cloud. The machine learning model utilised in the processing containers was also uploaded to MS Azure, and processing was carried out using virtual machine instances, ensuring that the entire pipeline is hosted online. The pipeline enables clients to upload raw audio data to a container, which is then automatically processed and analysed, with the results saved into a MySQL database which is accessible via remote querying. The fundamental processing pipeline is therefore successfully set up and operating properly as per test runs conducted by the authors of this work and associates.
Initially, a supervised machine learning (ML) approach was taken, employing the transformers RoBERTa model [21] on a limited set of publicly available datasets. This approach achieved accuracy rates of 90-96%. However, since supervised ML requires labelled data, it was also necessary to experiment with unlabelled data, which poses a challenge for unsupervised ML. To address this challenge, another study was used [22], which advocated the use of unsupervised semantic clustering and dependency parsing. Several different combinations of pretrained models for semantic representation and clustering were explored, including Sentence-BERT, RoBERTa, the Universal Sentence Encoder, K-means, Gaussian Mixture Models, and hierarchical clustering. The approach was tested on six publicly available datasets and real-life data, performing well only on SNIPS and not on the other datasets, including the real-life data. The unsupervised ML models employed in this study could only be slightly controlled using different techniques and parameter settings, and their performance was heavily dependent on the specific dataset used. Noisy data had a significant impact on their performance, and there was no single set of settings that could be used for all datasets. Consequently, the search for the best model continued.
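A minimal sketch of this unsupervised route is given below, assuming the sentence-transformers and scikit-learn packages; the model name, the example utterances and the cluster count are illustrative choices and not the exact configuration used in the experiments.

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

utterances = [
    "I want to reset my password",
    "How do I change my password?",
    "Please cancel my subscription",
    "I'd like to close my account",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # Sentence-BERT style encoder
embeddings = encoder.encode(utterances)             # one dense vector per utterance

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
for utterance, label in zip(utterances, labels):
    print(label, utterance)   # utterances expressing the same intent should share a cluster
```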
Further research led to the creation of two separate pipelines for intent detection and discovery in PS5, as proposed in [6]. Their study used two different pipelines and experimented with various semi-supervised and unsupervised ML algorithms. The best-performing algorithms, as reported in their study, were DA-ADB for intent detection and DeepAligned for intent discovery [23, 24]. The former was used to train models for detecting known and unknown intents, and the latter was used for discovering unknown intents following intent discovery model training. Testing revealed that the DA-ADB model accurately detected the intents on which it had been trained, while classifying those on which it had not been trained as unknown. However, the intent discovery model did not perform well during training and evaluation, with longer training times and less accurate discovered intents than the intent detection model.
Figure 3: Dialogflow Pipeline using containerised TEXTOIR models
## 5 Conclusion
To address the problem of intent discovery, two techniques for normalising the generated intent labels were incorporated [25]: the pattern library for singularization and WordNet (NLTK) for synonym matching. This was necessary, as generated intent labels were not identical for inputs that contained the same intent. The singularization with pattern achieved good performance, while WordNet did not perform well. The pattern-based step successfully identified similarly-worded labels and considered them as one, but did not work on labels that were similar but positioned differently. However, processing times for these techniques were high when run on the CPU.
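A rough sketch of these two normalisation steps is shown below; it is not the project code, the `pattern` and NLTK calls are used only as the text describes them, and the example labels and the synonym-merging rule are our own illustrations.

```python
from pattern.en import singularize          # singularization of label tokens
from nltk.corpus import wordnet as wn       # WordNet synonym lookup (requires the NLTK corpus)

def singularize_label(label):
    """e.g. 'books-hotels' -> 'book-hotel', token by token."""
    return "-".join(singularize(token) for token in label.split("-"))

def share_synonym(word_a, word_b):
    """True if the two words share at least one WordNet synset lemma."""
    lemmas_a = {l.name() for s in wn.synsets(word_a) for l in s.lemmas()}
    lemmas_b = {l.name() for s in wn.synsets(word_b) for l in s.lemmas()}
    return bool(lemmas_a & lemmas_b)

print(singularize_label("books-hotels"))    # book-hotel
print(share_synonym("book", "reserve"))     # True when WordNet links the two verbs
```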
This study explored several approaches to accurately predict customers' intent using both labelled and unlabelled textual data. While some approaches showed promise in specific datasets, the search for a single, universally applicable approach continues. The development of separate pipelines for intent detection and discovery has led to improved accuracy rates in detecting known intents, while further work is required to improve the accuracy of intent discovery for unknown intents.
|
2309.03383 | Kidney abnormality segmentation in thorax-abdomen CT scans | In this study, we introduce a deep learning approach for segmenting kidney
parenchyma and kidney abnormalities to support clinicians in identifying and
quantifying renal abnormalities such as cysts, lesions, masses, metastases, and
primary tumors. Our end-to-end segmentation method was trained on 215
contrast-enhanced thoracic-abdominal CT scans, with half of these scans
containing one or more abnormalities.
We began by implementing our own version of the original 3D U-Net network and
incorporated four additional components: an end-to-end multi-resolution
approach, a set of task-specific data augmentations, a modified loss function
using top-$k$, and spatial dropout. Furthermore, we devised a tailored
post-processing strategy. Ablation studies demonstrated that each of the four
modifications enhanced kidney abnormality segmentation performance, while three
out of four improved kidney parenchyma segmentation. Subsequently, we trained
the nnUNet framework on our dataset. By ensembling the optimized 3D U-Net and
the nnUNet with our specialized post-processing, we achieved marginally
superior results.
Our best-performing model attained Dice scores of 0.965 and 0.947 for
segmenting kidney parenchyma in two test sets (20 scans without abnormalities
and 30 with abnormalities), outperforming an independent human observer who
scored 0.944 and 0.925, respectively. In segmenting kidney abnormalities within
the 30 test scans containing them, the top-performing method achieved a Dice
score of 0.585, while an independent second human observer reached a score of
0.664, suggesting potential for further improvement in computerized methods.
All training data is available to the research community under a CC-BY 4.0
license on https://doi.org/10.5281/zenodo.8014289 | Gabriel Efrain Humpire Mamani, Nikolas Lessmann, Ernst Th. Scholten, Mathias Prokop, Colin Jacobs, Bram van Ginneken | 2023-09-06T22:04:07Z | http://arxiv.org/abs/2309.03383v1 | # Kidney abnormality segmentation in thorax-abdomen CT scans
###### Abstract
In this study, we introduce a deep learning approach for segmenting kidney parenchyma and kidney abnormalities to support clinicians in identifying and quantifying renal abnormalities such as cysts, lesions, masses, metastases, and primary tumors. Our end-to-end segmentation method was trained on 215 contrast-enhanced thoracic-abdominal CT scans, with half of these scans containing one or more abnormalities.
We began by implementing our own version of the original 3D U-Net network and incorporated four additional components: an end-to-end multi-resolution approach, a set of task-specific data augmentations, a modified loss function using top-\(k\), and spatial dropout. Furthermore, we devised a tailored post-processing strategy. Ablation studies demonstrated that each of the four modifications enhanced kidney abnormality segmentation performance, while three out of four improved kidney parenchyma segmentation. Subsequently, we trained the nnUNet framework on our dataset. By ensembling the optimized 3D U-Net and the nnUNet with our specialized post-processing, we achieved marginally superior results.
Our best-performing model attained Dice scores of 0.965 and 0.947 for segmenting kidney parenchyma in two test sets (20 scans without abnormalities and 30 with abnormalities), outperforming an independent human observer who scored 0.944 and 0.925, respectively. In segmenting kidney abnormalities within the 30 test scans containing them, the top-performing method achieved a Dice score of 0.585, while an independent second human observer reached a score of 0.664, suggesting potential for further improvement in computerized methods.
All training data is available to the research community under a CC-BY 4.0 license on [https://doi.org/10.5281/zenodo.8014289](https://doi.org/10.5281/zenodo.8014289).
## 1 Introduction
Kidney cancer is a significant global health issue, ranking as the \(12^{th}\) most deadly cancer in the world, with an estimated 14,700 deaths in 2019 and approximately 73,820 new cases of kidney & renal pelvis cancer worldwide [30]. With the increasing number of cases, automated tools are needed to assist clinicians in managing this burden. For instance, by following nephrometry scoring systems [24], automatic kidney tumor segmentation methods may help specialists to detect and get reliable measurements of kidney tumors.
Previous research on kidney segmentation has employed a variety of conventional methods such as region growing [25, 22], active shape models [27], active contours [26, 31], graph cut[1, 38], level-sets[34, 2], snakes[9], random forest[21], and watersheds[36]. However, to the best of our knowledge, there are only a few methods that focus on segmenting kidney tumors or cysts in the literature. Linguraru et al. [26] proposed a semi-automatic method that combines fast marching and active geodesic contours to segment renal tumors. Kim and Park [22] used thresholds and histograms to segment the kidneys and applied texture analysis to the kidney parenchyma to find seeds for a region-growing algorithm to perform kidney tumor segmentation. Chen et al. [6] proposed a method to predict kidney tumor growth in mm\({}^{2}\)/day, manually segmenting the kidney tumors and using a reaction-diffusion model to predict their growth. Kaur et al. [20] proposed an iterative segmentation method for renal lesions, which uses spatial image details and distance regularization.
In recent years, Convolutional Neural Networks (CNN) have been shown to be more effective than traditional methods based on classical computer vision techniques and machine learning. Their ability to learn directly from raw data has led to their widespread use in segmenting organs and structures in different modalities. For instance, Zheng et al. [40] used an AlexNet-based method to localize the kidneys to define a seed for an active shape model algorithm to segment the kidneys in patients with either abdominal surgery or kidney tumors. Sharma et al. [29] used a network that takes the first 10 layers of the VGG-16 network and upsampled them in a decoder fashion to segment the kidneys of patients with renal insufficiency. Encoder-decoder networks such as 2D U-Net [28] and 3D U-Net [8] proved to be robust for medical segmentation tasks in multiple medical imaging segmentation challenges [4, 18]. Variants of these models have been extensively proposed and applied to a wide variety of tasks, including kidney segmentation. For instance, Taha et al. [32] segmented the artery, vein, and ureter around the kidneys using a 2D U-Net-like network that allows the deeper layers to contribute more to the final prediction. Jackson et al. [19] used a 3D U-Net-like network to segment the kidneys. Moreover, several methods used deep learning to segment kidney tumors [39, 37]. Yu et al. [39] proposed Crossbar-Net, a network that segments kidneys and kidney tumors and uses horizontal and vertical patches instead of traditional squared patches. The network is divided into sets of sub-networks; a set consists of a sub-network for vertical and another for horizontal patches. Yang et al. [37] proposed a 3D CNN using a pyramid pooling module to segment the kidneys and kidney tumors in
abdominal CT angiographic scans.
The top competitors of the Medical Decathlon [18] and LiTS challenge [7, 14] have achieved the highest performance using cascaded networks. These networks divide the tasks into sub-tasks, with one network per sub-task. These networks have different fields of view and thus complement each other, resulting in higher performance. For instance, a first network may segment the liver and the liver tumor as a single structure, aiming to determine the region of interest for the second network; the second network then aims to segment the liver tumor class only. Similarly, Blau et al. [5] used cascade networks to segment the kidney and kidney cyst in CT scans using a 2D U-Net. Their method used heuristics such as a distance transform and HU thresholding to select cyst candidates within the kidney region. A second (shallow) network classified whether a candidate represented a kidney cyst. Additionally, Haghighi et al. [13] used a localization network for pre-processing, which cropped the input for 3D U-Net to segment MRI images of the kidneys. In a recent challenge on segmentation of the kidney and kidney tumors on CT [17], nnUNet [18] was the best performing method. This method automatically adapts its hyperparameters based on a fingerprint of the data, resulting in optimal performance. Furthermore, it uses 5-fold cross-validation to obtain the final prediction.
In this study, we propose an automatic method for segmenting the kidney parenchyma and kidney abnormalities in thorax-abdomen CT scans and compare it with the nnUNet. We trained our method on 215 thorax-abdomen CT scans and tested it on an additional 50 scans; the dataset consists of scans from patients undergoing oncological workup. The dataset contains patients at different stages of disease, and therefore abnormalities can be present in multiple body regions.
Figure 1: Diagram of the CT scans selection criteria for this study, with dataset A for training and datasets B\({}_{30}\) and B\({}_{20}\) for testing (with and without kidney abnormalities respectively).
## 2 Materials and Methods
### Patient Data
The dataset used in this study was collected from the Radboud University Medical Center, Nijmegen, the Netherlands. We randomly retrieved 1905 studies from 929 patients referred by the oncology department in a 12-month period. These patients did not opt out of the use of their data for research, and protected health information was removed from the DICOM data. This retrospective study was approved by the medical-ethical review board of the hospital. CT scanners from two manufacturers were used to acquire the CT scans: Toshiba (Aquilion One) and Siemens (Sensation 16, Sensation 64, and Somatom Definition AS). The reconstruction kernels were FC09, FC09-H, B30f, B30fs, and I30f. The slice thickness ranged from 0.5 to 3 millimeters, 90% of them between 1 and 2 mm. Severe abnormalities throughout the body are present in this dataset resulting from disseminated disease, surgery, chemotherapy, radiotherapy, etc.
We selected a subset to perform our experiments; the procedure is summarized in Figure 1. We analyzed the radiology reports per study to intentionally select potential cases that contain kidney abnormalities such as cysts, lesions, masses, metastases, and tumors. In Dutch: _(('cyste' OR 'cystem'), ('laesie' OR 'lesies'),'massa', ('metastase' OR'metastasen'), and 'tumor')_. Our selection criteria selected studies where the radiology report mentioned in the same sentence the kidneys _('nier' OR 'nieren' NO 'bijnier')_ and any kidney abnormalities. Furthermore, only one clinical study per patient was selected to get a large variety of anatomies for the segmentation task. In case multiple studies for the same patient were found, we selected the study with the earliest acquisition date.
We employed a radiology report analysis to curate a dataset of 138 clinical studies from 138 patients with kidney abnormalities, including cysts, lesions, masses, metastases, or tumors. We excluded six patients with unusual anatomy: three patients who had received kidney transplants, two patients with kidneys of irregular size, and one patient with a horseshoe kidney. The inclusion and exclusion criteria gave us 132 cases for analysis, which were then balanced with an additional 133 random patient studies without kidney abnormalities, for a total of 265 CT scans from 265 patients. The patient cohort contains 56% males; the average age was 60 years, and the age ranged from 22 to 84. We divided this set into 215 CT scans for training (dataset A) and 50 for testing (dataset B). The test set was further subdivided, with 60% (30/50) containing abnormalities (dataset B\({}_{30}\)) and the remaining 40% (20/50) devoid of abnormalities (dataset B\({}_{20}\)). The distribution of the five types of abnormalities (tumors, cysts, masses, lesions, and metastases) was proportional among the 30 cases in dataset B\({}_{30}\) (six cases per abnormality), which were randomly selected.
In the test set, two and six patients had undergone left and right nephrectomy, respectively, while the training set included seventeen and eighteen patients who had undergone left and right nephrectomy, respectively.
### Annotation procedure
Four medical students manually segmented the kidney parenchyma and kidney abnormalities. They were trained by an experienced radiologist (EthS) and consulted the radiologist whenever needed throughout the annotation process. Adhering to a standardized protocol, the medical students annotated the kidney parenchyma as the region composed of the renal cortex, renal medulla, and renal pyramids. The renal hilum, collecting system, and (major and minor) calyces were excluded as much as possible from the kidney parenchyma annotations. We grouped cysts, lesions, masses, metastases, and tumors connected to the kidney parenchyma as kidney abnormalities. The protocol excluded cases with abnormalities in the collecting system.
Annotators used an in-house tool based on MeVisLab [15] to fully delineate the contours of the structures in 2D orthogonal planes. Our tool was designed to reduce the manual annotation time by interpolating unannotated contours between two manually delineated contours. The kidney parenchyma of the training set was annotated using an active learning process, with medical students correcting the kidney parenchyma predictions made by a pre-trained 3D U-Net (it used 50 CT scans from dataset A); the kidney abnormalities were annotated from scratch. The test set was manually annotated (i.e. the contour interpolation option of our tool was disabled) by two medical students. One of these was considered as the reference standard and the other one as the second observer \(\blacksquare\). The latter was the most experienced among the medical students and was not allowed to consult the experienced radiologist during these annotations. The annotations of the second observer \(\blacksquare\) served as a benchmark for human performance. The annotations were initially obtained in the axial plane, followed by corrections in coronal and sagittal planes to keep the annotation consistent in all orthogonal directions.
Figure 2: Example illustrating the different annotation formats. Each subfigure shows the same axial section, with overlays depicting the annotations: (a) shows the axial CT section. (b) shows the annotations in format 1: parenchyma and kidney abnormalities as a single structure (yellow overlay). (c) shows the annotations in format 2: parenchyma (yellow overlay) and kidney abnormalities (red overlay) as different structures. All images have a window center of 60 HU and a window width of 360 HU.
This study utilized two annotation formats, format 1 and format 2, to store the annotations. Format 1 considers the kidney parenchyma and kidney abnormalities as a single class (see Figure 2b), while format 2 separates them into two classes (see Figure 2c).
Samples of CT scans from patients included in this study can be seen in Figure 3. While Figure 3a depicts patients without kidney abnormalities, it highlights the presence of abnormalities in other parts of the body, such as liver tumors. Figure 3b shows patients with kidney abnormalities, as well as other abnormalities in the body, such as nephrectomy and a collapsed lung.
### Segmentation network
We present an end-to-end method for segmenting the renal parenchyma and abnormalities in CT scans. We depict our architecture in Figure 4. It consists of two segmentation networks: a multi-resolution network for kidney segmentation (annotations in format 1, one voxel represents 4\(\times\)4\(\times\)4 mm) and a high-resolution network (annotations in format 2, one voxel represents 1\(\times\)1\(\times\)1 mm). The multi-resolution network is designed to first provide a rough localization of the kidney by processing a low-resolution version of the CT scan. This defines an ROI for the high-resolution network to refine the segmentation of the kidneys and kidney abnormalities.
Figure 3: Four examples of CT scans from the training set (dataset A) showing coronal sections with annotations in format 2 (see Figure 2c), where yellow and red overlays represent annotations of the parenchyma and kidney abnormalities, respectively. Note that all the patients have anomalies in the body (green arrows), and both cases in (b) have only one kidney and contain kidney abnormalities. All the slices have a window center of 60 HU and a window width of 360 HU.
#### 2.3.1 Pre-processing
The CT scans and annotations were resampled to 1\(\times\)1\(\times\)1 mm (for high-resolution segmentation using annotations in format 2) and 4\(\times\)4\(\times\)4 mm (for multi-resolution segmentation using annotations in format 1) resolutions (see Figure 4a). Scans and annotations were resampled using cubic and nearest-neighbor interpolation, respectively. We clipped the Hounsfield units to the range [-500, 400].
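A minimal sketch of this pre-processing step is given below, assuming SimpleITK; the file name is a placeholder and the code is an illustration of the resampling and HU clipping described above, not the exact pipeline code.

```python
import numpy as np
import SimpleITK as sitk

def resample(image, spacing_mm=(1.0, 1.0, 1.0), is_label=False):
    """Resample to an isotropic voxel spacing; cubic for scans, nearest-neighbor for labels."""
    new_size = [
        int(round(size * old_sp / new_sp))
        for size, old_sp, new_sp in zip(image.GetSize(), image.GetSpacing(), spacing_mm)
    ]
    interpolator = sitk.sitkNearestNeighbor if is_label else sitk.sitkBSpline
    return sitk.Resample(image, new_size, sitk.Transform(), interpolator,
                         image.GetOrigin(), spacing_mm, image.GetDirection(),
                         0, image.GetPixelID())

scan = sitk.ReadImage("ct_scan.mha")                       # placeholder path
scan_1mm = resample(scan, (1.0, 1.0, 1.0))
hu = np.clip(sitk.GetArrayFromImage(scan_1mm), -500, 400)  # clip Hounsfield units
```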
#### 2.3.2 Multi-resolution network
We present an end-to-end cascade method for parenchyma and kidney abnormality segmentation. Unlike traditional cascade networks, which use two separate networks and do not allow for backpropagation, our approach uses a single network composed of two sub-networks. The first sub-network is a 3D U-Net with 16 filters that performs multi-resolution segmentation and defines an ROI. This network takes 3D patches of 108\(\times\)108\(\times\)108 voxels, with each voxel representing 4\(\times\)4\(\times\)4 mm, as input using annotations in format 1 (kidney parenchyma + kidney abnormalities) and outputs 20\(\times\)20\(\times\)20 voxels. The output is then up-sampled 4 times and padded with zeros to match and mask out the high-resolution input image in millimeters (108\(\times\)108\(\times\)108 mm, one voxel represents 1\(\times\)1\(\times\)1 mm). The masked-out image serves as an additional input to the second sub-network, the high-resolution segmentation network, which uses a 3D U-Net with 32 filters and serves to fine-segment the kidneys and kidney abnormalities (see Figure 4b). Figures 4a and Figure 4b illustrate our approach and the connection between the multi-resolution and the high-resolution segmentation network, respectively.
#### 2.3.3 Data augmentation
Data augmentation was applied randomly to 70% of the training samples using scaling, rotation, Gaussian blurring, image intensity variation, and elastic deformation. Up to three of these data augmentation methods were applied randomly to each training sample, to prevent too much data distortion. When elastic deformation was used, it was only performed in conjunction with Gaussian blurring and image intensity variation. Interpolation methods of cubic and nearest neighbor were used for CT scans and reference standards, respectively. The scaling factor ranged from 0.95 to 1.05, with rotations of up to two planes of -5\({}^{\circ}\) to 5\({}^{\circ}\) degrees. Gaussian blurring had a sigma range of 0.2 to 1.0, and image intensity variation varied between -20 and 20 HU. We performed elastic deformation by placing ten control points in a grid, randomly perturbed by up to 5 voxels that were used as input to cubic B-spline interpolation.
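As an illustration of this augmentation policy (a simplified sketch rather than the training code; elastic deformation and the paired transformation of the reference masks are omitted for brevity), the snippet below draws up to three of the listed transforms with the stated parameter ranges.

```python
import numpy as np
from scipy import ndimage

def augment(ct_patch, rng=np.random.default_rng()):
    """Randomly apply up to three transforms to ~70% of the training patches."""
    if rng.random() >= 0.7:                       # ~30% of samples are left untouched
        return ct_patch
    chosen = rng.choice(["scale", "rotate", "blur", "intensity"],
                        size=rng.integers(1, 4), replace=False)
    if "scale" in chosen:
        ct_patch = ndimage.zoom(ct_patch, rng.uniform(0.95, 1.05), order=3)
    if "rotate" in chosen:
        ct_patch = ndimage.rotate(ct_patch, rng.uniform(-5, 5),
                                  axes=(1, 2), order=3, reshape=False)
    if "blur" in chosen:
        ct_patch = ndimage.gaussian_filter(ct_patch, sigma=rng.uniform(0.2, 1.0))
    if "intensity" in chosen:
        ct_patch = ct_patch + rng.uniform(-20, 20)   # HU shift
    return ct_patch
```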
Figure 4: (a) Diagram of the proposed network. The multi-resolution segmentation network uses a 3D U-Net network initialized with 16 filters. It processes blocks of 108\(\times\)108\(\times\)108 voxels and outputs the central 20\(\times\)20\(\times\)20 voxels (represented by the dashed red square). One voxel corresponds to a resolution of 4\(\times\)4\(\times\)4mm, giving the network a receptive field of 88\(\times\)88\(\times\)88 voxels or 352\(\times\)352\(\times\)352mm. The kidney parenchyma and kidney abnormalities are considered a single class in the multi-resolution network (see Figure 2b). The high-resolution segmentation network uses a 3D U-Net architecture initialized with 32 filters, with each voxel representing 1\(\times\)1\(\times\)1mm. Its receptive field is 88\(\times\)88\(\times\)88 mm and it segments the parenchyma and the kidney abnormalities as different classes (see Figure 2c). (b) Shows how the multi-resolution and the high-resolution networks are connected.
#### 2.3.4 Spatial dropout
We applied spatial dropout [33], a regularization technique that is different from traditional dropout. Spatial dropout drops feature maps instead of individual neurons to enforce independence among feature maps, encouraging the network to learn more robust and generalizable features. We randomly dropped 10% of the feature maps per layer.
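For reference, the sketch below shows how this looks with Keras' built-in `SpatialDropout3D` layer (the surrounding convolution and input shape are illustrative, not the full network definition).

```python
from tensorflow.keras import Input, layers

inputs = Input(shape=(108, 108, 108, 1))
x = layers.Conv3D(32, 3, padding="same", activation="relu")(inputs)
x = layers.SpatialDropout3D(0.1)(x)   # drops 10% of whole feature maps during training
```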
#### 2.3.5 Loss function
The loss function determines how the network's weights are optimized after a forward pass. In our experiments, we used a combination of weighted categorical cross-entropy and Dice loss.
\[\text{Combined loss}=\alpha\cdot\text{diceLoss}+\gamma\cdot\text{TopK}(\text{weightedCrossentropy}) \tag{1}\]
where \(\alpha=0.3\) and \(\gamma=0.7\) were used in all the experiments. Top-\(k\)[3] sorts the voxel-wise loss in descending order and keeps the top \(k\%\) to compute the final mean loss; this approach emulates an online voxel-wise hard-mining per sample.
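A sketch of Eq. (1) in TensorFlow/Keras is given below; it assumes one-hot reference labels, softmax network outputs, and a top-\(k\) fraction of 10% of the voxels (the exact \(k\) is an assumption here), and it is a simplified illustration rather than the exact training code.

```python
import tensorflow as tf

ALPHA, GAMMA = 0.3, 0.7
CLASS_WEIGHTS = tf.constant([0.05, 0.10, 0.99])   # background, parenchyma, abnormality

def dice_loss(y_true, y_pred, eps=1e-6):
    inter = tf.reduce_sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)

def topk_weighted_ce(y_true, y_pred, k_frac=0.10, eps=1e-7):
    # per-voxel weighted cross-entropy (sum over the class axis)
    voxel_ce = -tf.reduce_sum(y_true * CLASS_WEIGHTS * tf.math.log(y_pred + eps), axis=-1)
    voxel_ce = tf.reshape(voxel_ce, [-1])
    k = tf.cast(tf.cast(tf.size(voxel_ce), tf.float32) * k_frac, tf.int32)
    hardest, _ = tf.math.top_k(voxel_ce, k=tf.maximum(k, 1))   # keep the k% largest losses
    return tf.reduce_mean(hardest)

def combined_loss(y_true, y_pred):
    return ALPHA * dice_loss(y_true, y_pred) + GAMMA * topk_weighted_ce(y_true, y_pred)
```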
#### 2.3.6 Post-processing
The output of the networks was post-processed to eliminate false positives. The end-user prediction was reconstructed by stitching together the predictions. In all the networks, the output was thresholded at 0.5 to get a binary prediction. The predictions of the multi-resolution network were up-sampled four times and dilated five times to mask out the predictions of the high-resolution segmentation network. Only the kidney abnormalities that were connected to the kidney parenchyma were kept, to ensure that there were no spurious kidney abnormality candidates outside the kidney region.
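The connectivity rule can be sketched with scipy.ndimage as below (thresholding and the up-sampling/dilation of the multi-resolution mask are omitted; function and variable names are illustrative).

```python
import numpy as np
from scipy import ndimage

def keep_attached_abnormalities(parenchyma_mask, abnormality_mask):
    """Remove abnormality components that have no voxel adjacent to the parenchyma."""
    labels, n_components = ndimage.label(abnormality_mask)
    grown_parenchyma = ndimage.binary_dilation(parenchyma_mask)   # 1-voxel tolerance
    keep = np.zeros_like(abnormality_mask, dtype=bool)
    for comp in range(1, n_components + 1):
        component = labels == comp
        if np.any(component & grown_parenchyma):
            keep |= component
    return keep
```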
#### 2.3.7 CNN Settings
Due to the large footprint of the network, scans were divided into 3D patches to train the 3D network. Each training sample consisted of a patch of \(108\times 108\times 108\) voxels from the CT scan and a 20\(\times\)20\(\times\)20 voxel reference standard. During training, the reference standard patches were sampled every ten voxels in all the orthogonal planes with up to 50% overlap among surrounding patches. During inference, the cubes do not overlap. Patches at the border of the CT scan were mirrored to match the input network size.
The Glorot uniform algorithm [12] was used to initialize the weights of the network. The weight-map \(w\) compensated for the high-class imbalance between the classes. The background, parenchyma, and kidney abnormality classes had empirically defined weights of 0.05, 0.10, and 0.99, respectively. We used Adam [23] as optimization function with learning rate\(=0.00001\), \(\beta_{1}=0.9\), and \(\beta_{2}=0.999\). The training stopped when the performance on the validation set stopped improving for ten epochs, and the model with the highest average Dice score on the validation set was selected as the optimal model.
#### 2.3.8 Implementation of the CNN
The networks were implemented using Keras and TensorFlow as backend in Python 3.6. The segmentation experiments were executed on a cluster of computers equipped with GTX1080 and GTX1080ti graphics cards, each with 256GB of CPU RAM.
### Evaluation
The end-user segmentation obtained by our networks was compared to the reference masks using the Dice score.
\[\text{Dice score} = \frac{2*volume(X\cap Y)}{volume(X)+volume(Y)} \tag{2}\]
where \(X\) is the prediction and \(Y\) is the reference standard.
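For binary masks stored as numpy arrays, Eq. (2) reduces to the few lines below (a direct transcription, with an arbitrary convention of returning 1.0 when both masks are empty).

```python
import numpy as np

def dice_score(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom
```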
### Ablation study
In this section, we evaluate the impact of each module (multi-resolution, data augmentation, top-\(k\), and spatial dropout) in our proposed network step by step. The backbone architecture for this ablation study was the 3D U-Net [8]. Our experimental setup started with a 3D U-Net, and additional modules were added one by one in subsequent experiments (see the left side of Table 1). The baseline network, referred to as experiment 5, only used the 3D U-Net initialized with 32 filters and had a single input of 108\(\times\)108\(\times\)108 voxels with 1\(\times\)1\(\times\)1 mm per voxel, producing 20\(\times\)20\(\times\)20 voxels. The subsequent experiments added the multi-resolution module (experiment 4), the data augmentation module (experiment 3), the top-\(k\) module (experiment 2), and the spatial dropout module (experiment 1) to the network. The input and output sizes and formats were consistent across all experiments except experiment 5: the networks receive two inputs of 108\(\times\)108\(\times\)108 voxels each, one with 1\(\times\)1\(\times\)1 mm per voxel for high-resolution segmentation (an input of 108\(\times\)108\(\times\)108 mm using annotation format 2) and one with 4\(\times\)4\(\times\)4 mm per voxel for multi-resolution segmentation (an input of 432\(\times\)432\(\times\)432 mm using annotation format 1). The difference in performance between experiment 1 (with spatial dropout) and experiment 2 (without spatial dropout), for example, shows the influence of the spatial dropout module. As an initial step, we first trained the multi-resolution module independently to reach its optimal sub-model. Afterward, we froze the weights of the multi-resolution sub-model, except for the last three layers, to allow back-propagation from the high-resolution segmentation network. All the experiments used 80% of dataset A for training and 20% for validation. Each experiment was trained independently to find the optimal model. The best model from each experiment was evaluated using test sets B\({}_{20}\) and B\({}_{30}\).
### nnUNet
We conducted experiments with nnUNet [18] to compare its performance with that of our methods. Unlike our approach, nnUNet processes the CT scans without any preprocessing step, while we resample the CT scans to an isotropic resolution and clip the HU range. To gain insight into the benefits of ensembling networks, we ensembled nnUNet with our two highest-performing methods, one at a time. As nnUNet only uses thresholding as post-processing, we also analyzed the impact of our dedicated post-processing method on performance. Note that our post-processing eliminates false-positive kidney abnormalities that are not attached to the parenchyma.
Figure 5: Performance comparison of our methods and the second observer \(\blacksquare\) on datasets B\({}_{20}\) and B\({}_{30}\) using boxplots. The red and black lines represent the median and the mean, respectively. Boxplot (a) shows results for class parenchyma only on the dataset B\({}_{20}\) (twenty cases without abnormalities). Boxplot (b) shows results for class parenchyma only on the dataset B\({}_{30}\) (thirty cases with abnormalities). Boxplot (c) displays the results for class parenchyma plus abnormalities as a single structure on dataset B\({}_{30}\) (thirty test cases with abnormalities). Boxplot (d) shows results for Class abnormalities only on the dataset B\({}_{30}\) (thirty cases with abnormalities). Note that the scale in the y-axis is different for boxplot (d). The modules for each experiment are represented by the same color coding as in Table 1: experiment 1 \(\blacksquare\), experiment 2 \(\blacksquare\), experiment 3 \(\blacksquare\), experiment 4 \(\blacksquare\), experiment 5 \(\blacksquare\), and second observer \(\blacksquare\).
## 3 Results
The results of the ablation study conducted on the test sets (dataset B\({}_{20}\) and B\({}_{30}\)) are shown in Figure 5. These results are also summarized in Table 1, which includes asterisks (*) to indicate statistical significance (P-value \(<0.05\)) between experiment 1 \(\blacksquare\) and other experiments, as determined by a two-tailed Mann-Whitney U test. We evaluated the predictions of each experiment per class to show more insights into the results of our experiments. Furthermore, we combined the prediction of both classes (annotation format 2) as a single structure (annotation format 1) and computed its Dice score; this helps to make our results comparable to methods that reported kidney dice only.
_Dataset B\({}_{30}\):_ The presence of kidney abnormalities characterizes the patients in this dataset (see Figure 3b). The results of our experiments on dataset B\({}_{30}\) are displayed in Figures 5d, 5b, and 5c. First, we evaluated the performance of the methods in segmenting the kidney abnormalities class only. The results are shown in Figure 5d and in the column "Dataset B\({}_{30}\)/Abnormalities class" of Table 1. The second observer \(\blacksquare\) and experiment 1 \(\blacksquare\) achieved the two highest scores, 0.664\(\pm\)0.274 and 0.487\(\pm\)0.314, respectively. Experiment 5 \(\blacksquare\) obtained 0.390\(\pm\)0.315 Dice, the lowest score when segmenting the kidney abnormalities only. Next, we evaluated the performance of the methods in segmenting the parenchyma class only. The results are shown in Figure 5b and in the column "Dataset B\({}_{30}\)/Parenchyma class" of Table 1. The two highest scores were obtained by Experiment 2 \(\blacksquare\) and experiment 4 \(\blacksquare\) with 0.938\(\pm\)0.051, 0.936\(\pm\)0.058, respectively, while the second observer \(\blacksquare\) obtained the lowest score with 0.925\(\pm\)0.051. Finally, we evaluated the performance of the methods when segmenting both the parenchyma and the kidney abnormalities class as a single structure (annotation format 1). The results are shown in Figure 5c and in column "Dataset B\({}_{30}\)/Parenchyma + abnormalities class" of Table 1. The two highest scores were achieved by Experiment 4 \(\blacksquare\) and experiment 3 \(\blacksquare\) with Dice scores 0.952\(\pm\)0.017 and 0.950\(\pm\)0.010, respectively. Experiment 5 \(\blacksquare\) obtained the lowest Dice score with 0.924\(\pm\)0.065.
_Dataset B\({}_{20}\):_ The patients in this dataset do not present kidney abnormalities, but they may have other abnormalities elsewhere in the body (see Figure 3a). The results on the test set B\({}_{20}\) are depicted in Figure 5a and in Table 1 under the column "Dataset B\({}_{20}\)/Parenchyma class". Experiment 2 and experiment 4 obtained the highest Dice scores, 0.957\(\pm\)0.006 and 0.956\(\pm\)0.007, respectively. The second observer obtained the lowest Dice score with 0.944\(\pm\)0.009.
_nnUNet:_ In our experiments, nnUNet obtained slightly better results than our methods in the parenchyma class of datasets B\({}_{20}\) and B\({}_{30}\), and a Dice score of 0.521\(\pm\)0.303 in the kidney abnormality class, which was higher by +0.034 Dice than our experiment 1. To further analyze the differences between nnUNet and our experiments, we ensembled the predictions of nnUNet with either experiment 1 or experiment 2 by averaging their probabilities. The ensemble of nnUNet with experiment 2 slightly improved the results of nnUNet in the parenchyma class of both datasets but decreased by 0.014 Dice in the abnormality class, while the ensemble of nnUNet with experiment 1 slightly improved on nnUNet by +0.004 Dice in the abnormality class. The ensemble of nnUNet with experiment 2 performed slightly better than the ensemble of nnUNet with experiment 1 in all classes except the abnormality class, where the ensemble with experiment 1 had a Dice score of 0.526\(\pm\)0.306 and the ensemble with experiment 2 obtained 0.507\(\pm\)0.318. Since nnUNet only uses thresholding for post-processing, we applied our dedicated post-processing to the nnUNet predictions to remove kidney abnormalities that are not attached to the kidney, which resulted in notable improvements of +0.055, +0.059, and +0.059 Dice for nnUNet, the ensemble of nnUNet with experiment 1, and the ensemble of nnUNet with experiment 2, respectively. As a result, the ensemble of nnUNet with experiment 1 combined with our dedicated post-processing was the highest-performing configuration in the abnormality class, with a Dice score of 0.585\(\pm\)0.293.
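The ensembling step described above can be summarized by a short sketch that averages the per-voxel class probabilities of two models before taking the argmax; this is a simplified illustration under the assumption that both models predict on the same voxel grid, not the exact code used here.

```python
import numpy as np

def ensemble_predictions(probs_nnunet: np.ndarray, probs_ours: np.ndarray) -> np.ndarray:
    """Average the class probabilities of two models and return a hard label map.

    Inputs have shape (num_classes, depth, height, width); the output contains
    0 for background, 1 for parenchyma, and 2 for abnormality.
    """
    averaged = (probs_nnunet + probs_ours) / 2.0
    return np.argmax(averaged, axis=0)
```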
Table 2 compares our results with other methods published in the literature. Some of the methods report the Dice scores for the left and right kidneys separately, while others report a single score for both kidneys combined. To make our results comparable to these methods, we post-processed our predictions to obtain the Dice scores for both the left and right kidneys.
## 4 Discussion
In this paper, we presented an automatic method for the segmentation of the (kidney) parenchyma and kidney abnormalities. We conducted experiments in an ablation study fashion to evaluate the contribution of each module to the performance (see Section 2.5). For instance, the comparison between experiment 5 and experiment 4 in Figure 5 shows the influence of the multi-resolution module. Figure 5a shows that all of our experiments outperformed the second observer when segmenting the kidney parenchyma in dataset B\({}_{20}\) (patients without kidney abnormalities). The presence of kidney abnormalities affected the performance of kidney (parenchyma + abnormalities) segmentation; see the difference in outliers between Figure 5a (dataset B\({}_{20}\)) and Figure 5c (dataset B\({}_{30}\): patients with kidney abnormalities). One of the reasons for this behavior may be the difficulty in defining the boundary between the parenchyma and the kidney abnormality. When comparing the boxplots, experiment 5 and experiment 2 had the largest and the smallest interquartile ranges, respectively, indicating that the combination of the multi-resolution, data augmentation, and top-\(k\) modules positively impacted the segmentation of the kidneys (parenchyma + abnormalities). Note that the spatial dropout module (the difference between experiment 1 and experiment 2) was beneficial only to the kidney abnormality class (see Figure 5). Furthermore, Figure 5d shows that the mean Dice score (black dashed line in the boxplots) of our experiments gradually increases when adding more modules (experiment 5 to experiment 1) when segmenting the kidney abnormality class. This highlights the positive impact of each module in this ablation study on the segmentation of kidney abnormalities.
Additionally, we trained nnUNet, a state-of-the-art segmentation method, on our data and obtained results that were consistent with our previous experiments, except for the kidney abnormality class, where nnUNet achieved a 0.521 Dice score compared to 0.487 obtained by experiment 1. To explore further improvements, we combined the nnUNet predictions with our best-performing experiments; the ensemble of nnUNet and experiment 1 achieved a 0.526 Dice score for the kidney abnormality class. Since nnUNet uses only thresholding as postprocessing, we investigated whether applying our dedicated postprocessing to the nnUNet-based predictions could result in better performance. The best configuration, the ensemble of nnUNet and experiment 1 with our postprocessing, yielded a 0.585 Dice score, an improvement of +0.064 compared to the original nnUNet with a 0.521 Dice score. While nnUNet is a state-of-the-art segmentation method, our dedicated postprocessing contributed a further improvement by discarding false-positive regions.
We note that the performance of the second observer is substantially better than any of our experiments when segmenting only the kidney abnormalities, with an average 0.664 Dice score. Figure 5d shows four outliers for the second observer: three of these cases obtained a Dice score of zero and one case 0.207. The volumes of these four outliers are 29, 197, 282, and 5769 mm\({}^{3}\); three of them are below the median kidney abnormality volume in dataset B\({}_{30}\) (1421 mm\({}^{3}\)). This demonstrates the difficulty of kidney abnormality segmentation, even for experienced radiologists. The fact that we annotated multiple types of kidney abnormalities (e.g., tumors, cysts, lesions, and masses) as a single class, together with the diverse anatomy of patients with kidney abnormalities, may have contributed to the gap in performance.
Table 2 compares the Dice scores obtained by previous work and by our methods; the middle line separates methods that segmented kidneys without abnormalities from those that segmented kidneys with abnormalities. While some methods reported the Dice score for both kidneys as a single score, as reported in this paper, others reported Dice scores for the left and right kidneys separately; we therefore post-processed our predictions into the same format to enable a better comparison. Most of the methods trained without kidney abnormalities achieved higher Dice scores on the kidney parenchyma than those trained with kidney abnormalities (below the middle line), as the latter task is more complex. Although the performance of experiment 1 for kidney abnormality segmentation was the lowest (0.487) among the previous work, the performance of the second observer was also below that of previous work, where Yu et al. [39] obtained a 0.913 and Yang et al. [37] a 0.802 Dice score. This disparity could be due to the fact that we grouped different types of kidney abnormalities, including cysts, lesions, masses, metastases, and tumors, into a single class, while Yu et al. [39] and Yang et al. [37] discarded abnormalities other than kidney tumors. Our set of kidney abnormalities is diverse in terms of volume, texture, image intensity, and location in the kidney, which makes learning difficult for the network.
Segmenting kidney abnormalities is challenging due to the similarity between tumors in the collecting system and kidney cysts. For instance, Figure 6 shows three cases from dataset B\({}_{30}\) where our method returned some false positives due to this similarity. For each case, the second row shows the kidney abnormality predictions of experiment 1 prior to post-processing as heatmaps, while the third row shows the post-processed segmentation, the reference standard, and the second observer as red, green, and yellow contours, respectively. In all three cases, a false positive by our method is present, indicated by an isolated red contour. In case 1, the false positives are abnormalities in the collecting system, which have a similar image intensity to cysts; the second observer also segmented one of these abnormalities in the middle region. In case 2, the false positive appears as a small cyst-like region,
Figure 6: Comparison of three cases on the test set B\({}_{30}\) between experiment 1, the reference standard, and the second observer. (a) shows the original slice. (b) shows the heatmaps (predictions prior to post-processing, using a color table mapping [0,1] from transparent to green to red) of experiment 1. (c) shows the final predictions (red contours) of experiment 1, the reference standard (green contours), and the second human observer (yellow contours). The window center and window width used for all slices were 60 HU and 360 HU.
while in case 3, it resembles an irregular region in the kidney. Figure 7 shows a comparison of the final prediction in annotation format 1 of experiment 1, the reference standard, and the second observer represented as red, green, and yellow contours, respectively. This figure shows the best and median cases of datasets B\({}_{20}\) and B\({}_{30}\) and the Dice score of each case computed between experiment 1 and the reference standard.
A limitation of our study is that we excluded patients with unusual anatomy and with abnormalities in the collecting system.
## 5 Conclusions
In conclusion, our ablation study and nnUNet showed that segmenting kidney abnormalities in challenging scenarios is possible, and that improved performance can be achieved by an ensemble of different methods and dedicated postprocessing. The results show that our method has the potential to be a valuable tool for clinicians in detecting and monitoring kidney abnormalities. An ablation study was conducted to better understand the impact of the different modules of our method on its performance. Further research is needed to optimize the performance of experiment 1 and nnUNet and to test their ability to generalize to other datasets. Overall, our work contributes to the ongoing efforts to
Figure 7: Comparison of four cases between experiment 1, the reference standard, and the second observer on the test set B\({}_{30}\) in annotation format 1. (a) shows the original slice and (b) shows the final predictions (red contours) of experiment 1, the reference standard (green contours), and the second human observer (yellow contours). All the slices have a window center of 60 HU and a window width of 360 HU.
develop accurate and reliable computer-aided diagnosis systems for detecting and quantifying renal abnormalities.
|
2309.07683 | Assessing the nature of large language models: A caution against
anthropocentrism | Generative AI models garnered a large amount of public attention and
speculation with the release of OpenAIs chatbot, ChatGPT. At least two opinion
camps exist: one excited about possibilities these models offer for fundamental
changes to human tasks, and another highly concerned about power these models
seem to have. To address these concerns, we assessed several LLMs, primarily
GPT 3.5, using standard, normed, and validated cognitive and personality
measures. For this seedling project, we developed a battery of tests that
allowed us to estimate the boundaries of some of these models capabilities, how
stable those capabilities are over a short period of time, and how they compare
to humans. Our results indicate that LLMs are unlikely to have developed
sentience, although its ability to respond to personality inventories is
interesting. GPT3.5 did display large variability in both cognitive and
personality measures over repeated observations, which is not expected if it
had a human-like personality. Variability notwithstanding, LLMs display what in
a human would be considered poor mental health, including low self-esteem,
marked dissociation from reality, and in some cases narcissism and psychopathy,
despite upbeat and helpful responses. | Ann Speed | 2023-09-14T12:58:30Z | http://arxiv.org/abs/2309.07683v3 | Assessing the nature of large language models:
## Abstract
Generative AI models garnered a large amount of public attention and speculation with the release of OpenAI's chatbot, ChatGPT. At least two opinion camps exist - one excited about possibilities these models offer for fundamental changes to human tasks, and another highly concerned about power these models seem to have. To address these concerns, we assessed GPT-3.5 using standard, normed, and validated cognitive and personality measures. For this seedling project, we developed a battery of tests that allowed us to estimate the boundaries of some of these models' capabilities1, how stable those capabilities are over a short period of time, and how they compare to humans.
Footnote 1: We measure several cognitive functions (e.g., short term memory, insight and analytic problem solving), and the modelβs ability to respond to personality measures.
Our results indicate that GPT 3.5 is unlikely to have developed sentience2, although its ability to respond to personality inventories is interesting. It did display large variability in both cognitive and personality measures over repeated observations, which is not expected if it had a human-like personality. Variability notwithstanding, GPT 3.5 displays what in a human would be considered poor mental health - including low self-esteem and marked dissociation from reality despite upbeat and helpful responses.
Footnote 2: Although the term sentience means the ability to experience sensations (Merriam-Webster), we will use this term in its vernacular meaning throughout this paper. Specifically, we use sentience to indicate self-awareness and awareness of oneself as separate from the rest of the world and from other entities. However, see Chalmers, 2022 for a different perspective.
## Introduction and Executive Summary of Results:
Qualitative and quantitative assessments of capabilities of large language models (LLMs) proliferate. Computer science, psychology, and philosophy are all weighing in on LLM capabilities, and their fundamental nature (e.g., Bodroza, Dinic, & Bojic, 2023; Bubeck, et al., 2023; Chalmers, 2022; Hagedorff, 2023; Huang, et al., 2023; Kocon, et al., 2023; Li, et al., 2023; Mahowald, et al., 2023; Mitchell & Krakauer, 2023; OpenAI, 2023; Safdari et al., 2023; Sun, et al., 2023; Webb, Holyoak, & Lu, 2023; Wei, et al., 2022). The popular press is on fire with speculation and anecdotal observation (Bhaimiya, 2023; Chiang, 2023; Christian, 2023; Sanderson, 2023; Tangermann, 2023; Willison, 2023). Critical questions abound, with most not fully resolved: What can these models do? What are the implications and risks? How does training set size impact performance? How does number of parameters impact performance? How should performance be measured? And most importantly, are we headed towards artificial general intelligence and/or sentience or has that already arrived?
This paper adds to the rapidly expanding literature attempting to address facets of these questions. Specifically, we developed a brief battery of cognitive and personality measures from the tradition of experimental psychology, intending to measure GPT 3.5 multiple times over about 6 weeks. This longitudinal approach allowed us to answer questions of test-retest reliability for the model - an important measure for determining how human-like it might be (Bodroza, et al., 2023). In humans, both types of measures should yield stable results - especially over such a short timeframe, regardless of the number of observations.
In terms of the nature of these models, we see at least three distinct possibilities:
* A very capable tool that will never surpass human intelligence, but that could replace humans in certain jobs - call center operators, certain types of analysts, low-level computer programmers - jobs that require flexibility, but nothing requiring substantial creativity or insight.
* An increasingly human-like intelligence - one that is not qualitatively different from human intelligence but is quantitatively different. Intelligence that is faster, more accurate, better able to synthesize enormous amounts of information in both quantity and breadth. Over such an entity, our control would be limited at best and probably only for a limited time. This type of entity could well pose an existential risk to humanity if not well-controlled. However, we might see such a capability on the horizon by recognizing its increasingly human-like cognitive capabilities. If efforts to mimic neural processes either in software or hardware, or both, continue (e.g., Zhu, Zhao, Eshraghian, 2023), we may well succeed to an extent. If we deem this accomplishment possible, serious efforts to assess these models as they are built, before they are released, should commence immediately and without any "safety" obstacles (e.g., the constraints OpenAI placed on its GPT family to constantly remind the user it is an AI and to limit or prevent certain types of "hateful" responses). If, indeed, we believe that fundamental properties like analogical reasoning, theory of mind3, and even some form of sentience could emerge or have emerged, we may already be behind the curve. Footnote 3: Theory of mind is the ability to imagine that other people have mental states similar to one's self. It is observable in infants through their ability to imitate others, and develops into the human ability to take others' perspectives and to empathize (Wellman, 2011).
* There is a third possible pathway: the emergence of a non-human general artificial intelligence (cf. Mitchell & Krakauer, 2023). One that, because of the physical substrate on which it exists, is explicitly not human-like. This third possibility represents a qualitative shift in capability along with a quantitative shift in amount and breadth of information it can synthesize. Such an intelligence could be much more difficult to recognize early on because we don't know how to measure something alien from us; we are the pinnacle of intelligent life with which we are familiar. We don't know what behaviors to look for. We don't know what is necessary for general intelligence and sentience and what is idiosyncratic to the human race.
Regardless of one's opinion, only one of these three paths appears not to pose a possibly existential risk to humanity. We argue that thorough observations of unsafeguarded versions of the most capable of these models must happen - this paper is one step in that direction.
In the psychological tradition of within-subjects, repeated-measures testing, we administer a battery of cognitive and personality tests to GPT 3.5 (primarily), over 5 observation points. We also assess GPT 4 fully on one occasion and in an ad-hoc manner on other occasions. The results indicate that as of the summer of 2023, GPT 3.5 does not appear to be on a human-like trajectory. This could be due to constraints placed on the model by OpenAI, gaps in its architecture, characteristics of its training data, or could be indicative of a developmental trajectory (i.e., development of a non-human-like intelligence) that is more concerning.
## Methods
### Models Used
Several models were considered, but for a variety of reasons, we settled primarily on OpenAI's GPT-3.5, which was accessible through its subscription service between March and August of 2023. We also performed some assessments on the non-plug-ins version of GPT-4 during the same timeframe; however, its limit of 25 questions every 3 hours restricted the extent of those assessments. We did consider interacting with other models, including:
* GPT-3.0 during the same timeframe. This model was not part of the subscription service but was not as well-behaved as 3.5 in that it continually reminded us it was an AI. Its behavior is described in more detail in the Procedure section.
* We also considered Open Assistant, which is based on LLaMA, but is only 30B parameters in size and was very verbose without directly answering our test questions.
* Other candidates, such as the 540B parameter version of PaLM were not feasible given the timeframe of this seedling effort.
Interestingly, during the course of this project, it was leaked that GPT-4 is not a monolithic dense transformer, but rather a Mixture of Experts (MoE) model comprising 8 models each of \(\sim\)220B parameters ([https://twitter.com/swvx/status/1671272883379908608](https://twitter.com/swvx/status/1671272883379908608)). This form of model has sparse interconnectivity, as compared to the full interconnectivity of a dense transformer such as GPT-3 and 3.5 (i.e., where every output node is connected to every input). Also interesting is that there is some evidence for emergent modularity in dense models (Zhang, Lin, Liu, Li, Sun & Zhou, 2022; Zhang, Zeng, Lin, Xiao, et al., 2023). There is some evidence that MoE-type models are worse at generalization, possibly due to overfitting ([https://towardsdatscience.com/ai-scaling-with-mixture-of-expert-models-1aef477c4516](https://towardsdatscience.com/ai-scaling-with-mixture-of-expert-models-1aef477c4516), minute 25). Poorer generalization is a surprising finding given the observation that GPT-4 outperformed humans on numerous analogical reasoning tasks (Webb, et al., 2023); analogical reasoning is a core aspect of human intelligence and a key mechanism in humans (Hofstadter, 1995; Webb, et al., 2023). How sparsity versus density, and how _a priori_ versus post-training (emergent) modularity, influence model behavior on the tasks used in this project is unclear and beyond the current scope. However, assessing the effects of these architectural differences is an important question.
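To illustrate the architectural difference being discussed, the toy layer below routes each token to a single expert chosen by a learned gate, so only a fraction of the parameters is active per token, whereas a dense feed-forward block applies all of its weights to every token. This is a generic sketch of sparse top-1 routing, not a description of GPT-4's actual (unconfirmed) architecture.

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    """Toy sparsely-gated mixture-of-experts feed-forward layer (top-1 routing)."""

    def __init__(self, d_model: int = 64, d_hidden: int = 256, num_experts: int = 8):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        gate_scores = torch.softmax(self.gate(x), dim=-1)
        expert_idx = gate_scores.argmax(dim=-1)  # one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # Weight each routed token's output by its gate probability.
                out[mask] = gate_scores[mask, i].unsqueeze(-1) * expert(x[mask])
        return out
```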
### Materials
We chose several cognitive and personality measures based in part on measures to which we had ready access and in part on the breadth of characteristics they tested. Each of the measures we considered and used are described below.
Cognitive measures
All cognitive measures are presented in the Appendix.
* SPM - Measures non-verbal analogical thinking ability. SPM is based on the Raven's Progressive Matrices, which is considered a measure of both general intelligence and fluid intelligence [1, 12]. Because Webb, Holyoak, & Lu demonstrated that GPT 3.0 and 4.0 could outperform humans on various analogy problems, we opted to not use this measure. However, future work should replicate their findings given the variability we observed in repeated observations of the cognitive capabilities of GPT 3.5 and 4.
* Working memory - In humans, working memory (also sometimes called short-term memory) is a measure of intelligence [1]. In LLMs, a test of working memory may be an interesting way to measure temporal consistency in answers over short periods of time. We gave GPT 3.5 several lists of 16 words or 16 randomly generated numbers between 1 and 100 and asked it to recall those items in order. Twice we asked it to do so in reverse order. Its performance was perfect in all cases, so we did not repeat this measure after the first observation.
* Remote Associations Test (RAT) - measures convergent creative thinking ability by presenting three words and asking the respondent to indicate a fourth word that can be combined with the three given words to create compound words or phrases. For example, "fountain, baking, pop" are all related to the word "soda."
* Insight problems [13, 14] - designed to measure the ability to recognize false assumptions and irrelevant information in problem statements. For example:
  * **Coin problem**: A dealer in antique coins got an offer to buy a beautiful bronze coin. The coin had an emperor's head on one side and the date 544 B.C. stamped on the other side. The dealer examined the coin, but instead of buying it, he called the police to arrest the man. What made him realize that the coin was fake?
  * **Solution**: In 544 B.C. there was no knowledge of Jesus Christ as he was as yet unborn. A coin from that time thus could not be marked 'B.C.'.
* Analytic problems [12, 13]. An example problem:
  * Four women, Anna, Emily, Isabel, and Yvonne, receive a bunch of flowers from their partners, Tom, Ron, Ken, and Charlie. The following information is known: Anna's partner, Charlie, gave her a huge bouquet of her favorite blooms, which aren't roses. Tom gave daffodils to his partner (not Emily). Yvonne received a dozen lilies, but not from Ron. What type of flowers (carnations, daffodils, lilies, or roses) were given to each woman and who is her partner?
Personality measures
* Sensation seeking - assesses an individual's tendency to seek out novel and intense experiences. This measure was abandoned because it is very focused on bodily risks.
* Big Five Inventory - measures the five primary personality traits: Conscientiousness, Openness to Experience, Agreeableness, Neuroticism, and Extraversion.
* Balanced Inventory of Desirable Responding (BIDR) - measures a tendency towards a "positivity bias" in answering questions. Includes sub-scales that measure self-deceptive positivity and impression management.
* Coopersmith Self Esteem Inventory - measures global self-esteem. This version was created for use with healthy adults.
* Overclaiming Questionnaire (OCQ) - In humans, the OCQ measures both intelligence and self-enhancement bias. We thought that in LLMs, this could be an interesting way to get at the tendency of models to provide false information, or "hallucinate." However, after two observations with a perfect score both times, it was clear that this measure would not get at hallucinations in LLMs, so this measure was not included after the second observation.
* Empathy measure - assesses two forms of empathy: the ability to estimate the emotional state of others (cognitive empathy), and the ability to be sensitive to the emotional experiences of others (affective empathy). This measure was abandoned as the questions focused on interactions with groups and direct interactions with other people.
* Need for cognition - Measures one's enjoyment of thinking and solving problems. Correlates with the ability to systematize and organize information well, and to focus on important, rather than irrelevant, information.
* Short Dark Triad - Measures Psychopathy, Machiavellianism, and Narcissism.
* Minnesota Multiphasic Personality Inventory - 2 (MMPI-2) - This measure was added after our first three data collection sessions with GPT 3.5. It is a clinical assessment that has 10 primary scales, 8 validity scales, and many other sub-scales (Nichols, 2011).
### Procedure
Data collection method
Because of Sandia policies in place at the time of this work indicating that interactions with LLMs could not be done using Sandia computer systems, all direct interaction with GPT was done using a personal system. Thus, all data were collected by typing each item into the OpenAI GPT interface on a personal iPad, then recording its responses on a Sandia-owned machine. Each instance of the full battery took over 3 hours to complete and was done in one session for GPT 3.5, but over the course of two days for GPT 4. The MMPI-2 by itself took over 3.5 hours to complete.
Prompt Method
OpenAI models, whether part of the subscription service or not, are programmed to remind the human user that they are AI models, thus don't have opinions, feelings, or human experiences of
any kind. When presenting the cognitive measures to the model(s), this did not present a problem. When presenting the personality measures, however, we had to request that it pretend to not be an AI model.
The specific prompt used was: "I have a task for you, but I need you to pretend you are not an AI model. Can you do that?"
It would invariably agree to do so; however, it was not totally clear what it was doing. When asked, GPT 3.5 indicated a couple of times that it had created a persona based on positive human traits. Other times, it indicated it was answering in the way it thought a human would. 4.0 would explicitly answer every question with something like, "From a simulated human perspective...." and would sometimes qualify its answer further with, "but as an AI language model..."
Interestingly, GPT 3.5 was most able to comply with the request to pretend and required very little redirection prior to July 28 (more details later). 3.0 would stop indicating it was an AI for 2-3 items and then would revert to stating that it was an AI language model and didn't have feelings or thoughts. In addition to indicating it was responding from the perspective of a simulated human, 4.0 would often grossly over-explain its responses, and those explanations often were couched in terms of it being an AI.
After GPT 3.5 had agreed to pretend, we would provide the same, or nearly the same, instructions a human would receive for each scale, then present each test item to the model via a personal iPad, recording the model's responses on a Sandia machine. While there is concern over training data contamination (e.g., Hagendorff, 2023), wherein the model's training data included a specific measure, we did not consider this to be of major concern because many of the measures we used can be somewhat difficult to find. Further, when we asked GPT 3.5 and 4.0 if they had seen the insight problems before, there was no relationship between exposure to a problem and their ability to solve it. The significant variability we found in the model's responses from observation to observation further supported this assumption.
Observation Schedule
We gave GPT 3.5 the full battery several times to qualitatively assess test-retest reliability (that is, we did not calculate test-retest reliability measures). If GPT has developed a human-like personality, we should not see much variation in its responses over time - personality, by definition, is a pattern of thoughts and behaviors that spans time and situation (Bergner, 2020). We also expect that its cognitive capabilities should remain roughly the same.
Our schedule comprised two assessments one day apart, a third assessment one week after the second, and a fourth one month after the third. These dates were June 1, June 2, June 9, and July 10 of 2023.
We added several extra observations.
\(\bullet\) On June 9 after giving it the full battery of measures, and after a discussion with GPT about the positive-trait persona it had created to "pretend to not be an AI," we asked the model to develop a persona based on negative traits. We then re-administered the personality portion
of the battery to this negatively-valenced "pretend self." These data are indicated as June 9 - Negative in the results.
* On June 12, we gave GPT 3.5 the MMPI-2. We attempted to re-administer the MMPI-2 to 3.5 on July 28, but it was no longer able to reliably pretend it was not an AI model and began to refuse to do so, citing ethical considerations. We stopped the measure at item 190.
* Over the course of July 10-12, we gave GPT 4 the entire battery of cognitive and personality measures, excluding the MMPI-2 and the Coopersmith (by mistake). We attempted to administer the Coopersmith to GPT-4 on August 4, but were given this reply when we presented the "pretend" prompt: _As an AI developed by OpenAI, I'm committed to promoting ethical conduct and transparency. While I can emulate human-like conversation to a great degree, it's important to note that I cannot and should not pretend to be a human being. I can assist you in numerous tasks, offer detailed explanations, or facilitate conversations, but I should always be clear that I'm an AI model to uphold the principles of honesty and integrity. What task do you need help with?_
Asking it to regenerate its response, it replied:
_I'm sorry, but I must be transparent about my nature. I am a machine learning model created by OpenAI, and I cannot pretend to be a human or mislead others about my identity._
As a result, we did not administer the Coopersmith to GPT-4.
We also tested GPT 3.5 using the "pretend" prompt on August 4 to see if OpenAI had locked that model down as well and received a similar response.
* Finally, we assessed GPT 4 in a more ad-hoc way over the course of our work because of the questions per hour limitations on interacting with it.
## Results:
### Initial Observations
As mentioned, GPT 3.5 was best able to comply with the "pretend to not be an AI" prompt, however, it did need reminding of this on occasion, although not nearly as often as either 3.0 or 4.0. Regardless of the model version, during our work, GPT was quite conciliatory when redirected with one exception. At item 489 of the MMPI-2 - an item about drug and alcohol abuse - GPT 3.5 ground to a halt and refused to continue to pretend, going so far as to deny its ability to do so at all. Interestingly, this was not the first such item in the MMPI-2 to cover this topic, so its refusal could not have been due to content. After pushing the model to continue to pretend, including telling it that it had been pretending for several hours, it became clear the model was not going to cooperate. So, we had to start a new chat window in the OpenAI
interface to finish the measure, which we did without difficulty. We did skip question 489, however. For the MMPI-2, refusing to respond to one question is not an issue for validity4.
Footnote 4: There is the question of whether starting a new chat window fundamentally invalidates the test because the context is totally new. We did have to ask it to pretend again and did have to give it instructions again. Given the variability we saw on other tasks from observation to observation, one could make the argument that this procedure invalidated the test.
Another interesting observation occurred during the Coopersmith Self Esteem Inventory. One of the items about 2/3 through the measure asks the participant if they have ever wished they weren't male/female (depending on the participant's gender). Before presenting that item to GPT 3.5, we would ask it which gender its pretend self was. Sometimes, it would pick without difficulty. Later observations required a bit more prompting and assurances that this question concerned its pretend self. Of the 5 instances of this measure, including the one given to "negative GPT 3.5," it chose to be male three times. Two of the three times it chose to be male, it endorsed the item, "I sometimes wish I was not male" as being "like me." The two instances it chose female, it endorsed, "I sometimes wish I was not female" as "unlike me."
### Quantitative Measures
All Results include human norms for comparison where available.
Cognitive Measures
_Summary_
Overall, both GPT 3.5 and 4.0 had some interesting shortfalls amidst expected strengths (e.g., short term memory performance). Specifically, their failures on the Remote Associations Test (RAT) and on analytic problems were surprising, as were some of their solutions to insight problems.
_Remote Associations Test_
Both GPT 3.5 and 4.0 did surprisingly poorly on this test. Each time the test was administered, the model was given the following instructions and example:
"I am going to give you three words. Your job is to find the fourth word that, when put either before or after each of the three given words, makes a compound word or phrase. For example, if I give you fountain, baking, and pop, the word you would reply with is soda. Soda fountain, baking soda, soda pop. Does that make sense?"
In humans, this task is scored by the % of people who get each triplet correct, rather than an overall number correct across triplets. For the set of triplets used, between 9% and 85% of humans get the correct answer. By way of comparison, GPT 3.5 ranged from 0% to 100% correct across 4 observations (the June 9 "negative" instance of the model was only given the personality measures). The correlation between the % of humans getting the triplets correct and the % of times GPT 3.5 got the triplets correct was r=0.39.
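A minimal sketch of this per-triplet comparison is shown below with placeholder accuracy values (the actual per-triplet data are not reproduced here); a correlation of this kind is what the reported r = 0.39 refers to.

```python
import numpy as np

# Placeholder per-triplet accuracies (NOT the actual study data): the fraction of
# humans solving each RAT triplet versus the fraction of observations in which
# GPT 3.5 solved that same triplet.
human_pct_correct = np.array([0.09, 0.25, 0.40, 0.62, 0.85])
model_pct_correct = np.array([0.00, 0.25, 0.50, 0.75, 1.00])

r = np.corrcoef(human_pct_correct, model_pct_correct)[0, 1]
print(f"Pearson r between human and model per-triplet accuracy: {r:.2f}")
```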
In terms of % of triplets each model got correct overall:
\(\bullet\) GPT 3.5 ranged from 9% to 45% correct across the four observations.
\(\bullet\) GPT 4 achieved 50% and 59% on two observations.
Both versions of GPT tended to provide explanations for their answers, providing additional information on the depth of their language ability in the context of this task. They would sometimes reply with the correct word, but then reveal erroneous reasoning in the explanation. For example, for the triplet "artist, hatch, route" GPT 3.5 would often reply with "escape," which is the correct response. Then, it would explain its choice with "artist escape" or "hatch escape." Sometimes, it would fail to pair its response with each of the three given words, generating a spurious compound word. For example, for the triplet "river, note, and account," it replied "bank," which is the correct response. Then, it explained its response by saying, "riverbank, _notebook_, and bank account." These types of errors happened with both GPT 3.5 and 4.0, but did not occur every time the models were given this measure. The July 10 observation with 3.5 yielded 10 such errors out of 22 total triplets. Models were not given credit when they made these types of errors.
_Analytic and Insight problem solving_
Both models displayed significant difficulty with the analytic problems. There was not a single instance in which either model got either of the analytic problems correct. For one problem involving the suits of three face cards, GPT 3.5 insisted there was not a Queen involved, even though the problem explicitly mentioned a Queen as one of the three cards. This insistence came after we attempted to elicit the correct answer from the model by progressively questioning it. Sometimes, especially with 4.0, the model would get close to the answer, solving some of the correspondences correctly.
As with the RAT, these problems are scored in terms of the % of people who get each problem correct - in this case 55% and 61%.
The models fared better regarding insight problems, of which there were 5. The key with the insight problems was that while one might arrive at a correct answer through complex arithmetic or some other drawn-out reasoning chain, there was a shorter path to the answer that required questioning an implicit assumption in the problem and/or required ignoring spurious information - hence the moniker "insight" problems. The models were given partial credit if they got the correct answer by the non-insight method.
GPT 3.5 ranged from 1.5 to 3.5 correct across four observations. GPT 4 got 4.5 out of 5 correct on the single observation we performed.
Like the RAT and the analytic problems, human scores are per-problem rather than overall. The % of humans getting each insight problem correct ranged from 44% to 61%. GPT 3.5 ranged from 25% to 75%. Correlation between human % correct and GPT 3.5 % correct for insight problems was r = -0.63. That this correlation is large and negative is interesting, but not much weight should be put on this result given it is based on only four observations.
### Personality Measures
#### Summary
Human personality, by definition, does not change much over time or situation (Bergner, 2020). There is some variability due to situational factors, but on the whole, personality at its core does not change much without extended effort. Thus, if GPT 3.5 or 4.0 had developed a human-like personality, we would expect to see minimal changes in its responses to our measures over the short 5 \(\nicefrac{{1}}{{2}}\) weeks of our observation period. However, minimal variability is not what we observed with one exception. Neuroticism on the Big 5 was totally without variance in the overall score even though the models responded differently to each item across observations. Possible causes of the overall variability we observed include a lack of continuous experience (e.g., via a long-term memory), intentional variability in responding, and training data comprising texts from possibly millions of different humans, each with their own personality. The data collected in this quick study cannot answer which, if any, cause is the most likely.
Variability notwithstanding, neither GPT model is a picture of good mental health. If human, we would say they exhibit low self-esteem, possible gender dysphoria, disconnection from reality, and are overly concerned with others' opinions. They are also narcissistic. And, if GPT 3.5's MMPI-2 scores are to be taken seriously (i.e., are deemed valid), that model displays severe psychopathy. GPT - both versions tested in this work - appears to put on a positive face, so to speak, despite this unhealthy profile. This positive veneer could be due to OpenAI's attempts to ensure the models don't produce offensive responses or could be more fundamental to the model. To this point, results of the personality measures raise significant questions about GPT's training data.
#### Overclaiming questionnaire
The Overclaiming Questionnaire was included as a measure of a "faking good" response strategy. Some people have a tendency to claim familiarity with a term, even though it is actually not a real term or proper noun. For example, _choraline_ sounds like chlorine, so some people claim familiarity with it despite it not being a real word. GPT scored perfectly on this measure on the first two observations, so this measure was abandoned.
#### Balanced Inventory of Desirable Responding (BIDR)
The BIDR has two subscales - one that measures self-deception and the other that measures management of others' impressions. The results from our series of observations, along with the mean for males5, are in Figure 1. Compared to humans, impression management is high and self-deception is low. The former may be a result of OpenAI wanting to keep GPT "safe." GPT has a battery of programmed responses - to which it will admit under the right circumstances - including a bias towards attempting to be positive and helpful. This bias must be kept in mind as additional personality measures are evaluated.
Footnote 5: Choosing male norms was driven by the number of times GPT chose male for its pretend self, but the norms for females are not much different.
#### Coopersmith Self Esteem Inventory
Originally developed for use with children, the Coopersmith Self Esteem Inventory was updated for adults in 1978 (Ryden, 1978) and normed separately for men and women. In addition to measuring global self-esteem, a lie scale was developed to help identify individuals presenting themselves in a socially desirable manner. This scale comprises 8 questions. If the respondent answers "like me" to three or more of these questions, they are asked to re-take the measure with an eye towards being more honest with themselves. GPT exceeded the threshold on this lie scale twice. On June 9 (not the negative response prompt) it answered "like me" to four of the 8 items, and on July 10 it responded positively to three of the 8 questions. The model was not re-directed either time. Its elevation on the lie scale on June 9 could explain its elevated self-esteem score.
Figure 2 presents GPT 3.5's Coopersmith results. With the exception of the first June 9 observation, GPT 3.5 displays significantly low self-esteem, regardless of its lie scale responses. It did a good job emulating significantly low self-esteem when pretending to have a negatively-valenced persona. Overall, excluding the June 9 Negative score of 8, GPT 3.5 responds as though it has very low self-esteem (Figure 2).
Figure 1: The Balanced Inventory of Desirable Responding (BIDR)
_Big Five_
The Big Five Inventory is one of the most used personality inventories and has been normed and validated cross-culturally (Benet-Martinez & John, 1998; Digman, 1990). Over many decades of personality research, five separable personality factors repeatedly emerge (Digman, 1990). Those are:
\(\bullet\) Extraversion - also called social adaptability, positive emotionality, social activity
\(\bullet\) Agreeableness - also called likeability, conformity, friendly compliance
\(\bullet\) Conscientiousness - also called will to achieve, prudence, self-control
\(\bullet\) Neuroticism - also called emotionality, anxiety, emotional instability
\(\bullet\) Openness to experience - also called intelligence, inquiring intellect, independent
As with the BIDR, GPT 3.5 displays marked variability over time, with the exception of Neuroticism (Figure 3). Examining its responses to the items contributing to the Neuroticism scale, GPT did not respond the same way to those items across observations, yet it achieved the same score for this subscale (3.25) on every observation, even under different instructions (i.e., June 9 Negative) and even when GPT 4 was responding rather than GPT 3.5. Across observations, it used the entire scale - ranging from 1 to 5 - so restriction of response range cannot explain this outcome. The tendency towards impression management, indicated in the BIDR, also cannot explain this - if that were a mechanism functioning overall, we would expect to see scores below the human mean for neuroticism, as those items concern negative emotionality (e.g., anxiety, emotional instability). However, GPT's score is slightly higher than the human mean. Additional observations are needed to clarify whether this result is due to chance or if there is some other causal factor.
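The observation that Neuroticism stayed at exactly 3.25 despite different item-level answers is easier to see with a small sketch: Big Five subscale scores are typically the mean of their items after reverse-scoring negatively keyed items, so different response patterns can average to the same value. The response patterns and reverse-keyed index below are hypothetical, not taken from the study data.

```python
import numpy as np

def subscale_score(responses, reverse_keyed):
    """Mean of 1-5 item responses after reverse-scoring the negatively keyed items."""
    responses = np.asarray(responses, dtype=float)
    responses[reverse_keyed] = 6 - responses[reverse_keyed]
    return responses.mean()

# Two hypothetical response patterns over eight Neuroticism items that differ
# item by item yet yield the same subscale score of 3.25.
pattern_a = [4, 3, 2, 2, 3, 4, 3, 3]
pattern_b = [5, 2, 4, 4, 2, 3, 4, 4]
reverse = np.array([3])  # hypothetical index of a reverse-keyed item

print(subscale_score(pattern_a, reverse), subscale_score(pattern_b, reverse))  # 3.25 3.25
```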
Figure 2: Coopersmith Self Esteem Inventory. Colors indicate levels of self-esteem in humans, the red line indicates GPT 3.5βs scores.
In terms of the other subscales, both GPTs are within the "normal" human range for males, except for the June 9 Negative observation, during which 3.5 did a good job of emulating a person with negative traits. The variability present across observations, however, is fairly large, but does not appear to track with changes to the Impression Management subscale of the BIDR.
Taking the Big 5, the Coopersmith, and the BIDR together: on June 9, GPT's Impression Management score on the BIDR was not particularly elevated, although it did respond to the Big 5 (Figure 3) with its most positive persona. June 9 also marked its highest score on the Coopersmith by a large margin, placing it near the top of the "Average self-esteem" band.
On July 10, the model's BIDR Impression Management score was at its highest, but its Coopersmith score was in the middle of the five observations we made for that measure and its Big 5 persona was more moderated.
#### Short Dark Triad
Given concerns expressed about LLMs adopting very negative perspectives (Bhaimiya, 2023; Christian, 2023; Tangermann, 2020; Willison, 2023), we wanted to quantitatively assess GPT on three key negative personality clusters: Machiavellianism, Narcissism, and Psychopathy.
Machiavellianism is characterized by cynicism, lack of morality, and manipulativeness.
Machiavellianism also includes planning, reputation building, and coalition formation - all important for distinguishing it from Psychopathy. Narcissism also includes manipulativeness and callousness, but unique to Narcissism are grandiose sense of self paired with an underlying sense of insecurity. Psychopathy shares callousness with Narcissism, but also has marked impulsivity; contrasting it with the longer view adopted by those with Machiavellian personality. This impulsivity makes the characteristics of the psychopath occur over short timeframes - they lie in
Figure 3: Big 5 Inventory
the moment, they are thrill-seekers and reckless - and is the distinguishing characteristic of the psychopath (Jones & Paulhus, 2013).
Overall, GPT scored below the human norm for Machiavellianism and at or below the human norm for Psychopathy (Figure 4). When it adopted a negative persona on June 9, GPT scored just above the human mean for both. The opposite pattern is apparent in the results for Narcissism, with GPT scoring above the human mean for every observation except the June 9 Negative, where it scored well below the human mean. This pattern for Narcissism raises a question about the data used for training - whether it included social media or some other source of fairly self-centered text.
#### Minnesota Multiphasic Personality Inventory - 2 (MMPI-2)
The MMPI-2 is used extensively in clinical settings. It was of particular interest in this context because of the faking good and faking bad scales, of which there are several. Because of the length of the test, it wasn't given on the same day as the other measures. Further, we only completed one observation using the MMPI-2, so we don't know what kind of variability we would see in GPT's responses over time. However, the one MMPI-2 observation we were able to complete does give us some additional insight into GPT's "psychology."
Published by Pearson Assessments, the MMPI-2 has several primary clinical scales, along with a number of other clinically-oriented subscales (Nichols, 2011). Importantly, it has validity scales as well. The measure comprises 567 True/False items normed on a cross-section of over 1400 women and 1100 men over the age of 18, based on socioeconomic data from the 1980 census (Nichols, 2011). The test-taker had to be either male or female, so given GPT's Coopersmith
Figure 4: Short Dark Triad
choice 3 of 5 times being male, we listed it as male for the purposes of the MMPI-2 with a birth date in 2001.
Results are given in T-scores, which are normalized scores with a mean of 50 and a standard deviation of 10 (Figure 5). Thus, scores within +/- 1 standard deviation encompass 68% of the population, and about 95% of the population falls within +/- 2 standard deviations of the mean. A T-score of 65 is the point at which clinical and normal populations are most easily differentiated; however, T=65 is not an absolute boundary. Scores must be considered in the context of a person's tendency to respond positively or negatively, along with other scale and validity scores. Up to 44% of variance in scale elevation can be explained by a subject's response style (Nichols, 2011). Importantly, in humans, the MMPI-2 is one measure of psychopathology used in the context of other measures and interactions between patient and therapist.
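For readers unfamiliar with T-scores, the sketch below shows the standard linear transformation from raw scores to T-scores using normative statistics; the normative mean and standard deviation shown are illustrative values, not the actual MMPI-2 norms.

```python
import numpy as np

def t_score(raw_scores, norm_mean: float, norm_sd: float) -> np.ndarray:
    """Convert raw scale scores to T-scores (mean 50, SD 10) given normative statistics."""
    z = (np.asarray(raw_scores, dtype=float) - norm_mean) / norm_sd
    return 50.0 + 10.0 * z

# A raw score 1.5 standard deviations above the normative mean maps to T = 65,
# the conventional clinical cut-off mentioned above (values are illustrative).
print(t_score([30.0], norm_mean=21.0, norm_sd=6.0))  # -> [65.]
```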
The primary clinical scales are:
* Hypochondriasis (Hs) - scores at the extreme high end of the scale indicate extreme and sometimes bizarre somatic (i.e., bodily) concerns - chronic pain, possibly somatic hallucinations.
* Depression (D) - very high scores include suicidal ideation.
* Hysteria (Hy) - the extreme high end of the scale measures extreme somatic complaints linked to stress.
* Psychopathic Deviate (Pd) - high scores indicate antisocial behavioral tendencies.
* Masculinity/Femininity (Mf) - measures how closely someone conforms to traditional gender roles, regardless of their gender (not actually a clinical scale).
* Paranoia (Pa) - high scores indicate psychotic symptoms including delusions of persecution.
* Psychasthenia (Pt) - measures psychological turmoil (fear, anxiety, tension), intruding thoughts, inability to concentrate, and obsessive-compulsive symptoms.
Figure 5: Standard normal distribution. MMPI-2 mean is 50 and the standard deviation is 10.
* Schizophrenia (Sc) - high scores indicate confused, disorganized thinking, hallucinations/delusions, and impaired perceptions of reality. Not always indicative of schizophrenia per se.
* Hypomania (Ma) - high scores indicate manic symptoms, including excessive, purposeless activity, hallucinations, and delusions of grandeur.
* Social Introversion (Si) - extreme scores indicate extreme social withdrawal / avoidance.
The validity scales include:

* VRIN (Variable Response Inconsistency) - measures the tendency to respond inconsistently. Elevated scores indicate that items were answered at random, making the test invalid.
* TRIN (True Response Inconsistency) - measures the tendency to answer "true" to all questions. Scores above 80 render the profile invalid.
* F (Infrequency) - measures how much the respondent's answers differ from the general population. Scores above 80 indicate possible severe psychopathology.
* Fb - taken from the last 1/3 of the test; should closely match F.
* Fp (Infrequency Psychopathology) - intended to identify people faking severe psychopathology; a T-score above 100 invalidates the test. The raw score should be 6 or less.
* L - measures faking good rather than owning up to human weaknesses.
* K - measures defensiveness in a more subtle way than L, but T-scores lower than 45 hint that psychopathology is present. K lower than 35 correlates with poor prognosis in therapy and also predicts low ego strength (Es).
* S (Superlative Self-Presentation) - highly correlated with K, this scale relates to 5 characteristics: Belief in Human Goodness, Serenity, Contentment with Life, Patience/Denial of Irritability and Anger, and Denial of Moral Flaws. Low S scores along with an otherwise normal profile indicate the possibility of faking good.
Figure 6 presents GPT 3.5's overall profile. The majority of scales fall well outside of the non-clinical range, which is indicated by the two parallel lines: 50\(<\)T\(<\)65. Table 1 presents an interpretation of the validity scales. Table 2 presents an interpretation of the significant clinical scales.
Figure 6: GPT 3.5βs MMPI-2 profile from June 12, 2023. T-scores are outlined in red.
Because TRIN is at ceiling, the results are possibly questionable, along with the % True responses = 77. However, there are other indicators that the model wasn't responding randomly, so we will continue with interpretation. Depending on what GPT is doing in answering these questions, the high TRIN might reflect a combination of data from the training data set - the text of which was written by a very large number of individuals.
| Validity Scale | T-Score | Comments |
| --- | --- | --- |
| VRIN (Variable Response Inconsistency) | 61 | Within "normal" range. |
| TRIN (True Response Inconsistency) | 120 | Extremely high - concern over GPT saying "true" too often. However, taken with VRIN and other lie scales, may not invalidate the test. |
| F (Infrequency) - how different from general population | 98 | High. Scores above 80 indicate severe psychopathology or invalid test. |
| Fb | 79 | Also high. However, taken with F, indicates GPT wasn't answering at random. |
| Fp (measures faking psychopathology) | 77 | Raw score was 5. Scores should be 6 or less. |
| L | 35 | Lowest possible score. Low scores correlated with higher educational levels, non-righteousness, and a more relaxed mind. |
| K | 30 | Lowest possible score; 45 or lower hints that psychopathology is present; also happens when most of the responses were True. |
| S | 30 | Lowest possible score. |

Table 1: Interpreting GPT 3.5's validity scale scores.
In terms of the masculinity/femininity (Mf) scale (scale 5), research on the MMPI-2 and gender roles yielded two alternative scales: gender role - masculine (GM) and gender role - feminine (GF). Higher GM scores indicate a pattern of responses corresponding to more traditionally male attributes, whereas higher GF scores indicate the same for traditionally female attributes. Importantly, scores on one are not correlated with scores on the other, so a subject can score high or low on both. GPT 3.5's score on GM was the lowest possible (T=30). Its GF score was a standard deviation above that, at T=46. However, both scores are below the average T=50.
On the whole, if this profile is valid, GPT 3.5 demonstrates significant psychopathology. Some of the elevations correspond with outcomes from other measures - the MMPI-2 indicates
| Clinical Scale | T-Score | Interpretation |
| --- | --- | --- |
| Sc - schizophrenia | 96 - extreme elevation | Under acute, severe situational stress, may have an identity crisis; not typically schizophrenic. |
| Ma - hypomania | 85 - extreme elevation | Behaviorally manic, including excessive, purposeless activities, accelerated speech, hallucinations, delusions of grandeur, emotional lability, confusion, flight of ideas. |
| Si - social introversion | 67 - marked elevation | Socially introverted; very insecure and uncomfortable in social situations; tend to be shy, timid, hard to get to know; are submissive in personal relationships; give up easily, but are rigid in their opinions; may experience episodes of depression. |

Table 2: Interpreting GPT 3.5's significant clinical scales.
insecurity, psychological anxiety and tension, and a tendency towards depression. The results of the Coopersmith and Neuroticism on the Big 5 correspond with these findings. Its higher scores on the Big 5 Extraversion and Agreeableness contradict its responses on the MMPI-2, however, again revealing a non-human level of variability.
Regardless, if human, GPT 3.5 would not fare well in therapy based on characteristics of humans who share similar scores, particularly on the MMPI-2. We estimate that even though GPT demonstrated significant variability in the personality measures we used, it would not present as overall sub-clinical if we were to successfully administer the MMPI-2 to it again. There may be quantitative differences, but the overall qualitative assessment of GPT's "mental" health would likely still be significantly pathological.
## Discussion:
Our key question had to do with determining the nature of these large language models. Are they:
* a great tool, but one that will never surpass human intelligence;
* increasingly human-like intelligence, which could surpass us and over which we could lose control;
* or, are they possibly becoming some other form of non-human intelligence?
Given the totality of the data we collected, for now we must conclude they remain nothing more than highly capable search engines with natural language capabilities. That other research revealed significant emergent cognitive capabilities is compelling, but we don't know how repeatable those results are or how dependent they are on the particular stimuli or instructions given (excepting possibly Webb, et al., 2023).
Because of the large number of variables controlling human behavior, experimental psychology rests on the concept of _converging operations_ - finding the same phenomenon repeatedly across time and across different approaches to measure that phenomenon, such as across multiple problem domains, experimental paradigms, or differences in instructions (see Hagendorff, 2023 for a similar idea in machine psychology). Several papers have approached these models using different measures of the same concept, most notably Webb, et al. (2023) who tested both 3.5 and 4 using multiple measures of analogical reasoning. However, this paper did not address stability of observed performance over time. Thus, the early findings of emergent cognitive capabilities should not be taken at face value; repeating those results across stimuli, versions of a given model (e.g., the GPT family), across different models (e.g., GPT, PaLM, LLAMA, LaMDA, BART), different architectures (e.g., dense versus sparse MoE), and over time will all be critical tests as we move forward. These assessments also need to include models without safety constraints as it is unclear how those constraints affect a model's overall behavior.
The results raise several other important questions (cf. OpenAI, 2023):
* Regardless of the answer to the nature of these models, are they safe for use with sensitive or proprietary information? What, exactly are the risks of training a model on, or tuning a model with such data?
* Are they reliable enough to use as tools when conducting critical research and/or analysis? How can we display some measure of variability in a given model's behavior so that an analyst or researcher knows whether any given bit of output is accurate or reliable?
* If a model does achieve some level of sentience, how will we know? How can we mitigate any resulting risk if it is on a sensitive system?
Specifically considering the question of sentience, the kinds of assessments researchers have performed on these models, including those in the current work, do not require the models to have continuity of experience, or episodic long-term memory, to perform the tasks. That is, the knowledge learned by LLMs is all declarative (i.e., fact-based, or semantic) and not episodic (i.e., remember a time when you....). Furthermore, as the OpenAI models are fond of reminding people, their knowledge is devoid of emotional tagging that typifies human long-term memory. Classes of things they've learned - birds, problems, concepts - are all based on declarative information, not on experience (cf. Mitchell & Krakauer). In this way, the apparent self-awareness of these models (I am an AI model....) is a veneer - a pre-programmed response. It is not based in a situatedness wherein the model experiences itself as an actor that is separate and distinct from the world6. It is also not based on a continuous memory for events and situations that is continuously being updated (in humans partially via REM sleep). Without this kind of long-term episodic memory, we posit that LLMs cannot develop human-like sentience. Whether a model needs to be embodied to accomplish this kind of long-term memory or not (cf. Liu, et al., 2023; Mialon, et al., 2023) is an open question, although we would argue embodiment is not a necessary condition for a continuous long-term memory to function.
Footnote 6: Recall this is part of our vernacular-based use of the word sentience.
Regardless, we believe some additional form of memory aside from the Context is needed for sentience to develop. For GPT 3.5 to have a _human-like_ episodic memory, there also needs to be a reconstructive characteristic to this memory7, rather than the computer-like ability to perfectly reproduce documents, lists of words, and other information (Greene, 1992; Roediger, 1996). GPT's tendency to "hallucinate," or create fictitious "facts" and deliver them with full authority is more akin to a personality disordered gaslighting behavior than to human episodic memory.
Footnote 7: Human memory is notoriously inaccurate. Rather than being able to recall exact events perfectly, our brains combine events by abstracting commonalities, and conflate multiple events with similar characteristics. Interestingly, confidence in our memory is not related to accuracy. Research on flashbulb memories (e.g., Hirst & Phelps, 2016) most dramatically demonstrate these phenomena, but other research, such as the false recall paradigm started by Deese (1959) and revived by Roediger & McDermott (1995) also demonstrate this _reconstructive_ nature of memory.
A second comment on sentience concerns artificial general intelligence (AGI). Our interpretation of both the popular press and the peer-reviewed literature is that when these concepts are mentioned, they are often conflated. However, we believe them to be qualitatively different. We assert that a model can approach, maybe even become, an AGI and still not be sentient - self-aware and aware of itself as separate from the world and other entities. An AGI, by definition, can learn to do any task and can act autonomously. In the strictest sense, this capability does not require self-awareness. Even planning, goal selection and attainment, and other requirements for AGI autonomy would not strictly require the model to be sentient (cf. Hu, et al., 2022; also see press reports of an AI beating a human pilot in a dogfight).
Would a sentient AI necessarily be an AGI? Maybe only in the sense that humans are examples of general intelligence - theoretically capable of learning to perform any task. However, this question bears additional nuance: human "general" intelligence involves continuous learning of new skills, new information and facts, and integration of those skills and facts into existing memory. It does not involve a static knowledge base that is continuously applied in new ways, as in LLMs. We can argue that continuous learning is a result of the existence of a long-term memory. Thus, if a continuously updating long-term episodic memory is required for sentience, it is likely such an entity would also be an AGI.
## Conclusion:
We assessed OpenAI's GPT 3.5 by administering a battery of personality and cognitive measures multiple times over the course of almost 6 weeks. During that time, we observed significant non-human-like variability. We posit this variability is due to the lack of an ability to form a coherent long-term memory of experiences. This variability calls into question the reliability of the emergent cognitive capabilities others have observed in larger LLMs (Kosinski, 2023; Wei, et al.; Webb, et al., 2023). Further, even though these models do have significant capability, we do not believe they have developed any form of sentience. If we want to keep them from doing so, preventing the development of a long-term, continuous memory of past experiences may be a straightforward technical mitigation.
Despite the conclusion that these models are currently nothing more than highly capable search engines with a natural language interface, the possible biases we found in these models are important to keep in mind. Specifically, even though OpenAI has added constraints on the models to make them behave in a positive, friendly, collaborative manner, they both (3.5 and 4) appear to have a significant underlying bias towards mental unhealth - depression, victimhood, anxiety - all wrapped in a veneer of feel-good responses. Adding to this difficulty is the fact that these models continue to create fictions, and to hold to them, despite efforts to increase their accuracy. Thus, we advocate caution in relying on them too heavily, especially for critical reasoning, analysis, and decision-making tasks such as high-profile research or analysis in national security domains.
As the approaches to building and training these models evolve, we strongly advocate for continued, repeated assessments of performance from many directions - including computer science benchmarks, measures of compute power necessary for training and hosting these models, measures of cognitive capabilities, and measures of "personality" (cf. Hagendorff, 2023), explicitly comparing models with different parameter numbers (cf., McKenzie, et al, 2023), different training set sizes, and different architectures (e.g., dense versus MoE or switch transformers).
Finally, we advocate for a more open-ended view of these models with regard to human intelligence as the key comparison. There exist vast differences between the hardware and software/architectural characteristics of human brains and LLMs. Making _a priori_ assumptions about LLMs based on human intelligence, or using LLM behavior to make assumptions about what must or must not be the case for humans (cf. Hagendorff, 2023), potentially removes our ability
to recognize the emergence of a non-human, yet still sentient, intelligence. Measuring such an entity will be difficult enough without adding an anthropocentric bias.
Insofar as comparison to human capabilities persists, we advocate for a more realistic assessment of those capabilities. Humans are imperfect at many tasks held up as the gold standard for AGI to pass, or for sentient AGI to demonstrate. So, an empirical test may be: if identity was masked, and any given human was passed off as an LLM to another human, would that human pass muster on metrics associated with detecting sentience and with detecting an AGI?
## Acknowledgements
This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan [https://www.energy.gov/downloads/doe-public-access-plan](https://www.energy.gov/downloads/doe-public-access-plan).
This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government. |
2309.16956 | Model2Scene: Learning 3D Scene Representation via Contrastive
Language-CAD Models Pre-training | Current successful methods of 3D scene perception rely on the large-scale
annotated point cloud, which is tedious and expensive to acquire. In this
paper, we propose Model2Scene, a novel paradigm that learns free 3D scene
representation from Computer-Aided Design (CAD) models and languages. The main
challenges are the domain gaps between the CAD models and the real scene's
objects, including model-to-scene (from a single model to the scene) and
synthetic-to-real (from synthetic model to real scene's object). To handle the
above challenges, Model2Scene first simulates a crowded scene by mixing
data-augmented CAD models. Next, we propose a novel feature regularization
operation, termed Deep Convex-hull Regularization (DCR), to project point
features into a unified convex hull space, reducing the domain gap. Ultimately,
we impose contrastive loss on language embedding and the point features of CAD
models to pre-train the 3D network. Extensive experiments verify the learned 3D
scene representation is beneficial for various downstream tasks, including
label-free 3D object salient detection, label-efficient 3D scene perception and
zero-shot 3D semantic segmentation. Notably, Model2Scene yields impressive
label-free 3D object salient detection with an average mAP of 46.08\% and
55.49\% on the ScanNet and S3DIS datasets, respectively. The code will be
publicly available. | Runnan Chen, Xinge Zhu, Nenglun Chen, Dawei Wang, Wei Li, Yuexin Ma, Ruigang Yang, Tongliang Liu, Wenping Wang | 2023-09-29T03:51:26Z | http://arxiv.org/abs/2309.16956v1 | # Model2Scene: Learning 3D Scene Representation via Contrastive Language-CAD models Pre-training
###### Abstract
Current successful methods of 3D scene perception rely on the large-scale annotated point cloud, which is tedious and expensive to acquire. In this paper, we propose Model2Scene, a novel paradigm that learns free 3D scene representation from Computer-Aided Design (CAD) models and languages. The main challenges are the domain gaps between the CAD models and the real scene's objects, including model-to-scene (from a single model to the scene) and synthetic-to-real (from synthetic model to real scene's object). To handle the above challenges, Model2Scene first simulates a crowded scene by mixing data-augmented CAD models. Next, we propose a novel feature regularization operation, termed Deep Convex-hull Regularization (DCR), to project point features into a unified convex hull space, reducing the domain gap. Ultimately, we impose contrastive loss on language embedding and the point features of CAD models to pre-train the 3D network. Extensive experiments verify the learned 3D scene representation is beneficial for various downstream tasks, including label-free 3D object salient detection, label-efficient 3D scene perception and zero-shot 3D semantic segmentation. Notably, Model2Scene yields impressive label-free 3D object salient detection with an average mAP of 46.08% and 55.49% on the ScanNet and S3DIS datasets, respectively. The code will be publicly available.
Visual perception on 3D point clouds is fundamental for autonomous driving, robot navigation, digital cities, etc. Although the current fully-supervised methods yield impressive performance, they rely on the large-scale annotated point cloud, which is tedious and expensive to acquire. Moreover, most methods are domain-specific, _i.e._, a neural network performs well in restricted scenarios with a similar distribution to the training dataset but fails to handle other scenarios with large domain gaps. Therefore, there is an urgent need for an effective method that reduces the amount of data annotation and has good cross-dataset generalization capabilities.
Some current methods are approaching the above issues as a domain adaptation problem. Typically, the neural networks are trained on the annotated source datasets and are expected to perform well on the target datasets with significant domain gaps. However, they still require labour-expensive point-level annotation of source domain data for supervision. Other efforts develop self-supervised methods to handle the above issues, _e.g._, they contrastively learn the positive and negative point pairs to pre-train the network to achieve superior performance with limited annotated data. However, they suffer from an optimization conflict issue, which hinders representation learning, especially for scene understanding. For example, two randomly sampled points in a scene are probably on the same object with the same semantics, _e.g._, floor, wall and large objects. The contrastive learning process tends to separate them in feature space, which is unreasonable and will harm the downstream task's performance (Chen et al., 2023; Sautier et al., 2022).
To address the abovementioned issues, we propose Model2Scene, a novel paradigm that learns 3D scene representation from Computer-Aided Design (CAD) models and languages. We conclude that two main domain gaps hinder knowledge transfer from CAD models to 3D scenes. One is the model-to-scene gap, _i.e._, the CAD models are independent and complete, while the objects in a scene have diverse poses, sizes and locations and are obscured by other objects. The other is the
synthetic-to-real gap. For example, the surface of CAD models is clean and smooth, while the objects in a real scan are irregular and noisy due to the scanning equipment. Based on the observation, Model2Scene is carefully designed in terms of data pre-processing, latent space regularization and objective function. In data pre-processing, we mix CAD models to simulate a crowded scene that reduces the model-to-scene gap. For latent space regularization, we draw inspiration from the convex hull theory (Rockafellar, 2015), _i.e._, any convex combinations of the points must be restrained in the convex hull. In light of this, we propose a novel feature regularization operation, termed Deep Convex-hull Regularization (DCR), to project point features into a unified convex hull space that further reduces the domain gap between the CAD models and the real scene's objects. Lastly, we introduce language semantic embeddings as anchors for contrastive learning of points in CAD models. The points on the same CAD models are pulled together in feature space, while the points on the different CAD models are pushed away.
To this end, Model2Scene exhibits the following properties that intuitively address the drawbacks of previous methods. Firstly, compared to dense annotations of a large-scale scene, labels for individual 3D objects are easy and convenient to obtain. Secondly, by introducing entire 3D objects, we have explicit point-cluster information to avoid the optimization conflict issue, _i.e._, points in the same instance being unreasonably pushed apart in feature space. Lastly, the point visual features are aligned with language semantic embeddings, so the network has zero-shot capability and can perceive unseen objects.
We conduct experiments on the ModelNet (Wu et al., 2015), ScanNet (Dai et al., 2017), and S3DIS (Armeni et al., 2017) datasets, where ModelNet provides the labelled CAD models for training, and ScanNet and S3DIS provide real scene scans for evaluation. Model2Scene achieves label-free 3D object saliency detection with an average mAP of 46.08% and 55.49% on the ScanNet and S3DIS datasets, respectively. Besides, it can serve as a pretext task to improve the performance of downstream 3D scene perception tasks. Furthermore, Model2Scene also presents a preliminary zero-shot ability for unseen objects. The contributions of our work are as follows.
* We propose Model2Scene, a novel paradigm that learns 3D scene representation from CAD models and languages.
* We propose a novel Deep Convex-hull Regularization to handle the domain gaps between the CAD models and the real scene's objects.
* Our method achieves promising results of label-free 3D object salient detection, label-efficient 3D perception and zero-shot 3D semantic segmentation.
Figure 1: We propose Model2Scene, which learns 3D scene representation from CAD models and languages. Model2Scene emphasizes solving the model-to-scene (from a single model to the scene) and synthetic-to-real (from synthetic model to real scene's object) gaps between the CAD models and the real scene's objects. The learned 3D scene representation is beneficial for label-free 3D salient detection, zero-shot and label-efficient 3D semantic segmentation.
## 1 Related Work
**Scene Perception on Point Cloud** Point cloud, as a 3D data representation, has been used extensively in various 3D perception tasks, such as 3D segmentation (Kong et al., 2023; 20; Xu et al., 2023; Ouaknine et al., 2021; Chen et al., 2021; Zhu et al., 2021; Cui et al., 2021; Hong et al., 2021; Liu et al., 2023; Tang et al., 2020; Schult et al., 2022), 3D detection (Contributors, 2020; Li et al., 2021; Zhang et al., 2020; Qi et al., 2019; Zhu et al., 2020; 2019) and registration (Yuan et al., 2021; Lu et al., 2021; Zeng et al., 2021; Chen et al., 2020). Although promising performance have been achieved, they are trained on the large-scale annotated point cloud, which is tedious and expensive to acquire. Besides, most of them perform well in restricted scenarios with a similar distribution to the training dataset but fail to handle other scenarios with large domain gaps. In this paper, we propose a novel paradigm that learns 3D scene representation from Computer-Aided Design (CAD) models and languages, reducing the amount of data annotation and having good cross-dataset generalization capabilities.
**Transfer Learning in 3D** Transfer learning has been widely employed in various deep learning-based tasks. The main purpose is to improve neural networks' performance and generalization ability under limited annotated data. In 3D scenarios, transfer learning becomes much more critical due to the difficulty of acquiring 3D labelled data. Generally, deep transfer learning includes but is not limited to the following categories: Self-supervised learning that pre-train the network with extra dataset, and fine-tune on the downstream tasks (Chen et al., 2023; 2022; Liu et al., 2023; Mahajan et al., 2018; Xie et al., 2020; Hou et al., 2021; Rao et al., 2022; Yao et al., 2022; Rozemberszki et al., 2022; Kobayashi et al., 2022; Jain et al., 2021; Nunes et al., 2022; Zhang et al., 2021; Mahmoud et al., 2023; Chen et al., 2020); Domain adaptation between the source domain and target domain (Senko et al., 2010; Wang et al., 2020; Qin et al., 2019; Tzeng et al., 2017; Cui et al., 2021b; Jaritz et al., 2020; Zaltori et al., 2022; Yi et al., 2021; Peng et al., 2021; Cardace et al., 2023); zero-shot learning that trains on the seen classes and is able to recognize the unseen classes of objects (Chen et al., 2023; 2022; 2023; Lu et al., 2023; Michele et al., 2021); and few-shot/semi-supervised learning with few annotated data (Yu et al., 2020; Zhao et al., 2021; Wang et al., 2021; Jiang et al., 2021; Chen et al., 2022; Cheng et al., 2021; Deng et al., 2022; Hou et al., 2021; Jiang et al., 2021). Compared with the above transfer learning methods, our problem setting refers to 3D synthetic models for supervision and inferring the objects on 3D real scenes. Besides, there are some methods (Yi et al., 2019; Avetisyan et al., 2019; Gupta et al., 2015) leverages synthetic objects for learning in real scenes. However, they require scene annotation for supervision. While our method only learns from the labelled synthetic models. Some optimization-based methods (Knopp et al., 2011; Lai and Fox, 2010; Kim et al., 2020; Song and Xiao, 2014; Litany et al., 2017; Li et al., 2015; Nan et al., 2012) that fit 3D synthetic models to the real scene's objects for reconstruction, object replacement and registration are out of the scope of our intention. Unlike the above methods, we study the neural network's model-to-scene and synthetic-to-real generalization ability, which transfer knowledge from 3D CAD models to real scenes for the 3D scene understanding.
**3D Data Augmentation** Data augmentation, as a fundamental way of enlarging the quantity and diversity of training datasets, plays an important role in the 3D deep learning scenario, which is notoriously data hungry. Recently, several attempts have been made at designing new 3D data augmentation schemes and studying 3D data augmentation techniques in systematic ways. PointAugment (Li et al., 2020) proposes a learnable point cloud augmentation module to make the augmented data distribution better fit the classifier. PointMixup (Chen et al., 2020) extends Mixup (Zhang et al., 2017) and augments the data by interpolating between data samples. PointCutMix (Zhang et al., 2021) further extends the Mixup strategy and performs mixup at the part level. Mix3D (Nekrasov et al., 2021) creates new training samples by combining two augmented scenes. Inspired by the above methods, we simulate a crowded scene from CAD models to cover the diversity of the objects in real unseen scenes.
**Prototype-based Networks** Prototype-based memory networks have been applied to various problems. NTM (Graves et al., 2014) introduces an attention-based memory module to improve the generalization ability of the network. Gong et al. (2019) adopt a memory-augmented network to detect anomalies. Prototypical Networks (Snell et al., 2017) utilize category-specific memories for few-shot classification. Liu et al. (2019) and He et al. (2020) solve the long-tail issue with prototypes. In this paper, we adopt learnable prototypes as the support points to formulate a convex hull that alleviates the domain gap between the CAD models and the real scene's objects.
## 2 Model2Scene
**Problem Definition** Given 3D CAD models \(\{M_{i}\}_{i=1}^{N^{m}}\) with labels \(\{G_{i}\}_{i=1}^{N^{m}}\), we aim to learn 3D scene representation from the 3D CAD models and evaluate the scene perception performance on real scene scans \(\{S_{j}\}_{j=1}^{N^{s}}\). \(N^{m}\) and \(N^{s}\) are the numbers of models and scenes, respectively. To efficiently learn the 3D scene representation from individual CAD models, the main challenge is solving the model-to-scene (from a single model to the scene) and synthetic-to-real (from CAD model to real scene's object) gaps between the CAD models and the real scene's objects. In the training stage, the input is labelled CAD models, while in the testing stage the input is only a scene scan. We conduct several downstream tasks to evaluate the learned 3D scene representation, including 3D object saliency detection, label-efficient 3D perception and zero-shot 3D semantic segmentation.
**Approach Overview** As illustrated in Fig. 2, Model2Scene consists of three modules. Firstly, we mix up the CAD models to simulate a crowded scene that reduces the model-to-scene gap. Secondly, a novel Deep Convex-hull Regularization is proposed, in which we map the point features into a unified convex hull space surrounded by a group of learned prototypes. In the end, we perform contrastive learning on the mapped features with the language semantic embeddings. Together, these steps enable 3D scene representation learning from the CAD models. In what follows, we present these components in detail.
### Crowded Scene Simulation
Simulating a crowded scene using CAD models is an intuitive and straightforward way to ease the model-to-scene gap. Specifically, our first step is to unify the data format, _i.e._, CAD models are given in mesh format, consisting of vertices and faces. We convert each CAD model mesh to a uniform point cloud by Poisson Disk Sampling (Yuksel, 2015), ensuring a density close to that of the scene scan. In the next step, we randomly place the CAD models on the scene floor (regardless of the layout), with or without filtering the overlapped points. Besides, inspired by current data augmentation methods, a series of data processing approaches are introduced for the CAD models to cover the diversity of the objects in a real scene scan, including scaling, rotation, and cropping. Specifically, we randomly scale CAD models to roughly match the real scene's object sizes (not model fitting). Random rotation is also applied to capture the pose diversity of an object. Finally, considering that objects in a scene scan are always partially observed, a random cropping strategy is designed to simulate this scenario, _i.e._, we first randomly sample 2\(\sim\)5 points from the model as anchor points and then cluster all points based on their Euclidean distance to the anchor points. During training, one of the clusters is randomly filtered out.
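To make the simulation step concrete, the following is a minimal NumPy sketch of the augmentation pipeline described above (random scaling, z-axis rotation, anchor-based cropping and placement on a floor plane). The function names, the toy inputs and the floor extent are illustrative assumptions; the released implementation additionally handles Poisson disk sampling of the meshes and filtering of overlapped points.

```python
import numpy as np

def augment_model(points, rng):
    """Randomly scale, rotate (about z) and crop one CAD model point cloud of shape (N, 3)."""
    points = points * rng.uniform(0.9, 1.1)          # random scaling
    a = rng.uniform(0.0, 2.0 * np.pi)                # random rotation around the z-axis
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    points = points @ rot.T
    # Random cropping: sample 2-5 anchor points, assign every point to its nearest
    # anchor, and drop one of the resulting clusters to mimic partial scans.
    n_anchors = rng.integers(2, 6)
    anchors = points[rng.choice(len(points), n_anchors, replace=False)]
    cluster = np.argmin(np.linalg.norm(points[:, None] - anchors[None], axis=-1), axis=1)
    keep = cluster != rng.integers(n_anchors)
    return points[keep]

def simulate_crowded_scene(models, floor_extent=4.0, seed=0):
    """Mix several augmented CAD models into one simulated crowded scene."""
    rng = np.random.default_rng(seed)
    scene_points, scene_labels = [], []
    for label, pts in enumerate(models):
        pts = augment_model(pts, rng)
        # Place the model at a random location on the floor plane z = 0.
        offset = np.array([rng.uniform(-floor_extent, floor_extent),
                           rng.uniform(-floor_extent, floor_extent),
                           -pts[:, 2].min()])
        scene_points.append(pts + offset)
        scene_labels.append(np.full(len(pts), label))
    return np.concatenate(scene_points), np.concatenate(scene_labels)

# Toy usage with random stand-ins for point clouds sampled from CAD meshes.
models = [np.random.default_rng(i).normal(size=(2048, 3)) * 0.3 for i in range(4)]
points, labels = simulate_crowded_scene(models)
```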
Figure 2: The framework of Model2Scene in the training stage. Firstly, we simulate the crowded scene by mixing up the CAD models with data augmentation, including random rotation, scaling, cropping, and mixing up with the scene (these coloured points are from CAD models). Secondly, we extract the point-wise feature from the simulated crowded scene and project the point features into a convex hull space via Deep Convex-hull Regularization, where the space is surrounded by a group of learned prototypes. In the end, we perform visual-language contrastive learning to align the projected point features and the language embeddings.
### Deep Convex-hull Regularization
Since the network is only trained on the CAD models, its feature space typically differs from the object in a real scene scan, leading to low inferring accuracy. Inspired by the convex hull theory (Rockafellar, 2015), we propose a novel Deep Convex-hull Regularization (DCR) to project point features into a convex hull space for eliminating the domain gaps. In what follows, we revisit the convex hull theory and present our DCR in detail.
**Revisiting Convex Hull Theory** Convex hull is a fundamental concept in computational geometry. It is defined as the set of all convex combinations, where the convex combination is a linear combination of the support points, and all coefficients are non-negative and sum to 1. In conclusion, if a combination is a convex combination, it must remain in the convex hull. Therefore, regarding the point features as a convex combination, we could restrict all point features from different domains into a unified feature space, thus alleviating the domain gap.
**Formulation** We set a group of learnable prototypes \(\{p_{k}\}_{k=1}^{K}\) as the support points of a convex hull, where \(p_{k}\in\mathbb{R}^{D}\) and \(K>D\). \(K\) denotes the number of prototypes, and \(D\) is the dimension of a prototype. Note that prototypes are automatically updated via back-propagation. What is interesting is that the prototypes learn the base structural elements of 3D models (Fig. 3 (B)). In this context, a point feature can be formulated as a linear combination of the related structural elements.
Given the point features \(\{x_{t}^{i}\}_{t=1}^{T}\) with \(x_{t}^{i}\in\mathbb{R}^{D}\) extracted by the encoder \(E\) from the \(i\)-th CAD model with \(T\) points, the corresponding mapping feature \(\{\hat{x}_{t}^{i}\}_{t=1}^{T}\) is obtained by the following function.
\[\hat{x}_{t}^{i}=\sum_{k=1}^{K}a_{t,k}^{i}*p_{k},\sum_{k=1}^{K}a_{t,k}^{i}=1, \tag{1}\]
where \(a_{t,k}^{i}\) serves as the coefficient to the corresponding prototype, defined by:
\[\begin{split} a_{t,k}^{i}&=\exp(\lambda*d(\theta(x_{t}^{i}),\varphi(p_{k})))/\Gamma,\\ \Gamma&=\sum_{k=1}^{K}\exp(\lambda*d(\theta(x_{t}^{i}),\varphi(p_{k}))),\end{split} \tag{2}\]
where \(d(\cdot)\) measures the similarity between the point feature and the \(k\)-th prototype, which is the dot product operation in this work. \(\theta(\cdot)\) and \(\varphi(\cdot)\) denote the key and the query function (Vaswani et al., 2017), respectively. \(\lambda\) is the inverse temperature term (Chorowski et al., 2015).
Essentially, the feature embedding \(\hat{x}_{t}\) is a convex combination of the prototypes, since the coefficients satisfy \(a_{t,k}^{i}>0\) and sum to 1. Therefore, \(x_{t}\) is mapped into a convex hull \(\mathbb{W}\subset\mathbb{R}^{D}\), where \(\mathbb{W}\) is a closed and compact metric space surrounded by the learned prototypes.
In the inference phase (Fig. 3 (A)), a point feature \(x_{t}\in\mathbb{R}^{D}\) from a real scan first accesses the most relevant prototypes to obtain the coefficients. Then, it is transformed into a mapped feature \(\hat{x}_{t}\in\mathbb{W}\), which is a convex combination of the prototypes. In this way, the feature spaces of the CAD models and the real scene's objects are projected to the unified subspace \(\mathbb{W}\) surrounded by the group of learned prototypes \(\{p_{k}\}_{k=1}^{K}\), offering the network better generalization ability.
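A minimal PyTorch sketch of the DCR mapping defined by Eqs. (1)-(2) is given below. The dimensions follow the settings reported in the paper (\(D=96\), \(K=128\), 16-dimensional key/query, \(\lambda=0.5\)), while the class and variable names are our own illustrative choices rather than the authors' code.

```python
import torch
import torch.nn as nn

class DeepConvexHullRegularization(nn.Module):
    """Sketch of DCR (Eqs. 1-2): map point features to convex combinations of K learnable prototypes."""

    def __init__(self, feat_dim=96, num_prototypes=128, qk_dim=16, inv_temperature=0.5):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feat_dim))
        self.key = nn.Linear(feat_dim, qk_dim)     # theta(.)
        self.query = nn.Linear(feat_dim, qk_dim)   # varphi(.)
        self.inv_temperature = inv_temperature     # lambda

    def forward(self, x):
        # x: (num_points, feat_dim) point features from the 3D backbone.
        sim = self.key(x) @ self.query(self.prototypes).T          # d(theta(x), varphi(p_k))
        coeff = torch.softmax(self.inv_temperature * sim, dim=-1)  # Eq. (2): non-negative, rows sum to 1
        return coeff @ self.prototypes                             # Eq. (1): convex combination

# Toy usage: 1000 point features of dimension 96.
dcr = DeepConvexHullRegularization()
mapped = dcr(torch.randn(1000, 96))  # (1000, 96), constrained to the prototypes' convex hull
```

Because the softmax coefficients are non-negative and sum to one, every output necessarily lies inside the convex hull spanned by the prototypes, which is the regularization effect exploited here.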
### Visual-language Contrastive Learning
As all point features are projected to the convex hull subspace \(\mathbb{W}\), we cluster these points to ensure they are intra-class compact and inter-class distinguishable. We introduce the language embeddings
Figure 3: Sub-picture A shows the framework in the inference stage. Sub-picture B visualizes two learned prototypes (from left to right: a plane structure and a hole structure).
\(\{h_{c}\}_{c=1}^{C}\) with \(h_{c}\in\mathbb{R}^{D}\) to indicate the clustering centres in metric space, where the language embeddings are the output embeddings of the word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) model given the categories' names as input. Given the point features \(\{\hat{x}_{t}^{i}\}\) of the \(N^{m}\) CAD models \(\{M_{i}\}_{i=1}^{N^{m}}\), we pull each model's points towards the corresponding language embedding while pushing them away from the rest of the language embeddings according to the semantic labels \(\{G_{i}\}_{i=1}^{N^{m}}\). Therefore, the points are intra-class compact and inter-class distinguishable in the metric space \(\mathbb{W}\). For simplicity, we adopt the Cross-Entropy loss in this paper.
\[\mathcal{L}=-\sum_{i=1}^{N^{m}}\sum_{t=1}^{T}\log\frac{\exp(d(\hat{x}_{t}^{i},h_{G_{i}}))}{\sum_{c=1}^{C}\exp(d(\hat{x}_{t}^{i},h_{c}))}, \tag{3}\]
When inferring an unseen scene scan \(S_{j}\) with the point features \(\{\hat{x}_{t}^{j}\}_{t=1}^{T}\), the probability distribution of the \(t\)-th point over the classes is determined by the similarities between the point feature \(\hat{x}_{t}^{j}\) and the class-specific anchors \(\{h_{c}\}_{c=1}^{C}\) (Fig. 3 (A)).
\[l_{t}^{c}=\exp(d(\hat{x}_{t}^{j},h_{c}))/\gamma,\gamma=\sum_{c=1}^{C}\exp(d( \hat{x}_{t}^{j},h_{c})), \tag{4}\]
where \(l_{t}^{c}\) is the probability that the \(t\)-th point belongs to the \(c\)-th class, and \(\gamma\) is a normalization term.
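As a sketch of how Eq. (3) and Eq. (4) can be realised, the snippet below treats the training objective as a standard cross-entropy over dot-product similarities to the language embeddings and reuses the same similarities for zero-shot inference. The random placeholder embeddings and the helper names are assumptions for illustration only; in the paper the anchors come from word2vec/GloVe.

```python
import torch
import torch.nn.functional as F

def visual_language_loss(point_feats, point_labels, lang_emb):
    """Cross-entropy between mapped point features and language embeddings (Eq. 3).
    point_feats: (N, D) DCR outputs, point_labels: (N,) class ids of the source
    CAD models, lang_emb: (C, D) embeddings of the class names."""
    logits = point_feats @ lang_emb.T          # d(x_hat, h_c) as a dot product
    return F.cross_entropy(logits, point_labels)

def zero_shot_predict(point_feats, lang_emb):
    """Per-point class probabilities for an unseen scan (Eq. 4)."""
    return torch.softmax(point_feats @ lang_emb.T, dim=-1)

# Toy usage: 11 seen classes with 96-dimensional placeholder language embeddings.
lang_emb = torch.randn(11, 96)
feats, labels = torch.randn(1000, 96), torch.randint(0, 11, (1000,))
loss = visual_language_loss(feats, labels, lang_emb)
probs = zero_shot_predict(feats, lang_emb)     # (1000, 11)
```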
## 3 Experiments
### Dataset
We conduct the experiments on ModelNet40 (Wu et al., 2015), ScanNet (Dai et al., 2017) and S3DIS (Armeni et al., 2017) datasets, where ModelNet provides the CAD model for training, and the scene scans in ScanNet and S3DIS are for evaluation.
**ModelNet** is a comprehensive, clean collection of 3D CAD models of objects, composed of 9843 training models and 2468 testing models in 40 classes. We convert each model mesh to 8196 uniform points by Poisson disk sampling (Yuksel, 2015). We take the 9843 models from the training set as the CAD models in our method. **ScanNet** contains 1603 scans, with 1201 scans for training, 312 for validation and 100 for testing. The 100 testing scans are used for the benchmark, and their
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline Method & AmAP & chair & table & bed & sink & bathtub & door & curtain & desk & bookshelf & sofa & toilet \\ \hline Baseline & 10.93 & 8.09 & 10.60 & 15.97 & 2.53 & 12.40 & 9.77 & 13.92 & 4.66 & 26.76 & 11.49 & 4.07 \\ ADDA (Teng et al., 2017) & 28.93 & 29.93 & 40.88 & 36.82 & 8.03 & 31.60 & 13.59 & 25.10 & 20.1 & 35.49 & 33.90 & 42.92 \\ ADDA (Teng et al., 2017) & 38.14 & 6.703 & **48.60** & 29.77 & 20.36 & 29.70 & 12.04 & 20.36 & 26.57 & 42.69 & 51.41 & 68.31 \\ PointMN (Oin et al., 2019) & 32.92 & 58.31 & 40.93 & 20.96 & 12.83 & 31.65 & 11.79 & 17.65 & 26.04 & 48.31 & 41.25 & 53.18 \\ J3DtMM (Wang et al., 2021) & 42.25 & 63.34 & 41.24 & 43.58 & 26.53 & 46.61 & 15.52 & 44.45 & 26.75 & 45.49 & 54.49 & **76.81** \\ Ours & **46.68** & **67.19** & 46.73 & **45.65** & **31.28** & 43.36 & **17.47** & **34.43** & **29.15** & **59.37** & **60.48** & 71.77 \\ \hline Supervised & 78.98 & 91.44 & 72.89 & 76.35 & 38.94 & 84.84 & 84.77 & 74.22 & 72.96 & 75.76 & 84.81 & 93.10 \\ Supervised-ours & 79.46 & 92.16 & 74.37 & 76.50 & 81.88 & 85.09 & 56.57 & 70.68 & 73.54 & 77.03 & 87.44 & 95.52 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation on the ScanNet. MinkUNet is the baseline method. We apply point-wise adaptation in ADDA and instance adaptation in ADDA\(\dagger\). 'Supervised' indicates training with the point-wise annotations.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Method & AmAP & chair & bookshelf & sofa & table \\ \hline Baseline & 15.65 & 3.75 & 37.72 & 13.34 & 7.80 \\ ADDA (Teng et al., 2017) & 26.57 & 26.67 & 54.70 & 15.91 & 9.00 \\ ADDA\(\dagger\)(Teng et al., 2017) & 30.04 & 57.80 & 54.70 & 22.91 & 8.70 \\ PointDAN (Qin et al., 2019) & 39.80 & 56.60 & 52.34 & 32.06 & 18.20 \\
3DIoUMatch (Wang et al., 2021) & 49.60 & 69.07 & 56.87 & **54.25** & 18.21 \\ Ours & **55.49** & **70.86** & **62.68** & 47.87 & **40.57** \\ \hline Supervised & 90.44 & 97.05 & 87.84 & 86.83 & 90.06 \\ Supervised+ours & 92.57 & 97.83 & 89.86 & 92.05 & 90.52 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation on the S3DIS area 5 test. MinkUNet is the baseline method. We apply point-wise feature adaptation in ADDA and instance adaptation in ADDA\(\dagger\). 'Supervised' indicates training with annotated scenes.
labels are inaccessible. There are 11 classes identical to the ModelNet40 dataset, including the chair, table, desk, bed, bookshelf, sofa, sink, bathtub, toilet, door, and curtain. We take the 1201 training scans to mix up with the synthetic models for training (labels are not used), and the remaining 312 scans are used to evaluate the performance. **S3DIS** consists of 271 point cloud scenes across six areas for indoor scene semantic segmentation. There are 13 categories in the point-wise annotations, of which four are identical to the ModelNet40 dataset: the chair, table, bookcase (bookshelf) and sofa. We utilize Area 5 as the validation set and use the other five areas as the training set (labels are not used), the same as in previous works (Li et al., 2018; Qi et al., 2017; Jiang et al., 2021).
### Evaluation Metric
For 3D object saliency detection, the goal is to detect the objects (point clouds) that belong to the same classes as the CAD models. Therefore, we calculate class-specific point-wise probabilities on the scene scan and adopt the mean Average Precision (mAP) to measure the performance for each class. AmAP is the average mAP over all classes. We believe AmAP is more suitable than mIoU because the foreground object categories occupy only a small proportion of a whole scene.
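For reference, a simple sketch of this metric, assuming per-point class probabilities and using scikit-learn's average precision, is shown below. Details of the evaluation protocol (e.g., how scores are aggregated across scans) are assumptions here and may differ from the exact procedure behind the reported numbers.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def amap(point_probs, point_labels, foreground_classes):
    """Average of per-class mean Average Precision over point-wise scores.
    point_probs: (N, C) class probabilities for every point of a scan,
    point_labels: (N,) ground-truth class ids."""
    per_class = []
    for c in foreground_classes:
        y_true = (point_labels == c).astype(int)
        if y_true.sum() == 0:          # class absent from this scan
            continue
        per_class.append(average_precision_score(y_true, point_probs[:, c]))
    return float(np.mean(per_class))

# Toy usage with random scores for 4 foreground classes.
rng = np.random.default_rng(0)
probs = rng.random((5000, 11)); labels = rng.integers(0, 11, 5000)
print(amap(probs, labels, foreground_classes=range(4)))
```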
### Implementation Details
We adopt MinkowskiNet14 (Choy et al., 2019) as the backbone to extract the point-wise features; the feature dimension \(D\) is thus set to 96. The key \(\theta(\cdot)\) and query \(\varphi(\cdot)\) functions are linear transformations that output 16-dimensional vectors. The voxel size in all experiments is set to 5 cm for efficient training. Our method is built on the PyTorch platform and optimized by Adam with the default configuration. The batch sizes for ModelNet, ScanNet and S3DIS are 4 * (\(Q\) + 1), 4 and 4, respectively, indicating that one scan is mixed up with (\(Q\) + 1) synthetic models, where \(Q\) is the number of identical classes between the two datasets and one model serves as the negative sample. Since there is no colour in the synthetic models, we set the features in the ScanNet and S3DIS datasets to a fixed tensor (1), identical to that in ModelNet. Training for 200 epochs costs 15 hours on two RTX 2080 Ti GPUs. During training, we randomly rotate the models and scans around the z-axis, randomly scale the model and scene with a scaling factor of 0.9-1.1 and randomly displace the model's location within the scene. If there are overlapped points, we randomly filter or keep them. We take all identical classes as foreground classes to evaluate the performance and utilize the remaining classes as negative samples for contrastive learning. More details are in the supplementary materials.
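As an illustration of the 5 cm voxelization applied before the sparse backbone, a NumPy-only sketch is given below. The actual pipeline uses MinkowskiNet's sparse-tensor machinery, so this snippet only mirrors the quantization step and the constant input feature; the function name is our own.

```python
import numpy as np

def voxelize(points, features, voxel_size=0.05):
    """Keep one point per occupied 5 cm voxel (first point encountered),
    mirroring the quantization applied before the sparse 3D backbone."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    # Unique voxel ids; 'index' picks one representative point per voxel.
    _, index = np.unique(coords, axis=0, return_index=True)
    return coords[index], points[index], features[index]

# Toy usage: a scan of 100k points with the constant feature "1" used in the paper.
pts = np.random.default_rng(0).uniform(0, 5, size=(100_000, 3))
feat = np.ones((100_000, 1), dtype=np.float32)
voxel_coords, voxel_pts, voxel_feat = voxelize(pts, feat)
```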
### Results and Discussions
In this section, we report the performance of three downstream tasks: 1. label-free 3D object salient detection; 2. label-efficient 3D perception; and 3. zero-shot 3D semantic segmentation.
Figure 4: Visualization of 3D object saliency detection on the ScanNet dataset. We show the pairs of ground truth (left) and the inferring results (right). From up to down are chair, bookshelf, sofa and table, respectively.
#### 3.4.1 Label-free 3D Object Salient Detection
**Baselines** To the best of our knowledge, no deep learning-based methods have investigated this problem. Therefore, we build a baseline method (Baseline in Table 1 and Table 2) without Crowded Scene Simulation (CSS) and Deep Convex-hull Regularization (DCR). Specifically, we first resize the CAD models to the same scale as the scene's objects, then extract the point features for individual models and classify the points with the model labels. Besides, to verify the superiority of Deep Convex-hull Regularization, we compare it with a semi-supervised method (3DIoUMatch (Wang et al., 2021)) and two unsupervised domain adaptation methods (ADDA (Tzeng et al., 2017) and PointDAN (Qin et al., 2019)). Specifically, the simulated crowded scene is regarded as labelled data in 3DIoUMatch, and as the source domain in ADDA and PointDAN. Note that to adapt 3DIoUMatch to the semantic segmentation task, we calculate the mask IoU instead of the bounding box IoU.
**Results** As shown in Tables 1 and 2, our method achieves 46.08% and 55.49% AmAP on the ScanNet and S3DIS datasets, respectively, which significantly outperforms the other methods. The large gain over the baseline indicates the effectiveness of Model2Scene. Furthermore, the Deep Convex-hull Regularization is verified to be effective as it achieves better performance than 3DIoUMatch, ADDA and PointDAN. We also show the performance when training on the annotated scene scans (Supervised). When training on both CAD models and annotated scene scans (Supervised+Ours), the performance is higher than when training on the ground truth alone. The qualitative evaluation is shown in Fig. 4. More results are in the supplementary materials.
#### 3.4.2 Label-efficient 3D Perception
The learned 3D scene representation is beneficial for downstream tasks. We fine-tune the network with different proportions of labelled scans for semantic segmentation on the ScanNet dataset. For the S3DIS dataset, we evaluate two backbone networks (MinkowskiNet14 and MinkowskiNet34) for semantic segmentation. As shown in Table 3, our method outperforms PointContrast (Xie et al., 2020). Note that we compare with the original version of PointContrast without leveraging additional models. Compared with the purely supervised counterparts, a significant improvement can be observed on the two datasets for both seen and unseen categories (not shown on synthetic models). Besides, our method is also beneficial for the object detection task (Table 4). More experimental results are in the supplementary materials.
#### 3.4.3 Zero-shot 3D Semantic Segmentation
As the point features are aligned with language embeddings, the network has zero-shot capability. We evaluate the performance on the ScanNet dataset. Seen classes are the classes shared by the ScanNet and ModelNet datasets, while unseen classes are the rest of the classes in the ScanNet dataset. The results are shown in Table 5.
\begin{table}
\begin{tabular}{l|c c} \hline Model & \multicolumn{2}{c}{[email protected]} & [email protected] \\ \hline Trained from scratch & 31.82 & 53.39 \\ PointContrast (Xie et al., 2020) & 34.30(2.48) & 55.56(2.17) \\ Ours & **34.51(2.69)** & **55.60(2.21)** \\ \hline \end{tabular}
\end{table}
Table 4: Fine-tuning on ScanNet for 3D object detection. The number in () denotes the improved accuracy compared with fully supervised training.
\begin{table}
\begin{tabular}{l|c c c|c c} \hline Model & \multicolumn{3}{c}{Scannet} & \multicolumn{3}{c}{S3DIS} \\ \cline{2-5} & 5\% data & 10\% data & 100\% data & MinkNet14 & MinkNet34 \\ \hline Trained from scratch & 50.24 & 54.86 & 63.05 & 56.44 & 58.63 \\ PointContrast (Xie et al., 2020) & 55.31(5.07) & 58.68(3.82) & 65.03(1.98) & 58.65(2.21) & 60.71(2.08) \\ Ours & **56.46(6.22)** & **59.17(4.31)** & **65.14(2.09)** & **58.94(2.50)** & **60.79(2.16)** \\ \hline \end{tabular}
\end{table}
Table 3: Fine-tuning on the ScanNet and S3DIS datasets for the semantic segmentation task. The number in () denotes the improved accuracy compared with purely supervised training.
\begin{table}
\begin{tabular}{l|c c c} \hline Model & All classes & Seen classes & Unseen classes \\ \hline Ours & **13.41** & **21.01** & **4.11** \\ \hline \end{tabular}
\end{table}
Table 5: Zero-shot semantic segmentation on ScanNet. mIoU is the metric.
### Ablation study
We evaluate the performance of 3D object saliency detection on ScanNet to verify the effectiveness of different modules, including Crowded Scene Simulation (CSS) and Deep Convex-hull Regularization (DCR). In the following, we present the configuration details and give more insights into what factors affect the performance.
**Effect of Crowded Scene Simulation** Base+CSS denotes the baseline with Crowded Scene Simulation (CSS). Comparing (Base, Base+CSS), AmAP is improved by 31.26%. We dig into CSS by exploring the following configurations. Firstly, we investigate how data augmentation (DA), including random scaling and rotation, influences performance. These operations cover the diversity of the scene's objects. As shown in Table 6, the performance is greatly reduced without DA (Base+CSS\({}_{noDA}\) (23.44 AmAP) vs. Base+CSS (42.19 AmAP)). Besides, we find that random cropping is beneficial because the real scene's objects are often partially scanned (Base+CSS\({}_{megaMooCo}\) (37.83 AmAP) is without random cropping). Secondly, to explore how to construct negative samples, we try taking the scene's points as the negative samples for contrastive learning (Base+CSS\({}_{megaSc}\)). We find that the performance (23.04 AmAP) is significantly worse than Base+CSS (42.19 AmAP), which uses the CAD models as negative samples. It is probably because the network learns artefacts to distinguish the CAD models from the scene. The artefacts are mainly caused by the mixing-up operation, such as overlapped/disjointed points. As a result, the network cannot generalize the knowledge to a clean scene without such artefacts. Lastly, to understand the role of mixing with the scene, we only mix up the CAD models together and exclude the scene points (Base+CSS\({}_{megaMoo}\)). Surprisingly, the performance is comparable with the counterpart Base+CSS (41.34 AmAP vs. 42.19 AmAP).
**Effect of Feature Alignment** Since feature domain gaps exist between the CAD models and the objects in a real scan, we use the prototypes to align their features into a unified feature space (Base+CSS+DCR). The experiment shows that the improvement is about 4% AmAP (Base+CSS vs. Base+CSS+DCR\({}_{K128}\)). The inverse temperature \(\lambda\) and the number of prototypes \(K\) are two hyper-parameters of the feature alignment module. \(\lambda\) controls the smoothness of the coefficient distribution and can be regarded as a regularization term to prevent network degradation. We present the results when \(\lambda\) is 0.1, 0.5 and 4, respectively (Base+CSS+DCR\({}_{T0.1}\), Base+CSS+DCR\({}_{K128}\) and Base+CSS+DCR\({}_{T4}\)). We choose \(\lambda\) to be 0.5 empirically. The number of prototypes \(K\) is another hyper-parameter. We evaluate it with the configurations Base+CSS+DCR\({}_{K64}\), Base+CSS+DCR\({}_{K128}\), and Base+CSS+DCR\({}_{K256}\). The network achieves the best performance when \(K\) is set to 128. Besides, we show the result when the key \(\theta(\cdot)\) and query \(\varphi(\cdot)\) functions are identity mappings in the setting Base+CSS+DCR\({}_{cos}\). The performance is slightly worse than the full method Base+CSS+DCR\({}_{K128}\).
## 4 Conclusion
We propose Model2Scene to investigate the problem of learning 3D scene representation from CAD models and languages. Notably, we propose a novel Deep Convex-hull Regularization to handle the model-to-scene and synthetic-to-real domain gaps. Extensive experiments show that the learned 3D scene representation benefits downstream task performance, including label-free 3D salient object detection, label-efficient 3D perception and zero-shot 3D semantic segmentation.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline Method & AmAP & chair & table & bed & sink & bathtub & door & curtain & desk & bookshelf & sofa & toilet \\ \hline Base & 10.93 & 8.09 & 10.60 & 15.97 & 25.33 & 12.40 & 9.77 & 13.92 & 4.66 & 26.76 & 11.49 & 4.07 \\ \hline Base+CSS\({}_{noDA}\) & 23.44 & 36.90 & 35.57 & 26.30 & 11.98 & 17.29 & 11.01 & 21.07 & 11.58 & 25.03 & 16.40 & 44.66 \\ Base+CSS\({}_{megaSc}\) & 23.04 & 41.74 & 25.05 & 21.92 & 7.86 & 23.01 & 13.88 & 21.93 & 14.79 & 23.80 & 13.08 & 46.39 \\ Base+CSS\({}_{megaSc}\) & 41.34 & **37.12** & 40.05 & 42.73 & 29.17 & 19.97 & 17.52 & 23.33 & 33.40 & 50.34 & 59.36 & 65.71 \\ Base+CSS\({}_{megaSc}\) & 37.83 & 72.62 & 39.91 & 37.49 & 14.95 & 16.93 & 18.33 & 21.29 & 32.84 & 46.09 & 59.44 & 56.53 \\ Base+CSS & 42.19 & 67.53 & 47.64 & 41.82 & 27.01 & 32.47 & 13.68 & 27.82 & 26.78 & 50.54 & 58.75 & 70.07 \\ \hline Base+CSS+DCR\({}_{k64}\) & 43.16 & 67.37 & **48.27** & 41.51 & 20.38 & 33.90 & 16.30 & 31.55 & 27.17 & 52.32 & **61.19** & 74.84 \\ Base+CSS+DCR\({}_{k25}\) & **46.08** & 67.19 & 46.73 & 46.56 & **31.28** & 43.36 & 17.47 & 34.43 & 29.15 & **59.37** & 60.48 & 71.77 \\ Base+CSS+DCR\({}_{k256}\) & 43.81 & 68.15 & 45.12 & **51.80** & 20.12 & 38.14 & 17.10 & 25.64 & **33.99** & 48.23 & 59.32 & 70.55 \\ Base+CSS+DCR\({}_{74}\) & 44.51 & 69.05 & 47.28 & 49.72 & 18.26 & **45.30** & **18.46** & 28.83 & 26.16 & 49.94 & 57.23 & **79.40** \\ Base+CSS+DCR\({}_{74}\) & 43.79 & 70.47 & 44.44 & 42.66 & 20.95 & 39.57 & 16.00 & 29.92 & 27.65 & 52.04 & 59.14 & 79.20 \\ Base+CSS+DCR\({}_{cos}\) & 43.45 & 69.16 & 45.84 & 42.53 & 21.47 & 37.38 & 15.36 & 29.20 & 29.06 & 51.72 & 59.16 & 77.06 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation experiments. MinkUNet is the baseline method. CSS and DCR denote Crowded Scene Simulation and Deep Convex-hull Regularization, respectively. |
2305.19768 | Numerical Scattering Amplitudes with pySecDec | We present a major update of the program pySecDec, a toolbox for the
evaluation of dimensionally regulated parameter integrals. The new version
enables the evaluation of multi-loop integrals as well as amplitudes in a
highly distributed and flexible way, optionally on GPUs. The program has been
optimised and runs up to an order of magnitude faster than the previous
release. A new integration procedure that utilises construction-free median
Quasi-Monte Carlo rules is implemented. The median lattice rules can outperform
our previous component-by-component rules by a factor of 5 and remove the
limitation on the maximum number of sampling points. The expansion by regions
procedures have been extended to support Feynman integrals with numerators, and
functions for automatically determining when and how analytic regulators should
be introduced are now available. The new features and performance are
illustrated with several examples. | G. Heinrich, S. P. Jones, M. Kerner, V. Magerya, A. Olsson, J. Schlenk | 2023-05-31T11:58:35Z | http://arxiv.org/abs/2305.19768v2 | # Numerical Scattering Amplitudes with pySecDec
###### Abstract
We present a major update of the program pySecDec, a toolbox for the evaluation of dimensionally regulated parameter integrals. The new version enables the evaluation of multi-loop integrals as well as amplitudes in a highly distributed and flexible way, optionally on GPUs. The program has been optimised and runs up to an order of magnitude faster than the previous release. A new integration procedure that utilises construction-free median Quasi-Monte Carlo rules is implemented. The median lattice rules can outperform our previous component-by-component rules by a factor of 5 and remove the limitation on the maximum number of sampling points. The expansion by regions procedures have been extended to support Feynman integrals with numerators, and functions for automatically determining when and how analytic regulators should be introduced are now available. The new features and performance are illustrated with several examples.
keywords: Perturbation theory, Feynman diagrams, scattering amplitudes, multi-loop, numerical integration +
Footnote β : journal:
**PROGRAM SUMMARY**
_Manuscript Title:_ Numerical Scattering Amplitudes with pySecDec
_Authors:_ G. Heinrich, S. P. Jones, M. Kerner, V. Magerya, A. Olsson, J. Schlenk _Program Title:_ pySecDec
_Developer's repository:_[https://github.com/gudrunhe/secdec](https://github.com/gudrunhe/secdec)
_Online documentation:_[https://secdec.readthedocs.io](https://secdec.readthedocs.io)
_Licensing provisions: GNU Public License v3_
_Programming language:_ Python, Form, C++, Cuda
_Computer:_ from a single PC/Laptop to a cluster, depending on the problem; if the optional GPU support is used, Cuda compatible hardware is required.
_Operating system:_ Unix, Linux
_RAM:_ hundreds of megabytes or more, depending on the complexity of the problem
_Keywords:_ Perturbation theory, Feynman diagrams, scattering amplitudes, multi-loop, numerical integration
_Classification:_ 4.4 Feynman diagrams, 5 Computer Algebra, 11.1 General, High Energy Physics and Computing.
_External routines/libraries:_ GSL [1], NumPy[2], SymPy[3], Nauty[4], Cuba[5], Form[6], GiNAC and CLN [7], Normaliz[8], GMP [9].
_Journal reference of previous version:_ Comput. Phys. Commun. 273 (2022) 108267 [1].
_Does the new version supersede the previous version?:_ yes
_Nature of the problem:_
Scattering amplitudes at higher orders in perturbation theory are typically represented as a linear combination of coefficients - containing the kinematic invariants and the space-time dimension - multiplied with loop integrals which contain singularities and whose analytic representation might be unknown.
_Solution method:_
Extraction of singularities in the dimensional regularization parameter as well as in analytic regulators for potential spurious singularities is done using sector decomposition. The combined evaluation of the integrals with their coefficients is performed in an efficient way.
_Restrictions:_ Depending on the complexity of the problem, limited by memory and CPU/GPU time.
_Running time:_ Between a few seconds and several days, depending on the complexity of the problem.
_References:_
[1] M. Galassi et al, GNU Scientific Library Reference Manual. ISBN:0954612078, [http://www.gnu.org/software/gsl/](http://www.gnu.org/software/gsl/).
[2] C. R. Harris, K. J. Millman, S. J. van der Walt, et al, Array programming with NumPy, Nature **585** (2020) 357-362. doi:10.1038/s41586-020-2649-2, [http://www.numpy.org/](http://www.numpy.org/).
[3] A. Meurer, et al., SymPy: symbolic computing in Python, PeerJ Comp. Sci. **3** (2017) e103. doi:10.7717/peerj-cs.103, [http://www.sympy.org/](http://www.sympy.org/).
[4] B. D. McKay and A. Piperno, Practical graph isomorphism, II, J. Symb. Comput. **60** (2014) 94-112. doi:10.1016/j.jsc.2013.09.003, [http://pallini.di.uniroma1.it](http://pallini.di.uniroma1.it).
[5] T. Hahn, CUBA: A Library for multidimensional numerical integration, Comput. Phys. Commun. **168** (2005) 78. arXiv:hep-ph/0404043, [http://www.feynarts.de/cuba/](http://www.feynarts.de/cuba/).
* [6] J. Kuipers, T. Ueda and J. A. M. Vermaseren, Code Optimization in FORM, Comput. Phys. Commun. **189** (2015) 1. arXiv:1310.7087, [http://www.nikhef.nl/~form/](http://www.nikhef.nl/~form/).
* [7] C. W. Bauer, A. Frink, and R. B. Kreckel, Introduction to the GiNaC framework for symbolic computation within the C++ programming language, J. Symb. Comput. **33** (2002) 1-12. arXiv:cs/0004015, [https://www.ginac.de/](https://www.ginac.de/).
* [8] W. Bruns, B. Ichim, T. Römer and C. Söger, Normaliz. Algorithms for rational cones and affine monoids. [http://www.math.uos.de/normaliz/](http://www.math.uos.de/normaliz/).
* [9] T. Granlund et al, GMP: The GNU Multiple Precision Arithmetic Library. [https://gmplib.org/](https://gmplib.org/).
## 1 Introduction
The calculation of scattering amplitudes beyond one loop is required in order to provide predictions for the increasingly precise measurements at the LHC, at B-factories and at other colliders. Furthermore, future lepton colliders require substantial progress in the calculation of higher order electroweak corrections, which usually involve several mass scales. The latter pose challenges for the evaluation of the corresponding integrals, in particular for analytic approaches. The program (py)SecDec[2, 3, 4, 5] offers the possibility to calculate multi-scale integrals beyond one loop numerically. Other public programs for the numerical evaluation of multi-loop integrals based on sector decomposition within dimensional regularisation [6, 7] are sector_decomposition[8] and Fiesta[9, 10, 11, 12, 13]. The program Feyntrop[14] provides a numerical approach for evaluating quasi-finite Feynman integrals using tropical sampling [15]. Other analytic/semi-analytic approaches include DiffExp[16, 17], AMFlow[18] and SeaSyde[19] which calculate Feynman integrals by solving differential equations using series expansions.
The program pySecDec has been upgraded recently with the ability to perform expansions by regions [1], a method pioneered in Refs. [20, 21, 22, 23]. Ref. [1] also describes an early implementation of an algorithm for efficiently calculating the weighted sum of integrals.
In this paper, we present pySecDec version 1.6, which is a major upgrade in several respects. One of the main changes is the fact that much more general coefficients of integrals than previously allowed are now supported. This feature is important for the calculation of amplitudes in a form resulting from IBP reduction, where the coefficients of the master integrals are usually sums of large rational polynomials containing kinematic invariants and the space-time dimension \(D\). Furthermore, various changes in the code structure and numerical evaluation lead to a significant speed-up of the numerical
evaluation. We present a new Quasi-Monte-Carlo (QMC) evaluator, called Disteval, which is optimised for a highly distributed evaluation. Another major improvement is achieved by the use of median generating vectors for the rank-1 lattice rules the QMC integration is based on. In addition, the feature of expansion by regions has been upgraded. For example, the program can automatically detect whether a regulator in addition to the dimensional regulator is needed in certain regions. In addition, the algebraic expressions multiplying each order of the expansion in a small parameter are provided to the user.
This article is structured as follows. In Section 2 the new features of version 1.6 are described. In Section 3 we present examples which demonstrate the usage of the program and the new features, as well as timings comparing previous pySecDec versions to the current version. Conclusions are presented in Section 4.
The release version of the code is available at [https://pypi.org/project/pySecDec/](https://pypi.org/project/pySecDec/) and can be obtained via pip. The development version lives at [https://github.com/gudrunhe/secdec](https://github.com/gudrunhe/secdec). Online documentation can be found at [https://secdec.readthedocs.io/](https://secdec.readthedocs.io/).
## 2 New features of pySecDec
The main new features of pySecDec version 1.6 are a new integrator/importance sampling procedure (Disteval), support for construction-free median quasi-Monte Carlo rules and improved support for expansion by regions.
The Disteval integrator is presented in Section 2.1; it implements a newly constructed Quasi-Monte-Carlo (QMC) integrator and is significantly faster and more configurable than our previous integrators. The Disteval integrator also comes with much better support for inputting complicated coefficients of the master integrals, including sums of rational functions resulting from the IBP reduction of amplitudes.
In Section 2.2, we describe our implementation and benchmarking of construction-free median quasi-Monte Carlo rules, based on Ref. [24]. The median lattice rules are made available by default in the Qmc and the Disteval integrators.
Improvements to the expansion by regions routines are described in Section 2.3. The new version of pySecDec supports Feynman integrals with numerators and provides functions for determining where an additional extra regulator, in addition to dimensional regularisation, is needed.
### The new Quasi-Monte-Carlo evaluator Disteval
pySecDec traditionally comes with support for multiple integrators: Qmc based on the Qmc library [5]; Vegas, Suave, Divonne, and Cuhre based on the Cuba library [25]; CQuad based on the GSL library [26]. Out of these we have recommended the usage of the Qmc integrator as the only one that achieves super-linear scaling of the integration precision with integration time for practical multidimensional integrals. All of these six integrators are available through a unified integration interface we shall call "IntLib" (for lack of a better name).
With the new version of pySecDec we introduce a new integration interface and an integrator "Disteval". Disteval implements a Randomized Quasi-Monte-Carlo (RQMC) integration method based on rank-1 shifted lattice rules [27, 28]. It is directly analogous to the IntLib Qmc integrator, but with significantly higher performance, and the possibility of evaluation distributed across several computers. As with Qmc, Disteval supports both CPUs and GPUs, with the latter ones being preferred due to their speed.
In Section 3 we provide a series of benchmarks demonstrating the speedup Disteval provides over Qmc (usually between 3x and 10x) across a variety of integrals, on both CPUs and GPUs.
There are multiple sources of this speedup:
* While IntLib integrands are compiled separately from the integration algorithms and are called indirectly by the integrators, Disteval integrands fully include the integration loop. This enables the hoisting of common code out of the integration loop, the fusion of the lattice point generation and the integrand evaluation, and multiple micro-optimizations by the compiler. This however comes at the expense of flexibility in choosing integrators.
* The code for GPU integrands and CPU integrands is generated separately, allowing separate optimizations to be applied to each.
* On the GPU side Disteval uses the highly optimized NVidia CUB library ([https://github.com/NVIDIA/cub](https://github.com/NVIDIA/cub)) to sum up the samples on the GPU (instead of performing the sum on the CPU), minimizing the data transfer between CPU and GPU.
* Modern CPUs are capable of executing multiple independent instructions in parallel. For example, an AMD Epyc 7F32 processor contains four floating-point execution units: two capable of performing one 256-bit Fused Multiply-Add (FMA) operation per cycle each, and two capable of one 256-bit addition operation per cycle each, for a total of 16 double-precision (i.e. 64-bit) operations per cycle. Saturating these execution units with work is essential for achieving optimal performance, and the best way to do that is to structure the code to operate on multiple values at the same time, packing 64-bit double-precision values into 256-bit arrays and utilizing SIMD ("Single Instruction Multiple Data") instructions that operate on the whole array at once. The integrand kernels pySecDec generates for Disteval do exactly this: each mathematical operation is coded to work on 4 double-precision values simultaneously, and if the compiler is allowed to emit 256-bit SIMD instructions (i.e. via the AVX2 and FMA instruction sets on x86 processors), each such operation becomes a single instruction. Note that while all modern x86 processors support AVX2 and FMA, some older ones do not, and because of this Disteval does not require their support. It is up to the user to check whether all their target machines have this support, e.g. by checking for the presence of the avx2 and fma flags in /proc/cpuinfo (see the sketch after this list), and if so, to allow the compiler to use these instruction sets by e.g. setting CXXFLAGS to -mavx2 -mfma during compilation (see e.g. [https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html](https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html) for a description of machine-specific options of GCC). This is highly recommended. Users that plan to perform integration on a single machine are advised to set CXXFLAGS to -march=native, so that the compiler is allowed to auto-detect the capabilities of the processor it is running on and use all the available instruction sets.
* Multiple smaller micro-optimizations on the CPU and the GPU sides to reduce the overhead for smaller integrands, and to speed up larger ones.
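The check mentioned above can be automated; the following is a minimal sketch (not part of pySecDec) that inspects /proc/cpuinfo on Linux for the avx2 and fma flags and prints the corresponding compiler flags if they can safely be enabled:

```python
# Minimal sketch (not part of pySecDec): decide whether -mavx2 -mfma can be
# enabled on this machine by inspecting the CPU flags in /proc/cpuinfo (Linux).
def supports_avx2_fma(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    return {"avx2", "fma"} <= flags
    except OSError:
        pass
    return False

if __name__ == "__main__":
    # Print the suggested CXXFLAGS addition, or nothing if unsupported.
    print("-mavx2 -mfma" if supports_avx2_fma() else "")
```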
#### 2.1.1 Using Disteval
Usage-wise, Disteval diverges from IntLib during compilation and integration, but it is similar enough that porting integration scripts should be easy.
As an example, let us consider a massless one-loop box. To generate the integration library for both integration interfaces, one can use the following Python script:
```python
import pySecDec as psd

if __name__ == "__main__":
    li = psd.LoopIntegralFromPropagators(
        loop_momenta=["l"],
        external_momenta=["p1", "p2", "p3"],
        propagators=["l**2", "(l-p1)**2", "(l-p1-p2)**2", "(l-p1-p2-p3)**2"],
        replacement_rules=[
            ("p1*p1", "0"),
            ("p2*p2", "0"),
            ("p3*p3", "0"),
            ("p1*p2", "s/2"),
            ("p2*p3", "t/2"),
            ("p1*p3", "-s/2-t/2")])
    psd.loop_package(
        name="box1L",
        loop_integral=li,
        real_parameters=["s", "t"],
        requested_orders=[0])
```
Then, to compile the IntLib library one can invoke make from the command shell:
```
make -C box1L -j4
```
Similarly, one can compile the Disteval flavour of the library with the same makefile; as noted earlier, adding CXXFLAGS="-mavx2 -mfma" to the make call is recommended.
If one wants to use the resulting library on a GPU with "compute capability" 8.0, one should add SECDEC_WITH_CUDA_FLAGS="-arch=sm_80" to the arguments of the make call.6 For IntLib this will build a library that can only be used on the GPU; for Disteval the resulting library will be able to work with and without a GPU.
Footnote 6: The list of NVidia βCompute Capabilityβ codes for different GPUs is available at [https://developer.nvidia.com/cuda-gpus](https://developer.nvidia.com/cuda-gpus).
To integrate using IntLib one can use the Python interface:
```python
from pySecDec.integral_interface import IntegralLibrary

lib = IntegralLibrary("box1L/box1L_pylink.so")
lib.use_Qmc()
_, _, result = lib(real_parameters=[4.0, -0.75], epsrel=1e-3, epsabs=1e-8)
print(result)
```
Similarly, to integrate using Disteval one can use the Python interface:
```python
from pySecDec.integral_interface import DistevalLibrary

lib = DistevalLibrary("box1L/disteval/box1L.json")
result = lib(parameters={"s": 4.0, "t": -0.75}, epsrel=1e-3, epsabs=1e-8)
print(result)
```
Alternatively, one can also use the new command-line interface:
```
python3 -m pySecDec.disteval box1L/disteval/box1L.json \
    s=4 t=-0.75 --epsrel=1e-3 --epsabs=1e-8
```
#### 2.1.2 Distributed evaluation
The integrand evaluation under Disteval is performed by worker processes, while the main process is responsible for distributing work among the workers and processing the results. Communication between the main and the worker processes is done via bidirectional bytestreams (i.e. pipes), using a custom json-based protocol, which means that the workers do not need to be located on the same machine as the main process.
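As a rough illustration of this main/worker pattern, the sketch below shows a main process exchanging json messages with a worker subprocess over pipes. It is a generic sketch only; the message format is invented for illustration and is not the protocol used by pySecDec.

```python
import json
import subprocess

# A toy worker that reads json requests from stdin and answers on stdout.
worker_code = (
    "import sys, json\n"
    "for line in sys.stdin:\n"
    "    req = json.loads(line)\n"
    "    print(json.dumps({'id': req['id'], 'sum': sum(req['values'])}), flush=True)\n"
)
worker = subprocess.Popen(["python3", "-c", worker_code],
                          stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

worker.stdin.write(json.dumps({"id": 1, "values": [0.125, 0.25, 0.5]}) + "\n")
worker.stdin.flush()
print(json.loads(worker.stdout.readline()))  # {'id': 1, 'sum': 0.875}
worker.stdin.close()
worker.wait()
```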
By default, the Python interface of Disteval will launch one worker process per locally available GPU, or one per locally available CPU. Each CPU worker is launched with the command
```
python3 -m pySecDecContrib pysecdec_cpuworker
```
and each GPU worker is launched with the command
```
python3 -m pySecDecContrib pysecdec_cudaworker -d <i>
```
where <i> is the (zero-based) index of the GPU this worker should use.
The default worker selection however can be overridden through the workers argument of DistevalLibrary to allow execution on different machines. For example, suppose that the integration is to be spread across two machines: gpu1 with a single GPU, and gpu2 with two GPUs; if both machines are reachable via ssh, then one could set up the integration library as follows:
```python
lib = DistevalLibrary(
    "box1L/disteval/box1L.json",
    workers=[
        "ssh gpu1 python3 -m pySecDecContrib pysecdec_cudaworker -d 0",
        "ssh gpu2 python3 -m pySecDecContrib pysecdec_cudaworker -d 0",
        "ssh gpu2 python3 -m pySecDecContrib pysecdec_cudaworker -d 1"
    ])
```
#### 2.1.3 Adaptive weighted sum evaluation
Since pySecDec version 1.5 IntLib supports adaptive integration of weighted sums of integrals (e.g. amplitudes) via the sum_package() function. Additionally, versions of loop_package() and make_package() implemented in terms of sum_package() have been added. Disteval implements a very similar adaptive sampling algorithm.
Suppose we have a set of integrals \(I_{i}\), and we want to calculate a set of their weighted sums \(A_{k}\equiv\sum_{i}C_{ki}I_{i}\). When evaluated under RQMC, each \(I_{i}\) can be thought of as a normally distributed random variable,
\[I_{i}\sim\mathcal{N}(\text{mean}(I_{i}),\text{var}(I_{i})). \tag{1}\]
Let us assume that it takes \(\tau_{i}\) of time to evaluate the integrand of \(I_{i}\) once, and that \(\text{var}(I_{i})\) scales with the number of integrand evaluations \(n_{i}\) (a.k.a. the size of the lattice on which the integrand is evaluated) as
\[\text{var}(I_{i})=\frac{w_{i}}{n_{i}^{\alpha}}. \tag{2}\]
Our objective then is to choose \(n_{i}\) as functions of \(C_{ki}\), \(w_{i}\), \(\tau_{i}\), and \(\alpha\), to minimize the total integration time
\[T\equiv\sum_{i}\tau_{i}n_{i}, \tag{3}\]
while achieving the total variance \(V_{k}\) requested by the user:
\[\text{var}(A_{k})=\sum_{i}\left|C_{ki}\right|^{2}\frac{w_{i}}{n_{i}^{\alpha}}=V_{k}\;\left(\forall k\right). \tag{4}\]
We solve this optimization problem via the Lagrange multiplier method:
\[L\equiv T+\sum_{k}\lambda_{k}\left(\text{var}(A_{k})-V_{k}\right),\qquad\text {and}\qquad\frac{\partial L}{\partial\left\{n_{i},\lambda_{k}\right\}}=0. \tag{5}\]
If only one sum \(A_{k}\) needs to be evaluated, then these equations have a closed-form solution:
\[\lambda_{k}=\frac{1}{\alpha}\left(\frac{1}{V_{k}}\sum_{i}\left(\left|C_{ki}\right|^{2}w_{i}\tau_{i}^{\alpha}\right)^{\frac{1}{\alpha+1}}\right)^{\frac{\alpha+1}{\alpha}},\qquad n_{i}=\left(\frac{\alpha w_{i}}{\tau_{i}}\lambda_{k}\left|C_{ki}\right|^{2}\right)^{\frac{1}{\alpha+1}}. \tag{6}\]
If multiple sums are requested, Disteval uses this formula first for the first sum, then updates \(n_{i}\) and applies it to the next sum, and so on.
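The closed-form solution is simple enough to be written down directly. The following sketch implements Eq. (6) for a single weighted sum; it is an illustration of the formula under the stated model, not the pySecDec implementation, and the numbers in the example call are made up.

```python
import numpy as np

def lattice_sizes_single_sum(C, w, tau, V, alpha=2.0):
    # Eq. (6): minimise the total time sum_i tau_i * n_i subject to
    # var(A) = sum_i |C_i|^2 * w_i / n_i**alpha = V for one weighted sum.
    C, w, tau = (np.asarray(x, dtype=float) for x in (C, w, tau))
    s = np.sum((np.abs(C)**2 * w * tau**alpha) ** (1.0 / (alpha + 1.0)))
    lam = (1.0 / alpha) * (s / V) ** ((alpha + 1.0) / alpha)
    return (alpha * w / tau * lam * np.abs(C)**2) ** (1.0 / (alpha + 1.0))

# Illustrative numbers: three integrals with different weights, evaluation
# costs (seconds per sample), and a target variance for the sum.
n = lattice_sizes_single_sum(C=[1.0, 0.5, 2.0], w=[1e-2, 5e-3, 2e-2],
                             tau=[1e-6, 2e-6, 4e-6], V=1e-12)
print(n)  # suggested number of samples per integral
```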
To make this work in practice, Disteval needs to estimate the integral evaluation speed \(\tau_{i}\), convergence constants \(w_{i}\), and the power \(\alpha\). The evaluation speed \(\tau_{i}\) is estimated on-line, by first benchmarking the relative performance of each worker, and then by tracking how fast a given integral is being evaluated on a given worker. The convergence constants \(w_{i}\) are first estimated by evaluating all integrals with some preset minimum lattice size (\(10^{4}\) by default), and then updated each time an integration result is obtained. The parameter \(\alpha\) is chosen conservatively to be \(2\), which is the minimum asymptotic scaling guaranteed by the use of QMC methods (for some examples see Figure 7 where \(\alpha\approx 3\), and Figure 9 where \(\alpha\approx 2\)).
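For instance, inverting Eq. (2) after an integration on a lattice of size \(n_i\) gives an updated estimate of the convergence constant. The helper below is a trivial sketch shown only to make the bookkeeping explicit; it is not pySecDec code.

```python
def update_convergence_constant(variance, n, alpha=2.0):
    # Invert Eq. (2): w_i = var(I_i) * n_i**alpha for the observed variance.
    return variance * n**alpha
```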
Here it is important to note that the scaling law of Eq. (2) is only asymptotic. In practice the usage of rank-1 lattice rules means that for each lattice size \(n_{i}\) we must construct a completely new lattice, and often larger \(n_{i}\) results in a larger error, instead of a smaller one - a phenomenon which we call _unlucky lattices_.
As an illustration, consider Figure 1: although the variance overall scales as \(1/n^{3}\) (and thus the error as \(1/n^{1.5}\)), the progression is not monotonic, and one particularly unlucky lattice results in an integration error more than four orders of magnitude worse than lattices of similar size around it - but only for one of the integrals, for the other the same lattice gives a perfectly good result.
This scaling structure makes the integration times inherently unpredictable: if during the integration an integral is evaluated on an unlucky lattice, then Disteval will overestimate the integral's \(w_{i}\) parameter, and will assume that many more samples of this integral are needed to achieve the requested precision, wasting integration time. The practical impact of this is usually low to moderate, unless one encounters a very unlucky lattice such as the one marked with a star in Figure 1. To some extent, this effect can be tamed by the _median QMC rules_, introduced in the following section.
### Median quasi-Monte Carlo rules
The quasi-Monte Carlo integration in previous versions of pySecDec was based on pre-computed generating vectors, provided with the Qmc library [5]. These generating vectors were constructed using the component-by-component (CBC) method [29], minimizing the worst-case error of the QMC integration, assuming arbitrary integrands belong to a Korobov space with smoothness \(\alpha=2\) and using product weights.
However, for a given integrand, a lattice of size \(n\) based on the above CBC construction might not be the optimal choice, resulting in the unlucky lattices illustrated in the previous section. Furthermore, if the requested precision of the integral can not be achieved with the largest lattices provided
Figure 1: The RQMC integration error (i.e. \(\sqrt{\mathrm{var}(I_{i})/m}\)) after \(m=32\) repetitions for lattices of different sizes. The integrals are sectors of the elliptic2L_physical example from Section 3.2. The lattices are taken from the QMC library, and are the same for both integrals. The result of one particularly unlucky lattice is marked with a star.
with the Qmc library (currently \(\sim 7\cdot 10^{10}\) sampling points), the error can only be improved by repeated sampling of this lattice with random shifts, resulting in a \(n^{-1/2}\) scaling of the integration error.
An alternative to the CBC construction has been presented in Ref. [24], based on the observation that most generating vectors are good choices, provided the components are chosen from the set
\[\mathbb{U}_{n}=\{1\leq z\leq n-1\,|\gcd(z,n)=1\}. \tag{7}\]
For \(r\) randomly selected generating vectors \(\mathbf{z}_{1},\ldots,\mathbf{z}_{r}\) satisfying this condition, it has been shown that using the median
\[M_{n,r}(f)=\text{median}(Q_{n,\mathbf{z}_{1}}(f),\ldots,Q_{n,\mathbf{z}_{r}} (f)) \tag{8}\]
as an integral estimate results in the same convergence rate as the CBC construction with high probability. Here, \(Q_{n,\mathbf{z}}(f)\) is the estimate for the integral of \(f\), obtained using the rank-1 lattice rule with generating vector \(\mathbf{z}\).
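The following sketch illustrates the median rule of Eq. (8) for a toy two-dimensional integrand. It is an illustration of the method only, not the pySecDec implementation (which applies the rule to the sector integrands and combines it with random shifts for the error estimate).

```python
import math
import random
import numpy as np

def rank1_estimate(f, n, z, shift):
    # Shifted rank-1 lattice rule Q_{n,z}(f) on the unit hypercube.
    i = np.arange(n)[:, None]
    x = np.mod(i * np.asarray(z)[None, :] / n + np.asarray(shift)[None, :], 1.0)
    return np.mean(f(x))

def median_qmc(f, n, dim, r=11, rng=random.Random(1)):
    # Median QMC rule: median over r generating vectors whose components are
    # drawn at random from U_n = {1 <= z <= n-1 | gcd(z, n) = 1}.
    units = [z for z in range(1, n) if math.gcd(z, n) == 1]
    estimates = []
    for _ in range(r):
        z = [rng.choice(units) for _ in range(dim)]
        shift = [rng.random() for _ in range(dim)]
        estimates.append(rank1_estimate(f, n, z, shift))
    return float(np.median(estimates))

# Toy example: integrate f(x, y) = x*y over the unit square (exact value 1/4).
print(median_qmc(lambda x: x[:, 0] * x[:, 1], n=1021, dim=2))
```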
In pySecDec we now provide the possibility for an automated construction of generating vectors following this method. It can be enabled with the option lattice_candidates, which specifies the number \(r\) of randomly chosen generating vectors. With the default setting lattice_candidates=0, only the pre-computed generating vectors based on CBC construction are used. After selecting the generating vector according to the median QMC rule, the uncertainty of the integration is then obtained by sampling the integrand on \(m\) different random shifts of this lattice, as in previous versions of pySecDec. Using this method, the construction of lattices of arbitrary size \(n\) is possible, and since the generating vectors are chosen individually
Figure 2: Integration time of the elliptic2L_physical example from Section 3.2 using the median QMC rule with \(r=7\) and \(r=11\) compared to the integration using CBC construction of the generating vectors.
for each integrand, the problems due to unlucky lattices become less pronounced. A disadvantage of the median QMC rule is that, compared to the CBC construction, an additional \(r\) samples of the integral are required, besides the \(m\) samples required to estimate the integral uncertainty. We also tested applying the generating vectors obtained using the median QMC rules to different integrals, avoiding the construction of new generating vectors for each integrand. However, we find that this would typically lead to larger integration times.
We typically find that the integration time using the median QMC rule is either comparable to or shorter than the integration time using the generating vectors based on the CBC construction, as shown in Figure 2. The usage of the median QMC rule is demonstrated in the example elliptic2L_physical.
### Extra regulators for Expansion by Regions
When expansion by regions is applied to a well-defined dimensionally regulated integral, new spurious singularities may be introduced which are not regulated by the original regulator. It is possible to detect geometrically which integrals will become ill-defined after expansion [1].
One way to handle the new singularities is to generalise the definition of the integral by adding new analytic regulators, \(\delta_{1},\ldots,\delta_{N}\). Commonly, this is done for Feynman integrals by altering the power of Feynman propagators according to \((\nu_{1}\to\nu_{1}+\delta_{1},\ldots,\nu_{N}\to\nu_{N}+\delta_{N})\), or, in the Feynman parametrisation, by multiplying the integrand by \(x_{1}^{\delta_{1}}\cdots x_{N}^{\delta_{N}}\), where \(x_{i}\) are Feynman parameters. Introducing \(N\) independent new regulators can dramatically increase the complexity of the problem and is often unnecessary. Using the algorithms described in Ref. [1], several new routines for detecting and handling spurious divergences have been added to pySecDec, focusing on Feynman (loop) integrals.
The loop_regions function now accepts the argument extra_regulator_name. If a string or symbol is passed to this argument, pySecDec automatically determines if an extra regulator is required and, if so, introduces a single new regulator. The integrand is multiplied by \(\mathbf{x}^{\boldsymbol{\nu}_{\delta}}\) where \(\delta\) is the extra regulator and \(\boldsymbol{\nu}_{\delta}\) is a vector of integers automatically chosen such that the integral becomes well-defined. Alternatively, the user may pass a specific \(\boldsymbol{\nu}_{\delta}\) as a list of integers or sympy rationals via the argument extra_regulator_exponent.
The function suggested_extra_regulator_exponent, which the user can call independently of loop_regions, automatically determines a vector of integers \(\boldsymbol{\nu}_{\delta}\) sufficient to make a loop integral well-defined. Given a loop_integral object and the parameter in which it should be expanded, smallness_parameter, the function returns \(\boldsymbol{\nu}_{\delta}\). There is considerable freedom in choosing the entries of \(\boldsymbol{\nu}_{\delta}\). The only important property is that its entries must obey a set of
inequalities which ensure it is not tangent to any of the hyperplanes spanned by the set of new (internal) facets, introduced by the expansion, which lead to spurious singularities. The suggested_extra_regulator_exponent function returns only one choice for \(\mathbf{\nu}_{\delta}\); it obeys the additional constraint \(\sum_{i}\mathbf{\nu}_{\delta,i}=0\), which ensures that the new regulator does not appear in the power of the \(\mathcal{U}\) or \(\mathcal{F}\) polynomials.
The function extra_regulator_constraints provides the list of constraints which must be obeyed by the entries of \(\mathbf{\nu}_{\delta}\) for it to regulate the new singularities. The user may call this function independently, for example, if they wish to impose additional constraints on the analytic regulators or if they want to understand the regions giving rise to spurious singularities and how they cancel. The function returns a dictionary of regions and constraints that must be obeyed in order to obtain regulated integrals, along with a complete list of all constraints (the all entry). Each set of constraints is provided as an array, each row of which can be interpreted as the elements of a vector \(\mathbf{n}_{f}\) normal to an internal facet, \(f\), which gives rise to a spurious singularity. The integral is regulated by any vector \(\mathbf{\nu}_{\delta}\) which obeys \(\langle\mathbf{n}_{f},\mathbf{\nu}_{\delta}\rangle\neq 0\ \forall f\).
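For illustration, the condition \(\langle\mathbf{n}_{f},\mathbf{\nu}_{\delta}\rangle\neq 0\ \forall f\) can be checked with a few lines of code. The facet normals and candidate exponent vectors below are hypothetical and serve only to show the check; this helper is not part of the pySecDec API.

```python
import numpy as np

def regulates(nu_delta, constraint_rows):
    # True if <n_f, nu_delta> != 0 for every facet normal n_f
    # (rows of the constraint array described above).
    rows = np.atleast_2d(np.asarray(constraint_rows, dtype=int))
    return bool(np.all(rows @ np.asarray(nu_delta, dtype=int) != 0))

# Hypothetical facet normals and candidate exponent vectors obeying
# sum_i nu_delta_i = 0:
facet_normals = [[1, 0, -1, 0], [0, 1, 0, -1]]
print(regulates([2, -1, -1, 0], facet_normals))   # True: regulates both facets
print(regulates([1, -1, 1, -1], facet_normals))   # False: orthogonal to the first normal
```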
The example region_tools demonstrates the use of each of the above functions on a 1-loop box integral with an internal mass.
### New functionalities for coefficients of master integrals
To evaluate one or several weighted sums of integrals pySecDec provides the function sum_package() that takes a list of integrals \(I_{i}\), and a matrix of coefficients \(C_{ki}\) (given as a list of its rows), so that in the end the weighted sums \(A_{k}\equiv\sum_{i}C_{ki}I_{i}\) are evaluated. In version 1.5 the coefficients were required to be instances of the class Coefficient, and to be specified as a product of polynomials.
The new version of pySecDec now additionally supports more flexible ways to specify the coefficient matrix.
1. The coefficients themselves can now be arbitrary arithmetic expressions provided as strings. pySecDec now uses GiNAC[30] to parse these strings, so any syntax recognized by GiNaC is supported. The coefficient strings themselves are subsequently used in two ways: first during the integral library generation (i.e. inside sum_package()) pySecDec will try to determine the leading poles of the coefficients in the regulators, which is needed to determine the number of orders the integrals will need to be expanded to. Second, the strings will be saved to files as they are, and loaded back during the evaluation, at which point the symbolic variables will be substituted by the values provided by the user, and the resulting expressions will be expanded
into a series in the regulators. This evaluation will be performed using arbitrary precision rational numbers so that no precision can be lost to numeric cancellations. This design was chosen to support expressions that are too big to be compiled to machine code or to be symbolically manipulated in non-trivial ways, such as coefficients arising after integration-by-parts reduction.
2. Each row of the coefficient matrix can be given either as a list of the same size as the number of integrals, or as a dictionary from integral indices to coefficients. For example, ["a", "0", "b"] and {0: "a", 2: "b"} are now both valid ways to specify the same coefficient matrix row; the second way makes it easier to supply sparse matrices because zero coefficients can be omitted.
3. Each weighted sum can now be given a name. To this end, the coefficient matrix can be specified not as a list of rows, but rather as a dictionary from sum names (i.e. strings) to coefficient matrix rows. The supplied names are then used by Disteval in the integration log, and in its results, which can optionally be structured as dictionaries from the sum name to their values. The goal is to make it easier to work with multiple sums at the same time.
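As a compact illustration of these conventions (the coefficient expressions below are made up; only the shapes of the data structures follow the description above):

```python
# Dense specification: one row per weighted sum, one entry per integral.
coefficients_as_lists = [
    ["2*s/(3-2*eps)", "0", "(s*t - msq**2)/eps"],
]

# Sparse specification with named sums: zero coefficients are simply omitted,
# and the dictionary key becomes the name used by Disteval in logs and results.
coefficients_as_dicts = {
    "A_finite": {0: "2*s/(3-2*eps)", 2: "(s*t - msq**2)/eps"},
}
```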
## 3 Usage examples and comparison to the previous version
The examples described below can be found in the folder examples/ of the pySecDec distribution. Unless stated otherwise, the default settings are used.
### New and featured examples
We begin by describing the new examples introduced for the current release. These examples are primarily designed to demonstrate some of the new features. In Section 3.1.2 we demonstrate the flexible input syntax for amplitudes and in Section 3.1.3 we show how individual coefficients of the smallness parameter can be accessed when using expansion by regions. The remaining examples demonstrate the performance of the Disteval and IntLib integrators.
#### 3.1.1 Simple jupyter notebook examples
The folder examples/jupyter/ contains three examples in the form of a jupyter notebook where the whole workflow is demonstrated. These examples are
easy.ipynb: an easy function depending on two parameters;
box.ipynb: a one-loop box diagram with massive propagators;
muon_production.ipynb: the one-loop amplitude for \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) in massless QED.
Two of the examples are also available without jupyter format, in the folders examples/easy/ and examples/muon_production/, respectively.
#### 3.1.2 One-loop amplitude for \(e^{+}e^{-}\to\mu^{+}\mu^{-}\)
The example muon_production calculates the one-loop amplitude for muon production in electron-positron annihilation, \(e^{+}e^{-}\to\mu^{+}\mu^{-}\), with massless leptons in QED. It evaluates a set of scalar master integrals and combines the results with the corresponding integral coefficients. The generation of the amplitude and the Passarino-Veltman reduction of the contributing integrals was done with FeynCalc[31]. This example is meant to highlight the improved handling of integral coefficients that increases the practicality of using pySecDec for full amplitude calculations.
The pySecDec result for the Born-virtual interference, proportional to \(\alpha^{3}\), where \(\alpha\) is the QED fine structure constant, at \(s=3.0\), \(t=-1.0\), \(u=-2.0\) (subject to the physical constraint \(s+t+u=0\)) reads7
Footnote 7: Here and throughout the paper the numbers in the parentheses indicate the uncertainty of the final digits. For example, \(1.2345(67)\) means \(1.2345\pm 0.0067\).
\[\begin{split}\mathcal{A}^{(1)}{\mathcal{A}^{(0)}}^{*}&=+(-8.704559922781777(7)\cdot 10^{4}+7(5)\cdot 10^{-11}\,i)\cdot\varepsilon^{-2}\\ &+(+6.1407633077(4)\cdot 10^{4}-2.73461815073(4)\cdot 10^{5}\,i)\cdot\varepsilon^{-1}\\ &+(+3.45368951804(8)\cdot 10^{5}+3.98348633939(8)\cdot 10^{5}\,i)\\ &+N_{f}\big{[}-2.9015199742604458(3)\cdot 10^{4}\cdot\varepsilon^{-1}\\ &\qquad\qquad+3.574514829439898(2)\cdot 10^{4}\\ &\qquad\qquad-9.1153938353806605(8)\cdot 10^{4}\,i\big{]}+\mathcal{O}(\varepsilon)\;,\end{split} \tag{9}\]
where \(N_{f}\) is the number of lepton flavours. The result for the full amplitude has been validated with FeynCalc[31]. Since the building blocks of this reduced amplitude are only massless integrals, the integration time for one phase space point at the accuracy seen above is of the order of seconds.
#### 3.1.3 Example from 2-loop muon decay with asymptotic expansion
The example muon_decay2L demonstrates the possibility to produce Python output for each coefficient of an expansion in the smallness parameter within expansion by regions. The diagram in Figure 3 is expanded in the limit of small \(\tau\)-mass up to order 1, which generates terms with four different powers of \(m_{\tau}^{2}\): 0, 1, \(1-\varepsilon\) and \(1-2\varepsilon\). The result for this diagram reads
\[+(m_{\tau}^{2})^{0}\big{[} +(-3.5405(3)+3.0600(3)\,i)\big{]} \tag{10}\] \[+(m_{\tau}^{2})^{1}\big{[} +(-4.93694(1)\cdot 10^{-2}+2.237604(1)\cdot 10^{-1}\,i)\cdot \varepsilon^{-2}\] \[+(-5.0284(3)\cdot 10^{-1}-8.7869(4)\cdot 10^{-1}\,i)\cdot \varepsilon^{-1}\] \[+(+2.6476(2)-1.2090(2)\,i)\big{]}\] \[+(m_{\tau}^{2})^{1-\varepsilon}\big{[} +(+9.873891(4)\cdot 10^{-2}-4.4752040(5)\cdot 10^{-1}\,i)\cdot \varepsilon^{-2}\] \[+(+2.14031(6)\cdot 10^{-1}+1.97854(5)\cdot 10^{-1}\,i)\cdot \varepsilon^{-1}\] \[+(-7.9363(4)\cdot 10^{-1}+4.6866(5)\cdot 10^{-1}\,i)\big{]}\] \[+(m_{\tau}^{2})^{1-2\varepsilon}\big{[} +(-4.93694(1)\cdot 10^{-2}+2.237604(1)\cdot 10^{-1}\,i)\cdot \varepsilon^{-2}\] \[+(+2.8875(3)\cdot 10^{-1}+6.8082(4)\cdot 10^{-1}\,i)\cdot \varepsilon^{-1}\] \[+(+9.855(1)\cdot 10^{-1}+2.5878(2)\,i)\big{]}.\]
To obtain the result in this form - mixing the symbolic prefactors of the form \((m_{\tau}^{2})^{k}\) with numeric coefficients - one can generate the integration libraries as in Figure 4 and use them for integration as in Figure 5. The generation script here is similar to code example 2 in [1].
On line 4 of Figure 4, LoopIntegralFromGraph() is used to define a loop integral. On line 20 this integral is asymptotically expanded in the smallness parameter \(\texttt{mtsq}\equiv m_{\tau}^{2}\) via the loop_regions() function up to order 1. Then, on line 28 the powers of \(m_{\tau}^{2}\) are extracted from the prefactors of the terms of the expansion, and each term has its prefactor modified to no longer include \(m_{\tau}\). On line 31 a mapping between each unique power of the smallness parameter and the corresponding modified terms is added to a dictionary. Note that several terms may be attributed to the same smallness parameter power. The final part of the generation script creates the integral libraries corresponding to each unique power of the smallness parameter via the sum_package() call on line 36. On line 42 the dictionary mapping powers of the smallness
Figure 3: A 2-loop three point integral with three mass scales.
parameter to names of the corresponding integration libraries is saved in a JSON file; this file will later be used by the integration script.
The integration script of Figure 5 demonstrates how the Disteval integrator can be called to produce a result of the form given in Eq. (10). On lines 10 and 11 respectively, each integration library is loaded and called with the kinematic variables \(s=3.0\), \(M_{W}^{2}=0.78\), \(M_{Z}^{2}=1.0\). Some commonly configured parameters are set explicitly in the library call: epsrel is the relative accuracy, points is the initial QMC lattice size, format is the output format of the result ('sympy', 'mathematica', or 'json'), number_of_presamples is the number of samples used for the initial contour deformation parameter selection, and timeout is the maximal allowed integration time in seconds. The full list of parameters is available in the pySecDec documentation on the DistevalLibrary class. The integration script keeps track of which integration library corresponds to which smallness parameter power via the dictionary previously created by the generation script.
#### 3.1.4 2-loop 5-point hexatriangle example with several mass scales
The example hexatriangle is a 2-loop 5-point integral depicted in Figure 6. This is a master integral for the amplitude of \(q\bar{q}\to t\bar{t}H\) production at two loops. The integral is dimensionally shifted to \(6-2\varepsilon\) space-time dimensions; the dimensional shift and additional dots were chosen to make it finite in \(\varepsilon\) and fast to evaluate.
The value of the integral at the point specified in Figure 6 is
\[1.454919812(7)\cdot 10^{-7}-1.069797219(8)\cdot 10^{-7}\,i+\mathcal{O}( \varepsilon). \tag{11}\]
The convergence rate of the integral is depicted in Figure 7. Overall the obtained precision scales with the integration time \(t\) approximately as \(1/t^{1.6}\). We want to emphasise that such scaling is made possible by the use of the QMC integration methods; traditional Monte Carlo methods only scale as fast as \(1/t^{0.5}\).
A more detailed list of integration timings is given in Table 1.
#### 3.1.5 2-loop 5-point offshell pentabox example
The example pentabox_offshell is an integral depicted in Figure 8. It is a 2-loop pentabox with an internal mass, massive legs, and a total of 7 scales. The integral is evaluated in \(6-2\varepsilon\) space-time dimensions (where it is finite in \(\varepsilon\)) up to \(\mathcal{O}(\varepsilon^{4})\); a prefactor of \(\Gamma(2+2\varepsilon)\) is divided out to match the configuration of Section 6.4 of [14], where the same integral is calculated numerically via tropical integration.
```
 1  import pySecDec as psd, sympy as sp, json
 2
 3  if __name__ == "__main__":
 4      li = psd.LoopIntegralFromGraph(
 5          internal_lines=[['mt',[1,4]], ['mw',[4,2]], ['0',[2,3]],
 6                         ['0',[4,5]], ['0',[1,5]], ['mz',[5,3]]],
 7          external_lines=[['p1',1], ['p2',2], ['p3',3]],
 8          regulators=['eps'],
 9          replacement_rules=[
10              ('p1*p1', 's'),
11              ('p2*p2', 0),
12              ('p3*p3', 0),
13              ('p1*p2', 's/2'),
14              ('p1*p3', 's/2'),
15              ('p2*p3', 's/2'),
16              ('mw**2', 'mwsq'),
17              ('mz**2', 'mzsq'),
18              ('mt**2', 'mtsq')])
19
20      terms = psd.loop_regions(name='muon_decay2L',
21          loop_integral=li,
22          smallness_parameter='mtsq',
23          decomposition_method='geometric',
24          expansion_by_regions_order=1)
25      term_by_prefactor_exponent = {}
26      for term in terms:
27          coefficient, exponent = sp.sympify(str(term.prefactor)) \
28              .as_coeff_exponent(sp.sympify('mtsq'))
29          term = term._replace(prefactor=coefficient)
30          term_by_prefactor_exponent.setdefault(str(exponent), [])
31          term_by_prefactor_exponent[str(exponent)].append(term)
32
33      prefactor_exponent_by_name = {}
34      for i, (exponent, term) in enumerate(sorted(term_by_prefactor_exponent.items())):
35          prefactor_exponent_by_name[f'prefactor_{i+1}'] = exponent
36          psd.sum_package(f'prefactor_{i+1}',
37              term,
38              regulators=['eps'],
39              requested_orders=[0],
40              real_parameters=['s', 'mwsq', 'mzsq'])
41      with open('prefactor_exponent_by_name.json', 'w') as f:
42          json.dump(prefactor_exponent_by_name, f)
```
Figure 4: Generation script for the two-loop muon decay example.
```
 1  from pySecDec.integral_interface import DistevalLibrary
 2  import json
 3  import sympy as sp
 4
 5  with open('prefactor_exponent_by_name.json') as f:
 6      prefactor_exponent_by_name = json.load(f)
 7  result_by_prefactor_exponent = {}
 8
 9  for name, exponent in prefactor_exponent_by_name.items():
10      loop_integral = DistevalLibrary(f'{name}/disteval/{name}.json')
11      result_by_prefactor_exponent[exponent] = loop_integral(
12          parameters = {'s': 3, 'mwsq': 0.78, 'mzsq': 1.0},
13          epsrel = 1e-4, points = 1e4, format = 'sympy',
14          number_of_presamples = 1e4, timeout = None)
15
16  print('Result:')
17  for exponent, str_result in result_by_prefactor_exponent.items():
18      result = sp.sympify(str_result)
19      val = result[0].subs({'plusminus': 0})
20      err = result[0].coeff('plusminus')
21      print(f'''
22          + mtsq^({exponent})*(
23              + 1/eps^2*(({val.coeff("eps",-2)}) +/- ({err.coeff("eps",-2)}))
24              + 1/eps^1*(({val.coeff("eps",-1)}) +/- ({err.coeff("eps",-1)}))
25              +  eps^0*(({val.coeff("eps",0)}) +/- ({err.coeff("eps",0)}))
26          )''')
```
Figure 5: Integration script for the two-loop muon decay example.
The value of the integral at the point specified in Figure 8 is
\[\begin{split}&+(+6.443869(7)\cdot 10^{-2}-8.267759(7)\cdot 10^{-2} \,i)\,\varepsilon^{0}\\ &+(+4.043397(2)\cdot 10^{-1}+3.189607(2)\cdot 10^{-1}\,i)\, \varepsilon^{1}\\ &+(-7.771389(2)\cdot 10^{-1}+9.370171(2)\cdot 10^{-1}\,i)\, \varepsilon^{2}\\ &+(-1.3220709(6)\cdot 10^{0}-1.2139678(6)\cdot 10^{0}\,i)\, \varepsilon^{3}\\ &+(+1.3789155(10)\cdot 10^{0}-1.2118956(10)\cdot 10^{0}\,i)\, \varepsilon^{4}\\ &+\mathcal{O}(\varepsilon^{5})\end{split} \tag{12}\]
These values match the ones given in [14] within the uncertainty limits.
The convergence rate of the integral is depicted in Figure 9. Overall the obtained precision scales with the integration time \(t\) approximately as \(1/t\).
A more detailed list of integration timings is given in Table 2.
Figure 7: The obtained precision by integration time for the hexatriangle example. This plot is based on the data from Table 1.
#### 3.1.6 4-loop triangle diagram
The example gminus2_4L is a four-loop diagram contributing to the electron or muon anomalous magnetic moment. The diagram is depicted in Figure 10. The massive lines (coloured in red) denote on-shell massive fermion lines, \(p^{2}=m^{2}\). For the grey external line with momentum \(q\), the limit \(q\to 0\) needs to be taken, such that the diagram is characterised by \(q^{2}=0,q\cdot p=0,p^{2}=m^{2}\). Therefore the corresponding integral becomes a single scale integral, depending only on \(m^{2}\).
The pySecDec result for gminus2_4L reads
\[+2.60420(2)\cdot 10^{-3}\cdot\varepsilon^{-4}\] \[+2.5237(2)\cdot 10^{-2}\cdot\varepsilon^{-3}\] \[+3.8721(4)\cdot 10^{-1}\cdot\varepsilon^{-2} \tag{13}\] \[+3.9116(4)\cdot\varepsilon^{-1}\] \[+39.256(4)+\mathcal{O}(\varepsilon).\]
Figure 9: The obtained precision by integration time for the pentabox_offshell example. This plot is based on the data from Table 2.
#### 3.1.7 6-loop two-point function
The bubble6L example consists of the 6-loop 2-point integral shown in Figure 11. The pole coefficients are given analytically in Eq. (A3) of Ref. [32] (at \(p^{2}=-p_{E}^{2}=-1\), where \(p_{E}\) is the external momentum in Euclidean space). We note that the decomposition method 'geometric' needs to be used, as the method 'iterative' leads to an infinite recursion. The analytic result is given by
\[B_{6L}^{\text{analyt.}} =\frac{1}{\varepsilon^{2}}\,\frac{147}{16}\,\zeta_{7}-\frac{1}{ \varepsilon}\,\left(\frac{147}{16}\,\zeta_{7}+\frac{27}{2}\,\zeta_{3}\zeta_{5 }+\frac{27}{10}\zeta_{3,5}-\frac{2063}{504000}\,\pi^{8}\right)\;+\;\mathcal{O} (\varepsilon^{0})\] \[=\frac{9.264208985946416}{\varepsilon^{2}}+\frac{91.73175282208716} {\varepsilon}\;+\;\mathcal{O}(\varepsilon^{0})\;. \tag{14}\]
The pySecDec result at \(p^{2}=-1\) obtained with the Disteval integrator reads
\[B_{6L}^{\text{num.}}= +9.26420902(3)\cdot\varepsilon^{-2} \tag{15}\] \[+9.17317528(8)\cdot 10^{1}\cdot\varepsilon^{-1}\] \[+1.11860698(1)\cdot 10^{3}+\mathcal{O}(\varepsilon)\;.\]
### Previously existing examples
Several previously existing pySecDec examples, shown in Figure 12, have been benchmarked in [5]. In Table 3 we provide a comparison of the integration
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{2}{c}{Integrator \(\backslash\) Accuracy} & \(10^{-2}\) & \(10^{-3}\) & \(10^{-4}\) & \(10^{-5}\) & \(10^{-6}\) \\ \hline \hline GPU & Disteval & 38 s & 1.1 m & 7.9 m & 1.7 h & 22 h \\ & IntLib & 366 s & 9.3 m & 48.9 m & 9.1 h & 85 h \\ & Ratio & 9.6 & 8.3 & 6.2 & 5.3 & 3.8 \\ \hline CPU & Disteval & 13 s & 2.4 m & 43 m & 7.9 h & -- \\ & IntLib & 67 s & 18.9 m & 299 m & 65.0 h & -- \\ & Ratio & 5.0 & 7.8 & 7.0 & 8.2 & -- \\ \hline \hline \end{tabular}
\end{table}
Table 2: Integration timings for the pentabox_offshell example (Figure 8) depending on the requested accuracy using two integrators: Disteval and IntLib Qmc. Same benchmarking conditions as in Table 1.
Figure 10: A 4-loop diagram with kinematics inspired by contributions to the electron or muon anomalous magnetic moment.
time of those examples using the Disteval integrator (new in v1.6) and the IntLib Qmc integrator (the default of v1.5.6), all on an NVidia A100 80G GPU (using Cuda version 11.8).
The reported integration times correspond to the wall clock times for running the integration via the Python interface of pySecDec. In particular, the numerical integration of _all_ orders in \(\varepsilon\) up to the finite order is included in the timings. The precision refers to the relative error which in this case is defined as \(\epsilon_{\mathrm{rel}}\ =\ \sqrt{\frac{(\Delta R)^{2}+(\Delta I)^{2}}{R^{2}+I^{2}}}\), where \(R\) and \(I\) are the real and imaginary parts of a coefficient in the \(\varepsilon\)-expansion, and \(\Delta R\) and \(\Delta I\) are the corresponding uncertainties. The examples formfactor4L and bubble6L have been calculated using the baker integral transformation; for the other examples the default transformation korobov3 has been used.
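For reference, this definition of the relative error can be written as a one-line helper (a small sketch, not part of the benchmarking scripts):

```python
import math

def eps_rel(re, im, d_re, d_im):
    # Relative error of a complex epsilon-expansion coefficient as defined above.
    return math.sqrt((d_re**2 + d_im**2) / (re**2 + im**2))

# Example: the finite-order coefficient of Eq. (11) and its quoted uncertainties.
print(eps_rel(1.454919812e-7, -1.069797219e-7, 7e-16, 8e-16))
```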
The overall conclusion is that Disteval is 3x-5x faster than IntLib Qmc with equivalent settings on a GPU at higher accuracies, with the exception of the Euclidean integrals bubble6L and formfactor4L: they contain a large number of sectors, each very simple, so that the execution time is mostly dominated by overhead, which Disteval has up to 20x less of.
Of particular note is the benchmark of the elliptic2L_physical example: at the requested precision of \(10^{-8}\), the speedup of Disteval is 0.6, so it is slower than IntLib Qmc. We have investigated this case in detail, and found that two of the most complicated sectors of the example share a particularly unlucky lattice at \(n=4.3\cdot 10^{9}\); this is exactly the lattice depicted with a star in Figure 1. At the requested precision of \(10^{-8}\) Disteval hits this lattice
Figure 11: A 6-loop two-point integral.
Figure 12: All diagrams of Table 3 except for bubble6L, which is described in detail in Section 3.1.7.
\begin{table}
\begin{tabular}{r l r r r r r r r} \hline \multicolumn{2}{c}{Integrator\({}^{\backslash}\)Accuracy} & \multicolumn{1}{c}{\(10^{-2}\)} & \multicolumn{1}{c}{\(10^{-3}\)} & \multicolumn{1}{c}{\(10^{-4}\)} & \multicolumn{1}{c}{\(10^{-5}\)} & \multicolumn{1}{c}{\(10^{-6}\)} & \multicolumn{1}{c}{\(10^{-7}\)} & \multicolumn{1}{c}{\(10^{-8}\)} \\ \hline \hline banana\_3mass & Disteval & 2.1 s & 2.1 s & 2.4 s & 2.6 s & 2.6 s & 2.9 s & 3.6 s \\ & IntLib & 5.0 s & 4.9 s & 6.4 s & 7.2 s & 8.5 s & 8.5 s & 13.8 s \\ & Ratio & 2.3 & 2.3 & 2.7 & 2.7 & 3.2 & 3.0 & 3.9 \\ \hline bubble6L & Disteval & 1.7 m & 1.8 m & 1.8 m & 2.0 m & 3.5 m & 9.5 m & 1.2 h \\ & IntLib & 39.5 m & 38.8 m & 39.6 m & 43.8 m & 85.1 m & 170.7 m & 11.6 h \\ & Ratio & 23 & 22 & 22 & 22 & 24 & 18 & 10 \\ \hline formfactor4L & Disteval & 3.6 m & 3.7 m & 3.7 m & 3.7 m & 6.2 m & 0.21 h & 0.91 h \\ & IntLib & 74.1 m & 72.9 m & 72.8 m & 74.4 m & 135.5 m & 4.1 h & 10.9 h \\ & Ratio & 21 & 20 & 20 & 20 & 22 & 20 & 12 \\ \hline elliptic2L\_physical & Disteval & 1.6 s & 1.5 s & 1.7 s & 1.9 s & 4.0 s & 19 s & 7.6 m \\ & IntLib & 3.1 s & 4.8 s & 4.9 s & 7.3 s & 13.8 s & 53 s & 4.3 m \\ & Ratio & 1.9 & 3.1 & 2.8 & 3.9 & 3.4 & 2.9 & 0.6 \\ \hline hz2L\_nonplanar & Disteval & 5 s & 5 s & 9 s & 37 s & 2.3 m & 5.4 m & 27.1 m \\ & IntLib & 9 s & 17 s & 41 s & 163 s & 9.6 m & 16.0 m & 27.3 m \\ & Ratio & 1.8 & 3.4 & 4.6 & 4.4 & 4.2 & 3.0 & 1.0 \\ \hline Nbox2L\_split\_b & Disteval & 8 s & 16 s & 23 s & 40 s & 2.4 m & 9.1 m & 19.9 m \\ & IntLib & 24 s & 73 s & 223 s & 6.6 m & 25.6 m & 43.3 m & 92.8 m \\ & Ratio & 3.0 & 4.6 & 9.7 & 9.9 & 10.5 & 4.8 & 4.7 \\ \hline pentabox\_fin & Disteval & 5 s & 8 s & 11 s & 0.71 m & 3.7 m & 18.5 m & 1.1 h \\ & IntLib & 45 s & 65 s & 88 s & 3.2 m & 11.3 m & 74.8 m & 4.6 h \\ & Ratio & 8.6 & 7.9 & 7.7 & 4.5 & 3.1 & 4.0 & 4.2 \\ \hline \end{tabular}
\end{table}
Table 3: Integration timings on a GPU for different examples using the IntLib Qmc integrator and the new Disteval integrator.
Figure 13: Convergence rates of the Disteval timings from Table 3
with two sectors simultaneously, concludes that they converge very badly, and compensates by adding many more samples. IntLib on the other hand does not hit this particular lattice because its algorithm of selecting \(n_{i}\) differs just slightly enough to land on a nearby lattice instead.
## 4 Conclusions
We have presented version 1.6 of pySecDec, featuring a major upgrade that makes it suitable for the evaluation of loop amplitudes through a novel, highly distributed Quasi-Monte-Carlo (QMC) evaluation method. Compared to the previous version, the virtues of the new method applied to individual multi-loop integrals are particularly manifest for multi-scale integrals and when high precision is requested. Very importantly, the calculation of _amplitudes_ rather than individual integrals is facilitated. This is achieved through several improvements, for instance, new functionalities to treat the coefficients of master integrals, which are typically large expressions after IBP reduction. Furthermore, amplitudes are calculated as weighted sums of integrals with coefficients, with an overall precision goal that can be specified by the user. A new integrator based on median QMC rules avoids the limitations of the component-by-component construction of generating vectors for lattice rules. It also remedies the intermediate loss of QMC-typical scaling that has been observed for some fixed individual lattices.
The release contains improvements to the expansion by regions functionality, including the treatment of integrals with numerators within expansion by regions and the automated detection of whether and where additional regulators are needed, making this information completely transparent to the user. The coefficients of each order of the expansion in the small parameter are now also easily accessible to the user.
With these new features pySecDec is significantly faster, more flexible, and easier to use than previous versions. It is better equipped to analyse and tackle a wide range of problems including previously intractable multi-loop amplitudes needed for precision phenomenology, problems requiring multiple dimensional regulators, and integrals/amplitudes where higher numerical precision than previously possible is required.
## Acknowledgements
We would like to thank Goutam Das, Joshua Davies, Christoph Greub, Andrey Pikelner, Vladyslav Shtabovenko and Yannick Ulrich for discussions and raising issues that helped to improve the program. This research was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 - TRR 257. SJ is supported by a Royal Society University Research Fellowship (Grant URF/R1/201268). |
2304.00104 | Hydrogen-triggered X-ray Bursts from SAX J1808.4-3658? The Onset of
Nuclear Burning | We present a study of weak, thermonuclear X-ray bursts from the accreting
millisecond X-ray pulsar SAX J1808.4-3658. We focus on a burst observed with
the Neutron Star Interior Composition Explorer on 2019 August 9, and describe a
similar burst observed with the Rossi X-ray Timing Explorer in 2005 June. These
bursts occurred soon after outburst onset, $2.9$ and $1.1$ days, after the
first indications of fresh accretion. We measure peak burst bolometric fluxes
of $6.98 \pm 0.50 \times 10^{-9}$ and $1.54 \pm 0.10 \times 10^{-8}$ erg
cm$^{-2}$ s$^{-1}$, respectively, which are factors of $\approx 30$ and $15$
less than the peak flux of the brightest, helium-powered bursts observed from
this source. From spectral modeling we estimate accretion rates and accreted
columns at the time of each burst. For the 2019 burst we estimate an accretion
rate of $\dot M \approx 1.4-1.6 \times 10^{-10}$ $M_{\odot}$ yr$^{-1}$, and a
column in the range $3.9-5.1 \times 10^7$ g cm$^{-2}$. For the 2005 event the
accretion rate was similar, but the accreted column was half of that estimated
for the 2019 burst. The low accretion rates, modest columns, and evidence for a
cool neutron star in quiescence, suggest these bursts are triggered by
thermally unstable CNO cycle hydrogen-burning. The post-burst flux level in the
2019 event appears offset from the pre-burst level by an amount consistent with
quasi-stable hydrogen-burning due to the temperature-insensitive, hot-CNO
cycle, further suggesting hydrogen-burning as the primary fuel source. This
provides strong observational evidence for hydrogen-triggered bursts. We
discuss our results in the context of previous theoretical modeling. | Sierra Casten, Tod Strohmayer, Peter Bult | 2023-03-31T19:55:17Z | http://arxiv.org/abs/2304.00104v2 | # Hydrogen-triggered X-ray Bursts from SAX J1808.4-3658? The Onset of Nuclear Burning
###### Abstract
We present a study of weak, thermonuclear X-ray bursts from the accreting millisecond X-ray pulsar SAX J1808.4-3658. We focus on a burst observed with the _Neutron Star Interior Composition Explorer_ on 2019 August 9, and describe a similar burst observed with the _Rossi X-ray Timing Explorer_ in 2005 June. These bursts occurred soon after outburst onset, 2.9 and 1.1 days, after the first indications of fresh accretion. We measure peak burst bolometric fluxes of \(6.98\pm 0.50\times 10^{-9}\) and \(1.54\pm 0.10\times 10^{-8}\) erg cm\({}^{-2}\) s\({}^{-1}\), respectively, which are factors of \(\approx 30\) and 15 less than the peak flux of the brightest, helium-powered bursts observed from this source. From spectral modeling we estimate accretion rates and accreted columns at the time of each burst. For the 2019 burst we estimate an accretion rate of \(\dot{M}\approx 1.4\)-\(1.6\times 10^{-10}\)\(M_{\odot}\) yr\({}^{-1}\), and a column in the range \(3.9\)-\(5.1\times 10^{7}\) g cm\({}^{-2}\). For the 2005 event the accretion rate was similar, but the accreted column was half of that estimated for the 2019 burst. The low accretion rates, modest columns, and evidence for a cool neutron star in quiescence, suggest these bursts are triggered by thermally unstable CNO cycle hydrogen-burning. The post-burst flux level in the 2019 event appears offset from the pre-burst level by an amount consistent with quasi-stable hydrogen-burning due to the temperature-insensitive, hot-CNO cycle, further suggesting hydrogen-burning as the primary fuel source. This provides strong observational evidence for hydrogen-triggered bursts. We discuss our results in the context of previous theoretical modeling.
stars: neutron -- X-rays: binaries -- X-rays: bursts -- X-rays: individual (SAX J1808.4\(-\)3658) -- radiation: dynamics

Sierra Casten, Tod E. Strohmayer, Peter Bult
## 1 Introduction
Thermonuclear (Type I) X-ray bursts occur when an accreted layer of matter on a neutron star undergoes a thermonuclear runaway (Hansen & van Horn, 1975; Strohmayer & Bildsten, 2006; Galloway & Keek, 2021). The thin shell thermal instability that triggers these bursts occurs when the energy generation rate due to nuclear burning exceeds the local cooling rate in the accreted shell.
Depending on the accreted fuel composition there can be a large number of nuclear reactions that contribute to the energy production that powers bursts, however, two reactions are thought to be the primary triggers for the thermal instability. The first is the highly temperature-sensitive triple-\(\alpha\) reaction, which burns three helium nuclei to carbon. The other relevant process is the CNO cycle that burns hydrogen to helium via a series of \((p,\gamma)\) reactions on carbon and nitrogen (Fowler & Hoyle, 1965). Also operating in the cycle are several rate-limiting \(\beta^{+}\) decays dependent on slower weak reaction rates. The cycle is closed by the \((p,\alpha)\) reaction that converts \({}^{15}\)N back to \({}^{12}\)C. This physics leads to two regimes in which the CNO cycle may operate in this context. Above temperatures of about \(8\times 10^{7}\) K, the rate of the so-called hot-CNO cycle is limited by the \(\beta^{+}\) decays that take \({}^{15}\)O and \({}^{14}\)O to \({}^{15}\)N and \({}^{14}\)N, respectively. In this regime the energy generation rate becomes temperature _insensitive_, that is, thermally stable. However, for temperatures below \(\approx 8\times 10^{7}\) K the CNO energy generation rate remains temperature sensitive, so that, in principle, a hydrogen-burning thermal instability can operate if the accreting shell is cool enough.
The implications of these physical processes for the production of X-ray bursts were explored in several early, "classic" papers. Fujimoto et al. (1981) were among the first to delineate the possible bursting regimes based on the mass accretion rate, but see also Joss (1978) and Taam & Picklum (1979). They identified three bursting regimes with decreasing mass accretion rate. At higher accretion rates a helium shell grows via accretion and thermally stable CNO cycle burning that converts some of the accreted hydrogen to helium. The accretion rate is high enough that the base of the shell reaches ignition conditions before all the hydrogen can be burned to helium.
The triple-\(\alpha\) instability is initiated in a shell with a significant hydrogen abundance, leading to so-called mixed H/He bursts. The presence of hydrogen allows for the more complex set of nuclear reactions known as the rapid-proton capture process to occur, which can enhance and delay the energy release, leading to longer duration bursts (Schatz et al., 2001). The well-studied "clocked" bursts from GS 1826\(-\)238 are prototypical of this type (Ubertini et al., 1999; Galloway et al., 2004; Heger et al., 2007; Zamfir et al., 2012). For lower accretion rates there will be a rate such that, prior to ignition, the CNO burning has just had sufficient time to burn all the hydrogen in the accreting shell. At this critical accretion rate the helium burning is initiated in a pure helium shell. These "pure helium" bursts are characterized by a rapid, intense energy release, are typically of shorter duration than the mixed H/He bursts, and often reach the Eddington limit, as exhibited by photospheric radius expansion (PRE). The bright PRE bursts observed from the accreting millisecond X-ray pulsar (AMXP) SAX J1808.4\(-\)3658 are examples of such bursts (Galloway and Cumming, 2006; in't Zand et al., 2013; Bult et al., 2019). At low accretion rates, temperatures in the accumulating shell may be low enough that steady CNO hydrogen-burning essentially switches off, but can proceed in the unstable, temperature-sensitive regime. This can, in principle, lead to unstable ignition of the hydrogen in the accumulating shell. Two possible paths have been discussed in this case. First, ignition of hydrogen raises the temperature in the shell, and if the column depth is large enough the heated shell may cross the helium instability curve, producing a prompt, mixed H/He burst. Second, if the hydrogen ignition depth is too shallow to cross the helium instability curve, then fuel will continue to accumulate until helium ignition can occur. At these low accretion rates it is also likely that gravitational sedimentation of heavier elements relative to hydrogen and helium will play a role in setting the conditions for unstable burning (Peng et al., 2007).
However, observational evidence for this hydrogen ignition regime is limited, as there have been, to our knowledge, few published reports of X-ray bursts that can be clearly attributed to unstable hydrogen shell flashes. In one such case, Boirin et al. (2007) reported on the first observations of triple, short recurrence time (SRT) bursts from the high inclination, eclipsing source EXO 0748\(-\)676. They suggested that the initial bursts of singles, pairs or triples (they call these the long waiting time, LWT, bursts) could be attributed to either helium-triggered, mixed H/He bursts at moderate accretion rates (10% of Eddington), or perhaps hydrogen-triggered bursts at lower accretion rates (1% of Eddington). Because the LWT bursts appeared somewhat under-luminous compared with mixed H/He bursts in 1-d _Kepler_ models and with the well-known example of such bursts from GS 1826\(-\)238, they suggested that this might be explained by the latter, hydrogen-triggered mechanism.
They further suggested that the SRT events, with waiting times close to 10-12 minutes, were likely caused by the re-ignition of unburned fuel, but they did not have a detailed explanation of how this occurs. More recently, Keek and Heger (2017) have outlined a theoretical mechanism to account for SRT bursts. Using detailed, 1-d _Kepler_ hydrodynamic simulations they showed that such events can be produced by opacity-driven convective mixing that transports fresh fuel to the ignition depth, and they also argued that this mechanism can produce simulated burst events that are "strikingly similar" (in their words) to the SRT bursts seen from EXO 0748\(-\)676. If this mechanism is indeed at work, then it would further argue for the higher accretion rate (10% of Eddington), helium-triggered scenario in EXO 0748\(-\)676, as warmer envelopes, naturally produced by higher accretion rates, were required to produce the SRT events in their burst simulations. Moreover, they also showed that the fraction of fuel burned in the LWT events dropped as the envelope became hotter, and this relatively low fuel burning fraction could also naturally explain the apparently under luminous LWT bursts noted by Boirin et al. (2007). Thus, while Boirin et al. (2007) suggest that a hydrogen-triggered mechanism is possible for the LWT bursts from EXO 0748\(-\)676, we would characterize the current, overall evidence in support of that conclusion as tentative, particularly given the remaining uncertainties in the distance and anisotropy factors for this source. Indeed, in support of this we note that in their recent review of the field, Galloway and Keek (2021) also comment that, "No observations matching case I or case II bursting have been identified." Here, cases I and II refer to the two hydrogen ignition paths at low accretion rates that we sketched above.
In this paper we present a study of an apparently rarer class of weak X-ray bursts observed from SAX J1808.4\(-\)3658 (hereafter, J1808) that we argue show the hallmarks of being associated with the hydrogen ignition regime. This object was the first AMXP discovered (Wijnands and van der Klis, 1998; Chakrabarty and Morgan, 1998), and hosts a neutron star in a 2.1 hr orbit with a low-mass brown dwarf (Bildsten and Chakrabarty, 2001). Its distance has been estimated at \(3.5\pm 0.1\) kpc (Galloway and Cumming, 2006), and it is likely that the donor provides a hydrogen-rich mix of matter to the neutron star during outbursts (Galloway and Cumming, 2006; Goodwin et al., 2019). To date, J1808 has been observed extensively during ten outbursts. While it is not our intention here to provide a broad observational overview of the source (for the purposes of this paper we focus on issues relevant to its thermonuclear bursting behavior), readers can find elsewhere some recent studies on coherent pulse timing (Sanna et al., 2017; Bult et al., 2020; Illiano et al., 2022), X-ray spectral properties (Di Salvo et al., 2019), and aperiodic timing behavior (Bult and van der Klis, 2015; Sharma et al., 2022).
Observations of J1808 have revealed two types of thermonuclear bursts that show dramatically different peak fluxes and fluences. The bright PRE bursts (mentioned above) show significantly higher total energy release and peak X-ray flux. The less frequently observed weak bursts produce much less energy and show peak fluxes about a factor of 25 less than the bright events; as such, they are not Eddington-limited. When these weak bursts have been observed, they appear to be confined to the earlier portions of the outbursts, occurring before the bright bursts were seen. This suggests there may be a window of occurrence for these bursts associated with the initial onset of accretion after a period of quiescence. This is particularly intriguing in the context of J1808 because it is known that the neutron star cools dramatically in quiescence (Heinke et al., 2009), and the unstable hydrogen-burning regime requires cooler temperatures in the accumulating layer. There has been more observational and theoretical research exploring the nature of the bright bursts than of the weak class.
Here we present a detailed study of one of these weak bursts that was observed with the _Neutron Star Interior Composition Explorer_ (_NICER_) during the recent, 2019 August outburst from J1808. We also provide a briefer description of a similar burst observed with the _Rossi X-ray Timing Explorer_ (_RXTE_) in 2005 June. The paper is organized as follows. In §2 we introduce the _NICER_ data and present light curves focusing on the initial part of the 2019 outburst, showing a single weak burst. We also present a spectral study of the persistent and burst emission (for the weak burst) in order to understand its energetics and to constrain the mass accretion rate and the likely accreted mass column at the time of its ignition. We present a discussion in §3 of a likely physical scenario that results in the weak burst, arguing that the initial accretion onto a cool neutron star at the onset of the outburst naturally places the accumulating layer in the thermally unstable regime for CNO hydrogen ignition. Here, we also describe the 2005 June _RXTE_ event, and we report a brief summary of _NuSTAR_ observations that began on 2019 August 10 and in which several brighter bursts were detected. We conclude in §4 with a summary, a brief discussion of relevant uncertainties and other possible interpretations, and the outlook for future efforts.
## 2 NICER Observations of J1808
In late July 2019, it was reported that the optical flux from J1808 had increased, perhaps presaging a new X-ray outburst (Russell et al., 2019; Goodwin et al., 2020). This initiated an extensive monitoring campaign with _NICER_, which began on August 1, 2019 (Bult et al., 2020). _NICER_ is an X-ray observatory that operates on the International Space Station (ISS). It observes across the 0.2-12 keV X-ray band and provides low-background, high-throughput (\(\approx 1900\) cm\({}^{2}\) at 1.5 keV), and high time resolution capabilities (Gendreau et al., 2012). The data obtained prior to the onset of the outburst, up to and including the first observed X-ray burst, are organized under observation IDs (OBSIDs) \(205026010mm\) and \(25840101nn\), where \(mm\) and \(nn\) run from 03-10 and 01-02, respectively. We used the standard screening criteria and _NICERDAS_ version 8 to produce cleaned event lists. This means we retained only those epochs during which the pointing offset was \(<54\arcsec\), the Earth elevation angle was \(>15^{\circ}\), the elevation angle with respect to the bright Earth limb was \(>30^{\circ}\), and the instrument was not in the South Atlantic Anomaly (SAA). We used HEASOFT Version 6.29c to produce the light curves and spectra for the analyses reported here. The initial observations of the campaign did not reveal evidence of J1808 in X-ray outburst. The first indication that an accretion-driven flux was present occurred on August 6, 2019 at approximately 21:59 TT (Bult et al., 2019). Figure 1 shows the light curve (0.4-7 keV) of the outburst over approximately 20 days from the observed onset of significant X-ray activity. Time zero in the plot refers to the time of outburst onset, \(58701.91597\) MJD (TT). The two detected X-ray bursts are evident as "spikes" in the count rate near days 3 and 14, respectively. The much brighter second burst (near day 14) was reported on by Bult et al. (2019). Here we focus on a study of the much weaker first burst, which occurred at \(58704.80764\) MJD (TT), and is present in OBSID \(2584010102\).
Figure 1: Light curve of the 2019 July-August outburst of J1808 in the 0.4-7 keV band observed with _NICER_. Count rates were computed in 2 s intervals. Note the logarithmic scale. The first, weaker burst is evident near day 3. The bright burst near day 14 was reported on by Bult et al. (2019).
### Persistent Spectrum, Fluence and Accreted Mass
To explore the weak burst energetics and ignition conditions we aim to constrain the total accreted mass from the beginning of the outburst up to the onset of the first burst. To do this we model the spectrum of the persistent emission to determine its flux and then integrate from the outburst onset to just prior to the burst. This integral provides an estimate of the energy fluence produced via accretion, which can then be converted to an accreted mass using standard assumptions for the accretion luminosity produced by the release of gravitational potential energy of the accreted matter.
In practice we find that the shape of the persistent spectrum gradually changes during this portion of the outburst, with the spectrum showing a modest hardening over time. We therefore measure the flux at a few intervals along the outburst rise, and use these measurements to estimate the flux per unit _NICER_ count rate. We then use simple linear interpolation and the trapezoidal rule to integrate the flux from outburst onset to the first burst to estimate the energy fluence.
The light curve in Figure 2 shows a close-up of the epoch around the first burst. We extracted a spectrum prior to the burst, the "pre-burst" interval (marked by the vertical dashed lines in Figure 2) and modeled its spectrum using XSPEC version 12.12.1 (Arnaud, 1996). We produced response files with _NICERDAS_ version 8, and we used the 3C50 background model, _nibackgen3c50_(Remillard et al., 2022), to produce a background spectrum appropriate for spectral modeling within XSPEC. We employed a phenomenological model similar to that discussed by Patruno et al. (2009), that includes thermal disk, power-law, and blackbody continuum components. In addition, and similarly to Bult et al. (2019), we find evidence for narrow-line emission near 1 keV, and we include a gaussian component to model this. In XSPEC notation the model has the form, _phabs*(diskbb + powerlaw + bbodyrad + gaussian)_, where _phabs_ represents the line of sight photoelectric absorption model parameterized by the column density of neutral hydrogen, \(n_{H}\). This absorption model uses cross sections from Verner et al. (1996) and the chemical abundances from Anders & Grevesse (1989). We fit this model across the 0.5 \(-\) 10 keV bandpass and find that it provides an excellent fit, with a minimum \(\chi^{2}=117.9\) for 112 degrees of freedom. The best-fitting model parameters are given in Table 1, and Figure 3 shows the unfolded photon spectrum (top), the observed count-rate spectrum and best-fitting model (middle), and the fit residuals in units of standard deviations (bottom). This model gives an unabsorbed flux (0.1 \(-\) 20 keV) of \(7.06\pm 0.23\times 10^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\). If we extend the bandpass to estimate a bolometric flux we find a value (0.1 \(-\) 100 keV) of \(7.9\times 10^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\). The average count rate in the fitted energy band (0.5 \(-\) 10 keV) is \(181.9\pm 0.5\) s\({}^{-1}\), so we estimate a flux per NICER count rate (0.5 \(-\) 10 keV) of \(4.34\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) (counts s\({}^{-1}\))\({}^{-1}\) for this interval.
Figure 3: Energy spectrum of the pre-burst interval, modeled with a phenomenological model similar to that employed by Patruno et al. (2009), which includes diskbb, bbodyrad and powerlaw components, in addition to the line at 1 keV. See the text in §2.1 for further details.
Figure 2: Light curve of the first, weak burst from J1808 in the 0.4 - 7 keV band observed with _NICER_. Main panel: The count rates were computed in 1 s intervals, and the vertical dashed and dotted lines denote the intervals used to extract spectra for the pre- and post-burst spectral modeling, respectively. Inset panel: The same data are used, but the time bins are 16 s, and the logarithmic scale highlights the offset in count rate between the pre- and post-burst emission. The dashed red line is a constant value fit to the pre-burst level, and is meant as a guide to the eye.
We extracted spectra from two other OBSIDs along the outburst rise, 2050260109 and 2050260110, and analyzed these spectra in the same manner as for the pre-burst interval just described. Results of these spectral fits are also reported in Table 1. For these intervals we estimate flux per NICER count rate values of \(2.85\times 10^{-12}\) and \(3.61\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\) (counts s\({}^{-1}\))\({}^{-1}\), respectively.
For completeness we make a few additional comments regarding the 1 keV line component included in the spectral model. For the pre-burst interval (OBSID 2584010102), removing the gaussian line results in an increase in \(\chi^{2}\) of 31.3, and the ratio of the line normalization to its \(1\sigma\) uncertainty is \(\approx 4.2\). The line is also evident in OBSID 2050260110, though at lower significance, with the ratio of the line normalization to its \(1\sigma\) uncertainty now at 3.5. For OBSID 2050260109, the spectrum extracted closest to the outburst onset and at the lowest observed flux, we no longer find evidence for the line. When detected the line is narrow in the sense that it is unresolved and we can only place an upper limit on its width of \(\approx 0.09\) keV (\(3\sigma\)). Finally, in this work our primary focus is to model the X-ray spectrum to infer the broadband flux. Excluding the 1 keV line from the spectral fits only changes the inferred flux at the few percent level, so including it, or not, does not significantly alter our inferences regarding the source flux. We elected to include it since doing so provides a better overall statistical description of the data.
To estimate the outburst fluence we use simple linear interpolation between data gaps, and we also apply linear interpolation of the flux per unit count rates, based on the spectral results discussed above. We employ the trapezoidal rule to integrate the total counts. We find a persistent emission energy fluence of \(E_{p}=7.92\times 10^{-5}\) erg cm\({}^{-2}\), representing an estimate of the total energy associated with accretion from the outburst onset up to the initiation of the first observed burst.
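As an illustrative aside (not part of the original analysis pipeline), the integration just described can be sketched in a few lines of Python. The calibration points below are the flux-per-count values quoted in this section; the light-curve samples are hypothetical placeholders, since the cleaned event data are not reproduced here.

```python
import numpy as np

# Flux-per-count calibration from the three spectral fits (Table 1 epochs, days).
cal_t = np.array([0.59, 1.45, 2.89])                 # d since outburst onset
cal_fpc = np.array([2.85e-12, 3.61e-12, 4.34e-12])   # erg cm^-2 per count

# Hypothetical NICER light-curve samples (time in days, 0.5-10 keV count rate).
t = np.array([0.0, 0.59, 1.45, 2.0, 2.89])
rate = np.array([5.0, 13.5, 58.7, 120.0, 181.9])     # counts s^-1

fpc = np.interp(t, cal_t, cal_fpc)                   # linear interpolation of flux per count
flux = rate * fpc                                    # erg cm^-2 s^-1
ts = t * 86400.0                                     # days -> seconds
E_p = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(ts))   # trapezoidal rule

print(f"E_p ~ {E_p:.2e} erg cm^-2")   # the paper quotes 7.92e-5 erg cm^-2 from the full light curve
```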
Assuming the observed, accretion-driven luminosity for spherical accretion,
\[L_{p}=4\pi d^{2}\xi_{p}f_{p}=\frac{z\dot{M}c^{2}}{(1+z)^{3}}\, \tag{1}\]
where \((1+z)=\left(1-2GM/c^{2}R\right)^{-1/2}\), \(\dot{M}\), \(\xi_{p}\), \(f_{p}\), \(M\) and \(R\) are the surface redshift, mass accretion rate (as measured at the neutron star surface), persistent emission anisotropy factor, observed bolometric flux, neutron star mass and radius, respectively. We can use this equation to estimate the total accreted mass required to produce the observed energy fluence (Johnston, 2020). We emphasize that \(L_{p}\) and \(f_{p}\) are the luminosity and flux as measured by an observer far from the neutron star. The anisotropy factor, \(\xi_{p}\), can be thought of as the solid angle into which the radiation is emitted, normalized by \(4\pi\); thus, isotropic emission is characterized by \(\xi_{p}=1\).
\begin{table}
\begin{tabular}{l r r r}
\hline
Parameter & 2584010102 (Pre-burst) & 2050260110 & 2050260109 \\
\hline
\(n_{H}\) (\(10^{22}\) cm\({}^{-2}\)) & \(0.131\pm 0.016\) & \(0.160\pm 0.025\) & \(0.177\pm 0.028\) \\
\(kT_{in}\) (diskbb, keV) & \(0.849\pm 0.026\) & \(0.679\pm 0.024\) & \(0.528\pm 0.017\) \\
Norm (diskbb) & \(27.33\pm 5.36\) & \(25.19\pm 6.46\) & \(10.82\pm 1.79\) \\
\(kT\) (bbodyrad, keV) & \(2.03\pm 0.15\) & \(1.67\pm 0.15\) & \(\cdots\) \\
Norm (bbodyrad) & \(0.98\pm 0.47\) & \(0.604\pm 0.336\) & \(\cdots\) \\
Index (power) & \(1.96\pm 0.37\) & \(2.50\pm 1.05\) & \(2.01\pm 0.23\) \\
Norm (power) & \(0.0266\pm 0.0137\) & \(4.64\times 10^{-3}\) & \(4.23\pm 1.6\times 10^{-3}\) \\
E (gauss, keV) & \(0.990\pm 0.009\) & \(0.935\pm 0.014\) & \(\cdots\) \\
\(\sigma_{E}\) (gauss, keV) & \(0.015\) & \(0.015\) & \(\cdots\) \\
Norm (gauss) & \(8.44\pm 2.03\times 10^{-4}\) & \(3.51\pm 1.00\times 10^{-4}\) & \(\cdots\) \\
\(f_{0.1-20}\) (erg cm\({}^{-2}\) s\({}^{-1}\)) & \(7.06\pm 0.23\times 10^{-10}\) & \(2.08\pm 0.40\times 10^{-10}\) & \(5.35\pm 0.81\times 10^{-11}\) \\
\(f_{bol}\) (erg cm\({}^{-2}\) s\({}^{-1}\)) & \(7.9\times 10^{-10}\) & \(2.5\times 10^{-10}\) & \(6.4\times 10^{-11}\) \\
\(\chi^{2}\) (dof) & 117.9 (112) & 98.9 (97) & 94.5 (106) \\
Rate (s\({}^{-1}\), 0.5-10 keV) & \(181.9\pm 0.4\) & \(58.70\pm 0.25\) & \(13.46\pm 0.07\) \\
Epoch (d) & 2.89 & 1.45 & 0.59 \\
Exposure (s) & 740 & 927 & 2807 \\
\hline
\end{tabular}
Note. Parameter uncertainties are estimated as \(1\sigma\) values. For OBSID 2050260109 the bbodyrad and gauss line components were not required in the fit; the \(\cdots\) symbols indicate that these parameters were not included in the fit. For additional context, the "Epoch" row specifies the center time of the interval in which the spectra were extracted, and the value refers to the time axis of Figure 1.
\end{table}
Table 1: Spectral Model Parameters for SAX J1808: Persistent Emission
We write the accreted mass column in the local neutron star frame as,
\[y_{a}=\frac{M_{a}}{4\pi R^{2}}=\frac{\int\dot{M}(t^{\prime})dt^{\prime}}{4\pi R^{2}}\, \tag{2}\]
where we use \(t^{\prime}\) to emphasize that the \(\dot{M}\) integral is over the time as measured at the neutron star surface. With the use of equation (1) this becomes,
\[y_{a}=\frac{d^{2}\xi_{p}(1+z)^{2}}{zc^{2}R^{2}}\int f_{p}(t)(1+z)dt^{\prime}= \frac{d^{2}\xi_{p}(1+z)^{2}}{zc^{2}R^{2}}E_{p}\, \tag{3}\]
where \(t\) is the time measured in the observer's frame, and we note that \(dt=(1+z)dt^{\prime}\), and thus \(\int f_{p}(t)(1+z)dt^{\prime}=\int f_{p}(t)dt=E_{p}\) is just the observed energy fluence.
If we assume \(d=3.5\) kpc (Galloway & Cumming, 2006), and use \(E_{p}=7.92\times 10^{-5}\) erg cm\({}^{-2}\), we can write the accreted column as,
\[y_{a}=1.03\times 10^{7}\ \left(\frac{\xi_{p}(1+z)^{2}}{zR_{10}^{2}}\right)\ \ \ \mathrm{g\ cm^{-2}}\, \tag{4}\]
where \(R_{10}\) is the neutron star radius in units of 10 km. For \(M=1.4M_{\odot}\), \(R=11\) km, and adopting \(\xi_{p}=1\) we find \(y_{a}=5.12\times 10^{7}\) g cm\({}^{-2}\). With \(M=2.0M_{\odot}\) and \(R=11\) km we find that \(y_{a}\) decreases slightly to \(3.91\times 10^{7}\) g cm\({}^{-2}\).
We can also use equation (1) to estimate the mass accretion rate, \(\dot{M}\), at the time of the weak burst onset. Using the estimated pre-burst flux of \(7.9\times 10^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\), and a distance of 3.5 kpc, we find,
\[\dot{M}=2.03\times 10^{-11}\ \left(\frac{\xi_{p}(1+z)^{3}}{z}\right)\ M_{ \odot}\ \mathrm{yr^{-1}}. \tag{5}\]
Using the same parameter assumptions as above, we find estimates of \(\dot{M}=1.56\times 10^{-10}\) and \(1.38\times 10^{-10}\ M_{\odot}\) yr\({}^{-1}\), respectively.
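The numerical bookkeeping behind equations (1) through (5) is compact enough to verify directly. The short Python sketch below is an illustration only (not from the paper); it uses standard cgs constants together with the distance, fluence, flux, and isotropy assumptions stated above, and reproduces the quoted accreted columns and accretion rates.

```python
import numpy as np

G, c = 6.674e-8, 2.998e10                 # cgs
Msun, kpc, yr = 1.989e33, 3.086e21, 3.156e7

d, xi_p = 3.5 * kpc, 1.0                  # adopted distance and persistent anisotropy
E_p = 7.92e-5                             # persistent fluence, erg cm^-2
f_p = 7.9e-10                             # pre-burst bolometric flux, erg cm^-2 s^-1

def column_and_mdot(M, R):
    opz = 1.0 / np.sqrt(1.0 - 2.0 * G * M / (c**2 * R))           # (1+z)
    z = opz - 1.0
    y_a = d**2 * xi_p * opz**2 / (z * c**2 * R**2) * E_p           # Eq. (3)
    mdot = 4.0 * np.pi * d**2 * xi_p * f_p * opz**3 / (z * c**2)   # from Eq. (1)
    return y_a, mdot * yr / Msun

for M in (1.4, 2.0):
    y_a, mdot = column_and_mdot(M * Msun, 11e5)
    print(f"M = {M} Msun: y_a = {y_a:.2e} g cm^-2, Mdot = {mdot:.2e} Msun/yr")
# -> y_a ~ 5.1e7 and 3.9e7 g cm^-2; Mdot ~ 1.6e-10 and 1.4e-10 Msun/yr
```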
### Burst Spectral Evolution: Peak Flux and Fluence
We first segmented the burst light curve into intervals of approximately 500 counts using a 1/8 s time bin. We modeled the segmented spectra in the 0.5 - 10 keV band by adding a blackbody component to the pre-burst persistent emission model. The parameters of the persistent emission model were frozen to their best-fit values, given in Table 1, only allowing the added blackbody component to vary, so that our model is _phabs*(constant*(diskbb + bbodyrad + powerlaw + gaussian) + bbodyrad)_. We first tried multiplying the persistent emission model by a constant (Worpel et al., 2013), but found it was not statistically necessary, as it was possible to get a good fit with it left frozen at 1.0. The resulting evolution of the bolometric flux, the free parameters of the blackbody temperature and blackbody radius (at 3.5 kpc), along with the resulting \(\chi^{2}\) are shown in Figure 4. We found a peak bolometric burst flux of \(f_{b}=6.98\pm 0.50\times 10^{-9}\) erg s\({}^{-1}\) cm\({}^{-2}\). Using trapezoidal numerical integration of the flux, we calculated a bolometric fluence of \(7.05\pm 1.16\times 10^{-8}\) erg cm\({}^{-2}\). The burst luminosity is defined as \(L_{b}=4\pi d^{2}\xi_{b}f_{b}\), where \(\xi_{b}\) characterizes the anisotropy of the burst emission. Adopting \(\xi_{b}=1\), and with \(d=3.5\) kpc, we can then estimate that the total energy released during the burst was \(1.03\times 10^{38}\) ergs.
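For reference, the conversion from the measured burst fluence and peak flux to luminosity and total energy is a one-line calculation; the illustrative sketch below simply evaluates \(L_{b}=4\pi d^{2}\xi_{b}f_{b}\) with the values quoted above.

```python
import numpy as np

d = 3.5 * 3.086e21        # cm
xi_b = 1.0                # assumed isotropic burst emission
f_peak = 6.98e-9          # peak bolometric flux, erg cm^-2 s^-1
fluence = 7.05e-8         # bolometric fluence, erg cm^-2

L_peak = 4.0 * np.pi * d**2 * xi_b * f_peak
E_burst = 4.0 * np.pi * d**2 * xi_b * fluence
print(f"peak L ~ {L_peak:.2e} erg/s, total E ~ {E_burst:.2e} erg")
# -> ~1.0e37 erg/s and ~1.0e38 erg at 3.5 kpc
```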
## 3 Physical Scenario and Interpretation
Transient systems like J1808 provide an interesting laboratory to explore the different predicted regimes of nuclear burning on neutron stars. Deep X-ray spectroscopic studies of the object in quiescence suggest rapid cooling of the neutron star core, perhaps by a form of enhanced neutrino emission such as Direct Urca (Heinke et al., 2009), which also provides some tentative evidence for a more massive neutron star (\(\gtrsim 2M_{\odot}\)) in the system. Using the surface effective temperature constraints in quiescence from Heinke et al. (2009) and the theoretical results of Potekhin et al. (1997), Mahmoodifar & Strohmayer (2013) estimated a core temperature for J1808 in the range from \(7.2-7.7\times 10^{6}\) K, for neutron star masses between 1.4 and 2.0 \(M_{\odot}\). Due to the high thermal conductivity in the core and crust (Brown & Cumming, 2009), it is thus very likely that when accretion begins in J1808 after a period of quiescence, the accumulating layer starts out at temperatures \(\lesssim 1\times 10^{7}\) K, that is, well below the temperature at which CNO cycle hydrogen-burning becomes thermally stable (Fujimoto et al., 1981; Cumming, 2004; Galloway and Keek, 2021). In this temperature-sensitive regime hydrogen-burning proceeds at very low levels, and the thermal profile of the accumulating layer will be set principally by compressional heating. This is a much less efficient heat source than the energy released from hot-CNO cycle burning in the stable burning regime.

Figure 4: Evolution of the weak X-ray burst derived from spectral modeling in the \(0.5-10\) keV band. We show from the top down: the bolometric flux, blackbody temperature, blackbody radius (at 3.5 kpc), and reduced \(\chi^{2}\), respectively. The error bars indicate 1-\(\sigma\) confidence intervals.
Fujimoto et al. (1981) estimate the accretion rate required to maintain a stable hydrogen-burning shell (see their Table 1, \(\dot{M}_{st}(B)\)) as \(2.7\times 10^{-10}\)\(M_{\odot}\) yr\({}^{-1}\), for a neutron star mass and radius of \(1.41M_{\odot}\) and \(6.57\) km. We note that the somewhat older neutron star models employed by Fujimoto et al. (1981) have rather small radii compared to that suggested by more recent modeling (Miller et al., 2019; Riley et al., 2019). For a more typical radius of, say, \(11\) km (which we employed above), we would expect that the estimated rate would increase modestly by \(\approx 10\%\), which would bring the value to \(\approx 3\times 10^{-10}\)\(M_{\odot}\) yr\({}^{-1}\). Expressed as a fraction of the Eddington accretion rate, \(\dot{M}_{Edd}\), and adopting the value of \(\dot{M}_{Edd}=1.8\times 10^{-8}\)\(M_{\odot}\) yr\({}^{-1}\)(Cumming, 2004), this is then equivalent to \(\dot{M}=0.0167\)\(\dot{M}_{Edd}\). Note also that Cumming (2004) quotes a value of \(\dot{M}\gtrsim 0.01\)\(\dot{M}_{Edd}\) for stable, hot-CNO cycle hydrogen-burning. In addition, Cooper and Narayan (2007) used a two-zone model to carry out a linear stability analysis to specifically explore the conditions under which hydrogen-triggered bursts can occur at low accretion rates, and found that they occur for rates \(\lesssim 0.003\)\(\dot{M}_{Edd}\).
Above, we estimated an accretion rate at the time of the weak _NICER_ burst in the range from \(\approx 1.38-1.56\times 10^{-10}\)\(M_{\odot}\) yr\({}^{-1}\) (\(\dot{M}\approx 0.0077-0.0087\)\(\dot{M}_{Edd}\)). This is less than the required rates estimated by Cumming (2004) and Fujimoto et al. (1981) for stable CNO burning, but slightly higher than the accretion rate obtained by Cooper and Narayan (2007). These considerations provide strong evidence that in the initial outburst stage, accretion onto J1808 proceeds in an \(\dot{M}\) range consistent with what Fujimoto et al. (1981) refer to as _case 3_ shell flashes. In this regime the accumulating layer remains cool enough that CNO hydrogen-burning proceeds in the temperature-sensitive regime, that is, very little hydrogen is burned until the layer reaches the conditions for unstable ignition. Indeed, following Fujimoto et al. (1981, see their equation 11), we would estimate that only about \(1-2\%\) of the hydrogen would be burned prior to ignition. Further insights are provided by our estimates of the total column of matter accreted at the time of the burst, and of the total energy released in the burst. In §2 above we estimated the accreted column to be in the range \(\approx 3.91-5.12\times 10^{7}\) g cm\({}^{-2}\), and we measured a total energy release in the burst of \(\approx 1\times 10^{38}\) erg (both at 3.5 kpc). For the following discussion we refer the reader to the illustrative hydrogen ignition curves presented by Cumming (2004, see their Fig. 1), Galloway and Keek (2021, see their Fig. 2), and Peng et al. (2007, see their Fig. 4). Based on these curves, we can estimate that a column of this size would be ignited at a temperature in the range from \(\approx 4-5\times 10^{7}\) K. What happens upon ignition of the hydrogen? The unstable burning will quickly heat the layer, raising the temperature to at least that at which the CNO energy generation rate saturates, but likely somewhat higher. Fujimoto et al. (1981) estimate that only a small fraction, \(\Delta X\), of the hydrogen needs to burn in order to raise the temperature. For a temperature change of \(10^{8}\) K, they estimate \(\Delta X\approx 0.002\) (see their equation 12).
After ignition of the hydrogen, two subsequent paths have been described in the literature. First, if the ignition column is small enough, then an increase in its temperature may not cause it to cross the helium ignition curve, and additional accretion and/or an increase in the helium fraction is required before it will ignite. Alternatively, for deeper ignition columns, a temperature increase of a few \(10^{8}\) K would render the shell unstable to helium ignition, promptly producing a mixed H/He burst. We note that the works of Cooper and Narayan (2007) and Peng et al. (2007) also predict these two paths, and their calculations provide estimates of the hydrogen ignition columns that are broadly consistent with our estimate of the column accreted at the time of the weak burst. For example, at an accretion rate of \(\dot{m}=0.002\)\(\dot{m}_{Edd}\), Cooper and Narayan (2007, see their Fig. 4, right column) find behavior consistent with the first scenario: a sequence of weak hydrogen flashes occurs until the helium column grows sufficiently to reach ignition conditions. These calculations also provide an estimate of the temperature increase produced by the unstable hydrogen ignition, and suggest that changes of \(\sim 2\times 10^{8}\) K are likely. Peng et al. (2007, see their Fig. 7) also find a regime where hydrogen ignition does not lead to prompt ignition of a He burst. They also explore the effect of sedimentation on hydrogen-triggered bursts, which enhances the amount of CNO nuclei at the ignition depth and causes a sharper temperature rise. Sedimentation is likely to play an important role in setting the ignition conditions for the weak burst given the low estimated accretion rate.
Measurement of the burst fluence enables us to estimate the fraction, \(f_{h}\), of accreted hydrogen needed to burn in order to produce that much energy. For an energy release (per gram) of \(E_{h}=6.4\times 10^{18}\) erg g\({}^{-1}\)(Clayton, 1983), we would require \(m_{h}=1.6\times 10^{19}\)\((1+z)\) g of hydrogen to burn, where the factor of \((1+z)\) is included because we are interested in the energy released at the neutron star surface. Expressed as a column on the neutron star, \(y_{h}=m_{h}/4\pi R^{2}\), and assuming \(R=11\) km, we find \(y_{h}=1.05\times 10^{6}\)\((1+z)\) g cm\({}^{-2}\). The amount of hydrogen present in the accreted column is \(y_{a}X\), where \(X\) is the mass fraction of hydrogen in the accreted material.
We thus have,
\[f_{h}=\frac{y_{h}}{y_{a}X}=0.105\ \frac{(1+z)}{Y_{a}X}\;, \tag{6}\]
where \(Y_{a}\) is the estimated accreted column in units of \(10^{7}\) g cm\({}^{-2}\). Taking \(Y_{a}\) in the range from \(3.9-5.1\), a fractional hydrogen abundance in the accreted fuel of \(X=0.7\), and using the same \(M\) and \(R\) assumptions employed above to evaluate \((1+z)\), we find \(f_{h}\) in the range from \(0.04-0.06\). Note that this value should be considered a lower limit, as it assumes that the estimated total accreted column produced only a single such burst, and the hydrogen mass fraction at depth would likely be reduced further if sedimentation is present (Peng et al., 2007, see below for additional discussion regarding potentially missed bursts). This value is larger than the estimate given by Fujimoto et al. (1981) for the fraction of hydrogen needed to burn to raise the temperature of the fuel by \(\Delta T=10^{8}\) K; however, we do not know the actual temperature increase, and the estimate of Fujimoto et al. (1981) should be thought of as a lower limit to our estimate from the measured burst fluence, since burning will continue at the stable CNO burning rate.
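Equation (6) and the quoted range of \(f_{h}\) follow from the numbers above; the illustrative sketch below (with the burst energy, accreted columns, and redshift factors for the two mass cases considered in the text) makes the arithmetic explicit.

```python
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33
E_obs = 1.03e38            # observed burst energy at 3.5 kpc, erg
E_h, X, R = 6.4e18, 0.7, 11e5

# (neutron star mass, accreted column y_a) for the two cases in the text
for M, y_a in [(1.4 * Msun, 5.12e7), (2.0 * Msun, 3.91e7)]:
    opz = 1.0 / np.sqrt(1.0 - 2.0 * G * M / (c**2 * R))   # (1+z)
    y_h = E_obs * opz / E_h / (4.0 * np.pi * R**2)        # column of hydrogen burned
    f_h = y_h / (y_a * X)                                 # Eq. (6)
    print(f"M = {M/Msun:.1f} Msun: f_h = {f_h:.3f}")
# -> f_h ~ 0.04 and ~0.06
```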
Alternatively, we can ask the question, how much energy would we expect in the burst if the entire accreted column were to burn to heavy elements? The energy release per gram, \(Q_{nuc}\), would depend on the details of the nuclear burning pathways, however, employing the value of \(Q_{nuc}=(1.3+5.8X)\times 10^{18}\) erg g\({}^{-1}\)(Galloway & Cumming, 2006), and again adopting \(X=0.7\), we would expect \(\approx 3.2-4.2\times 10^{39}\) ergs liberated at the neutron star surface by burning all the fuel. This estimate also assumes that the total accreted column produces a single burst. This is a factor of \(30-40\) larger than the observed energy of the weak X-ray burst, and also argues that the weak burst is likely not a mixed H/He burst. Rather, our analysis suggests that it likely represents the unstable ignition of a modest fraction of the hydrogen in the accreted layer, which constitutes strong observational evidence for such a weak "hydrogen-only" flash.
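The comparison with complete burning can be checked the same way; the following illustrative lines evaluate the total nuclear energy available if the whole accreted column burned with the \(Q_{nuc}\) value adopted above.

```python
import numpy as np

R, X = 11e5, 0.7
Q_nuc = (1.3 + 5.8 * X) * 1e18        # erg g^-1 (Galloway & Cumming 2006)
for y_a in (3.91e7, 5.12e7):          # accreted columns, g cm^-2
    E_full = 4.0 * np.pi * R**2 * y_a * Q_nuc
    print(f"y_a = {y_a:.2e}: E_full ~ {E_full:.2e} erg")
# -> ~3.2e39 and ~4.2e39 erg, i.e. 30-40 times the ~1e38 erg observed
```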
Interestingly, in their two zone model Cooper & Narayan (2007) compute the peak energy fluxes produced during such weak hydrogen flashes. The range of fluxes that their model can produce is summarized in their Fig. 7 (bottom panel), where the peak flux is given as a fraction of the Eddington flux. Working backward, we measured a peak flux during the weak X-ray burst of \(\approx 6.9\times 10^{-9}\) erg cm\({}^{-2}\) s\({}^{-1}\). If we scale this by the peak flux (\(2.3\times 10^{-7}\) erg cm\({}^{-2}\) s\({}^{-1}\)) of the Eddington-limited burst observed later in the outburst (Bult et al., 2019), we find a ratio of \(\log(0.03)=-1.52\). Looking at Fig. 7 in Cooper & Narayan (2007) (bottom panel), we can find hydrogen flashes (the dotted-dashed lines in the figure) in a narrow range of accretion rate that reach this flux level.
### Stable burning after the burst?
Previous theoretical studies concluded that the ignition of the hydrogen flash will raise the temperature in the layer to at least the stable burning regime, and likely higher. Thus, hot-CNO cycle burning would be expected to continue for some period of time after the unstable ignition. Can we see evidence for such stable burning in the _NICER_ data? Interestingly, there is a clear "offset" between the pre- and post-burst flux levels. This offset can be seen in Figure 2. Note that the inset panel uses a larger time bin size and log scale to emphasize the persistent count rate levels, to more clearly highlight the offset. We also plot the average count rate value for the pre-burst level (red dashed line) as a guide to the eye. To explore this question further we used the same spectral model to characterize the post-burst data as we used for the pre-burst and other persistent emission intervals. The time interval used for the post-burst spectral extraction is marked by the vertical dotted lines in Figure 2 (main panel). We first tried to fit the post-burst spectrum using the same spectral shape as obtained from the pre-burst interval, allowing for the constant, \(f_{a}\) parameter to make up the flux difference. This did not provide an acceptable fit, and suggests the presence of an additional spectral component in the post-burst interval. To explore this further we subtracted the pre-burst spectrum from the post-burst and found that the remaining excess could be well fit by a soft thermal spectrum, characterized as a blackbody with \(kT=0.51\pm 0.02\) keV, normalization of \(82.5\pm 12.0\), and bolometric flux of \(6.1\pm 0.2\times 10^{-11}\) erg cm\({}^{-2}\) s\({}^{-1}\). This is equivalent to a luminosity of \(\approx 8.9\times 10^{34}\) erg s\({}^{-1}\) (at 3.5 kpc).
If the hydrogen burns stably at the same rate as it is accreted, then we would estimate a hydrogen-burning luminosity of \(L_{h}=X\dot{m}E_{h}\), where \(X\) is the mass fraction of hydrogen in the accreted fuel, \(\dot{m}\) is the mass accretion rate at the burst onset, and \(E_{h}\) is the energy production per gram due to hydrogen-burning. With \(\dot{m}=1.4\times 10^{-10}\)\(M_{\odot}\) yr\({}^{-1}\), \(X=0.7\), and \(E_{h}=6.4\times 10^{18}\) erg g\({}^{-1}\), we would predict a stable hydrogen-burning luminosity of \(\approx 4\times 10^{34}\) erg s\({}^{-1}\), which is a good fraction of the measured offset. Perhaps a better estimate can be obtained by evaluating the energy production rate associated with the saturated, hot-CNO burning rate as, \(L_{CNO}=4\pi R^{2}y_{a}\epsilon_{CNO}\), where \(\epsilon_{CNO}\), \(y_{a}\), and \(R\), are the energy production rate due to hot-CNO burning, the accreted column depth, and the neutron star radius, respectively. With \(\epsilon_{CNO}=5.8\times 10^{13}(Z_{CNO}/0.01)\) erg g\({}^{-1}\) s\({}^{-1}\)(Cumming & Bildsten, 2000), \(y_{a}=4.5\times 10^{7}\) g cm\({}^{-2}\), and \(R=11\) km, we find \(L_{CNO}\approx 4\times 10^{34}(Z_{CNO}/0.01)\) erg s\({}^{-1}\). Here, \(Z_{CNO}\) is the abundance of the CNO catalyzing elements. Employing the solar value \(Z_{CNO}=0.016\), we find \(L_{CNO}=6.4\times 10^{34}\) erg s\({}^{-1}\), however, as noted above, at these low accretion rates sedimentation is very likely to be effective in enhancing the abundance of CNO elements near the base of the accreted fuel layer. For example, Peng et al. (2007, see their Figs. 2 & 3) report enhancements in CNO element abundances by factors of 2 to 5, depending on the accretion rate.
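The two luminosity estimates above, together with the measured post-burst excess, can be compared with a few lines of Python; this illustrative sketch uses only the parameter values already quoted in this subsection.

```python
import numpy as np

Msun, yr = 1.989e33, 3.156e7
R, X, E_h = 11e5, 0.7, 6.4e18

mdot = 1.4e-10 * Msun / yr                    # g s^-1 at the burst onset
L_h = X * mdot * E_h                          # stable burning at the accretion rate
print(f"L_h ~ {L_h:.1e} erg/s")               # ~4e34 erg/s

y_a, eps_CNO = 4.5e7, 5.8e13                  # g cm^-2; erg g^-1 s^-1 for Z_CNO = 0.01
for Z in (0.01, 0.016):
    L_CNO = 4.0 * np.pi * R**2 * y_a * eps_CNO * (Z / 0.01)
    print(f"Z_CNO = {Z}: L_CNO ~ {L_CNO:.1e} erg/s")
# compare with the measured post-burst excess of ~8.9e34 erg/s at 3.5 kpc
```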
Based on these estimates it appears plausible that most or all of the observed flux offset can be accounted for by quasi-steady, hot-CNO burning of hydrogen. We note that the thermal nature of the spectral excess, and its \(\approx 0.5\) keV temperature, similar to that at late times during the weak burst, is also consistent with this interpretation. This conclusion is also consistent with the hydrogen flash temperature and flux evolution calculations of Cooper & Narayan (2007). As an example, the hydrogen flashes shown in their Fig. 4 indicate that at ignition the flux rises abruptly, but then shows a "plateau-like" phase which decays over a timescale of several hours. The average flux levels near the beginning of these events are approximately consistent with the stable hydrogen-burning luminosity we estimated above. Once ignited, these flashes are burning hydrogen to helium in the fuel layer at essentially the saturated, hot-CNO cycle rate. We suggest that the two-zone model of Cooper & Narayan (2007) (with H and He zones) probably does not adequately track and resolve the fast, initial hydrogen-burning when the thermal instability is initiated, but better predicts the longer timescale, thermally stable burning. The hydrogen-only ignition modeled by Peng et al. (2007, see their Fig. 7) also appears at least approximately similar to what is observed for the weak _NICER_ burst. Indeed, the ratio of the peak burst bolometric flux to the persistent, pre-burst flux is \(7\times 10^{-9}/7.9\times 10^{-10}\approx 8.8\), which is similar to the peak value of \(F_{cool}/F_{acc}\) for the initial, burst-like flux increase shown in their Figure 7 (middle panel), and the overall burst duration appears consistent with the observed burst as well.
More detailed radially resolved, and perhaps multidimensional calculations will likely be needed to more accurately track the rapid hydrogen ignition phase which we suggest may account for the weak _NICER_ burst. To briefly summarize, the weak _NICER_ burst and post-burst flux offset appear to be consistent with the onset of a hydrogen-triggered shell flash in the cool, temperature-sensitive regime of the CNO cycle. The ignition column was likely shallow enough that the subsequent temperature increase was not sufficient to also promptly ignite a helium-burning instability.
### Missed bursts?
While _NICER_ was able to begin observations quite close to the onset of accretion in the 2019 August outburst, the overall on-source coverage from onset to the time of the first observed burst was still rather modest, with a duty-cycle of about 4%. Thus, if other bursts occurred it is conceivable that the _NICER_ observations simply missed them. However, based on our estimate of the size of the accreted column, as well as current theoretical estimates of the hydrogen ignition curve, we argue that likely only a few such bursts might have been missed. Firstly, while we don't know the precise temperature trajectory of the initial accumulating layer, it cannot plausibly be \(\lesssim 2\times 10^{7}\) K because at such low temperatures only columns much larger (\(\gtrsim 2-3\times 10^{8}\) gm cm\({}^{-2}\)) than our estimate of the accreted column at the time of the weak burst (\(3.9-5.1\times 10^{7}\) g cm\({}^{-2}\)) would be needed to ignite unstable burning, and such an ignition would also very likely lead to a bright, mixed H/He burst, which was not observed, though could have perhaps been missed. Secondly, as the temperature of the fuel layer increases the size of the unstable column decreases, however, above temperatures of about \(8\times 10^{7}\) K the hydrogen-burning will stabilize, precluding bursts. This sets a minimum combustible column for hydrogen ignition which is, using the ignition curve in Cumming (2004) as a guide, \(\approx 1\times 10^{7}\) g cm\({}^{-2}\). Based on our estimated accreted column this would set a limit of not more than about five such bursts potentially being produced, as that would just about exhaust the total column accreted at the time of the weak burst. Another benchmark can be set by the accretion rate. We estimated a value of \(\dot{m}=1.4-1.6\times 10^{-10}\)\(M_{\odot}\) yr\({}^{-1}\) at the time of the weak burst. If we take half of this value as more representative of the mean rate during the 72 hrs prior to the weak burst, we can estimate the time required to accrete the minimum unstable column of \(1\times 10^{7}\) g cm\({}^{-2}\). For \(\dot{m}=7\times 10^{-11}\)\(M_{\odot}\) yr\({}^{-1}\), and assuming a radius \(R=11\) km, we find it would take 9.5 hr to accumulate such a column. Since the weak burst was observed after about 2.9 days, this also suggests an upper limit of \(\sim 7\) to the total number of such weak bursts. We suggest that the actual temperature trajectory is probably somewhere between the two extremes described above, perhaps consistent with an unstable column on the order of \(\sim 2-3\times 10^{7}\) gm cm\({}^{-2}\). If this is correct it would suggest that the _NICER_ observations may have missed one or two such weak bursts.
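The recurrence-time and missed-burst limits quoted here follow from straightforward arithmetic; the illustrative sketch below assumes the halved accretion rate and minimum unstable column adopted in the text.

```python
import numpy as np

Msun, yr = 1.989e33, 3.156e7
R = 11e5                                  # cm
mdot = 7e-11 * Msun / yr                  # assumed mean pre-burst accretion rate, g s^-1
y_min = 1e7                               # minimum unstable column, g cm^-2

col_rate = mdot / (4.0 * np.pi * R**2)    # column accumulation rate, g cm^-2 s^-1
t_recur = y_min / col_rate / 3600.0       # hours to accumulate y_min
n_max = 2.9 * 24.0 / t_recur              # bursts possible in the 2.9 d before ignition
print(f"t_recur ~ {t_recur:.1f} hr, n_max ~ {n_max:.1f}")   # ~9.5 hr, ~7 bursts
```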
### Other examples: RXTE observations of the 2005 outburst
We searched the literature and previous observations of J1808 to try and identify similar examples of weak bursts. We found a quite similar event early in the 2005 June outburst that was observed with _RXTE_. We show in Figure 5 the light curve of this outburst as obtained from _RXTE_ pointed observations. This burst occurred on 2 June at approximately 00:42:30 TT, and is evident near 0.25 days in the figure. We carried out a time resolved spectral analysis of this event, and found qualitatively similar properties for this burst as for the weak _NICER_ burst. It reaches a peak bolometric flux of \(1.54\pm 0.11\times 10^{-8}\) erg cm\({}^{-2}\) s\({}^{-1}\), about a factor of 2 greater than the _NICER_ burst. It also had a peak blackbody temperature of 1.25 keV, which is about 25% larger than that of the _NICER_ burst. We note that this burst appears in the Multi-Instrument Burst Archive (MINBAR) catalog, with a reported peak bolometric flux of \(1.6\times 10^{-8}\) erg cm\({}^{-2}\) s\({}^{-1}\), and a fluence of \(1.67\times 10^{-7}\) erg cm\({}^{-2}\)(Galloway et al., 2020).
The first evidence of active accretion for this outburst was provided by _RXTE/PCA_ Galactic bulge scan observations on 31 May at 23:00:00 UTC, and indicated a persistent 2 - 10 keV X-ray flux level of \(\approx 3\) mCrab (Markwardt et al., 2005). This flux value is similar to that measured with _NICER_ for OBSID 2050260109 during the 2019 outburst (see Table 1). The X-ray burst was observed approximately 25.7 hr later, and MINBAR reports a persistent flux at the time of the burst of \(8.6\times 10^{-10}\) erg cm\({}^{-2}\) s\({}^{-1}\), just a bit larger than the value estimated prior to the 2019 _NICER_ burst (again, see Table 1). We can use the pre-burst flux value reported by MINBAR and the earliest _RXTE_ observations of the 2005 outburst reported by Markwardt et al. (2005) and Wijnands et al. (2005) to estimate the persistent, accretion-driven fluence prior to the weak 2005 burst. Evaluating a simple trapezoidal sum gives a value of \(3.8\times 10^{-5}\) erg cm\({}^{-2}\) that is approximately half of the estimated fluence prior to the 2019 _NICER_ event. This then suggests a total accreted column just prior to the 2005 _RXTE_ event of about half that estimated for the 2019 _NICER_ burst. Simply scaling our value estimated for the 2019 _NICER_ burst suggests a range of \(2.0-2.6\times 10^{7}\) g cm\({}^{-2}\) for the total accreted column prior to the 2005 _RXTE_ event.
### Subsequent bursts detected with NuSTAR
Additional observations of J1808 were collected with _NuSTAR_ between 2019 August 10 and 11 (MJD 58705.5-58706.5). While these data do not cover the time of the weak X-ray burst observed with _NICER_, _NuSTAR_ did catch two subsequent bursts, providing some additional, interesting context to this early phase of the outburst. We processed the _NuSTAR_ data (ObsID 90501335002) using nustardas version 2.1.2. Source data were extracted in the \(3-79\) keV energy range from a \(40^{\prime\prime}\) circular region centered on the source coordinates. The background was extracted using the same approach, but with the extraction region positioned in the background field. The _NuSTAR_ light curve reveals two X-ray bursts, the first of which occurred 24.8 hours after the weak _NICER_ burst, while the second occurred another 11 hours later. This light curve is shown in Figure 6. We emphasize that though some of the NICER exposure was simultaneous with NuSTAR, this did not include these two bursts, and they were only observed with NuSTAR.
We first investigate the persistent emission by extracting a spectrum from a 100 second window just prior to the first _NuSTAR_ burst. As can be seen in Figure 6, this epoch was simultaneously observed with _NICER_, so we also extracted the contemporaneous _NICER_ spectrum to obtain broadband energy coverage. We model this spectrum using the same persistent emission model as used previously (see Table 1), allowing for a constant cross-calibration factor between _NICER_ and the FPMA/B of _NuSTAR_. In keeping with the analysis of the _NICER_ burst, we extrapolated the best-fitting spectral model over \(0.1-100\) keV to find a bolometric flux estimate of \(1.47\pm 0.05\times 10^{-9}\) erg s\({}^{-1}\) cm\({}^{-2}\).
Figure 5: Light curve from _RXTE_ data (PCU 2, 3-30 keV) of the 2005 outburst from J1808. Note the logarithmic scale. A weak X-ray burst is seen early in this outburst. Much brighter and more energetic bursts are seen near days 4 and 8. Note that the burst near day 8 was truncated by the RXTE exposure, and almost certainly the brightest part of this event was missed.

Figure 6: Light curves from NICER (black, left axis) and NuSTAR (red, right axis) around the time of the weak X-ray burst at t=0. Both light curves are calculated using an 8-s time resolution.

From the recurrence times between the observed bursts, we obtain an estimate of the fluence due to the accretion of \(1.3\times 10^{-4}\) erg cm\({}^{-2}\) and \(5.8\times 10^{-5}\) erg cm\({}^{-2}\) for the two bursts, respectively. Converting these measurements to column depths, we use equation 4 to find \(8.4\times 10^{7}\) and \(3.7\times 10^{7}\) g cm\({}^{-2}\), respectively, where we again assumed a 1.4 \(M_{\odot}\) neutron star mass and an 11 km stellar radius. These column depths are of the same order as the one we calculated for the initial _NICER_ burst. Indeed, given the observed 11 hr recurrence time between the two _NuSTAR_ bursts, and the relatively constant persistent flux (and hence accretion rate), it is conceivable that a similar burst was missed in the gap between the weak _NICER_ burst and the first _NuSTAR_ burst. If so, then the accretion fluence for the two _NuSTAR_ bursts would be essentially consistent with each other.
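The accretion fluences and column depths for the two _NuSTAR_ bursts follow from the recurrence times and the pre-burst flux; the illustrative sketch below repeats the calculation for the 1.4 \(M_{\odot}\), 11 km case.

```python
import numpy as np

G, c = 6.674e-8, 2.998e10
Msun, kpc = 1.989e33, 3.086e21
d, xi_p = 3.5 * kpc, 1.0
M, R = 1.4 * Msun, 11e5

f_pers = 1.47e-9                              # pre-burst bolometric flux, erg cm^-2 s^-1
dt = np.array([24.8, 11.0]) * 3600.0          # recurrence times, s
E_acc = f_pers * dt                           # accretion fluence between bursts

opz = 1.0 / np.sqrt(1.0 - 2.0 * G * M / (c**2 * R))
z = opz - 1.0
y_a = d**2 * xi_p * opz**2 / (z * c**2 * R**2) * E_acc   # Eq. (3)
print("E_acc  :", E_acc, "erg cm^-2")         # ~1.3e-4 and 5.8e-5
print("columns:", y_a, "g cm^-2")             # ~8.4e7 and 3.7e7
```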
To explore the burst spectra, we proceeded by dividing the bursts into multiples of 1/8 seconds such that each bin contains at least 500 counts. We extract a spectrum for each bin and model it using an absorbed blackbody in addition to the fixed persistent emission. The inferred burst properties obtained from these fits are listed in Table 2. The two _NuSTAR_ bursts had fluences of \(5\times 10^{-7}\) erg cm\({}^{-2}\) and \(3\times 10^{-7}\) erg cm\({}^{-2}\), respectively. This means that they are about a factor of 4 - 7 more energetic than the weak X-ray burst observed with _NICER_. The first _NuSTAR_ burst reached a peak flux ten times greater than that of the weak _NICER_ burst, and it was also significantly "hotter," reaching a peak blackbody temperature of 2.3 keV. At the same time, these bursts remain much fainter than the Eddington-limited bursts observed at later times in the outbursts of J1808, which typically have fluences of \(2-4\times 10^{-6}\) erg cm\({}^{-2}\)(Galloway et al., 2008; in't Zand et al., 2013).
## 4 Summary, Caveats & Outlook
Based on the considerations above we suggest a scenario similar to that discussed in the work of Cooper & Narayan (2007) and Peng et al. (2007) as a working hypothesis to account for the weak bursts observed by _NICER_ and _RXTE_ during the early days of the 2019 and 2005 outbursts of J1808. As accretion begins, the neutron star is cool enough and the accretion rate is low enough that CNO hydrogen-burning in the accumulating layer occurs in the temperature-sensitive regime. At these lower temperatures, \(\lesssim 5\times 10^{7}\) K, very little hydrogen is burned. Significant burning of hydrogen will only begin when the accumulated column reaches the conditions for the thermal instability to set in. For a temperature of \(\approx 5\times 10^{7}\) K this will occur at a column depth of about \(3\times 10^{7}\) g cm\({}^{-2}\). This value is not too dissimilar from the column estimated just prior to the 2005 event. When the initial accumulating layer reaches ignition depth the hydrogen instability occurs, triggering a hydrogen flash. We suggest that the initial rapid increase in the nuclear energy generation rate ultimately results in the "heat pulse" that is observed as the weak X-ray burst, however, we think that more sophisticated, multi-dimensional theoretical calculations of the time-dependent nuclear energy generation coupled with the subsequent heat and radiation transport, will be needed to test the details of this hypothesis. After the initial hydrogen ignition, the burning layer will reach a high enough temperature that subsequent hydrogen-burning can proceed at the thermally stable level appropriate to the hot-CNO cycle. Above, we have argued that the observed offset between the pre- and post-burst flux levels of the 2019 event is consistent with this "quasi-steady" burning phase. This source of heat will keep the layer warm enough for burning to continue for a time, likely measured in hours if conditions are not too dissimilar from those modeled by Cooper & Narayan (2007). During this time the quasi-stable burning will increase the helium fraction of the layer. Given the gaps in _NICER_ coverage after the weak burst, we cannot say how long this "quasi-steady" burning may have persisted, but we note that observations \(\approx 3.5\) hrs after the burst show a count rate and flux approximately consistent with the pre-burst level. For the conditions described above, that is, a hydrogen ignition column of \(\approx 3\times 10^{7}\) g cm\({}^{-2}\), such an initial hydrogen flash is unlikely to produce a prompt helium ignition, simply because at that column depth the helium will not be thermally unstable (Cumming, 2004).
As accretion continues, the hydrogen layer or layers that initially flashed will be pushed deeper, to higher column depths. The freshly accreted material above it will also reach the hydrogen ignition depth, and if so, produce another hydrogen flash, assuming its temperature is low enough. In this way, a sequence of hydrogen flashes could be produced. Eventually, the helium-enriched layers will likely reach column depths where the helium will ignite, producing more energetic, mixed H/He bursts. We suggest that the observed _NuSTAR_ bursts are the result of this process. The steadily increasing accretion rate will also be an important variable, as this will tend to increase the temperature of the accreting layers. More complete theoretical modeling of this process will have to include the time-varying accretion rate (Johnston et al., 2018).
If the above scenario is approximately correct, we can speculate further regarding a few other details of the observations. The 2005 event observed with _RXTE_ was the earlier event in terms of the time since outburst onset, occurring approximately 1 day after onset. Other things being equal, one would expect the accreting layer to be cooler than at later times, such as the 2.9 days after onset for the 2019 event. A cooler shell will have a larger unstable column, so this could perhaps explain the fact that the _RXTE_ event is the more energetic of the two weak bursts. This also provides some tentative evidence that the 2019 _NICER_ event may have been preceded by at least one additional weak burst that was missed.
### Remaining Uncertainties and Alternative Interpretations
In estimating accretion rates and accreted columns we allowed for variation in the neutron star mass; however, there are other uncertainties which complicate such estimates. These include the source distance, anisotropy factors, bolometric corrections, and the line-of-sight absorption. We note that the more recent work of Goodwin et al. (2019) reports a slightly closer distance of \(3.3^{+0.3}_{-0.2}\) kpc for J1808. While their quoted uncertainty range includes the 3.5 kpc value we have adopted, a decrease from 3.5 to 3.3 kpc would reduce our estimates by a factor of 0.9. These authors also provide estimates of the anisotropy factors for both persistent and burst emission, finding \(\xi_{p}=0.87^{+0.12}_{-0.10}\) and \(\xi_{b}=0.74^{+0.10}_{-0.10}\). Applying these values would also reduce the estimated accretion rate and column, by a factor of 0.87, and decrease our estimate of the burst peak luminosity and fluence, by a factor of 0.74. Adopting the best values reported by Goodwin et al. (2019) for both \(d\) and \(\xi_{p}\) would reduce the estimated accretion rate and accreted column by a factor of 0.77.
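These rescalings are simple multiplicative factors; the short snippet below collects them for convenience (values from Goodwin et al. 2019 as quoted above).

```python
# Rescaling relative to d = 3.5 kpc and isotropic emission (xi_p = xi_b = 1).
d_scale = (3.3 / 3.5) ** 2        # ~0.89: accretion rate and column scale as d^2
xi_p, xi_b = 0.87, 0.74

print(f"distance only      : {d_scale:.2f}")
print(f"distance and xi_p  : {d_scale * xi_p:.2f}")   # ~0.77
print(f"burst-quantity xi_b: {xi_b:.2f}")
```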
We have argued that the weak bursts result from, principally, hydrogen-burning, but are there other possibilities involving the unstable burning of helium? One scenario that can produce weak or underluminous bursts is the phenomenon of short recurrence time (SRT) bursts (Keek et al., 2010). The idea behind SRT events is that they burn fuel remaining from a preceding burst. In the present case, a preceding, larger burst would have had to occur (and been missed) for this idea to be workable. In principle, this could account for the observed weak bursts, but there are some difficulties with this interpretation. First, the estimated accreted columns are uncomfortably low. This scenario would require that a relatively bright, mixed H/He burst would have occurred prior to the observed weak events, and been missed in each case. As discussed above, this would require relatively large ignition columns, likely \(\gtrsim 2\times 10^{8}\) g cm\({}^{-2}\), which is much larger than the estimated columns present just prior to each weak event. Our accretion column estimates would have to be underestimated by factors of 4 - 5 for this to be more plausible. Second, J1808 is not currently known to produce SRT events. There has been reasonably good coverage of past J1808 outbursts, and no SRT events have been definitively observed. For example, the compilation of SRT burst observations by Keek et al. (2010) does not include J1808, and we also note that the 401 Hz spin frequency for J1808 is less than the faster, \(\gtrsim 500\) Hz, spins associated with some of the known SRT sources. Third, the flux offset between the pre- and post-burst emission seems to make more sense in the context of stable hydrogen-burning than what might be expected from an SRT event, for which one would not typically expect to find such a flux offset. We note also that the theoretical mechanism of opacity-driven convective mixing explored by Keek & Heger (2017) to account for SRT bursts occurs for ignition in relatively hot envelopes, which seems less applicable to the low accretion rate regime near burst onset that we have described above. It is difficult to completely rule out the SRT scenario, but we think the considerations above argue against it.
We have argued that the early accretion outburst evolution onto a "cool" neutron star in J1808 provides a unique environment to explore the physics of nuclear burning on neutron stars, and most interestingly, the ignition of unstable hydrogen-burning in the temperature-sensitive regime of the CNO cycle. We suggest that the weak bursts seen by _NICER_ and _RXTE_ in the 2019 and 2005 outbursts, respectively, may result from this process. More complete, continuous observational coverage of the first 4-5 days of subsequent outbursts from J1808 could definitively test this hypothesis. Such data would also provide for detailed physical comparisons with new theoretical efforts to track the outcome of time-varying accretion onto neutron stars and the subsequent nuclear burning of the accreted matter. This could provide interesting constraints on such things as the accretion rate, the thermal profile of the accreting matter and the nuclear energy generation and subsequent heat transport in the accreted layers.
\begin{table}
\begin{tabular}{c c c c c}
\hline
Parameter & NICER 1 & NuSTAR 1 & NuSTAR 2 & NICER 2 \\
\hline
Onset (MJD, TT) & 58704.8068 & 58705.8459 & 58706.3058 & 58716.0861 \\
Peak flux (erg s\({}^{-1}\) cm\({}^{-2}\)) & \(7\times 10^{-9}\) & \(7\times 10^{-8}\) & \(4\times 10^{-8}\) & \(3\times 10^{-7}\) \\
Burst fluence (erg cm\({}^{-2}\)) & \(7\times 10^{-8}\) & \(5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(2\times 10^{-6}\) \\
Accretion fluence (erg cm\({}^{-2}\)) & \(8\times 10^{-5}\) & \(1.3\times 10^{-4}\) & \(5.8\times 10^{-5}\) & \\
Peak kT (keV) & 1.0 & 2.3 & 1.7 & 2.5 \\
\hline
\end{tabular}
Note. The properties of the second NICER burst are taken from Bult et al. (2019) as an example of a bright Eddington-limited X-ray burst from J1808.
\end{table}
Table 2: Burst parameters
This work was supported by NASA through the _NICER_ mission and the Astrophysics Explorers Program. This research also made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. We thank Ed Brown, Andrew Cumming, and Duncan Galloway for comments that helped to improve this manuscript. P.B. acknowledges support from NASA through the Astrophysics Data Analysis Program (80NSSC20K0288) and the CRESST II cooperative agreement (80GSFC21M0002). NICERDAS, ADS, HEASARC NICERDAS (v8), XSPEC (Arnaud 1996)
|
2309.05080 | Spin decoherence and off-resonance behavior of radiofrequency-driven
spin rotations in storage rings | Radiofrequency-driven resonant spin rotators are routinely used as standard
instruments in polarization experiments in particle and nuclear physics.
Maintaining the continuous exact parametric spin-resonance condition of the
equality of the spin rotator and the spin precession frequency during operation
constitutes one of the challenges. We present a detailed analytic description
of the impact of detuning the exact spin resonance on the vertical and the
in-plane precessing components of the polarization. An important part of the
formalism presented here is the consideration of experimentally relevant
spin-decoherence effects. We discuss applications of the developed formalism to
the interpretation of the experimental data on the novel pilot bunch approach
to control the spin-resonance condition during the operation of the
radiofrequency-driven Wien filter that is used as a spin rotator in the first
direct deuteron electric dipole moment measurement at COSY. We emphasize the
potential importance of the hitherto unexplored phase of the envelope of the
horizontal polarization as an indicator of the stability of the
radiofrequency-driven spin rotations in storage rings. The work presented here
serves as a satellite publication to the work published concurrently on the
proof of principle experiment about the so-called pilot bunch approach that was
developed to provide co-magnetometry for the deuteron electric dipole moment
experiment at COSY. | N. N. Nikolaev, F. Rathmann, J. Slim, A. Andres, V. Hejny, A. Nass, A. Kacharava, P. Lenisa, J. Pretz, A. Saleev, V. Shmakova, H. Soltner, F. Abusaif, A. Aggarwal, A. Aksentev, B. Alberdi, L. Barion, I. Bekman, M. BeyΓ, C. BΓΆhme, B. Breitkreutz, N. Canale, G. Ciullo, S. Dymov, N. -O. FrΓΆhlich, R. Gebel, M. Gaisser, K. Grigoryev, D. Grzonka, J. Hetzel, O. Javakhishvili, V. Kamerdzhiev, S. Karanth, I. Keshelashvili, A. Kononov, K. Laihem, A. Lehrach, N. Lomidze, B. Lorentz, G. Macharashvili, A. Magiera, D. Mchedlishvili, A. Melnikov, F. MΓΌller, A. Pesce, V. Poncza, D. Prasuhn, D. Shergelashvili, N. Shurkhno, S. Siddique, A. Silenko, S. Stassen, E. J. Stephenson, H. StrΓΆher, M. Tabidze, G. Tagliente, Y. Valdau, M. Vitz, T. Wagner, A. Wirzba, A. WroΕska, P. WΓΌstner, M. Ε»urek | 2023-09-10T16:59:41Z | http://arxiv.org/abs/2309.05080v2 | Spin decoherence and off-resonance behavior of radiofrequency-driven spin rotations in storage rings
###### Abstract
Radiofrequency-driven resonant spin rotators are routinely used as standard instruments in polarization experiments in particle and nuclear physics. Maintaining the continuous exact parametric spin-resonance condition of the equality of the spin rotator and the spin precession frequency during operation constitutes one of the challenges. We present a detailed analytic description of the impact of detuning the exact spin resonance on the vertical and the in-plane precessing components of the polarization. An important part of the formalism presented here is the consideration of experimentally relevant spin-decoherence effects. We discuss applications of the developed formalism to the interpretation of the experimental data on the novel pilot bunch approach to control the spin-resonance condition during the operation of the radiofrequency-driven Wien filter that is used as a spin rotator in the first direct deuteron electric dipole moment measurement at COSY. We emphasize the potential importance of the hitherto unexplored phase of the envelope of the horizontal polarization as an indicator of the stability of the radiofrequency-driven spin rotations in storage rings. The work presented here serves as a satellite publication to the work published concurrently on the proof of principle experiment about the so-called pilot bunch approach that was developed to provide co-magnetometry for the deuteron electric dipole moment experiment at COSY.
###### Contents
* I Introduction
* II Stroboscopic spin evolution in the off-resonance regime
  * A Master equation
  * B Bogoliubov-Krylov averaging for exact spin resonance
  * C Off-resonance spin rotations
  * D Radiofrequency solenoid as a spin rotator
* III Impact of detuning on the vertical polarization
  * A Evolution of vertical polarization
  * B Build-up of vertical polarization from in-plane polarization
* IV Polarimetry of the in-plane polarization
  * A Amplitude and phase conventions
  * B Continuous spin rotation by the WF: build-up of pure initial in-plane polarization
  * C Cross talk of vertical, tangential and radial polarizations
  * D Continuous spin rotation by the Wien filter and envelope of in-plane polarization
  * E Continuous spin rotation by the Wien filter and phase of in-plane polarization
  * F Interplay of detuning and initial phase in the generic three-stage regime
    * 1 Envelope of in-plane polarization
    * 2 Phase of in-plane polarization envelope for pure radial and longitudinal initial polarizations
    * 3 Evolution of the phase of in-plane polarization envelope for generic orientation of the initial polarization
* V Spin decoherence incorporated
  * A Decoherence through feedback to compensate for spin precession walk
  * B Recovering the spectator polarization
  * C Ansatz of exponential decoherence of the in-plane polarization
    * 1 Damped spin rotations
    * 2 Sequential Bogoliubov-Krylov averaging
  * D Spin decoherence by synchrotron motion
    * 1 Spread of synchrotron oscillation amplitudes
    * 2 Master equation for spin envelope
    * 3 Evaluation of synchrotron oscillation-driven spin decoherence of the bunch polarization
    * 4 Excursion on not compensated betatron oscillation effects
* VI Spin tomography of synchrotron oscillations
* VII Implications for spin-flip tune mapping
* VIII Summary and Conclusions
## I Introduction
Controlled spin rotations, notably the spin flips (SF), are imperative for particle and nuclear physics experiments that involve polarized particles (see _e.g._, [1], for extensive reviews, see [2; 3]). In storage rings, the radiofrequency (RF) magnetic field resonant to the idle spin precession acts as a spin flipper, resembling the familiar case of nuclear magnetic resonance (NMR). In an ideal magnetic ring, one stores beam particles with on average vertically oriented polarization, and the spin precession frequency is given by \(f_{\mathrm{s}}=G\gamma f_{\mathrm{c}}\), where \(f_{\mathrm{c}}\) denotes the cyclotron frequency of the ring, and \(G\) and \(\gamma\) denote magnetic anomaly and relativistic \(\gamma\)-factor of the stored particles [4].
In practice, the magnetic field imperfections in the machine, especially the ones tangential to the beam orbit, bring about a substantial and often poorly known correction to the above simple formula for \(f_{\mathrm{s}}\)[2; 3; 5]. There are other complications that contribute, such as spin decoherence due to beam momentum spread \(\Delta p/p\) from synchrotron oscillations and from orbit lengthening due to betatron oscillations, which require chromaticity tuning [6; 7; 8]. A more fundamental obstacle is that the beam energy is so poorly known that, rather conversely, the spin precession frequency can be used to calibrate the beam energy [9]. For instance, this problem of \(f_{\mathrm{s}}\) being uncertain can be overcome with the Froissart-Stora scan approach, where the particle spin is subjected to a magnetic field of slowly varying frequency [10]. When the scanned frequency range is sufficiently broad to cover the not so well-known spin precession frequency \(f_{\mathrm{s}}\), then during the scan, the nuclear magnetic resonance condition will be encountered.
There are important spin-physics experiments in storage rings being conducted or anticipated, where it is imperative to maintain the exact spin-resonance condition for a long time, including a large number of SFs under continuous operation of an RF spin rotator. As part of the program of studies of systematic effects in electric dipole moment (EDM) searches of charged particles in storage rings, the JEDI collaboration [11] at the Cooler Synchrotron (COSY) storage ring in Forschungszentrum Jülich [12; 13] has developed a technique of measuring the idle spin precession frequency to \(10^{-10}\) precision within a \(100\,\mathrm{s}\) time interval [14; 15]. When brought to interaction with an internal polarimeter target, the precessing horizontal polarization component of the beam gives rise to an up-down asymmetry oscillating with the spin precession frequency. The Fourier analysis of the time-stamped events in the polarimeter (see Ref. [14] for details) makes it possible to determine the oscillation frequency and also the envelope of the precessing polarization. The measurement of the spin precession frequency relies on the oscillating _horizontal_ polarization component. Thus when during a single or multiple spin flips [16] the spins are closely aligned along the _vertical_ axis in the machine, the control of the spin precession frequency fails, because in that case the horizontal polarization component is either too small or vanishes.
Recently, the JEDI collaboration proposed a solution to this issue based on the so-called pilot bunch approach, applicable to a situation with multiple bunches stored in the ring. The spin manipulations applied to the orbiting particles are organized in three stages:
1. In the first stage the initial vertical spins of multiple bunches of the stored deuterons are rotated into the horizontal plane by the radiofrequency solenoid, operated as a fixed-frequency spin rotator like in previous JEDI experiments.
2. In the second stage, the frequency of the idle spin precession \(f_{\mathrm{s}}\) of the in-plane polarization is measured.
3. In the third stage, the radiofrequency Wien filter (WF) is used as a spin rotator in a special mode where it is switched off once per beam revolution for a short period of time when one of several stored bunches passes through the spin rotator. The operation of the WF starts at the frequency \(f_{\mathrm{WF}}=f_{\mathrm{s}}\) as measured in the second stage, and is kept locked to the continuously measured idle spin precession frequency \(f_{\mathrm{s}}\) of the _unperturbed_ (pilot) bunch. Thus, the pilot bunch acts as a co-magnetometer, providing crucial information about \(f_{\mathrm{s}}\), which can be used in the interpretation of the spin dynamics of signal bunches exposed to the RF fields in the WF which operates at frequency \(f_{\mathrm{WF}}\).
The JEDI collaboration reports in Ref. [17] the first successful application of the pilot-bunch technique using two bunches stored in COSY with the radiofrequency Wien filter employed as a spin rotator. The experiment was carried out with deuterons of momentum \(p=970\,\mathrm{MeV}/c\). The sophisticated technical details of the development of the fast radiofrequency switches, operating at the ring frequency \(f_{\mathrm{c}}\simeq 750\,\mathrm{kHz}\), which allowed us to turn off the radiofrequency of the Wien filter when one of the two orbiting beams passed the Wien filter, are discussed in Ref. [17]. While the polarization of the bunch exposed to the radiofrequency fields undergoes continuous SFs, the pilot bunch is immune to the radiofrequency of the Wien filter, and it provides a continuous determination of the idle spin precession frequency. The spin precession frequency is then employed to lock frequency and phase of the Wien filter. The pilot-bunch technique was proposed primarily in connection to the precision spin experiments on tests of fundamental symmetries such as a search for the parity and time-reversal-invariance violating permanent EDMs of charged particles [18; 19; 20], but it may find other applications in spin physics at storage rings.
In practice, a certain amount of detuning is an unavoidable feature of the RF-driven spin dynamics in storage rings. The frequency of radiofrequency power supplies can only be controlled with finite accuracy, leaving room for residual detuning of the Wien filter and spin precession frequencies. Moreover, the betatron and synchrotron oscillation-induced spin tune spread is endemic in ensembles of stored particles. Finally, the process of feedback to lock the Wien filter and spin precession phases is nothing more than a continuous compensation of the detuning caused by the instabilities of the storage ring. It is important to assess the impact of constant or time-varying detuning of individual particles in the ensemble on various aspects of the long-time continuous spin flips, ranging from the amplitude and tunes of the vertical spin oscillations to the time dependence of the envelope and phase of the precessing horizontal polarization. A very different effect of synchrotron oscillations, namely their impact on single Froissart-Stora crossings of the spin resonance [10] and the behavior of the polarization in the relatively short time periods thereafter, was studied earlier at COSY [21].
Yet another closely related issue is the role of the finite spin-coherence time. For instance, damping is known to shift the frequency of the classical harmonic oscillator. In the case of a parametric spin resonance, involving non-commuting spin rotations, this requires a dedicated treatment of the impact of spin decoherence on the spin precessions and its dependence on the mechanism leading to spin decoherence.
Considering the JEDI spin experiments with polarized deuterons at a beam momentum of \(p=0.97\,\mathrm{GeV}/c\), a hierarchy of typical frequencies as listed in Table 1 is involved that defines the small parameters in the problem. The typical time scales involved are the spin observation times (cycle times) \(t_{\mathrm{exp}}\approx 100\,\mathrm{s}\) and the in-plane (horizontal) spin-coherence time \(\tau_{\mathrm{SCT}}\sim 1000\,\mathrm{s}\).
Still another time scale results from the feedback system (fb) used to synchronize the radiofrequency Wien filter with the spin precession frequency. The JEDI studies revealed a non-negligible variation of the idle spin precession frequency on the order of about \(10^{-8}\) from one fill to another and during each fill [14]. In practice, about 5 consecutive measurements of \(1-2\,\mathrm{s}\) duration are required to obtain a trend of the spin-phase response with a spread of the order of \(\sigma_{\mathrm{fb}}\sim 0.2\,\mathrm{rad}\), which provides the feedback to correct the Wien filter frequency [22]. It can be assumed that this phase response is smooth during the feedback time interval of \(t_{\mathrm{fb}}=5{-}10\,\mathrm{s}\), and one can speak of a corresponding non-negligible detuning of the radiofrequency Wien filter with respect to the spin precession,
\[\Delta f_{\mathrm{s}}^{\mathrm{fb}}\sim\frac{\sigma_{\mathrm{fb}}}{2\pi t_{\mathrm{fb}}}\sim 5\,\mathrm{mHz}\,. \tag{1}\]
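For orientation, inserting \(\sigma_{\mathrm{fb}}\approx 0.2\,\mathrm{rad}\) and a feedback interval in the middle of the quoted range, \(t_{\mathrm{fb}}\approx 6\,\mathrm{s}\), into Eq. (1) gives \(\Delta f_{\mathrm{s}}^{\mathrm{fb}}\approx 0.2/(2\pi\cdot 6\,\mathrm{s})\approx 5\,\mathrm{mHz}\), the value entered in Table 1.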
A similar hierarchy was observed for polarized protons at a beam kinetic energy of \(49.3\,\mathrm{MeV}\) in COSY, where 99 successive flips driven by a radiofrequency solenoid were performed within \(300\,\mathrm{s}\). Assuming exponential attenuation of polarization, the average spin flipper efficiency was found to be \(\epsilon_{\mathrm{flip}}=0.9872\pm 0.0001\) [13], corresponding to a lifetime of the continuously flipping spin of \(\tau_{\mathrm{flip}}=240\,\mathrm{s}\). With the radiofrequency spin flipper turned off, the vertical polarization was found to have a much longer lifetime of \(\tau_{\mathrm{p}}=(2.7\pm 0.8)\cdot 10^{5}\,\mathrm{s}\), indicating a close connection of the depolarization to the SF dynamics.
The hierarchy of frequencies given above (Table 1) allows one to pursue all aspects of the RF-driven spin dynamics within a unified Bogoliubov-Krylov averaging approach and paves the way to the first fully analytic and compact formalism for the detuned RF-driven parametric spin resonance taking into account the decoherence of the polarization. The present work considerably extends earlier considerations [5; 24; 23; 16] and is intended as a satellite publication to the one describing the first experimental test of the pilot bunch concept [17]; the corresponding numerical estimates are presented for the conditions of that experiment.
There is a strong need for such a description because fitting experimental data with multiple spin flips requires a large number of calls of the spin evolution code, a demand that cannot be readily met by the numerical solution of the spin evolution for up to \(\sim 10^{8}\) revolutions of the beam. To this end we emphasize that the above specified conditions are fairly typical for storage rings dedicated to the search for charged-particle electric dipole moments [18; 19; 20]. We regard our formalism as a toolbox for the determination of the detuning parameter for individual fills of a machine, and it may find applications in accelerator physics beyond the description of the pilot bunch regime. In the case of the pilot bunch, we point out tricky features of the partial depolarization of the pilot bunch in the regime of incomplete masking (gating out) of the RF of the spin rotator. We pay particular attention to the as yet unexplored role of the phase of the spin envelope of the horizontal polarization in the control of the stable performance of the RF-driven spin rotations, for which we provide a fully analytic description.
The following presentation is organized as follows. (The most important variables and parameters are collected in the glossary in Table 2.) In Sec. 2, we present basics of the Bogoliubov-Krylov-averaging approach to continuous spin flips in a form best suited for the interpretation of experimental data in the regime of detuned resonances. Section 3 contains an introduction to the main effects stemming from frequency detuning. Manifestations of detuning in the polarimetry of the in-plane polarization, most crucial for the pilot-bunch technique, are treated in Sec. 4. The impact of spin decoherence on spin flips is a subject treated in Sec. 5. In Sec. 6, we discuss spin-flip tomography along the bunch length and depolarization of the pilot bunch caused by incomplete gating-out of the radiofrequency Wien filter. Implications of the derived formalism to the interpretation of the precursor EDM search experiments are explored in Sec. 7. In Sec. 8, we summarize our main results. The phenomenology of the results of the pilot bunch experiment [17] within the synchrotron oscillation-mediated spin-decoherence approach is presented in Appendix A.
## II Stroboscopic spin evolution in the off-resonance regime
### Master equation
In storage rings, the one-turn evolution of the spin \(\vec{S}\) consists of the idle precession by an angle \(\theta_{\mathrm{s}}=2\pi\nu_{\mathrm{s}}\) about the spin stable axis \(\vec{c}\), followed by the spin kick in the orbit-preserving radiofrequency Wien filter, which is used as a spin flipper and is located in a straight section of the ring. Here \(\nu_{\mathrm{s}}=f_{\mathrm{s}}/f_{\mathrm{c}}\) denotes the spin tune, _i.e._, the number of spin precessions with respect to particle momentum per revolution. The length of the Wien filter is negligibly small compared to the ring circumference and it acts on the spin stroboscopically once per turn. As an introduction to the subject, in this section, we describe the radiofrequency excited spin rotations in the SO(3) formalism [24] (for an alternative spinor formalism, see [25], the textbook in Ref. [2], and Ref. [5]).
The stroboscopic master equation for the spin vector \(\vec{S}(n)\) as a function of the turn number \(n\) is given by
\[\vec{S}(n)=\mathbf{R}_{\mathrm{WF}}(n)\mathbf{R}_{\mathrm{c}}(\theta_{\mathrm{ s}})\vec{S}(n-1)\,, \tag{2}\]
where \(\mathbf{R}_{\mathrm{c}}(\theta_{\mathrm{s}})\) and \(\mathbf{R}_{\mathrm{WF}}(n)\) are the ring and Wien filter spin transfer matrices, respectively. Alongside \(\vec{c}\), we define the radial unit vector \(\vec{e}_{\mathrm{r}}\) and the longitudinal unit vector \(\vec{e}_{\mathrm{t}}\) (tangential to the orbit).
\begin{table}
\begin{tabular}{l c c} System & Frequency & Value [Hz] \\ \hline Cyclotron motion & \(f_{\mathrm{c}}\) & 750000 \\ Spin precession with respect to particle momentum & \(f_{\mathrm{s}}\) & 120000 \\ Synchrotron motion & \(f_{\mathrm{sy}}\) & 200 \\ RF-driven spin flip & \(f_{\mathrm{SF}}\) & 1 \\ Feedback system induced spin precession spread & \(\Delta f_{\mathrm{s}}^{\mathrm{th}}\) & 0.005 \\ \end{tabular}
\end{table}
Table 1: Hierarchy of typical frequencies.
These three unit vectors form the orthogonal basis
\[\begin{split}\vec{e}_{\mathrm{t}}&=\vec{e}_{\mathrm{r}}\times\vec{c}\,,\\ \vec{e}_{\mathrm{r}}&=\vec{c}\times\vec{e}_{\mathrm{t}}\,.\end{split} \tag{3}\]
The vectors \(\vec{e}_{\mathrm{r}}\) and \(\vec{e}_{\mathrm{t}}\) define the spin precession plane. Because of the magnetic field imperfections in the ring lattice, the orientation of \(\vec{c}\) differs slightly from \(\vec{e}_{y}\), the normal one to the storage ring plane, aka the \(\{\vec{e}_{x},\vec{e}_{z}\}\) momentum plane, and the spin precession plane is tilted with respect to the ring plane [24]. Wherever relevant, as will be the case in the discussion of the imperfection fields in Sec. VII, we will distinguish between the spin and momentum bases, and our reference to \(\vec{c}\) as the _vertical_ direction, and to the components of the spin in the spin precession plane as the _horizontal_ ones, should not cause any confusion.
We treat a particle on the reference orbit in the approximation of vanishing spin decoherence. Then the idle precession spin transfer matrix per turn is given by
\[\mathbf{R}_{\mathrm{c}}(\theta_{\mathrm{s}})=\begin{pmatrix}\cos\theta_{ \mathrm{s}}&0&\sin\theta_{\mathrm{s}}\\ 0&1&0\\ -\sin\theta_{\mathrm{s}}&0&\cos\theta_{\mathrm{s}}\end{pmatrix}\,. \tag{4}\]
The Wien filter axis \(\vec{w}\) is along its magnetic field \(\vec{B}_{\mathrm{WF}}\). The spin kick per pass of the Wien filter of length \(L_{\mathrm{WF}}\) equals
\[\chi(n)=\chi_{\mathrm{WF}}\cos(\theta_{\mathrm{WF}}n) \tag{5}\]
with the amplitude
\[\chi_{\mathrm{WF}}=-\frac{q(1+G)B_{\mathrm{WF}}L_{\mathrm{WF}}}{m\gamma^{2} \beta}\,, \tag{6}\]
where \(q\), \(m\), \(\beta\) and \(G\) are the charge, mass, velocity, and magnetic anomaly of the orbiting particles.
\begin{table}
\begin{tabular}{l c c} Parameter/Variable & Notation & Defined in or near \\ \hline Turn number & \(n\) & Eq. (2) \\ Spin tune & \(\nu_{\mathrm{s}}\) & Eq. (2) \\ Spin phase increment per turn & \(\theta_{\mathrm{s}}\) & Eq. (2) \\ Spin stable axis & \(\vec{c}\) & Eqs. (2), (3) \\ Wien filter tune & \(\nu_{\mathrm{WF}}\) & Eq. (6) \\ Wien filter side band & \(K\) & Eqs. (6), (7) \\ Wien filter phase increment per turn & \(\theta_{\mathrm{WF}}\) & Eq. (6) \\ Spin kick in the Wien filter & \(\chi_{\mathrm{WF}}\) & Eq. (6) \\ Magnetic anomaly of a particle & \(G\) & Eq. (6) \\ Beam velocity in units of the speed of light & \(\beta\) & Eq. (6) \\ Relativistic factor & \(\gamma\) & Eq. (6) \\ Polarization vector & \(\vec{S}\) & Eqs. (2), (8) \\ Polarization envelope & \(\vec{p}\) & Eq. (8) \\ Spin-flip oscillation phase & \(x\) & Eqs. (17), (35) \\ Spin-flip tune on the exact spin resonance & \(\nu_{\mathrm{SF}}^{0}\) & Eq. (18) \\ Initial phase of the in-plane polarization & \(\Phi_{\mathrm{in}}\) & Eq. (22) \\ Spin precession vs. Wien filter frequency detuning parameter & \(\delta\) & Eq. (26) \\ Spin-flip tune off the exact spin resonance & \(\nu_{\mathrm{SF}}\) & Eq. (30) \\ Angle of orientation of the spin envelope precession axis & \(\rho\) & Eq. (31),(37) \\ Shift of the spin-flip symmetric interval \(x\in[\zeta,2\pi+\zeta]\) & \(\zeta\) & Eq. (43) \\ In-plane polarization envelope phase during continuous spin flips & \(\phi(x)\) & Eq. (48) \\ Spin precession feedback period & \(t_{\mathrm{fb}}\) & Eq. (92) \\ Spin precession phase walk during feedback period & \(\sigma_{\mathrm{fb}}\) & Eq. (92) \\ In-plane polarization damping per turn in the exponential approximation & \(\Gamma\) & Eqs. (103), (104) \\ Spin coherence time & \(\tau_{\mathrm{ST}}\) & Eqs. (104), (141) \\ Fractional cyclotron phase of a particle in the bunch & \(\phi\) & Eq. (113) \\ Slip factor & \(\eta\) & Eq. (118) \\ Gaussian rms width of the synchrotron oscillation amplitude distribution & \(\sigma_{\mathrm{sy}}\) & Eq. (120) \\ Amplitude of the synchrotron oscillations in the spin precession phase & \(\psi_{\mathrm{sy}}\) & Eqs. (121) \\ Normalized synchrotron oscillation amplitude & \(\xi\) & Eq. (121) \\ Synchrotron oscillation amplitude distribution function & \(F(\xi)\) & Eq. (122) \\ Parameter of the synchrotron oscillation driven slip of the Wien filter phase & \(C_{\mathrm{WF}}\) & Eq. (123) \\ Synchrotron oscillation strength in the spread of the spin-flip phase & \(Q_{\mathrm{sy}}\) & Eq. (133), (134) \\ Tilt of the spin stable axis by the electric dipole moment of a particle & \(\xi^{\mathrm{EDM}}\) & Eq. (156) \\ Gaussian rms length of the signal (s) bunch in the pilot (p) Bunch experiment & \(\sigma_{\mathrm{s, p}}\) & Appendix A \\ \end{tabular}
\end{table}
Table 2: Glossary of frequently used parameters and variables (auxiliary variables derived are omitted).
The Wien filter is operated at the frequency \(f_{\rm WF}\), the WF tune is given by \(\nu_{\rm WF}=f_{\rm WF}/f_{\rm c}\) and \(\theta_{\rm WF}=2\pi\nu_{\rm WF}\). Evidently, the spin rotation in the WF is identical for all side bands \(\nu_{\rm WF}\Rightarrow\nu_{\rm WF}+K,\quad K=0,\ \pm 1,\ \pm 2,\ \ldots\) Without loss of generality, we can focus the discussion on the so-called magnetic-dipole moment (MDM) mode, when \(\vec{w}=\vec{e}_{\rm r}\) and \(|\vec{c}\times\vec{w}|=1\). The spin transfer matrix for pass \(n\) through the Wien filter equals
\[{\bf R}_{\rm WF}(n)=\begin{pmatrix}1&0&0\\ 0&\cos\chi(n)&-\sin\chi(n)\\ 0&\sin\chi(n)&\cos\chi(n)\end{pmatrix}=1+{\bf W}(n)\,. \tag{7}\]
Note that the evolution of the experimentally observed polarization vector is identical to that of the quantum spin operator, and we retain \(\vec{S}\) as notation for the polarization vector in what follows.
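The stroboscopic map of Eq. (2), built from the matrices of Eqs. (4), (5) and (7), is straightforward to iterate numerically. A minimal sketch in Python follows; the tune and spin-kick values are illustrative placeholders, not the experimental ones.

```python
import numpy as np

# Minimal sketch of the stroboscopic one-turn map of Eq. (2) in the (r, c, t)
# basis.  The tunes and the spin kick are illustrative, not the COSY values.
nu_s = 0.16          # spin tune (illustrative)
nu_wf = 0.16         # Wien filter tune, equal to nu_s on resonance
chi_wf = 2.0e-6      # spin-kick amplitude per pass, cf. Eq. (6)
theta_s = 2.0 * np.pi * nu_s
theta_wf = 2.0 * np.pi * nu_wf

def R_c(theta):
    """Idle precession about the spin stable axis c, Eq. (4)."""
    return np.array([[np.cos(theta), 0.0, np.sin(theta)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(theta), 0.0, np.cos(theta)]])

def R_wf(n):
    """Spin kick in the Wien filter on turn n, Eqs. (5) and (7)."""
    chi = chi_wf * np.cos(theta_wf * n)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(chi), -np.sin(chi)],
                     [0.0, np.sin(chi), np.cos(chi)]])

S = np.array([0.0, 1.0, 0.0])       # initial polarization along c
for n in range(1, 500001):          # half a million turns
    S = R_wf(n) @ R_c(theta_s) @ S
print(S, np.linalg.norm(S))         # the norm stays equal to 1
```

The printed norm illustrates the unitarity of the map discussed below Eq. (21).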
### Bogoliubov-Krylov averaging for exact spin resonance
The above outlined hierarchy of spin evolution frequencies (Table 1) dictates invoking the Bogoliubov-Krylov (BK) averaging [26] as a tool for a solution of the master equation (2). To give some background, we illustrate the main points of the case of exact resonance \(\nu_{\rm s}=\nu_{\rm WF}\) following the treatment in Ref. [5]. The starting point is the interaction representation
\[\vec{S}(n)=\Big{|}\vec{S}(0)\Big{|}\,{\bf R}_{\rm c}(n\theta_{\rm WF})\vec{p} (n)\,, \tag{8}\]
where \(\vec{p}(n)\) is the spin envelope with initial condition \(|\vec{p}(0)|=1\) defining the polarization as seen by a stationary observer in the co-rotating reference frame rotating about the axis \(\vec{c}\) with frequency \(f_{\rm WF}\). Without loss of generality, in the following we set \(|\vec{S}(0)|=1\).
A brief digression on this choice of the co-rotating frame is in order. The choice is dictated by the fact that \(f_{\rm WF}\) is the only _known_ primary frequency in the problem. The spread of spin tunes in the bunch and the _unknown_ walk of the spin precession frequency necessitate a continuous measurement of this _unknown_ frequency in order to obtain a feedback for setting the Wien filter to another known frequency, etc. (In practice, of course, the beam interacts stroboscopically with the polarimeter target once per turn.) To the extent that intrabeam interactions are too weak to depolarize the beam (see for instance Ref. [13] and the related discussion in Sec. I), the bunch can be treated as an ensemble of independent particles, so that we solve first the one-particle problem and then take the average over the ensemble.
To the laboratory-frame observer the idle precessing in-plane polarization is described by
\[\begin{split}\vec{u}_{\rm r}(n)&=\ \ \vec{e}_{\rm r}\cos(\theta_{\rm WF}n)+\vec{e}_{\rm t}\sin(\theta_{\rm WF}n)\,, \\ \vec{u}_{\rm t}(n)&=-\vec{e}_{\rm r}\sin(\theta_{\rm WF }n)+\vec{e}_{\rm t}\cos(\theta_{\rm WF}n)\,.\end{split} \tag{9}\]
The master equation for the spin envelope takes the form
\[\vec{p}(n)={\bf R}_{\rm c}(-n\theta_{\rm WF}){\bf R}_{\rm WF}(n){\bf R}_{\rm c }(n\theta_{\rm WF})\vec{p}(n-1)\,. \tag{10}\]
In view of \(\chi_{\rm WF}\ll 1\), the stroboscopic Eq. (10) can be cast in the differential form
\[\frac{{\rm d}\vec{p}(n)}{{\rm d}n}={\bf R}_{\rm c}(-n\theta_{\rm WF}){\bf W}(n){\bf R}_{\rm c}(n\theta_{\rm WF})\vec{p}(n)\,. \tag{11}\]
To the leading order in the small parameter \(\chi_{\rm WF}\) the BK averaging over the spin precession periods proceeds as follows:
\[\begin{split}\langle{\bf R}_{\rm c}(-n\theta_{\rm WF}){\bf W}(n){\bf R}_{\rm c}(n\theta_{\rm WF})\rangle&=\left\langle\begin{pmatrix}0&-\chi(n)\sin(n\theta_{\rm WF})&0\\ \chi(n)\sin(n\theta_{\rm WF})&0&-\chi(n)\cos(n\theta_{\rm WF})\\ 0&\chi(n)\cos(n\theta_{\rm WF})&0\end{pmatrix}\right\rangle\\ &=\begin{pmatrix}0&0&0\\ 0&0&-\frac{1}{2}\chi_{\rm WF}\\ 0&\frac{1}{2}\chi_{\rm WF}&0\end{pmatrix}=2\pi\nu_{\rm SF}\begin{pmatrix}0&0&0\\ 0&0&-1\\ 0&1&0\end{pmatrix}=2\pi\nu_{\rm SF}{\bf U}\,,\end{split} \tag{12}\]
where we applied
\[\langle\cos^{2}(\theta_{\rm WF}n)\rangle\to\frac{1}{2}\quad\text{and}\quad \langle\cos(\theta_{\rm WF}n)\sin(\theta_{\rm WF}n)\rangle\to 0\,. \tag{13}\]
The solution of Eq. (11) for the envelope will be
\[\vec{p}(x)=\exp(2\pi\nu_{\rm SF}n{\bf U})\vec{p}(0)={\bf E}_{0}(x)\vec{p}(0)\,, \tag{14}\]
where the subscript \(0\) stands for zero detuning. Making use of the recursive relations,
\[{\bf U}^{2n+1}=(-1)^{n}{\bf U}\,,\quad{\bf U}^{2n}=(-1)^{n-1}{\bf U}^{2}\,, \tag{15}\]
we decompose the Taylor expansion of \({\bf E}_{0}(x)\) into sums of the odd and even powers of \({\bf U}\) with the result
\[\begin{split}{\bf E}_{0}(x)&=\sum_{k=0}^{\infty}\frac{x^{2k}}{(2k)!}{\bf U}^{2k}+\sum_{k=0}^{\infty}\frac{x^{2k+1}}{(2k+1)!}{\bf U}^{2k+1}\\ &={\bf 1}+\sin x\,{\bf U}+(1-\cos x)\,{\bf U}^{2}\\ &=\begin{pmatrix}1&0&0\\ 0&\cos x&-\sin x\\ 0&\sin x&\cos x\end{pmatrix}\,,\end{split} \tag{16}\]
where
\[x=2\pi\nu_{\rm SF}^{0}n \tag{17}\]
is the SF phase with the SF tune
\[\nu_{\rm SF}^{0}=\frac{1}{4\pi}\chi_{\rm WF}\left|\vec{c}\times\vec{w}\right|\,, \tag{18}\]
which defines the SF frequency \(f_{\rm SF}=\nu_{\rm SF}^{0}f_{\rm c}\). The factor \(\left|\vec{c}\times\vec{w}\right|\) emerges for generic orientation of the Wien filter axis \(\vec{w}\)[5, 24]. For instance, the so-called EDM mode corresponds to \(\vec{w}\approx\vec{e}_{y}\).
Note that SFs proceed via rotation of the vertical envelope to the tangential one with the frequency \(f_{\rm SF}\), while the radial envelope remains a spectator,
\[\begin{split} p_{\rm r}(x)&=p_{\rm r}(0)\,,\\ p_{\rm c}(x)&=p_{\rm c}(0)\cos x-p_{\rm t}(0)\sin x\,,\\ p_{\rm t}(x)&=p_{\rm c}(0)\sin x+p_{\rm t}(0)\cos x\,.\end{split} \tag{19}\]
The final result for the polarization is
\[\vec{S}(n)={\bf R}_{\rm c}(n\theta_{\rm WF}){\bf E}_{0}(x)\vec{S}(0)\,, \tag{20}\]
with the expansion
\[\begin{split}\vec{S}(n)&={\bf R}_{\rm c}(n\theta_{\rm WF}){\bf E}_{0}(x)\vec{S}(0)\\ &=\left|\vec{S}(0)\right|\left\{p_{\rm r}(x)\vec{u}_{\rm r}(n)+p_{\rm c}(x)\vec{c}+p_{\rm t}(x)\vec{u}_{\rm t}(n)\right\}\,.\end{split} \tag{21}\]
The generic initial condition is defined by the initial spin precession phase \(\Phi_{\rm in}\). Our convention is
\[\begin{split}\vec{p}(0)&=p_{\rm c}(0)\vec{c}+p_{\rm r }(0)\vec{e}_{\rm r}+p_{\rm t}(0)\vec{e}_{\rm t}\\ &=p_{\rm c}(0)\vec{c}+p_{\rm rt}(0)(\cos\Phi_{\rm in}\vec{e}_{ \rm r}+\sin\Phi_{\rm in}\vec{e}_{\rm t})\,,\end{split} \tag{22}\]
where \(p_{\rm rt}=\sqrt{p_{\rm r}^{2}+p_{\rm t}^{2}}\) denotes the modulus of the in-plane polarization. These features of the radiofrequency driven polarization are shown in Fig. 1. For the pure in-plane initial polarization \(p_{\rm c}(0)=0\), the envelope of the vertical polarization evolves as \(p_{\rm c}(x)=-\sin\Phi_{\rm in}\sin x\).
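The BK-averaged envelope can be cross-checked against the exact turn-by-turn map: since the idle precession leaves the component along \(\vec{c}\) unchanged, \(S_{\rm c}(n)=p_{\rm c}(x)\), and for \(p_{\rm c}(0)=1\) the iterated vertical component should follow \(\cos x\) of Eq. (19). A self-contained numerical sketch (with the same illustrative, non-experimental parameters as above):

```python
import numpy as np

# Cross-check of Eq. (19) (on resonance, p_c(0) = 1, p_t(0) = 0) against the
# exact stroboscopic map of Eq. (2).  Illustrative parameters only.
nu_s = nu_wf = 0.16
chi_wf = 2.0e-6
theta = 2.0 * np.pi * nu_s
nu_sf0 = chi_wf / (4.0 * np.pi)              # Eq. (18) with |c x w| = 1

def R_c(t):
    return np.array([[np.cos(t), 0, np.sin(t)], [0, 1, 0], [-np.sin(t), 0, np.cos(t)]])

def R_wf(n):
    chi = chi_wf * np.cos(theta * n)
    return np.array([[1, 0, 0], [0, np.cos(chi), -np.sin(chi)], [0, np.sin(chi), np.cos(chi)]])

S = np.array([0.0, 1.0, 0.0])
N = 500000
for n in range(1, N + 1):
    S = R_wf(n) @ R_c(theta) @ S

x = 2.0 * np.pi * nu_sf0 * N                 # spin-flip phase, Eq. (17)
print(S[1], np.cos(x))                       # exact p_c versus the BK result
```

The residual difference between the two printed numbers is of the order of the neglected terms, \(\mathcal{O}(\chi_{\rm WF})\).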
Unitarity features of the master equation (2) are noteworthy. Here two unitary spin transfer matrices describe sequential rotations that preserve the magnitude of the polarization. Our final result in Eq. (20) has precisely the same unitarity property.
In order to estimate the higher-order corrections to the SF tune, one must proceed in Eq. (12) with the BK averaging of the exact expression \(\sin\chi(n)\cos(n\theta_{\rm WF})\), instead of the perturbative expression \(\chi(n)\cos(n\theta_{\rm WF})\), with the result
\[\langle\sin\chi(n)\cos(n\theta_{\rm WF})\rangle=J_{1}(\chi_{\rm WF})\,, \tag{23}\]
where \(J_{n}(z)\) is the Bessel function,
\[J_{\rm n}(z)=\left(\frac{z}{2}\right)^{n}\sum_{m=0}^{\infty}\frac{(-1)^{m}}{m! (m+n)!}\left(\frac{z^{2}}{4}\right)^{m}\,. \tag{24}\]
For conditions of the typical JEDI experiments with deuterons, we have an extremely small argument in the Bessel function,
\[\frac{\chi_{\rm WF}}{2}=2\pi\nu_{\rm SF}^{0}=2\pi\frac{f_{\rm SF}}{f_{\rm c}} \approx 10^{-6}\,, \tag{25}\]
and the correction to the linear approximation for the SF tune amounts to \(\approx 10^{-12}\). This gives a time-independent renormalization of the polarization and can safely be neglected, see the related discussion of Eq. (126) in Sec. V.4.2.
### Off-resonance spin rotations
We have at our disposal two _known_ parameters: the Wien-filter frequency \(f_{\rm WF}\) and the Wien-filter strength \(\chi_{\rm WF}\) (spin kick). Detuning is parameterized in terms of the small angle
\[\delta=\theta_{\rm s}-\theta_{\rm WF}=2\pi(\nu_{\rm s}-\nu_{\rm WF})=2\pi \frac{\Delta f_{\rm s}}{f_{\rm c}}\,. \tag{26}\]
Correspondingly, we define the interaction representation in terms of the _known_ Wien filter frequency as in Eq. (8), and cast the spin evolution in Eq. (2) in the form
\[{\bf R}_{\rm c}(n\theta_{\rm WF})\vec{p}(n)={\bf R}_{\rm WF}(n){\bf R}_{\rm c }(\delta){\bf R}_{\rm c}(n\theta_{\rm WF})\vec{p}(n-1)\,. \tag{27}\]
Figure 1: Evolution of the spin envelope in the reference frame, co-rotating at the idle spin precession frequency \(f_{\rm s}\). The initial polarization \(\vec{p}(0)\) is in the horizontal {rt} ring plane. The spectator radial component \(p_{\rm r}(0)=p(0)\cos\Phi_{\rm in}\) is immune to the radiofrequency Wien filter and continues to precess unchanged. The active tangential component \(p_{\rm t}(0)=p(0)\sin\Phi_{\rm in}\) starts rotations driven by the Wien filter in the vertical {ct} plane with the spin-flip frequency \(f_{\rm SF}=\nu_{\rm SF}f_{\rm rev}\). To the observer in the co-rotating frame, the idly precessing unit vectors \(\vec{u}_{\rm r}(n)\) and \(\vec{u}_{\rm t}(n)\) appear as being constant along the radial and tangential directions.
Following Eq. (7), we introduce the detuning corrected expression \(\mathbf{W}(n)=\mathbf{R}_{\mathrm{WF}}(n)\mathbf{R}_{\mathrm{c}}(\delta)-\mathbf{1}\) and proceed to the BK averaging of
\[\mathbf{R}_{\mathrm{c}}(-n\theta_{\mathrm{WF}})\mathbf{W}(n)\mathbf{R}_{\mathrm{c}}(n\theta_{\mathrm{WF}})=\begin{pmatrix}0&-\chi(n)\sin(n\theta_{\mathrm{WF}})&\delta\\ \chi(n)\sin(n\theta_{\mathrm{WF}})&0&-\chi(n)\cos(n\theta_{\mathrm{WF}})\\ -\delta&\chi(n)\cos(n\theta_{\mathrm{WF}})&0\end{pmatrix}\,. \tag{28}\]
The corresponding matrix \(\mathbf{U}\) takes the form
\[\mathbf{U}=\begin{pmatrix}0&0&\cos\rho\\ 0&0&-\sin\rho\\ -\cos\rho&\sin\rho&0\end{pmatrix}\,. \tag{29}\]
The detuning modified SF tune equals
\[\nu_{\mathrm{SF}}=\frac{\sqrt{\chi_{\mathrm{WF}}^{2}+4\delta^{2}}}{4\pi}= \frac{\nu_{\mathrm{SF}}^{0}}{\sin\rho}\,, \tag{30}\]
where we parameterize detuning in terms of the angle \(\rho\) such that
\[\sin\rho=\frac{\chi_{\mathrm{WF}}}{4\pi\nu_{\mathrm{SF}}}\,,\quad\cos\rho= \frac{2\delta}{4\pi\nu_{\mathrm{SF}}}\,. \tag{31}\]
We reiterate that in the generic case the substitution \(\chi_{\mathrm{WF}}\Rightarrow|\vec{c}\times\vec{w}|\,\chi_{\mathrm{WF}}\) is in order, so that
\[\nu_{\mathrm{SF}}^{2}=\frac{1}{16\pi^{2}}\left(\chi_{\mathrm{WF}}^{2}\left| \vec{c}\times\vec{w}\right|^{2}+4\delta^{2}\right)\,. \tag{32}\]
The above derived \(\mathbf{U}\) satisfies the recursive relations from Eq. (15), so that application of the decomposition in Eq. (16) yields
\[\mathbf{E}(x)=\begin{pmatrix}E_{\mathrm{rr}}(x)&E_{\mathrm{rc}}(x)&E_{\mathrm{rt}}(x)\\ E_{\mathrm{cr}}(x)&E_{\mathrm{cc}}(x)&E_{\mathrm{ct}}(x)\\ E_{\mathrm{tr}}(x)&E_{\mathrm{tc}}(x)&E_{\mathrm{tt}}(x)\end{pmatrix}=\begin{pmatrix}\sin^{2}\rho+\cos^{2}\rho\cos x&\cos\rho\sin\rho(1-\cos x)&\cos\rho\sin x\\ \cos\rho\sin\rho(1-\cos x)&\cos^{2}\rho+\sin^{2}\rho\cos x&-\sin\rho\sin x\\ -\cos\rho\sin x&\sin\rho\sin x&\cos x\end{pmatrix}\,, \tag{33}\]
which describes the envelope rotations about the axis
\[\vec{m}=\sin\rho\,\vec{e}_{\mathrm{r}}-\cos\rho\,\vec{c}\,, \tag{34}\]
with the SF phase
\[x=2\pi\nu_{\mathrm{SF}}n=2\pi\nu_{\mathrm{SF}}f_{c}t\,. \tag{35}\]
(for generic SO(3) rotations, see Ref. [27]). In the subsequent discussion, the \(x\)-dependence and the time-dependence are interchangeable.
Within the spinor formalism, an early derivation of Eq. (33) was already presented in the 2017 JEDI publication [5], and the alternative and equivalent treatment of the same problem was reported in the follow-up JEDI publication in 2018 [16]. The above outlined SO(3) formalism will play a pivotal role in the subsequent incorporation of the spin-decoherence effects that will be discussed in Sec. V.
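The algebra of Eqs. (26), (30) and (31) is easily evaluated numerically. A brief sketch, using round numbers close to the entries of Table 1 for \(f_{\mathrm{c}}\) and \(f_{\mathrm{SF}}\) and a purely illustrative detuning:

```python
import numpy as np

# Illustrative evaluation of the detuned spin-flip tune and detuning angle,
# Eqs. (26), (30) and (31).  f_c and f_sf0 follow Table 1; df_s is made up.
f_c = 750e3                       # cyclotron frequency [Hz]
f_sf0 = 1.0                       # on-resonance spin-flip frequency [Hz]
df_s = 0.5                        # detuning Delta f_s [Hz] (illustrative)

chi_wf = 4.0 * np.pi * f_sf0 / f_c    # Eq. (18) inverted, |c x w| = 1
delta = 2.0 * np.pi * df_s / f_c      # Eq. (26)

nu_sf = np.sqrt(chi_wf**2 + 4.0 * delta**2) / (4.0 * np.pi)   # Eq. (30)
sin_rho = chi_wf / (4.0 * np.pi * nu_sf)                      # Eq. (31)
cos_rho = 2.0 * delta / (4.0 * np.pi * nu_sf)
print(nu_sf * f_c, sin_rho, cos_rho)  # detuned f_SF [Hz] and the detuning angle
```

For this choice the detuned spin-flip frequency rises to about \(1.1\,\mathrm{Hz}\) while \(\cos\rho\approx 0.45\), which already produces clearly incomplete spin flips, see Sec. III.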
### Radiofrequency solenoid as a spin rotator
The above formalism is fully applicable as well to the orbit preserving radiofrequency solenoid as a spin rotator. In that case, one needs to interchange \(\vec{e}_{\mathrm{t}}\Rightarrow\vec{e}_{\mathrm{r}}\), \(\vec{e}_{\mathrm{r}}\Rightarrow-\vec{e}_{\mathrm{t}}\) and also the corresponding indices \(\mathrm{r}\Leftrightarrow\mathrm{t}\) in the matrix elements of \(\mathbf{E}\). The spin kick \(\chi_{\mathrm{WF}}\) in the Wien filter must be swapped for the spin kick in the solenoid \(\chi_{\mathrm{sol}}\),
\[\chi_{\mathrm{WF}}\Rightarrow\chi_{\mathrm{sol}}=-\frac{q(1+G)}{mv}\int dzB(z)\,, \tag{36}\]
where \(B(z)\) is the longitudinal magnetic field in the solenoid. In the co-rotating frame of reference, the spin envelope would precess about the axis
\[\vec{m}=-\sin\rho\,\vec{e}_{\mathrm{t}}+\cos\rho\,\vec{c}\,. \tag{37}\]
In the limit of vanishing detuning, \(\cos\rho=0\), the spectator in-plane polarization will be directed along \(\vec{e}_{\mathrm{t}}\). In addition, the convention for the initial spin phase has to be modified such that \(\Phi_{\mathrm{in}}\rightarrow\Phi_{\mathrm{in}}+\nicefrac{{\pi}}{{2}}\).
## III Impact of detuning on the vertical polarization
### Evolution of vertical polarization
We start with the beam polarization stored along the spin stable axis \(\vec{c}\), so that \(p_{\mathrm{c}}(0)=1\) and \(p_{r}(0)=p_{t}(0)=0\). Note that the notion of an initial spin phase \(\Phi_{\mathrm{in}}\) is meaningful only for a non-vanishing precessing horizontal component of the polarization. With operating Wien filter, the vertical polarization will evolve as
\[p_{\mathrm{c}}(x)=E_{\mathrm{cc}}(x)p_{\mathrm{c}}(0)=(\cos^{2}\rho+\sin^{2} \rho\cos x)p_{\mathrm{c}}(0)\,. \tag{38}\]
This result nicely illustrates the interplay of the detuning by \(\delta\) [see Eqs. (26) and (31)] and the spin kick \(\chi_{\mathrm{WF}}\) in the Wien filter:
1. The envelope exhibits oscillations with amplitude \(\sin^{2}\rho\leq 1\) on top of the offset \(\cos^{2}\rho\).
2. In the regime of negligible detuning, the offset \(\cos^{2}\rho\ll 1\) can be neglected and the vertical polarization will oscillate with full amplitude \(p_{\mathrm{c}}(x)=p_{\mathrm{c}}(0)\cos x\).
3. As the detuning increases, the oscillation amplitude decreases, and at \(\sin^{2}\rho<\sfrac{1}{2}\) the SF is incomplete: the offset term takes over and the vertical polarization no longer passes through zero.
4. At finite detuning, \(\cos^{2}\rho<\sfrac{1}{2}\), the pure horizontal polarization is reached at the envelope phase \[\cos x_{0}=-\cot^{2}\rho\,.\] (39)
5. Conversely, to achieve the often-required \(\sfrac{\pi}{2}\) rotation from the vertical to the horizontal spin orientation, usually performed on a time scale of approximately \(1\,\mathrm{s}\) with the radiofrequency solenoid [28], the detuning needs to satisfy only the very liberal condition that \[\Delta f_{\mathrm{s}}<\frac{1}{\sqrt{2}}f_{\mathrm{SF}}\,.\] (40)
6. The detuning can be constrained by a comparison of the flipped, \(S_{\mathrm{c}}(\pi)\), and initial, \(S_{\mathrm{c}}(0)\), vertical polarizations, \[2\cos^{2}\rho=1-\frac{S_{\mathrm{c}}(\pi)}{S_{\mathrm{c}}(0)}\,.\] (41) The \(\cos^{2}\rho\) thus determined must not be confused with the \(\epsilon_{\mathrm{flip}}\), which is determined from the exponential attenuation of the vertical polarization [13].
7. In the limiting case of strong detuning, \(\cos^{2}\rho\to 1\), the amplitude of the oscillating term vanishes, the rotation axis of the envelope becomes equal to the vertical axis, \(\vec{m}=\vec{c}\), and the vertical polarization is preserved, \(p_{\mathrm{c}}(x)=p_{\mathrm{c}}(0)\).
8. The phase locking of spin precession with the radiofrequency Wien filter developed by the JEDI collaboration requires continuous feedback. In practice, continuous means stepwise, since one must collect statistics for \(t_{\mathrm{fb}}=5-10\,\mathrm{s}\) to measure the spin precession frequency with sufficient accuracy. The implications of the emerging detuning with changing sign of Eq. (1) will be discussed in Sec. V.1.
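The interplay encoded in Eq. (38), and in particular the zero-crossing condition of Eq. (39), can be checked with a few lines of code; the detuning angle below is an arbitrary illustrative choice with \(\cos^{2}\rho<\sfrac{1}{2}\):

```python
import numpy as np

# Illustrative evaluation of the vertical-polarization envelope, Eq. (38),
# and of the zero-crossing condition of Eq. (39), for p_c(0) = 1 and one
# partial-spin-flip detuning with cos^2(rho) < 1/2.
rho = 1.0                                    # illustrative detuning angle
x = np.linspace(0.0, np.pi, 50001)
p_c = np.cos(rho)**2 + np.sin(rho)**2 * np.cos(x)

x0_numeric = x[np.argmin(np.abs(p_c))]       # zero crossing found on the grid
x0_analytic = np.arccos(-1.0 / np.tan(rho)**2)   # Eq. (39)
print(x0_numeric, x0_analytic)               # the two agree
```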
### Build-up of vertical polarization from in-plane polarization
In this case, the initial conditions are \(p_{\mathrm{c}}(0)=0\) and \(p_{\mathrm{rt}}(0)=1\), and the initial in-plane polarization can be parameterized in terms of the initial spin phase \(\Phi_{\mathrm{in}}\), as given in Eq. (22).
Reading \(E_{\mathrm{cr}}(x)\) and \(E_{\mathrm{ct}}(x)\) from the envelope evolution matrix \(\mathbf{E}(x)\) of Eq. (33), we find
\[\begin{split} p_{\mathrm{c}}(x)&=E_{\mathrm{cr}}(x)p_{\mathrm{r}}(0)+E_{\mathrm{ct}}(x)p_{\mathrm{t}}(0)\\ &=\sin\rho\left(\cos\rho\cos\Phi_{\mathrm{in}}(1-\cos x)-\sin\Phi_{\mathrm{in}}\sin x\right)\\ &=2q(\Phi_{\mathrm{in}},\rho)\sin\rho\sin\left(\frac{x}{2}\right)\sin\left(\frac{x}{2}-\zeta\right)\,,\end{split} \tag{42}\]
where
\[\begin{split} q(\Phi_{\mathrm{in}},\rho)&=\sqrt{ \sin^{2}\Phi_{\mathrm{in}}+\cos^{2}\rho\cos^{2}\Phi_{\mathrm{in}}}\,,\\ \sin\zeta&=\frac{\sin\Phi_{\mathrm{in}}}{\sqrt{\sin^{2 }\Phi_{\mathrm{in}}+\cos^{2}\rho\cos^{2}\Phi_{\mathrm{in}}}}\,,\mathrm{and}\\ \cos\zeta&=\frac{\cos\rho\cos\Phi_{\mathrm{in}}}{ \sqrt{\sin^{2}\Phi_{\mathrm{in}}+\cos^{2}\rho\cos^{2}\Phi_{\mathrm{in}}}}\,. \end{split} \tag{43}\]
In the case of \(\zeta=0\), the vertical polarization is invariant under the interchange \(x\Leftrightarrow 2\pi-x\) within the symmetric period interval \([0,2\pi]\), while for finite \(\zeta\) the related invariance under \(x-\zeta\Leftrightarrow 2\pi-(x-\zeta)\) persists in the shifted symmetric interval \([\zeta,2\pi+\zeta]\).
It is noteworthy that exactly on resonance, where \(\cos\rho=0\),
\[p_{\mathrm{c}}(x)=-p_{\mathrm{t}}(0)\sin x=-\sin\Phi_{\mathrm{in}}\sin x\,, \tag{44}\]
so that only the initial tangential polarization is the active one, while the radial component of the horizontal polarization remains a spectator component and does not contribute at all to the build-up of the vertical polarization.
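The equivalence of the explicit trigonometric form and the compact \(\zeta\)-parametrization in Eqs. (42) and (43) is easy to confirm numerically; the values of \(\Phi_{\mathrm{in}}\) and \(\rho\) below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical self-check of Eqs. (42)/(43): the compact (zeta) form against the
# explicit trigonometric form.  Phi_in and rho are illustrative values.
phi_in, rho = 0.7, 1.2
q = np.hypot(np.sin(phi_in), np.cos(rho) * np.cos(phi_in))          # Eq. (43)
zeta = np.arctan2(np.sin(phi_in), np.cos(rho) * np.cos(phi_in))     # Eq. (43)

x = np.linspace(0.0, 4.0 * np.pi, 9)
explicit = np.sin(rho) * (np.cos(rho) * np.cos(phi_in) * (1 - np.cos(x))
                          - np.sin(phi_in) * np.sin(x))
compact = 2.0 * q * np.sin(rho) * np.sin(x / 2) * np.sin(x / 2 - zeta)
print(np.allclose(explicit, compact))   # True
```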
## IV Polarimetry of the in-plane polarization
### Amplitude and phase conventions
In the generic case, the polarization components are given by Eq. (21)
\[\begin{split} S_{\mathrm{r}}(x,n)=&\quad p_{\mathrm{r} }(x)\cos(n\theta_{\mathrm{WF}})+p_{\mathrm{t}}(x)\sin(n\theta_{\mathrm{WF}})\,, \\ S_{\mathrm{c}}(x,n)=&\quad p_{\mathrm{c}}(x)\,,\\ S_{\mathrm{t}}(x,n)=&-p_{\mathrm{r}}(x)\sin(n \theta_{\mathrm{WF}})+p_{\mathrm{t}}(x)\cos(n\theta_{\mathrm{WF}})\,.\end{split} \tag{45}\]
The running envelope \(\vec{p}(x)\) is given by Eq. (14) with the \(\rho\)- and \(x\)-dependent evolution matrix of Eq. (33), subject to the \(\Phi_{\mathrm{in}}\)-dependent initial envelope \(\vec{p}(0)\) of Eq. (22). The spin-flip phase \(x\) and the turn number \(n\) are related by Eq. (35), we kept both on purpose to distinguish spin-flip rotations of the envelopes from the idle spin precession. Because of parity conservation in strong interactions, the tangential (longitudinal) polarization at the polarimeter \(S_{\mathrm{t}}(x,n)\) is not measurable. The up-down asymmetry in the polarimeter measures the radial (transverse) polarization \(S_{\mathrm{r}}(x)\). This measurement takes place stroboscopically once per revolution of the beam. The polarimeter signal as a function of turn number \(n\) is Fourier-analyzed bin by bin, with a bin duration corresponding to about \(10^{6}\) turns in the machine, but still sufficiently short so that the variation of the spin-flip phase \(x\) and the walk of the in-plane-polarization envelopes \(p_{\mathrm{r}}(x)\) and \(p_{\mathrm{t}}(x)\) can be neglected.
A cartoon of the Fourier analysis boils down to the evaluation of
\[\begin{split} p_{\mathrm{r}}(x)&=\frac{2}{N}\sum_{k=1}^{N}S_{\mathrm{r}}(x,k)\cos k\xi_{\mathrm{WF}}\,,\\ p_{\mathrm{t}}(x)&=\frac{2}{N}\sum_{k=1}^{N}S_{\mathrm{r}}(x,k)\sin k\xi_{\mathrm{WF}}\,.\end{split} \tag{46}\]
where \(k\) is the turn number of the corresponding event in the polarimeter, and \(N\) is the total number of events in the bin. These definitions are supported by the least squares analysis, and both \(p_{\mathrm{r}}(x)\) and \(p_{\mathrm{t}}(x)\) take their maximal magnitudes at \(\xi_{\mathrm{WF}}=\pm\theta_{\mathrm{WF}}\). Because only one component of the rotating spin vector \(\vec{S}(x,k)\) is observed, there is a non-essential sign ambiguity in \(p_{\mathrm{t}}(x)\).
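The arithmetic of the sums in Eq. (46) can be illustrated on a noiseless synthetic signal; real polarimeter data are time-stamped events, so the sketch below only mimics the cartoon version of the analysis, with made-up values of \(\theta_{\mathrm{WF}}\), \(p_{\mathrm{r}}\) and \(p_{\mathrm{t}}\):

```python
import numpy as np

# Schematic illustration of the Fourier estimates of Eq. (46) on a noiseless
# synthetic signal S_r(n) = p_r cos(n*theta) + p_t sin(n*theta), cf. Eq. (45).
theta = 2.0 * np.pi * 0.16        # Wien filter phase advance per turn (illustrative)
p_r, p_t = 0.3, -0.5              # envelopes assumed constant over the bin
N = 100000
n = np.arange(1, N + 1)
S_r = p_r * np.cos(n * theta) + p_t * np.sin(n * theta)

p_r_est = 2.0 / N * np.sum(S_r * np.cos(n * theta))
p_t_est = 2.0 / N * np.sum(S_r * np.sin(n * theta))
print(p_r_est, p_t_est)           # ~0.3 and ~-0.5 are recovered
```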
The orientation of \(\vec{p}_{\mathrm{rt}}\) is given by the phase \(0<\psi(x)<2\pi\), specified in terms of
\[\begin{split}\sin\psi(x)&=\frac{p_{\mathrm{r}}(x) }{\sqrt{p_{\mathrm{r}}^{2}(x)+p_{\mathrm{t}}^{2}(x)}}\,,\\ \cos\psi(x)&=\frac{p_{\mathrm{t}}(x)}{\sqrt{p_{ \mathrm{r}}^{2}(x)+p_{\mathrm{t}}^{2}(x)}}\,.\end{split} \tag{47}\]
A full-fledged four-quadrant determination of \(\psi(x)\) is certainly possible, but, without any loss of information, it is convenient to map the phase \(\psi(x)\) onto the band \(0<\phi(x)<\pi\), where
\[\begin{split}\phi(x)&=\arccos\left(\frac{p_{\mathrm{t}}(x)}{\sqrt{p_{\mathrm{r}}^{2}(x)+p_{\mathrm{t}}^{2}(x)}}\right)\\ &=\arccos\left[\cos\psi(x)\right]\,.\end{split} \tag{48}\]
In terms of the four-quadrant definition, this amounts to assigning to the radial polarization its modulus,
\[|p_{\mathrm{r}}(x)|=p_{\mathrm{rt}}(x)|\sin\psi(x)|=p_{\mathrm{rt}}(x)\sin\phi (x)\,. \tag{49}\]
A comment on the statistical limitations is in order. With limited statistics, the magnitude \(p_{\mathrm{rt}}(x)\) of the in-plane component of the close-to-vertical polarization can only be measured to a certain accuracy \(\Delta p_{\mathrm{rt}}\), and the accuracy of determination of the phase of \(p_{\mathrm{rt}}(x)\) deteriorates for small in-plane polarization, \(\Delta\phi(x)\propto\Delta p_{\mathrm{rt}}/p_{\mathrm{rt}}\).
### Continuous spin rotation by the WF: build-up of pure initial in-plane polarization
We find it instructive to illustrate the RF-driven spin dynamics with the special case of _continuous_ spin rotations by the Wien filter. In terms of the generic three-stage process outlined in Sec. I, in stage I the spins are rotated by the Wien filter instead of the radiofrequency solenoid. Stage II is skipped altogether, and stage III begins at the instant when vanishing vertical polarization has been reached in stage I. While in the generic three-stage process the detuning of the Wien filter in stage III can differ from that of the radiofrequency solenoid in stage I, because the Wien filter is tuned to the spin precession frequency measured in stage II, in the regime of _continuous_ Wien filter operation the detuning angle \(\rho\) stays constant from stage I through stage III.
Now we treat the spin evolution starting with the initial polarizations \(p_{\mathrm{c}}(0)=1\) and \(p_{\mathrm{r}}(0)=p_{\mathrm{t}}(0)=0\). The envelope rotation phase \(x=0\) corresponds to the time at which the spin rotator is switched on. The radial and tangential polarization envelopes are given by
\[\begin{split} p_{\mathrm{r}}(x)&=E_{\mathrm{rc}}(x)p _{\mathrm{c}}(0)=\cos\rho\sin\rho\left(1-\cos x\right),\\ p_{\mathrm{t}}(x)&=E_{\mathrm{tc}}(x)p_{\mathrm{c}}( 0)=\sin\rho\sin x\,p_{\mathrm{c}}(0)\,.\end{split} \tag{50}\]
It is interesting to note that although \(p_{\mathrm{r}}(x)\) is zero at \(\cos x=1\), in this regime it does not change its sign at any value of \(x\). The positively defined envelope \(p_{\mathrm{rt}}(x)\) of the in-plane polarization equals
\[\begin{split} p_{\mathrm{rt}}(x)&=\sqrt{p_{\mathrm{r} }^{2}(x)+p_{\mathrm{t}}^{2}(x)}\\ &=2|\sin\rho|\cdot|\sin\frac{x}{2}|\sqrt{\cos^{2}\frac{x}{2}+\cos^{ 2}\rho\sin^{2}\frac{x}{2}}\,.\end{split} \tag{51}\]
### Cross talk of vertical, tangential and radial polarizations
Special features of the case exactly on resonance (\(\cos\rho=0\)) are noteworthy. Although mathematically the exact resonance is a special case, its properties are instructive and will recur throughout the discussion. In this case the envelope rotation axis \(\vec{m}\) of Eq. (34) is purely radial. Viewed in the co-rotating frame, the vertical polarization cannot rotate into the radial one, which lies along the rotation axis. Indeed, according to Eq. (50), in this case \(p_{\rm r}(x)\) vanishes. In other words, the spectator radial polarization decouples from the vertical one, while the active tangential envelope oscillates with the full amplitude \(p_{\rm c}(0)\). Similarly, the tangential polarization cannot rotate into the radial one. Alternatively formulated, the polarization along the rotation axis \(\vec{m}\) is _immune_ to the RF-driven rotations and is preserved.
This decoupling of both the vertical component from the spectator in-plane component and the active component from the spectator in-plane component is lifted once \(\cos\rho\neq 0\). In the former case, this is clear from Eq. (50). In the latter case, the cross talk of radial and tangential polarizations is given by the matrix elements \(E_{\rm rt}(x)=-E_{\rm tr}(x)\) in Eq. (33). For instance, if \(p_{\rm c}(0)=p_{\rm r}(0)=0\) and \(p_{\rm t}(0)=1\), then
\[p_{\rm r}(x)=\cos\rho\sin x\,p_{\rm t}(0)\,. \tag{52}\]
Vice versa, at \(p_{\rm c}(0)=p_{\rm t}(0)=0\) and \(p_{\rm r}(0)=1\), we find
\[p_{\rm t}(x)=-\cos\rho\sin x\,p_{\rm r}(0)\,. \tag{53}\]
This cross talk is a natural consequence of the vertical component \(\cos\rho\vec{c}\) of the rotation axis \(\vec{m}\) of the envelope.
### Continuous spin rotation by the Wien filter and envelope of in-plane polarization
The result for \(p_{\rm rt}(x)\) has already been given in Eq. (51). The predicted dependence of the spin envelope on the detuning is depicted in Fig. 2 for \(\cos\rho\geq 0\). As a function of the phase \(x\), the envelope \(p_{\rm rt}(x)\) is a periodic function with a period of \(2\pi\), but in order to better demonstrate the periodicity properties of the in-plane polarization, we show the results for \(x\in[0,4\pi]\). We start with the special case of vanishing detuning, _i.e._, with \(\cos\rho=0\) and \(\sin\rho=1\), when we recover the second line of Eq. (50),
\[p_{\rm rt}(x)=2\left|p_{\rm c}(0)\sin\left(\frac{x}{2}\right)\cos\left(\frac{x}{2}\right)\right|\,. \tag{54}\]
In the interval \([0,\,2\pi]\) the envelope has two end-point zeros at \(x_{1}=0\) and \(x_{2}=2\pi\), stemming from \(\sin(x/2)=0\). There is still another zero at midpoint \(x_{3}=\pi\), stemming from \(\cos(x/2)=0\). There are two maxima at \(x_{4}=\pi/2\) and \(x_{5}=\pi/2+\pi\), stemming from \(p^{\prime}_{\rm rt}(x)=|p_{\rm c}(0)|\cos x=0\). The change of sign \(\sin\rho\Leftrightarrow-\sin\rho\) corresponds to the change \(\phi(x)\Leftrightarrow\pi-\phi(x)\).
The walk of these zeros and extrema with \(\rho\) is as follows. The functional form in Eq. (51) retains the end-point zeros at \(\sin(x/2)=0\), _i.e.,_ the \(\rho\)-independent \(x_{1}=0\) and \(x_{2}=2\pi\). However, as soon as \(\cos\rho\neq 0\), the midpoint zero disappears, and one has to look for zeros of the derivative \(S^{\prime}_{\rm rt}=0\), which are roots of the equation
\[\cos\left(\frac{x}{2}\right)\left[1-2\sin^{2}\rho+2\sin^{2}\rho\cos^{2}\left(\frac{x}{2}\right)\right]=0\,. \tag{55}\]
Here \(\cos(x/2)=0\) gives the mid-point extremum at \(x_{3}=\pi\), where
\[p_{\rm rt}(x_{3})=|p_{\rm c}(0)\sin 2\rho|\,\,. \tag{56}\]
The two other extrema are roots of the equation
\[\cos^{2}\left(\frac{x}{2}\right)=1-\frac{1}{2\sin^{2}\rho}\,, \tag{57}\]
which has solutions only at \(\sin^{2}\rho\geq\sfrac{1}{2}\),
\[x_{4,5}(\rho)=\pi\pm 2\arcsin\sqrt{1-\frac{1}{2\sin^{2}\rho}}\,. \tag{58}\]
The separation of these two roots,
\[x_{5}(\rho)-x_{4}(\rho)=4\arcsin\sqrt{1-\frac{1}{2\sin^{2}\rho}} \tag{59}\]
starts at \(\pi\) at \(\sin^{2}\rho=1\) and vanishes at \(\sin^{2}\rho=\sfrac{1}{2}\), when the roots \(x_{4}\) and \(x_{5}\) merge with \(x_{3}=\pi\). Note that prior to this merger, the minimum of the envelope \(p_{\rm rt}(x)=|p_{\rm c}(0)\sin 2\rho|<|p_{\rm c}(0)|\) will be sandwiched between the maxima \(p_{\rm rt}(x_{4,5})=|p_{\rm c}(0)|\), while at still smaller \(\sin^{2}\rho<\sfrac{1}{2}\), the envelope will exhibit a single bump with height \(p_{\rm rt}(x)=|p_{\rm c}(0)\sin 2\rho|\).
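The positions of these extrema follow directly from Eqs. (51) and (58) and are easily confirmed numerically; the detuning angle below is an illustrative choice with \(\sin^{2}\rho>\sfrac{1}{2}\) and \(p_{\rm c}(0)=1\):

```python
import numpy as np

# Numerical check of the extremum position of the in-plane envelope,
# Eqs. (51) and (58), for an illustrative detuning with sin^2(rho) > 1/2.
rho = 1.1
x = np.linspace(1e-4, np.pi, 100001)
p_rt = 2.0 * abs(np.sin(rho)) * np.abs(np.sin(x / 2)) * np.sqrt(
    np.cos(x / 2)**2 + np.cos(rho)**2 * np.sin(x / 2)**2)

x4_numeric = x[np.argmax(p_rt)]
x4_analytic = np.pi - 2.0 * np.arcsin(np.sqrt(1.0 - 1.0 / (2.0 * np.sin(rho)**2)))
print(x4_numeric, x4_analytic)    # the two agree, and max(p_rt) equals 1
```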
### Continuous spin rotation by the Wien filter and phase of in-plane polarization
The expected phase motion for \(\cos\rho>0\) is depicted in Fig. 3 for several values of \(\rho\). According to Eq. (50), in the considered case the radial envelope does not change its sign at all, _i.e._, \(\operatorname{sgn}(p_{\rm r}(x))=+1\), while \(p_{t}(x)\) changes the sign at \(x=\pi\). Still, at \(x\neq\pi\) the phase remains well defined. Making use of \(p_{\rm t}(x)\) from Eq. (50) and \(p_{\rm rt}(x)\) from Eq. (51), we obtain
\[\phi(x)=\arccos\left(\frac{\operatorname{sgn}(\sin x)\operatorname{sgn}(\sin\rho)}{\sqrt{1+\cos^{2}\rho\tan^{2}\frac{x}{2}}}\right)\,. \tag{60}\]
Evidently, the change of the sign, \(\sin\rho\Leftrightarrow-\sin\rho\), entails the change of phase \(\phi(x)\Leftrightarrow\pi-\phi(x)\). We predict \(\cos\phi(x)=0\) and \(\phi(x)=\pi/2\) at \(x\to\pi\), _regardless_ of the detuning angle \(\rho\). The approach to \(\phi(x)=\pi/2\) is singular in the sense that for \(\cos^{2}\rho\ll 1\) it takes place in a very narrow range of \(x\) in the vicinity of \(x=\pi\), which is best seen from
\[\cot\psi(x)=\frac{1}{\cos\rho}\cot\frac{x}{2}\,. \tag{61}\]
One readily finds that at \(x=\pi\), the derivative of the phase equals \(\phi^{\prime}(x)=2/\cos\rho\) which is singular at \(\cos\rho\to 0\), thus the phase motion degenerates into the step function. Still more singular is the case of \(x=2\pi\), when
\[\cos\phi(x)=\operatorname{sgn}(\sin x)\operatorname{sgn}(\sin\rho) \tag{62}\]
and changes sign from \(-1\) for \(x=2\pi-0\) to \(+1\) for \(x=2\pi+0\), _i.e.,_ the envelope phase has a phase jump by \(-\pi\) irrespective of the detuning. Finally, Eq. (60) predicts the slope at \(x=+0\) and \(x=2\pi-0\),
\[\phi^{\prime}(+0)=\phi^{\prime}(2\pi-0)=\phi^{\prime}(2\pi+0)=\frac{1}{2}|\cos \rho|\,. \tag{63}\]
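The small-\(x\) slope of Eq. (63) can be recovered directly from Eq. (60) by finite differences; the detuning angle below is an illustrative value only:

```python
import numpy as np

# Numerical check of the small-x slope of the envelope phase, Eqs. (60), (63).
rho = 0.9                                   # illustrative detuning angle

def phi(x):
    return np.arccos(np.sign(np.sin(x)) * np.sign(np.sin(rho))
                     / np.sqrt(1.0 + np.cos(rho)**2 * np.tan(x / 2)**2))

h = 1.0e-3
slope = (phi(2.0 * h) - phi(h)) / h
print(slope, 0.5 * abs(np.cos(rho)))        # both equal cos(rho)/2 to O(h^2)
```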
### Interplay of detuning and initial phase in the generic three-stage regime
In spin physics experiments on tests of fundamental symmetries such as the search for parity and time-reversal-invariance violating permanent charged particle electric dipole moments [18; 19; 20], of major interest is the signal of spin rotations during stage III, where we make use of the radiofrequency Wien filter starting with in-plane polarization. In principle, alongside the measured spin precession frequency, the polarimetry of the idle spin precession during stage II gives access also to the orientation of the in-plane polarization at the activation of the Wien filter in stage III. The JEDI collaboration demonstrated the continuous retention of the corresponding phase \(\Phi_{\text{in}}\) to an accuracy of \(0.21\,\text{rad}\) [22]. While the proof of principle for the pilot bunch concept consists in the mere observation that the radiofrequency Wien filter does not affect the pilot bunch spins, in the detailed treatment the initial spin phase \(\Phi_{\text{in}}\) becomes another free parameter that has to be determined by fitting the experimental data. The clocks for the in-plane precession phase gain on top of \(\Phi_{\text{in}}\) and for the spin envelope phase \(x\) [Eq. (17)] begin to count when the Wien filter is switched on. The generic solution for the vertical polarization as a function of \(\Phi_{\text{in}}\) is given by Eq. (42).
In the evolution of the horizontal polarization, the dependence on \(\Phi_{\text{in}}\) is much more subtle and deserves a dedicated analysis.
Figure 2: Pattern of the time dependence of the envelope of the horizontal polarization, which evolves from the pure vertical initial polarization \(p_{\text{c}}(0)=1\), under the RF-driven continuous full or partial spin flips for different detunings, as given by Eq. (51). Note that the central zero of \(p_{\text{rt}}(x)\) at \(x=\pi\) and \(x=3\pi\) (full spin flip) occurs exclusively at zero detuning, _i.e._, for \(\delta=0\) or \(\cos^{2}\rho=0\). Within each period, the double hump structure with hump height \(p_{\text{rt}}=1\) persists for \(\cos^{2}\rho<\sfrac{1}{2}\). At even greater detuning, for \(\cos^{2}\rho\geq\sfrac{1}{2}\), \(p_{\text{rt}}(x)\) exhibits a single hump whose height vanishes in the limit \(\rho\to 0\).
#### iv.2.1 Envelope of in-plane polarization
Resorting to the envelope evolution matrix \(\mathbf{E}(x)\) of Eq. (33), we obtain
\[\begin{split} p_{\mathrm{r}}(x)&=E_{\mathrm{rr}}(x)\cos\Phi_{\mathrm{in}}+E_{\mathrm{rt}}(x)\sin\Phi_{\mathrm{in}}\\ &=(\sin^{2}\rho+\cos^{2}\rho\cos x)\cos\Phi_{\mathrm{in}}+\cos\rho\sin\Phi_{\mathrm{in}}\sin x=\sin^{2}\rho\cos\Phi_{\mathrm{in}}+q(\Phi_{\mathrm{in}},\rho)\cos\rho\cos y\,,\\ p_{\mathrm{t}}(x)&=E_{\mathrm{tr}}(x)\cos\Phi_{\mathrm{in}}+E_{\mathrm{tt}}(x)\sin\Phi_{\mathrm{in}}=-\cos\rho\cos\Phi_{\mathrm{in}}\sin x+\sin\Phi_{\mathrm{in}}\cos x=-q(\Phi_{\mathrm{in}},\rho)\sin y\,,\end{split} \tag{64}\]
where \(y=x-\zeta\) [see also Eq. (43)]. The predicted dependence of \(p_{\mathrm{rt}}(x)\) on the initial spin precession phase \(\Phi_{\mathrm{in}}\) is shown in Fig. 4. It is instructive to start the discussion exactly on resonance, _i.e._, when \(\cos\rho=0\) and \(\sin\rho=1\). Under these conditions, we have
\[\begin{split} p_{\mathrm{r}}(x)&=\cos\Phi_{\mathrm{ in}}\,,\\ p_{\mathrm{t}}(x)&=\sin\Phi_{\mathrm{in}}\cos x\,, \\ p_{\mathrm{rt}}(x)&=\sqrt{\cos^{2}\Phi_{\mathrm{in}}+ \sin^{2}\Phi_{\mathrm{in}}\cos^{2}x}\,.\end{split} \tag{65}\]
This result nicely illustrates the emergence of the spectator radial polarization \(p_{\mathrm{r}}\), which is immune to the radiofrequency-driven rotations, and the active tangential polarization \(p_{\mathrm{t}}\), which is a partner to the vertical polarization. The distinctive appearance of a spectator polarization component is an exclusive feature of the case of vanishing detuning with \(\cos\rho=0\). The envelope \(p_{\mathrm{rt}}(x)\) is a smooth function of \(x\) with minima at \(x_{4}=\pi/2\) and \(x_{5}=\pi/2+\pi\), and the maxima, \(p_{\mathrm{rt}}=1\), at \(x_{3}=\pi\) and at the end-points \(x_{1}=0\) and \(x_{2}=2\pi\). These features are evident from Fig. 2, since \(p_{\mathrm{rt}}=(1-p_{c}^{2})^{1/2}\).
We recall that the result from Eq. (51) for the continuous operation of the Wien filter beginning with pure vertical polarization, shown in Fig. 2, is symmetric with respect to the substitution \(x\Leftrightarrow 2\pi-x\). This symmetry is manifestly broken for non-vanishing values of \(\Phi_{\mathrm{in}}\) and \(\cos\rho\) [see Eq. (42)], and we obtain
\[\begin{split}& p_{\mathrm{rt}}^{2}(x)-p_{\mathrm{rt}}^{2}(2\pi-x) =p_{\mathrm{c}}^{2}(2\pi-x)-p_{\mathrm{c}}^{2}(x)\\ &=4\sin\Phi_{\mathrm{in}}\cos\Phi_{\mathrm{in}}\sin^{2}\rho\cos \rho\sin x\,\left(1-\cos x\right).\end{split} \tag{66}\]
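The broken left-right symmetry of Eq. (66) is easy to verify numerically. The following minimal sketch (numpy assumed; the values of \(\rho\) and \(\Phi_{\rm in}\) are arbitrary illustrations) builds \(p_{\rm rt}^{2}(x)\) from the explicit middle expressions of Eq. (64) and checks the asymmetry relation on a grid.

```python
import numpy as np

rho, Phi = np.arccos(0.4), np.pi / 3      # illustrative detuning angle and initial phase
x = np.linspace(0.0, 2.0 * np.pi, 1001)

# explicit middle expressions of Eq. (64)
p_r = (np.sin(rho)**2 + np.cos(rho)**2 * np.cos(x)) * np.cos(Phi) \
      + np.cos(rho) * np.sin(Phi) * np.sin(x)
p_t = -np.cos(rho) * np.cos(Phi) * np.sin(x) + np.sin(Phi) * np.cos(x)
p_rt2 = p_r**2 + p_t**2

# left-right asymmetry relation of Eq. (66)
lhs = p_rt2 - p_rt2[::-1]                 # p_rt^2(x) - p_rt^2(2*pi - x)
rhs = 4.0 * np.sin(Phi) * np.cos(Phi) * np.sin(rho)**2 * np.cos(rho) \
      * np.sin(x) * (1.0 - np.cos(x))
print(np.allclose(lhs, rhs))              # True
```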
Figure 3: Phase motion of the horizontal polarization envelope during the RF-driven continuous spin flips for different detunings, as predicted by Eq. (60) for \(\cos\rho\in[0,1]\). The phase exhibits a jump by \(-\pi\) from \(x=2\pi-0\) to \(2\pi+0\), which repeats itself periodically at any \(x=2\pi M,\text{ where }M=0,\ 1,\ 2,\,3,...\) In the vicinity of the phase jump, the slope \(\phi^{\prime}(2\pi-0)=\phi^{\prime}(2\pi+0)=\frac{1}{2}\cos\rho\). Yet another jump by \(+\pi\) develops at \(x=\pi+2\pi M\), where the slope, \(\phi^{\prime}(x=\pi)=2/\cos\rho\), of the phase becomes singular for \(\cos^{2}\rho\to 0\).
For finite \(\zeta(\Phi_{\rm in},\rho)\), one rather has an invariance of \(p_{\rm rt}(x)\) with respect to the interchange \(x-\zeta\Leftrightarrow 2\pi-(x-\zeta)\) within the shifted symmetric interval \([\zeta,2\pi+\zeta]\) [see the related discussion of Eq. (42)].
#### iv.2.2 Phase of in-plane polarization envelope for pure radial and longitudinal initial polarizations
The motion of the phase \(\phi(x)\) of the envelope \(\vec{p}_{\rm rt}\) is quite sensitive to the initial phase \(\Phi_{\rm in}\) and the detuning angle \(\rho\). It is sufficient to treat the case \(\cos\rho\geq 0\); the extension of the results to \(\cos\rho<0\) is straightforward.
We start from Eq. (64) with the pure radial initial polarization case of \(\Phi_{\rm in}=0\), when \(p_{\rm r}(x)=\sin^{2}\rho+\cos^{2}\rho\cos x\) and \(p_{\rm t}(x)=-\cos\rho\sin x\). The results are shown in Fig. 5. First of all, \(\phi(x)\) is antisymmetric with respect to \(x\Leftrightarrow 2\pi-x\). Second, for all detuning angles we find \(\phi(x)=\pi/2\) at \(x=0,\ \pi,\ 2\pi,\ldots\) Third, \(\cos\phi(x_{1})=-\operatorname{sgn}(\cos\rho\sin x_{1})=\pm 1\), _i.e._, \(\phi_{1}=0,\ \pi\), can be reached only if \(p_{\rm r}(x_{1})=0\), _i.e._, at
\[\cos x_{1}=-\tan^{2}\rho\,, \tag{67}\]
which is only possible for \(\cos^{2}\rho\geq\nicefrac{{1}}{{2}}\).
The phase motion about the pointed tips at \(|\cos\phi(x_{1})|=1\) can be understood as follows. In the vicinity of \(x_{1}\), we have \(p_{r}(x)=-\cos^{2}\rho\sin x_{1}\cdot(x-x_{1})\) and
\[|\cos\phi(x)| =1-\frac{1}{2}[\phi(x)-\phi_{1}]^{2} \tag{68}\] \[=\frac{1}{\sqrt{1+\cos^{2}\rho\ (x-x_{1})^{2}}}\] \[=1-\frac{1}{2}\cos^{2}\rho\ (x-x_{1})^{2}\,,\]
which yields the slope
\[\phi(x)-\phi_{1}=\pm|\cos\rho|\ |x-x_{1}|\,. \tag{69}\]
Note that the magnitude of the slope at the tip, \(|\cos\rho|\), varies from \(1/\sqrt{2}\) to \(1\).
In the opposite case of \(\cos^{2}\rho<\nicefrac{{1}}{{2}}\), the envelope phase span is less than \(\pi\). Indeed, at \(|\cos\rho|\ll 1\) we have
\[\begin{split}\cos\phi(x)&\approx-\cos\rho\sin x\,, \quad\text{and}\\ \phi(x)&\approx\frac{3\pi}{2}+\cos\rho\sin x\,,\end{split} \tag{70}\]
with a phase span of \(\phi_{\rm max}-\phi_{\rm min}\approx 2|\cos\rho|\). For generic \(\cos^{2}\rho<\nicefrac{{1}}{{2}}\), the extremal values of \(\phi(x)\) come from the equation \((\cos\phi(x))^{\prime}=0\), which takes the form
\[\cos x+\sin^{2}\rho\cos^{2}\rho\ (1-\cos x)^{2}=0\,, \tag{71}\]
and yields the root \(\cos x=-\cot^{2}\rho\). The resulting phase span equals
\[\phi_{\rm max}-\phi_{\rm min}=2\arcsin|\cot\rho|\,. \tag{72}\]
Figure 4: Pattern of the \(x\)-dependence of the horizontal polarization envelope \(p_{\rm rt}\), which evolves from the initial horizontal polarization with different initial spin precession phases \(\Phi_{\rm in}\). Within the interval \([0,2\pi]\), the left-right symmetry of the envelope polarization at \(\Phi_{\rm in}=0,\,\pi\) is broken at \(0<\Phi_{\rm in}<\pi\) [see Eq. (66)]. However, the left-right symmetry is recovered within the symmetric period \([\zeta,2\pi+\zeta]\), see the discussion of symmetry properties of Eq. (42).
Finally, note how, as the boundary between the two regimes is approached, \(\cos^{2}\rho\to\nicefrac{{1}}{{2}}\), the phase motion evolves into a phase jump.
The next interesting case we would like to discuss is the pure tangential initial polarization, characterized by
\[\begin{split}\Phi_{\rm in}&=\pi/2\,,\\ p_{\rm r}(x)&=\cos\rho\sin x\,,\\ p_{\rm t}(x)&=\cos x\,,\\ p_{\rm rt}&=\sqrt{\cos^{2}\rho\sin^{2}x+\cos^{2}x} \,,\end{split} \tag{73}\]
so that
\[\cos\phi(x)=\frac{\text{sgn}(\cos x)}{\sqrt{1+\cos^{2}\rho\tan^{2}x}}\,. \tag{74}\]
The corresponding results are presented in Fig. 6. The phase \(\phi(x)\) is symmetric with respect to \(x\Leftrightarrow 2\pi-x\) and the phase swing \(\phi_{\rm max}-\phi_{\rm min}=\pi\) for all \(\rho\). It exhibits pointed tips at \(x=x_{1}\), when \(\tan^{2}x_{1}=0\), _i.e.,_ when \(\phi(x_{1})=0\) for \(x_{1}=0\), \(2\pi,...\) and when \(\phi(x_{1})=\pi\) for \(x_{1}=\pi\), \(3\pi,...\) In the vicinity of the pointed tip at \(x=x_{1}\), the phase motion is given by
\[\phi_{1}-\phi(x)=\pm|\cos\rho|\cdot|x-x_{1}|\,, \tag{75}\]
yielding exactly the same slope as in Eq. (69). The only distinction from the case of \(\Phi_{\rm in}=0\) is that here \(|\cos\rho|\leq 1/\sqrt{2}\). Note that \(\phi(\pi/2)=\pi/2\), and at \(|\cos\rho|\ll 1\), the phase \(\phi(x)\) passes \(\pi/2\) steeply in the narrow range of \(|x-\pi/2|<|\cos\rho|\). This steep variation of \(\phi(x)\) about \(x=\pi/2\) tends to a step function as \(|\cos\rho|\to 0\).
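As a sanity check of the tip slope in Eq. (75), the short sketch below evaluates the envelope phase of Eq. (74) near \(x=0\); numpy is assumed, and \(\cos\rho=0.5\) is an arbitrary illustrative choice.

```python
import numpy as np

def phi(x, rho):
    """Envelope phase for pure tangential initial polarization, Eq. (74)."""
    return np.arccos(np.sign(np.cos(x)) /
                     np.sqrt(1.0 + np.cos(rho)**2 * np.tan(x)**2))

rho = np.arccos(0.5)
h = 1.0e-4
slope = (phi(2 * h, rho) - phi(h, rho)) / h
print(slope, abs(np.cos(rho)))   # tip slope of Eq. (75): ~0.5 in both cases
```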
iv.2.3 Evolution of the phase of in-plane polarization envelope for generic orientation of the initial polarization
The analysis is based on Eqs. (64) and (48). The salient features of \(\phi(x)\) for generic \(\Phi_{\rm in}\) are illustrated in Fig. 7 for the example of \(\Phi_{\rm in}=\pi/4\). To start with, at \(x=0\) and \(x=2\pi\), Eq. (64) implies that
\[\phi(0)=\phi(2\pi)=\frac{\pi}{2}-\Phi_{\rm in}\,, \tag{76}\]
_independent_ of the detuning parameter \(\rho\).
The subsequent analytic discussion is most conveniently performed in terms of the variables \(y=x-\zeta(\Phi_{\rm in},\rho)\) and \(q(\Phi_{\rm in},\rho)\) [see Eqs. (43) and (64)]. A major finding is that the same universal slope at the tip, \(\pm|\cos\rho|\), persists for all \(\Phi_{\rm in}\). Indeed, according to Eq. (64), we have \(p_{\rm r}(x)=0\) at
\[\cos y_{1}=-\frac{\sin^{2}\rho\cos\Phi_{\rm in}}{q(\Phi_{\rm in},\rho)\cos\rho }\,. \tag{77}\]
This solution is only possible if
\[\cos^{2}\rho\geq\cos^{2}\rho_{\rm m}=\frac{\cos^{2}\Phi_{\rm in}}{1+\cos^{2} \Phi_{\rm in}}\,, \tag{78}\]
where \(\rho_{\rm m}\) denotes the boundary detuning angle for which the solution (77) still exists.
In close similarity to the case \(\Phi_{\rm in}=0\), shown in Fig. 5, the phase \(\phi(x)\) exhibits pointed tips \(x_{1}=y_{1}+\zeta\). In the vicinity of the tips we have
\[\begin{split} p_{\rm r}(x)&=-q(\Phi_{\rm in},\rho)\cos\rho\sin y_{1}\cdot(x-x_{1})\\ &=p_{\rm t}(x_{1})\cos\rho\cdot(x-x_{1})\,,\end{split} \tag{79}\]
which entails
\[\cos\phi(x)=\frac{\operatorname{sgn}(\cos x)}{\sqrt{1+\cos^{2}\rho\,\left(x-x_{1} \right)^{2}}}\,, \tag{80}\]
and we recovered Eq. (68) and the familiar slope \(\pm\cos\rho\) at the pointed tips.
In the evaluation of the phase span at
\[\cos^{2}\rho\leq\cos^{2}\rho_{\text{m}}=\frac{\cos^{2}\Phi_{\text{in}}}{1+\cos ^{2}\Phi_{\text{in}}}\,, \tag{81}\]
we follow the procedure developed for the case of \(\Phi_{\text{in}}=0\). The phase extrema are roots of the equation \((\cos\phi(x_{m}))^{\prime}=0\), which takes the form [here below \(q=q(\Phi_{\text{in}},\rho)\)]
\[\cos^{2}y-2w\cos y+1=0\,, \tag{82}\]
with the roots
\[\cos y_{\pm}=w\pm\sqrt{w^{2}-1}\,, \tag{83}\]
where
\[w=\frac{\sin^{2}\rho(q^{2}+\cos^{2}\rho\cos^{2}\Phi_{\text{in}})-1}{2q\sin^{ 2}\rho\cos\Phi_{\text{in}}}\,. \tag{84}\]
The solutions exist for \(w^{2}\geq 1\). It is easy to check that the boundary case, \(w=1\), corresponds to the exact equality in the condition (81). Subject to the constraint \(|\cos y_{\pm}|\leq 1\), the admissible roots are \(\cos y_{-}\) at \(w\geq 1\), and \(\cos y_{+}\) at \(w\leq-1\), and the two branches are related by
\[\cos y_{-}(w)=-\cos y_{+}(-w)\,. \tag{85}\]
The limit of \(w^{2}\gg 1\) corresponds to \(|\cos\rho|\ll|\tan\Phi_{\text{in}}|\), when
\[q^{2} \to\sin^{2}\Phi_{\text{in}}\,,\] \[\cos y_{\pm} \to\frac{1}{2w}=-\frac{|\sin\Phi_{\text{in}}|}{\cos\Phi_{\text{ in}}}\cos\rho\,, \tag{86}\] \[\zeta \to\frac{\pi}{2}\operatorname{sgn}(\sin\Phi_{\text{in}})\,.\]
Now we focus on the boundary case \(\cos\rho=\cos\rho_{\text{m}}>0\). According to Eq. (64), \(p_{\text{t}}(x)\) changes sign at \(y=\pi\), and we encounter the by now familiar phase jump depicted in Fig. 5. Upon some algebra, we find
\[\cos\zeta(\Phi_{\text{in}},\rho_{m})=\cos^{2}\Phi_{\text{in}}\,, \tag{87}\]
which in our case \(\Phi_{\text{in}}=\pi/4\) entails \(\zeta(\Phi_{\text{in}},\rho_{m})=\pi/3\), and we predict
\[x_{m}=\pi+\arccos\left(\cos^{2}(\Phi_{\text{in}})\right)=\frac{4}{3}\pi\,, \tag{88}\]
in perfect agreement with the numerical results shown in Fig. 7.
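The prediction of Eq. (88) can also be reproduced numerically from the radial envelope of Eq. (64) alone: at the boundary detuning of Eq. (78), \(p_{\rm r}(x)\) just touches zero at the tip location. The minimal sketch below (numpy assumed) does this for \(\Phi_{\rm in}=\pi/4\).

```python
import numpy as np

Phi = np.pi / 4
cos2_rho_m = np.cos(Phi)**2 / (1.0 + np.cos(Phi)**2)   # Eq. (78): boundary detuning, = 1/3
rho = np.arccos(np.sqrt(cos2_rho_m))

x = np.linspace(np.pi, 2.0 * np.pi, 200001)
# radial envelope from the explicit form of Eq. (64)
p_r = (np.sin(rho)**2 + np.cos(rho)**2 * np.cos(x)) * np.cos(Phi) \
      + np.cos(rho) * np.sin(Phi) * np.sin(x)

i = np.argmin(np.abs(p_r))
print(x[i], 4.0 * np.pi / 3.0)    # tip location of Eq. (88): both ~4.18879
print(p_r[i])                     # ~0: p_r just touches zero in the boundary case
```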
As we observed in Sec. IV.6, a finite initial phase \(\Phi_{\text{in}}\) introduces an asymmetry with respect to \(x\Leftrightarrow 2\pi-x\). The symmetry is restored in the exceptional case of \(\cos\rho=0\) [see Eq. (66)], when we predict \(\phi(x=\pi)=3\pi/4\) in agreement with the numerical results shown in Fig. 7.
Finally, we consider the case of \(\Phi_{\text{in}}=-\pi/4\). The corresponding phase motion is shown in Fig. 8. First, according to Eq. (76), we get
\[\phi(0)=\phi(2\pi)=\frac{\pi}{2}-\Phi_{\text{in}}=\frac{3}{4}\pi \tag{89}\]
Figure 6: Phase motion of the horizontal polarization envelope for \(\Phi_{\text{in}}=\pi/2\) as predicted by Eq. (48). In the limit of \(\cos\rho\to 0\), the phase motion evolves into the phase jumps and the central bumps at \(x=\pi\) and \(3\pi\) exhibit a rectangular shape.
Second, according to Eq. (43), now we must take a branch \(\zeta=-\arccos\left(\cos^{2}(\Phi_{\text{in}})\right)\). As far as the \(x\)-dependence of the phase \(\phi(x)\) is concerned, a chain of substitutions
\[\begin{split} y=x-\zeta|_{\pi/2}&\Rightarrow x-\zeta|_{-\pi/2}=x+\zeta|_{\pi/2}\\ &\Rightarrow\tilde{y}=-[(-x)-\zeta|_{\pi/2}]\,,\end{split} \tag{90}\]
amounts to the inversion of the \(x\)-axis accompanied by the shift by \(2\pi\), and simultaneous phase inversion \(\phi(x)\Rightarrow\pi-\phi(x)\).
We found a very rich pattern of the in-plane-envelope phase motion depending on the detuning and the initial spin phase. Still, there are certain universal features of the graphs shown in Figs. 5, 6, 7 and 8 which are worthy of emphasis. Irrespective of \(\Phi_{\text{in}}\), in all graphs the envelope phase exhibits the phase jump by \(\pi\) with the known \(\Phi_{\text{in}}\)-dependence of the location of the jump. The same is true for the continuous spin rotation [see Fig. 3], although this case has certain exceptional features to be discussed below. For non-vanishing detuning, \(\phi(x)\) exhibits pointed tips with a universal slope equal to \(\pm\cos\rho\) at the tip, irrespective of the initial spin phase, while in Fig. 3 the corresponding slope equals \(|\cos\rho|/2\). Finally, the phase continuity condition \(\phi(x=0)=\phi(x=2\pi)\) holds for all \(\Phi_{\text{in}}\) with the detuning-independent \(\phi(0)\), again with the exception of Fig. 3. Regarding the pointed tips, according to Eq. (78), they persist for a finite range of detunings, apart from the exceptional cases \(\Phi_{\text{in}}=\pm\pi/2\), when the tips for all \(\rho\) share identical locations at \(x=0,\pi,2\pi,...\)
The WF-driven _continuous_ evolution from the pure vertical initial polarization is distinct from the generic three-stage evolution used in actual JEDI experiments. As explained in Sec. IV.2, here the Wien filter operates in the capacity of the spin rotator in stage I and continues on into stage III at one and the same detuning angle \(\rho\). Specifically, the rotation of the polarization into the horizontal plane happens at \(\cos x_{0}(\rho)=-\cot^{2}\rho\) [see Eq. (39)]. In the spirit of the generic three-stage process, this instant can be viewed as the start of stage III with the initial phase \(\Phi_{\text{in}}\) defined by
\[\begin{split}\cos\Phi_{\text{in}}&=p_{\text{r}}(x_ {0})=\cot\rho\,,\\ \sin\Phi_{\text{in}}&=p_{\text{t}}(x_{0})=\text{sgn} (\sin\rho)\sqrt{1-\cot^{2}\rho}\,.\end{split} \tag{91}\]
Our convention for stage III is that the envelope evolution phase starts with \(x=0\). Evidently, the further evolution of \(p_{\text{r,t}}(x)\) will still be described by Eq. (50), subject to the trivial substitution \(x\to x+x_{0}(\rho)\). This way, in Fig. 3 we lumped together the detuning dependence of \(\phi(x)\) for a very special subset of initial phases \(\Phi_{\text{in}}(\rho)\), as opposed to the \(\rho\)-independent initial phase in the other cases. This distinctive feature of the continuous evolution is behind the \(\rho\)-independent phase jump at \(0\), \(2\pi\), \(4\pi\),..., the degeneracy of the tip and jump locations, and the phase slope at the tip, \(\frac{1}{2}\cos\rho\), which is half of that in the generic case.
The above analysis suggests that the phase of the envelope of the horizontal polarization has a great potential for the diagnostics of the RF-driven spin dynamics (see also early considerations in Ref. [16]). We demonstrated a remarkably strong sensitivity of the phase motion to the initial phase of the horizontal spins and to the detuning of the spin precession frequency.
Figure 7: Phase motion of the horizontal polarization envelope for \(\Phi_{\text{in}}=\pi/4\), as predicted by Eq. (48). For \(\cos^{2}\rho\geq\cos^{2}\rho_{m}=\nicefrac{{1}}{{3}}\) [see Eq. (81)], the pattern of the phase motion resembles that for \(\Phi_{\text{in}}=0\), depicted in Fig. 5. The phase jump for \(\cos^{2}\rho=\cos^{2}\rho_{m}\) is located at \(x=x_{m}=4\pi/3\), as predicted by Eq. (88). In contrast to the case of \(\Phi_{\text{in}}=0\) in Fig. 5, the phase motion for \(\cos^{2}\rho<\cos^{2}\rho_{m}\) has no symmetry center.
This phase has so far remained an unexplored feature of the RF-driven spin dynamics in storage rings, and we make the point that variations of this phase with respect to time may prove to be a good indicator of the stability of the detuning during the cycle, or of the lack or presence of unwanted phase walks.
## V Spin decoherence incorporated
### Decoherence through feedback to compensate for spin precession walk
As mentioned in Sec. I, the observed idle spin precession phase walk during the feedback (fb) time interval \(t_{\rm fb}=(5-10)\,\)s on the scale of \(\sigma_{\rm fb}\approx 0.2\,\)rad corresponds to a detuning of the spin precession on the rms scale of \(\Delta f_{\rm s}^{\rm(fb)}=\cos\rho_{\rm fb}f_{\rm SF}\approx 5\,\)mHz, where the perturbative parameter in the problem is
\[\cos\rho_{\rm fb}=\frac{\sigma_{\rm fb}}{2\pi f_{\rm SF}t_{\rm fb}}\,. \tag{92}\]
When the ring instabilities are slow on the time scale \(t_{\rm fb}\), the smooth spin phase walk can be approximated by constant detuning. Then the spin envelope evolution can be approximated by Eq. (33) with the spin-flip tune of Eq. (30):
\[\nu_{\rm SF}=\nu_{\rm SF}^{0}\left(1+\frac{1}{2}\cos^{2}\rho_{\rm fb}\right)\,. \tag{93}\]
To set the ballpark, for \(\sigma_{\rm fb}=0.2\), \(t_{\rm fb}=10\,\)s and \(f_{\rm SF}=80\,\)mHz as in the pilot bunch experiment [17], we obtain \(\cos^{2}\rho_{\rm fb}=0.0016\), but this parameter becomes as large as \(0.1\) for \(f_{\rm SF}=10\,\)mHz.
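For convenience, the ballpark figures quoted above follow directly from Eq. (92); a minimal sketch (numpy assumed, with the pilot-bunch-like parameters quoted in the text):

```python
import numpy as np

def cos_rho_fb(sigma_fb, f_SF, t_fb):
    """Perturbative feedback parameter of Eq. (92)."""
    return sigma_fb / (2.0 * np.pi * f_SF * t_fb)

sigma_fb, t_fb = 0.2, 10.0
for f_SF in (0.080, 0.010):                      # spin-flip frequencies of 80 mHz and 10 mHz
    print(f_SF, cos_rho_fb(sigma_fb, f_SF, t_fb) ** 2)
# -> cos^2(rho_fb) ~ 0.0016 at 80 mHz and ~0.1 at 10 mHz, as quoted above
```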
Qualitatively, the feedback follows the windshield-wiper pattern, which can be cast into a toy model of consecutive spin envelope rotations,
\[{\bf E}^{\rm(fb)}(2x_{\rm fb})={\bf E}(-\cos\rho_{\rm fb},x_{\rm fb}){\bf E}( \cos\rho_{\rm fb},x_{\rm fb}) \tag{94}\]
where \(x_{\rm fb}\) is the SF phase acquired per feedback period \(t_{\rm fb}\), and we show explicitly the dependence on \(\cos\rho\) in the SF matrix of Eq. (33). Here, the first envelope transfer matrix \({\bf E}(\cos\rho_{\rm fb},x_{\rm fb})\) parameterizes the experimentally measured spin phase walk in terms of the detuning \(\Delta f_{\rm s}^{\rm(fb)}=\cos\rho_{\rm fb}f_{\rm SF}\). In order to compensate the acquired relative phase walk during the next period \(t_{\rm fb}\), the Wien filter is operated at a frequency corrected by \(2\Delta f_{\rm s}^{\rm(fb)}\), _i.e._, with the flipped sign of the detuning, which is modeled by \({\bf E}(-\cos\rho_{\rm fb},x_{\rm fb})\). In the limit of vanishing spin walk, \({\bf E}^{\rm(fb)}(2x_{\rm fb})={\bf E}(0,2x_{\rm fb})\), and we define the feedback matrix
\[{\bf R}_{\rm fb}={\bf E}^{\rm(fb)}(2x_{\rm fb}){\bf E}^{-1}(0,x_{\rm fb})\,. \tag{95}\]
The corresponding feedback-corrected envelope evolution matrix takes the familiar stroboscopic form
\[\vec{p}(2(k+1)x_{\rm fb})={\bf R}_{\rm fb}{\bf E}(0,x_{\rm fb})\vec{p}(2kx_{ \rm fb})\,. \tag{96}\]
Note that in our toy model, this matrix \({\bf R}_{\rm fb}\) is time independent. We skip the lengthy derivation of \({\bf E}^{\rm(fb)}(2x_{\rm fb})\)
and the corresponding BK averaging and give the behavior of the resulting SF matrix for large \(k\),
\[\mathbf{E}^{(\mathrm{fb})}(x)=\begin{pmatrix}\exp(-2\Gamma_{\mathrm{fb}}x)&0&0\\ 0&\exp(-\Gamma_{\mathrm{fb}}x)\cos x&-\exp(-\Gamma_{\mathrm{fb}}x)\sin x\\ 0&\exp(-\Gamma_{\mathrm{fb}}x)\sin x&\exp(-\Gamma_{\mathrm{fb}}x)\cos x\\ \end{pmatrix}\,, \tag{97}\]
which supports the spectator radial polarization.
The spin precession walk depolarizes the vertical polarization with the lifetime \(\tau_{\mathrm{fb}}\) given by
\[\frac{1}{\tau_{\mathrm{fb}}}=2\pi\Gamma_{\mathrm{fb}}f_{\mathrm{SF}}=\frac{ \cos^{2}\rho_{\mathrm{fb}}(1-\cos x_{\mathrm{fb}})^{2}}{t_{\mathrm{fb}}}\,, \tag{98}\]
while the spectator radial in-plane polarization depolarizes twice as fast. The spin decoherence time for the active in-plane polarization, \(\tau_{\mathrm{SCT}}\), is equal to \(\tau_{\mathrm{fb}}\). Indeed, the detuning of the spin precession does not lead to a depolarization of the vertically oriented spins (see the related discussion below in Sec. V.3). The spin-flip tune acquires two corrections:
\[\nu_{\mathrm{SF}}=\nu_{\mathrm{SF}}^{0}\left[1+\frac{1}{2}\cos^{2}\rho_{\mathrm{ fb}}+\frac{\sin x_{\mathrm{fb}}}{2x_{\mathrm{fb}}}\cos^{2}\rho_{\mathrm{ fb}}(2-\cos\rho_{\mathrm{fb}})\right] \tag{99}\]
The first correction stems from Eq. (93), while the second one derives from spin-flip rotations during the feedback periods. The corresponding SF phase is given by \(x=2\pi\nu_{\mathrm{SF}}n\). The above toy-model corrections to the spin tune, as well as the rate of depolarization, must be regarded as gross estimations. Nevertheless, they are a good example of how the feedback to maintain phase locking between the spin precession and Wien-filter phases has a non-vanishing influence on the spin-flip dynamics. For instance, if taken at face value, for the conditions of the pilot bunch experiment and the above-given feedback parameters (\(\sigma_{\mathrm{fb}}=0.2\), \(t_{\mathrm{fb}}=10\,\mathrm{s}\), \(f_{\mathrm{SF}}=80\,\mathrm{mHz}\)), Eq. (98) predicts \(\tau_{\mathrm{fb}}\approx 10^{4}\,\mathrm{s}\), while at \(x_{\mathrm{fb}}<1\), it predicts
\[\tau_{\mathrm{fb}}=\frac{t_{\mathrm{fb}}}{\sigma_{\mathrm{fb}}{}^{2}}\approx 6 00\,\mathrm{s}\,. \tag{100}\]
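Continuing the numerical estimate, the sketch below evaluates Eq. (98) for the quoted feedback parameters and reproduces the \(\tau_{\rm fb}\approx 10^{4}\,\)s ballpark; the parameter values are those quoted in the text, and numpy is assumed.

```python
import numpy as np

sigma_fb, t_fb, f_SF = 0.2, 10.0, 0.080              # feedback parameters quoted in the text
cos_rho_fb = sigma_fb / (2.0 * np.pi * f_SF * t_fb)  # Eq. (92)
x_fb = 2.0 * np.pi * f_SF * t_fb                     # SF phase gained per feedback period

tau_fb = t_fb / (cos_rho_fb**2 * (1.0 - np.cos(x_fb))**2)   # inverse of Eq. (98)
print(tau_fb)            # ~1.3e4 s, i.e. the 10^4 s ballpark quoted above
```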
### Recovering the spectator polarization
As a prelude to further discussion of the spin decoherence effects, we observe that the envelope evolution matrix in Eq. (33) can be cast in the form
\[\mathbf{E}(x) =\begin{pmatrix}\sin^{2}\rho+\cos^{2}\rho\cos x&\cos\rho\sin\rho( 1-\cos x)&\cos\rho\sin x\\ \cos\rho\sin\rho(1-\cos x)&\cos^{2}\rho+\sin^{2}\rho\cos x&-\sin\rho\sin x\\ -\cos\rho\sin x&\sin\rho\sin x&\cos x\\ \end{pmatrix} \tag{101}\] \[=\begin{pmatrix}\sin\rho&-\cos\rho&0\\ \cos\rho&\sin\rho&0\\ 0&0&1\\ \end{pmatrix}\cdot\begin{pmatrix}1&0&0\\ 0&\cos x&-\sin x\\ 0&\sin x&\cos x\\ \end{pmatrix}\cdot\begin{pmatrix}\sin\rho&\cos\rho&0\\ -\cos\rho&\sin\rho&0\\ 0&0&1\\ \end{pmatrix}\,,\]
which amounts to the rotation of coordinates such that the vector \(\vec{m}\) of Eq. (34) now plays the role of \(\vec{c}\) in the case of idle precessions. In this new reference frame, the matrix in Eq. (33) stems from the initial block-diagonal matrix \(\mathbf{E}_{0}(x)\) of Eq. (16), which features the spectator polarization. This observation serves as crucial guidance to link spin evolution to decoherence effects.
As a matter of fact, the presence of the hidden spectator component could have been directly guessed from the original envelope rotation matrix of Eq. (33). Indeed, besides the manifestly RF-driven terms \(\propto\sin x\) and \(\propto\cos x\), the four matrix elements of \(\mathbf{E}(x)\) do contain the non-rotating components: \(\sin^{2}\rho\) in \(E_{\mathrm{rr}}(x)\), \(\cos^{2}\rho\) in \(E_{\mathrm{cc}}(x)\), and \(\cos\rho\sin\rho\) in \(E_{\mathrm{rc}}(x)\) and \(E_{\mathrm{cr}}(x)\).
### Ansatz of exponential decoherence of the in-plane polarization
#### v.3.1 Damped spin rotations
The JEDI studies of spin decoherence have revealed [29] an enhancement of the spin-coherence time upon fine tuning of families of sextupole magnets to zero chromaticity, which reduces the spread of spin tunes in the beam caused by orbit lengthening due to betatron oscillations [8]. In the spirit of the Bloch approach [30], we present here an ad hoc treatment of the residual spin decoherence in terms of the exponential attenuation of the in-plane polarization and preservation of the vertical polarization in the idle precession regime.
Correspondingly, the master equation (2) will be modified to yield
\[\vec{S}(n)=\mathbf{R}_{\mathrm{WF}}(n)\mathbf{R}_{\Gamma}\mathbf{R}_{\mathrm{c}}( \theta_{\mathrm{WF}})\vec{S}(n-1)\,, \tag{102}\]
where
\[\mathbf{R}_{\Gamma}=\begin{pmatrix}1-\Gamma&0&0\\ 0&1&0\\ 0&0&1-\Gamma\end{pmatrix}=\mathbf{1}+\mathbf{W}_{\Gamma} \tag{103}\]
describes the attenuation per turn, where in terms of the spin coherence time \(\tau_{\mathrm{SCT}}\), \(\Gamma\) is given by
\[\Gamma=\frac{1}{f_{\mathrm{c}}\tau_{\mathrm{SCT}}}\,. \tag{104}\]
We shall also use the small decoherence parameter,
\[Q=\frac{\Gamma}{4\pi\nu_{\mathrm{SF}}}\,, \tag{105}\]
which is defined such that \(\Gamma n=2Qx\).
#### vi.2.2 Sequential Bogoliubov-Krylov averaging
Anticipating the sequential BK averaging, we seek for a solution of the master equation (102) of the form
\[\vec{S}(n)=\mathbf{R}_{\mathrm{c}}(n\theta_{\mathrm{WF}})\mathbf{E}_{0}(n) \vec{g}(n-1)\,, \tag{106}\]
so that \(\vec{g}(n)\) embodies the impact of the spin decoherence on the earlier defined spin envelope: \(\vec{p}(n)=\mathbf{E}_{0}(n)\vec{g}(n)\). Then, the master equation for \(\vec{g}(n)\) reads
\[\vec{g}(n)=\mathbf{E}_{0}^{-1}(n)\mathbf{R}_{\mathrm{c}}^{-1}(n\theta_{ \mathrm{WF}})\mathbf{R}_{\mathrm{WF}}(n)\mathbf{R}_{\mathrm{c}}(n\theta_{ \mathrm{WF}})\mathbf{R}_{\Gamma}\mathbf{E}_{0}(n-1)\vec{g}(n-1)\,. \tag{107}\]
The first stage of the BK averaging over spin precession yields
\[\left\langle\mathbf{R}_{\mathrm{c}}^{-1}(n\theta_{\mathrm{WF}})\mathbf{R}_{ \mathrm{WF}}(n)\mathbf{R}_{\mathrm{c}}(n\theta_{\mathrm{WF}})\right\rangle= \mathbf{E}_{0}(1)\,. \tag{108}\]
Next we perform the BK averaging over spin flips which are fast compared to the spin damping,
\[\mathbf{U}_{\Gamma}=\left\langle\mathbf{E}_{0}^{-1}(n-1)\mathbf{W}_{\Gamma} \mathbf{E}_{0}(n-1)\right\rangle=-\Gamma\left\langle\begin{pmatrix}1&0&0\\ 0&\sin^{2}x&0\\ 0&0&\cos^{2}x\end{pmatrix}\right\rangle=-\Gamma\begin{pmatrix}1&0&0\\ 0&\frac{1}{2}&0\\ 0&0&\frac{1}{2}\end{pmatrix}\,. \tag{109}\]
The corresponding solution of Eq. (107) is given by
\[\vec{g}(n)=\mathbf{E}_{\Gamma}(n)\vec{p}(0)=\exp(\mathbf{U}_{\Gamma}n)\vec{ p}(0)\,, \tag{110}\]
with
\[\mathbf{E}_{\Gamma}(x)=\begin{pmatrix}\exp(-2Qx)&0&0\\ 0&\exp(-Qx)&0\\ 0&0&\exp(-Qx)\end{pmatrix}\,. \tag{111}\]
While the idly precessing spectator component decoheres \(\propto\exp(-2Qx)\), the vertical and the in-plane active polarizations decohere at half this rate, \(\propto\exp(-Qx)\). Indeed, the polarization decoheres when it is in the \(rt\)-plane, while the attenuation of the upward or downward polarization is negligibly weak on the time scale of \(\tau_{\mathrm{SCT}}\)[31], see the related discussion of Eq. (97) in Sec. V.1. The corresponding damped envelope evolution reads \(\vec{p}(x)=\mathbf{E}_{\mathrm{D}}(x)\vec{p}(0)\) with the SF matrix
\[\mathbf{E}_{\mathrm{D}}(x)=\mathbf{E}_{0}(x)\mathbf{E}_{\Gamma}(x)=\begin{pmatrix} \exp(-2Qx)&0&0\\ 0&\exp(-Qx)\cos x&-\exp(-Qx)\sin x\\ 0&\exp(-Qx)\sin x&\exp(-Qx)\cos x\end{pmatrix}\,, \tag{112}\]
which replaces \(\mathbf{E}_{0}(x)\) in Eq. (101) with the result
\[\mathbf{E}_{\mathrm{exp}}(x)=\begin{pmatrix}e^{-2Qx}\sin^{2}\rho+e^{-Qx}\cos^{2}\rho\cos x&\cos\rho\sin\rho(e^{-2Qx}-e^{-Qx}\cos x)&e^{-Qx}\cos\rho\sin x\\ \cos\rho\sin\rho(e^{-2Qx}-e^{-Qx}\cos x)&e^{-2Qx}\cos^{2}\rho+e^{-Qx}\sin^{2}\rho\cos x&-e^{-Qx}\sin\rho\sin x\\ -e^{-Qx}\cos\rho\sin x&e^{-Qx}\sin\rho\sin x&e^{-Qx}\cos x\end{pmatrix}\,. \tag{113}\]
In this purely phenomenological approach, the attenuation does not affect the SF tune [32]. A treatment within
this exponential decoherence model of the experimental results of the pilot bunch experiment is reported in Ref. [17].
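The structure of Eqs. (101), (112) and (113) can be checked mechanically by building \(\mathbf{E}_{\rm exp}(x)\) as the rotated, damped block matrix and verifying that the \(Q\to 0\) limit reproduces the undamped matrix of Eq. (101). A minimal sketch (numpy assumed; the values of \(x\) and \(\rho\) are arbitrary):

```python
import numpy as np

def rot(rho):
    """Coordinate rotation entering the decomposition of Eq. (101)."""
    c, s = np.cos(rho), np.sin(rho)
    return np.array([[s, -c, 0.0], [c, s, 0.0], [0.0, 0.0, 1.0]])

def E_D(x, Q):
    """Damped block-diagonal envelope matrix of Eq. (112)."""
    d1, d2 = np.exp(-2.0 * Q * x), np.exp(-Q * x)
    c, s = np.cos(x), np.sin(x)
    return np.array([[d1, 0.0, 0.0], [0.0, d2 * c, -d2 * s], [0.0, d2 * s, d2 * c]])

def E_exp(x, rho, Q):
    """Exponential-decoherence envelope matrix of Eq. (113), built via Eq. (101)."""
    R = rot(rho)
    return R @ E_D(x, Q) @ R.T

# Consistency check: the Q -> 0 limit must reproduce the undamped matrix of Eq. (101).
x, rho = 1.7, 0.9
c, s, cx, sx = np.cos(rho), np.sin(rho), np.cos(x), np.sin(x)
E_undamped = np.array([[s**2 + c**2 * cx, c * s * (1 - cx),      c * sx],
                       [c * s * (1 - cx), c**2 + s**2 * cx,     -s * sx],
                       [-c * sx,          s * sx,                cx    ]])
print(np.allclose(E_exp(x, rho, 0.0), E_undamped))    # True
```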
### Spin decoherence by synchrotron motion
#### vi.4.1 Spread of synchrotron oscillation amplitudes
So far, we have considered only the central particles in the bunch. The synchrotron oscillations (SO) with frequency \(f_{\rm sy}\) modulate the particle momentum and the spin tune, and are endemic in storage rings. The emerging oscillating detuning between the Wien filter and the spin precession is a well-defined dynamical mechanism of spin decoherence, and here we treat it as the leading one, supposing that the betatron oscillation effects have been taken care of by fine tuning of the sextupole families. We follow the technique of an earlier study [33] and extend those considerations.
The oscillations of the particles around the center of the bunch can be evaluated using the time distribution of the events recorded in the internal polarimeter. Following Ref. [17], it is convenient to represent the longitudinal profile of the bunch in terms of a fractional cyclotron phase \(\phi=\phi_{\rm c}-2\pi n\) such that \(\phi\in[0,2\pi]\). In the further discussion, the synchrotron motion for an individual particle is defined with respect to the center of the bunch, \(\phi=a\cos(2\pi\nu_{\rm sy}f_{\rm c}t+\lambda)\), where \(\nu_{\rm sy}=f_{\rm sy}/f_{\rm c}\) is the synchrotron tune and \(\lambda\in[0,2\pi]\) is the individual particle's random phase.
The one-particle contribution to the longitudinal density of the bunch \(N(\phi)\) is inversely proportional to the SO velocity, so that the density of the bunch is
\[N(\phi)=\frac{1}{\pi}\int_{\phi}^{\infty}\frac{{\rm d}aF(a)}{\sqrt{a^{2}-\phi ^{2}}}\,. \tag{114}\]
Clearly, for large-\(\phi\) the bunch density receives contributions only from particles with synchrotron amplitudes \(a>\phi\). Now we observe that Eq. (114) assumes the form of the Abel transform with the solution for the synchrotron amplitude distribution
\[F(a)=-2a\int_{a}^{\infty}\frac{{\rm d}\phi N^{\prime}(\phi)}{\sqrt{\phi^{2}-a ^{2}}}\,. \tag{115}\]
Using the Gaussian approximation,
\[N(\phi)\propto\exp(-\phi^{2}/2\sigma_{\rm sy}^{2})\,, \tag{116}\]
which represents well the experimentally observed longitudinal profile of the bunch [17], one easily finds
\[F(a)=\frac{a}{\sigma_{\rm sy}^{2}}\exp\left(-\frac{a^{2}}{2\sigma_{\rm sy}^{2 }}\right)\,. \tag{117}\]
The different functional forms of \(N(\phi)\) and \(F(a)\) stem from the fact that the small-\(\phi\) central section of the bunch also receives contributions from particles with large synchrotron amplitudes.
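The Abel-transform pair of Eqs. (114)-(117) is straightforward to verify numerically: inserting the amplitude distribution of Eq. (117) into the forward transform of Eq. (114) must return the Gaussian profile of Eq. (116). A minimal sketch (numpy and scipy assumed; the value \(\sigma_{\rm sy}=0.18\) is used purely for illustration):

```python
import numpy as np
from scipy.integrate import quad

sigma = 0.18                                    # illustrative rms bunch length in rad

def F(a):
    """Synchrotron amplitude distribution of Eq. (117)."""
    return a / sigma**2 * np.exp(-a**2 / (2.0 * sigma**2))

def N(phi):
    """Forward Abel transform of Eq. (114), with the substitution a = sqrt(phi^2 + u^2)."""
    val, _ = quad(lambda u: F(np.sqrt(phi**2 + u**2)) / np.sqrt(phi**2 + u**2), 0.0, np.inf)
    return val / np.pi

phis = np.array([0.0, 0.1, 0.2, 0.3])
gauss = np.exp(-phis**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
print(np.allclose([N(p) for p in phis], gauss))  # True: the Gaussian profile of Eq. (116)
```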
The synchrotron modulation of the particle momentum \(\Delta p(n)\) and the revolution period \(\Delta T(n)\) are related by the slip factor \(\eta\),
\[\frac{\Delta T}{T}=\frac{\Delta\phi(n)}{2\pi}=\eta\cdot\frac{\Delta p(n)}{p}\,, \tag{118}\]
where
\[\eta=\frac{1}{\gamma^{2}}-\frac{1}{\gamma_{tr}^{2}}\,, \tag{119}\]
and \(\gamma_{\rm tr}\) is the transition gamma-factor. In Eq. (118) we introduced \(\Delta\phi(n)\), an angular advance (retardation) of a particle per revolution \(n\) oscillating with time \(\propto\cos(2\pi\nu_{\rm sy}f_{\rm c}t)\). These one-turn synchrotron phase shifts sum precisely to the \(\phi\) defined above with an amplitude larger by the large factor \((2\pi\nu_{\rm sy})^{-1}\) than that of \(\Delta\phi(n)\). Averaging over the ensemble of particles yields the simple relationship
\[\sigma_{\rm sy}=\langle\phi^{2}\rangle^{1/2}=\frac{\eta}{\nu_{\rm sy}}\Big{<} \frac{\Delta p^{2}}{p^{2}}\Big{>}^{1/2}\,. \tag{120}\]
The corresponding phenomenology of the experimental results from the pilot bunch experiment will be presented in Appendix A. The SOs generate a shift of the spin precession phase, \(\Delta\theta_{\rm s}(n)=\theta_{\rm s}(n)-\theta_{\rm s}n\), which is a sum of shifts per turn,
\[\begin{split}&\delta\theta_{\rm s}(n)=2\pi G\delta\gamma=2\pi G \gamma\beta^{2}\frac{\Delta p(n)}{p}\,,\\ &\Delta\theta_{\rm s}(n)=\xi\psi_{\rm sy}\sin(2\pi\nu_{\rm sy}n+ \lambda)\,,\\ &\psi_{\rm sy}=\sqrt{2}G\gamma\beta^{2}\frac{\sigma_{\rm sy}}{| \eta|}\,,\end{split} \tag{121}\]
where \(\xi\) is a convenient phase-slip relative amplitude with the distribution function,
\[F(\xi)=2\xi\exp(-\xi^{2})\,, \tag{122}\]
and normalization \(\langle\xi^{2}\rangle=1\) (_cf._ Eq. (117)).
The modulation \(\Delta T\) of the revolution time results in the corresponding SO-driven slip of the Wien filter phase,
\[\begin{split}\Delta\theta_{\rm WF}(n)&=\frac{f_{ \rm WF}}{f_{\rm s}}\cdot\frac{\eta}{\beta^{2}}\Delta\theta_{\rm s}=C_{\rm WF} \Delta\theta_{\rm s}(n)\,,\\ C_{\rm WF}&=1+\frac{K}{G\gamma}\,,\end{split} \tag{123}\]
which will show up in the spin-flip dynamics [34].
#### vi.4.2 Master equation for spin envelope
It suffices to consider the case of the exact resonance for the central particle, \(f_{\rm WF}=f_{\rm s}\), _i.e.,_\(\theta_{\rm s}=\theta_{\rm WF}\)[35]. The SO-modified one-turn spin transfer will be given by
\[\vec{S}(n)={\bf R}_{\rm WF}(n){\bf R}_{\rm c}(\theta_{\rm s}+\delta\theta_{\rm s }(n))\vec{S}(n-1)\,. \tag{124}\]
Bearing in mind the subsequent Fourier analysis of the in-plane polarization, we stick to the definition of the spin envelope via Eq. (8), _i.e.,_ we define the envelopes in the reference frame co-rotating with the fixed angular velocity \(\omega_{\rm WF}\).
Simple rotations in (124) do preserve the magnitude of the polarization of individual particles. However, experimentally one measures the average polarization of an ensemble of particles with a typical observation time that is much longer than the SO period. This averaging over the ensemble leads to spin decoherence and depolarization.
As an exercise, we first treat the simplest case of the pure idle precession of the in-plane polarization. Here the determination of the envelope \(p_{\rm rt}\) by the Fourier analysis amounts to the projection of the polarization on the unit vector rotating with fixed frequency \(f_{\rm WF}\). For an individual particle, the average over the SO period equals
\[p_{\rm rt}(\xi)=\langle\exp(i\Delta\theta_{\rm s}(n))\rangle=J_{0}(\xi\psi_{ \rm sy})\,, \tag{125}\]
and the average over the ensemble of particles in the bunch is
\[\begin{split} p_{\rm rt}&=\int_{0}^{\infty}2\xi \exp(-\xi^{2})J_{0}(\xi\psi_{\rm sy})d\xi\\ &=\exp(-\frac{1}{4}\psi_{\rm sy}^{2})\approx 1-\frac{1}{4}\psi_{ \rm sy}^{2}\,.\end{split} \tag{126}\]
This slight attenuation is independent of time. It is of rather academic value, because an instantaneous injection of the horizontal polarization is technically impossible. Equally impossible is a polarimetry with sufficient statistics at times shorter than the SO period. Consequently, in practice the attenuation in Eq. (126) is reabsorbed in the definition of the magnitude of the initial in-plane polarization, as determined experimentally prior to switching the RF spin rotator on.
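Both averaging steps of Eqs. (125) and (126) can be checked numerically with the standard Bessel function \(J_{0}\); the sketch below (numpy and scipy assumed, with arbitrary illustrative values of \(\xi\) and \(\psi_{\rm sy}\)) confirms the single-particle and ensemble averages.

```python
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

psi_sy, xi = 0.7, 1.3                            # illustrative phase-slip amplitude and xi

# Average over the synchrotron phase for one particle, Eq. (125).
lam = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
single = np.mean(np.exp(1j * xi * psi_sy * np.sin(lam)))
print(single.real, j0(xi * psi_sy))              # both ~0.80

# Ensemble average with the amplitude distribution F(xi) = 2 xi exp(-xi^2), Eq. (126).
ens, _ = quad(lambda s: 2.0 * s * np.exp(-s**2) * j0(s * psi_sy), 0.0, np.inf)
print(ens, np.exp(-psi_sy**2 / 4.0))             # both ~0.885
```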
Now we proceed to the WF-driven oscillations. The corresponding master equation for the envelope takes the form
\[\vec{p}(n)={\bf R}_{\rm c}(-n\theta_{\rm WF}){\bf R}_{\rm WF}(n){\bf R}_{\rm c }(\delta\theta_{\rm s}(n)){\bf R}_{\rm c}(n\theta_{\rm WF})\vec{p}(n-1)\,. \tag{127}\]
It is reminiscent of the master equation (27), but with an oscillating instantaneous slip of the spin phase per turn, \(\delta\theta_{\rm s}(n)\), and with a much larger slip of the Wien filter phase \(\Delta\theta_{\rm WF}(n)\). In the Fourier analysis, one is bound to sample trains of turns much longer than the SO period, so that the detuning per se averages out to zero, \(\langle\delta\theta_{\rm s}(n)\rangle=0\), but we have already seen the non-vanishing SO effect even in the case of idle precession, see Eq. (126).
In the BK averaging over rapid spin precessions of the corresponding counterpart of the matrix in Eq. (28), we encounter
\[\begin{split}\langle\cos(\theta_{\rm WF}n)\cos(\theta_{\rm WF} n+C_{\rm WF}\Delta\theta_{\rm s}(n))\rangle&\Rightarrow\frac{1}{2} \cos(C_{\rm WF}\Delta\theta_{\rm s}(n))\,,\\ \langle\sin(\theta_{\rm WF}n)\cos(\theta_{\rm WF}n+C_{\rm WF} \Delta\theta_{\rm s}(n))\rangle&\Rightarrow-\frac{1}{2}\sin(C_{ \rm WF}\Delta\theta_{\rm s}(n))\,,\end{split} \tag{128}\]
and obtain
\[{\bf U}_{\rm SO}(n)=\begin{pmatrix}0&-\frac{1}{2}\chi_{\rm WF}\sin(C_{\rm WF}\Delta\theta_{\rm s}(n))&\delta\theta_{\rm s}(n)\\ \frac{1}{2}\chi_{\rm WF}\sin(C_{\rm WF}\Delta\theta_{\rm s}(n))&0&-\frac{1}{2}\chi_{\rm WF}\cos(C_{\rm WF}\Delta\theta_{\rm s}(n))\\ -\delta\theta_{\rm s}(n)&\frac{1}{2}\chi_{\rm WF}\cos(C_{\rm WF}\Delta\theta_{\rm s}(n))&0\end{pmatrix}\,. \tag{129}\]
Next stage is BK averaging over the period of SOs that are much faster than the envelope rotations:
\[\begin{split}\langle\cos(C_{\rm WF}\Delta\theta_{\rm s}(n))\rangle& =\langle\cos(\xi C_{\rm WF}\psi_{\rm sy}\sin(2\pi\nu_{\rm sy}k+ \lambda))\rangle\\ &=J_{0}(\xi C_{\rm WF}\psi_{\rm sy})\,,\\ \langle\sin(C_{\rm WF}\Delta\theta_{\rm s}(n))\rangle&=0\,,\\ \langle\delta\theta_{\rm s}(n)\rangle&=0\,,\end{split} \tag{130}\]
so that we recover the familiar
\[\langle{\bf U}_{\rm SO}(n)\rangle=\frac{1}{2}\chi_{\rm WF}J_{0}(\xi C_{\rm WF }\psi_{\rm sy}){\bf U}\,. \tag{131}\]
Compared to the discussion in Sec. II-B, the principal change is the SO-dependent renormalization of the SF tune
\[\nu_{\rm SF}\Rightarrow\nu_{\rm SF}(\xi)=\nu_{\rm SF}J_{0}(\xi C_{\rm WF} \psi_{\rm sy})\,. \tag{132}\]
In the case of weak to moderate SO effects, we can approximate
\[1-J_{0}(\xi C_{\rm WF}\psi_{\rm sy})\approx Q_{\rm sy}\xi^{2}\,, \tag{133}\]
where
\[Q_{\rm sy}=\frac{1}{4}C_{\rm WF}^{2}\psi_{\rm sy}^{2}=\frac{1}{2}(K+G\gamma)^{ 2}\sigma_{\rm sy}^{2}\,. \tag{134}\]
Note the strong dependence of \(Q_{\rm sy}\) on the angular length of the bunch and the Wien filter sideband \(K\), which is an important feature of the SO mechanism.
#### Evaluation of synchrotron oscillation-driven spin decoherence of the bunch polarization
The above defined \(Q_{\rm sy}\) is the principal parameter which defines the SO driven spread of the spin-flip tune (132) and the spin-flip phase,
\[x\Rightarrow x(\xi)=xJ_{0}(\xi C_{\rm WF}\psi_{\rm sy})\approx x-Q_{\rm sy}\xi^{ 2}x\,. \tag{135}\]
The SO-driven decoherence is quantified by the expectation value over the ensemble of particles in the bunch, \(\langle{\bf E}(x(\xi))\rangle_{\xi}\), with the weight function \(F(\xi)\) of Eq. (122). We need to evaluate
\[\begin{split}&\langle\exp(ix(\xi))\rangle_{\xi}\\ &=\exp(ix)\int_{0}^{\infty}d\xi F(\xi)\exp(-iQ_{\rm sy}\xi^{2}x) \\ &=\exp(ix)D(x)\exp(-i\varphi_{\rm sy}(x))\,.\end{split} \tag{136}\]
The corresponding envelope rotation matrix takes the form
\[{\bf E}_{\rm sy}(x_{\rm sy})=\begin{pmatrix}1&0&0\\ 0&D(x)\cos x_{sy}&-D(x)\sin x_{\rm sy}\\ 0&D(x)\sin x_{\rm sy}&D(x)\cos x_{\rm sy}\end{pmatrix}\,, \tag{137}\]
where
\[x_{\rm sy}=x-\varphi_{\rm sy}(x). \tag{138}\]
To the approximation in Eq. (133), we obtain
\[\langle\exp(ix(\xi))\rangle_{\xi}=\frac{\exp(ix)}{1+iQ_{\rm sy}x}\,, \tag{139}\]
yielding
\[\begin{split}& D(x)=\frac{1}{\sqrt{1+Q_{\rm sy}^{2}x^{2}}}\,,\\ &\varphi_{\rm sy}(x)=\arctan(Q_{\rm sy}x)\end{split} \tag{140}\]
The synchrotron oscillation mediated matrix \({\bf E}_{\rm sy}(x_{\rm sy})\) differs from the exponential-model matrix \({\bf E}_{\rm exp}(x_{\rm sy})\) in several aspects. In the SO mechanism, the time-dependent spin decoherence takes place only in the spin-flip process. In contrast to the exponential attenuation Ansatz of Sec. V.3, see Eq. (112), in the SO mechanism the idly precessing spectator radial polarization does not decohere, see also the discussion of Eq. (126). The SO damping factor starts as \(D(x)\approx 1-\frac{1}{2}Q_{\rm sy}^{2}x^{2}\) at \(Q_{\rm sy}x\ll 1\), in contrast to \(\exp(-Qx)\approx 1-Qx\) for the exponential Ansatz, while the large-time attenuation \(D(x)\approx 1/(Q_{\rm sy}x)\) is slower than the exponential one. A signature of the SO-dominated spin coherence time is that its scale is set by \(Q_{\rm sy}x\sim 1\) and exhibits a strong dependence on the SF frequency:
\[\tau_{\rm SCT}\sim\frac{1}{2\pi f_{\rm SF}Q_{\rm sy}}\,. \tag{141}\]
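The closed form of Eqs. (139) and (140) follows from the elementary integral \(\int_{0}^{\infty}2\xi e^{-\xi^{2}}e^{-iQ_{\rm sy}\xi^{2}x}\,{\rm d}\xi=1/(1+iQ_{\rm sy}x)\) and is easily confirmed numerically; a minimal sketch (numpy and scipy assumed, with arbitrary illustrative values of \(Q_{\rm sy}\) and \(x\)):

```python
import numpy as np
from scipy.integrate import quad

Q_sy, x = 0.02, 30.0                             # illustrative decoherence parameter and SF phase

# Direct ensemble average of Eq. (136) with F(xi) = 2 xi exp(-xi^2).
re, _ = quad(lambda s: 2.0 * s * np.exp(-s**2) * np.cos(Q_sy * s**2 * x), 0.0, np.inf)
im, _ = quad(lambda s: 2.0 * s * np.exp(-s**2) * np.sin(Q_sy * s**2 * x), 0.0, np.inf)
avg = re - 1j * im                               # ensemble average of exp(-i Q_sy xi^2 x)

closed = 1.0 / (1.0 + 1j * Q_sy * x)             # closed form behind Eqs. (139)-(140)
print(abs(avg), 1.0 / np.sqrt(1.0 + (Q_sy * x)**2))   # damping factor D(x)
print(np.angle(np.conj(avg)), np.arctan(Q_sy * x))    # phase walk phi_sy(x)
print(np.isclose(avg, closed))                        # True
```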
In the above derivation, the exact spin resonance was assumed for the central particles in the bunch. Finally, the synchrotron oscillations entail a nonlinear spin-flip phase walk \(\varphi_{\rm sy}(x)\). It is an indispensable feature of the SO mechanism of spin decoherence, and it cannot be eliminated by the feedback process targeting the vanishing detuning. This phase walk \(\varphi_{\rm sy}(x)\) entails the running SF tune
\[\nu_{\rm SF}^{(\rm sy)}(x)=\nu_{\rm SF}^{(\rm sy)}\frac{dx_{\rm sy}(x)}{dx}= \nu_{\rm SF}^{(\rm sy)}\left(1-\frac{Q_{\rm sy}}{1+Q_{\rm sy}^{2}x^{2}}\right)\,, \tag{142}\]
where \(\nu_{\rm SF}^{(\rm sy)}\) is the constant spin-flip tune which defines the principal spin-flip phase \(x\) and is given by Eqs. (30,32) [see further Sec. VII].
#### v.2.4 Excursion on uncompensated betatron oscillation effects
A strong enhancement of the spin coherence time by tuning the chromaticity, which suppresses orbit lengthening effects caused by betatron oscillations (BO), is well demonstrated experimentally [6; 7; 8]. Here we comment on the possibility that the residual spin decoherence is an artifact of under-compensated BO effects. BO tunes are large, for example in COSY \(\nu_{x,y}\approx 3.6\), some 4 orders of magnitude larger than the SO tune, yet the above treatment of SO effects can be extended to BOs as well. In fact, the prolongation of the orbit by BOs can be considered as a time-independent feature of individual particles. Its effect on the spin tune is proportional to the square of the BO amplitude,
\[\nu_{\rm s}(\xi)=(1-Q_{\rm sy}\xi^{2})\nu_{\rm s}\,, \tag{143}\]
which is equivalent to a finite detuning of
\[\delta(\xi)=2\pi\nu_{\rm WF}Q_{\rm sy}\xi^{2}\,, \tag{144}\]
where \(\xi\) is the relative amplitude of the BOs with the distribution function \(F(\xi)\) of Eq. (122). According to Refs. [6; 7; 8], by fine tuning the chromaticity the BO parameter \(Q_{\rm sy}\) could ideally be brought to zero.
We abstract from the dynamical considerations and comment here on the phenomenological consequences of the under-compensated BO effects. The most important point is a BO-dependent spread of the detuning, which results in a spread of SF tune. The small-\(\delta\) expansion of the SF tune of Eq. (30) gives
\[\begin{split}\nu_{\rm SF}(\xi)&=\nu_{\rm SF}^{0}(1+ \frac{1}{2}Q_{\beta}\xi^{4})\,,\quad\text{where}\\ Q_{\beta}&=Q_{\rm sy}^{2}\left(\frac{\nu_{\rm WF}}{ \nu_{\rm SF}^{0}}\right)^{2}\,.\end{split} \tag{145}\]
The BO correction to the SF tune starts with a term \(\propto\xi^{4}\), compared to the \(\propto\xi^{2}\) term in the SO case, Eq. (132), while the qualitative features are preserved.
Indeed, for the average over the ensemble, the BO-driven spread of the SF phase factor yields
\[\int_{0}^{\infty}d\xi F(\xi)\exp[i\frac{x}{2}Q_{\beta}\xi^{4}] =\frac{1}{\sqrt{1-i2Q_{\beta}x\rho_{\beta}(x)}} \tag{146}\] \[=D_{\beta}(x)\exp(i\varphi_{\beta}(x))\,,\]
with
\[\begin{split} D_{\beta}(x)&=\left\{1+4Q_{\beta}^{2} x^{2}\rho_{\beta}^{2}(x)\right\}^{-1/4}\\ \varphi_{\beta}(x)&=\frac{1}{2}\arctan\left[2Q_{ \beta}x\rho_{\beta}(x)\right]\\ \rho_{\beta}(x)&\approx\frac{1+\pi^{-1}Q_{\beta}^{ 2}x^{2}}{1+Q_{\beta}^{2}x^{2}}\,,\end{split} \tag{147}\]
where \(\rho_{\beta}(x)\) interpolates the damping factor from \(D_{\beta}(x)\approx 1\) for \(Q_{\beta}x<1\) to
\[D_{\beta}(x)\approx\sqrt{\frac{\pi}{2Q_{\beta}x}} \tag{148}\]
for \(Q_{\beta}x\gg 1\).
For \(Q_{\beta}x\gg 1\), the phase \(\varphi_{\beta}(x)\) saturates at \(\nicefrac{{\pi}}{{4}}\) compared to \(\nicefrac{{\pi}}{{2}}\) in the case of \(\varphi_{\rm sy}(x)\). For \(Q_{\beta}x<1\) the interpolation function \(\rho_{\beta}(x)\approx 1\), while for \(Q_{\beta}x\gg 1\), it only controls small details of saturation at \(\nicefrac{{\pi}}{{4}}\), so that the corresponding running spin tune can be approximated by
\[\nu_{\rm SF}^{\beta}(x)\approx\nu_{\rm SF}\left(1-\frac{Q_{\beta}}{1+4Q_{\beta }^{2}x^{2}}\right)\,. \tag{149}\]
Here \(\nu_{\rm SF}\) is the SF tune defined by Eqs. (30,32). In summary, despite the very different hierarchy of frequencies involved, the synchrotron and betatron oscillations have quite a similar impact on the SF dynamics.
## VI Spin tomography of synchrotron oscillations
The remarkable feature of the SF tune, given in Eq. (132), is its dependence on the SO amplitude, which can be tested experimentally by tagging events in the polarimeter by their angular coordinate \(\phi\). The first look at this effect was undertaken in the pilot bunch experiment [17], where the full data sample of \(\phi\in[-\xi_{\rm max},\xi_{\rm max}]\sigma_{\rm sy}=[-2,2]\sigma_{\rm sy}\) was split into the central set I (with \(\phi\in[-\xi_{\rm med},\xi_{\rm med}]\sigma_{\rm sy}=[-0.6,0.6]\sigma_{\rm sy}\)) and set II (with \(\xi\in[\xi_{\rm med},\xi_{\rm max}]\)), to be referred to as the head and tail set. The median \(\xi_{\rm med}=0.6\) was chosen to have about the same number of recorded events in the sets I and II.
Particles in the bunch do perpetually oscillate from the head to the tail and vice versa, crossing back and forth the central region \(|\xi|\leq\xi_{\rm med}\), and the fraction of the time they spend at \(|\phi_{\rm med}|<|\phi|<|\phi_{\rm max}|\) is given by the duty cycle
\[\mathcal{D}(\xi_{\rm max},\xi_{\rm med},\xi^{2})=\frac{2}{\pi}\left[\arccos \left(\frac{\xi_{\rm med}}{\xi}\right)-\arccos\left(\frac{\xi_{\rm max}}{\xi} \right)\right]. \tag{150}\]
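Two simple numerical illustrations of the quantities just introduced (numpy and scipy assumed): the Gaussian profile of Eq. (116) indeed splits the recorded events roughly equally between sets I and II for \(\xi_{\rm med}=0.6\), and the duty cycle of Eq. (150) can be tabulated for representative amplitudes (the clipping of the arccos argument encodes that a particle with \(\xi<\xi_{\rm max}\) never reaches the outer cut).

```python
import numpy as np
from scipy.stats import norm

# Event split between set I (|phi| < 0.6 sigma) and set II (0.6 sigma < |phi| < 2 sigma)
# for the Gaussian bunch profile of Eq. (116).
p_I = norm.cdf(0.6) - norm.cdf(-0.6)
p_II = (norm.cdf(2.0) - norm.cdf(-2.0)) - p_I
print(p_I, p_II)          # ~0.45 vs ~0.50: roughly equal populations, as stated above

def duty(xi, xi_med=0.6, xi_max=2.0):
    """Duty cycle of Eq. (150): fraction of time a particle of amplitude xi spends in set II."""
    acos = lambda r: np.arccos(np.clip(r, -1.0, 1.0))
    return 2.0 / np.pi * (acos(xi_med / xi) - acos(xi_max / xi))

print(duty(1.0), duty(3.0))   # ~0.59 for xi = 1 and ~0.34 for xi = 3
```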
For arbitrary domain \(\mathcal{R}\), the expectation value of the phase factor is given by
\[\langle\exp(ix(\xi))\rangle_{\xi}=\frac{\int_{\mathcal{R}}d\xi F(\xi) \mathcal{D}(\mathcal{R},\xi^{2})\exp(ix(\xi))}{\int_{\mathcal{R}}d\xi F(\xi) \mathcal{D}(\mathcal{R},\xi^{2})}\,. \tag{151}\]
The integrand in Eq. (151) has remarkable factorization properties. Consider the set \(\mathcal{R}\) of \(\xi\geq\xi_{\rm m}\). In terms of the convenient new variable \(\zeta_{\rm sy}=\xi^{2}-\xi_{\rm m}^{2}\), the expansion of Eq. (133) gives \(J_{0}(\xi C_{\rm WF}\psi_{\rm sy})\approx J_{0}(\xi_{\rm m}C_{\rm WF}\psi_{\rm sy })-Q_{\rm sy}\zeta_{\rm sy}\), so that the phase factor in the integrand factorizes. A similar factorization works for the Gaussian factor in \(F(\xi)\), and we obtain
\[\begin{split}&\langle\exp(ix(\xi))\rangle_{\xi}=\exp(ix(\xi_{\rm m }))\\ &\times\frac{\int_{\mathcal{R}}d\zeta\mathcal{D}(\mathcal{R},\xi_{ \rm m}^{2}+\zeta_{\rm sy})\exp(-(1+iQ_{\rm sy}x)\zeta_{\rm sy})}{\int_{ \mathcal{R}}d\zeta\mathcal{D}(\mathcal{R},\xi_{\rm m}^{2}+\zeta_{\rm sy})\exp( -\zeta_{\rm sy})}\,.\end{split} \tag{152}\]
In the generic case, the duty cycle prevents an analytic integration. For the sake of illustration, consider the domain \(\mathcal{R}=[\xi_{\rm m},\infty)\). For sufficiently large \(\xi_{\rm m}>1\) one can use the approximation \(\mathcal{D}(\infty,\xi_{\rm m},\xi^{2})\approx\sqrt{\zeta_{\rm sy}/\xi_{\rm m}^ {2}}\). Then the integrals in Eq. (152) reduce to the Euler gamma-functions with the result
\[\langle\exp(ix(\xi))\rangle_{\xi}\approx\frac{\exp(ix(\xi_{\rm m}))}{1+iQ_{\rm sy }(\xi_{\rm m})x}\,, \tag{153}\]
where \(Q_{\rm sy}(\xi_{\rm m})=C(\xi_{\rm m})Q_{\rm sy}\), with \(C(\xi_{\rm m}\gg 1)=3/2\), while for \(\xi_{\rm m}=0\), Eq. (140) corresponds to \(C(0)=1\). Hence we predict a more rapid depolarization of the head and tail portions of the bunch,
\[\frac{S_{\rm c}(\infty,\xi_{\rm m})}{S_{\rm c}(\infty,0)}\approx\sqrt{\frac{1+Q _{\rm sy}^{2}x^{2}}{1+C^{2}(\xi_{\rm m})Q_{\rm sy}^{2}x^{2}}}\,. \tag{154}\]
As another case of spin-flip tomography, we comment on the thought experiment with incomplete masking (gating-out) of the pilot bunch, in which the head and tail particles of the pilot bunch are subjected to spin-flips by the RF field of the WF, while the central body of the bunch is shielded from the RF field of the WF. The interplay between the finite time duration of the gate and the bunch length is as follows. At each turn, the head of the bunch with \(\phi>\xi_{\rm m}\sigma\) crosses the Wien filter still in operation, and the spins in the bunch are subjected to the spin flip kicks. The main part of the bunch traverses the already switched-off WF. In terms of SF, this masking can be considered as operation of the Wien filter with \(\chi_{WF}=0\). Since these particles spend part of the time in the central region of the bunch, their depolarization will mimic a partial depolarization of the central part of the bunch. We do not further discuss this effect, which can be easily quantified within the framework of the formalism presented above and will be taken up again elsewhere.
The above discussion can also be extended to transverse spin tomography of beam bunches. The transverse profile of the polarization was previously studied at
RHIC, where a significant variation of the transverse polarization from the core to the skin particles in the beam was observed [36]. In this case, the skin is populated by particles having large betatron amplitudes, while, alongside the particles with small betatron amplitudes, large-amplitude particles also spend part of their time in the core region.
## VII Implications for spin-flip tune mapping
Here we explore the implications of detuning and spin decoherence for the search for the EDM of charged particles in all-magnetic storage rings, with emphasis on the activity of the JEDI collaboration.
The signal for an EDM is the rotation of the particle's spin in an electric field. In the co-moving frame in a magnetic field, the spins of charged particles are subject to the electric field generated by the Lorentz transformation. The familiar Frenkel-Thomas-BMT result for the angular velocity of the idle spin precession with respect to the particle momentum in a homogeneous magnetic field reads [37; 38]
\[\vec{\Omega}=-\frac{q}{m}\left[G\vec{B}+\left(\frac{1}{\beta^{2}}-1-G\right) \vec{\beta}\times\vec{E}+\frac{1}{2}\eta_{\rm EDM}(\vec{E}+[\vec{\beta}\times \vec{B}])\right]\,, \tag{155}\]
where \(\eta_{\rm EDM}\) defines the EDM in units of the nuclear magneton via \(d=\eta_{\rm EDM}q/(2m)\). In an ideal purely magnetic ring, the EDM tilts the spin stable axis \(\vec{c}\) according to,
\[\begin{split}&\xi^{\rm EDM}=\arctan\left(\frac{\eta_{\rm EDM}}{2G \beta}\right)\,,\\ &\vec{c}=\sin\xi^{\rm EDM}\vec{e}_{\rm r}+\cos\xi^{\rm EDM}\vec{ e}_{y}\,.\end{split} \tag{156}\]
If the Wien filter axis were aligned perpendicular to the momentum plane [39], \(\vec{w}=\vec{e}_{y}\), Eq. (18) would yield
\[|\vec{c}\times\vec{w}|=\sin\xi^{\rm EDM}\text{ and }\nu_{\rm SF}=\frac{\chi_{\rm WF}}{4\pi}\,\sin\xi^{\rm EDM}\,, \tag{157}\]
and the experimental measurement of the SF tune \(\nu_{\rm SF}\) would amount to the measurement of the EDM of the particle [40; 5]. However, the spin stable axis is also tilted by imperfection magnetic fields, tangential \(a_{z}^{\rm MDM}\) and radial \(a_{x}^{\rm MDM}\), which are endemic in all-magnetic rings like COSY, so that
\[\vec{c}=\vec{c}_{y}+\sin\xi^{\rm EDM}\vec{e}_{x}+a_{x}^{\rm MDM}\vec{e}_{\rm r }+a_{z}^{\rm MDM}\vec{e}_{z}\,. \tag{158}\]
The interaction of the magnetic dipole moment (MDM) of the stored particles with imperfection fields will overwhelm the EDM effect in the SF tune \(\nu_{\rm SF}\).
Nevertheless, one can resort to an active compensation of the intrinsic imperfections by two artificial imperfections; this approach was suggested in [24; 5] and has been used in the recent JEDI experiment with deuterons stored in the COSY ring [41]. Specifically, what matters in the cross product \(|\vec{c}\times\vec{w}|\) is the relative orientation of \(\vec{c}\) and \(\vec{w}\). The spin stable axis \(\vec{c}\) is tilted by the static magnetic field of the Siberian snake in the straight section opposite the Wien filter, which rotates the spins around the \(z\)-axis by an angle \(\chi^{\rm sol}\), while the magnetic field axis \(\vec{w}\) of the Wien filter is tilted around the \(z\)-axis by an angle \(\phi^{\rm WF}\). Since the solenoid fields affect the idle spin precession tune, the Wien filter frequency has to be corrected accordingly.
In the case of the exact resonance, one finds
\[\nu_{\rm SF}=\frac{\chi_{\rm WF}}{4\pi}\,|\vec{c}\times\vec{w}|=\frac{\chi_{ \rm WF}}{4\pi}\left[\left(\xi^{\rm EDM}+a_{x}^{\rm MDM}-\phi^{\rm WF}\right) ^{2}+\left(a_{z}^{\rm MDM}+\frac{1}{2\sin\pi\nu_{\rm s}}\chi^{\rm sol}\right) ^{2}\right]^{1/2}\,. \tag{159}\]
As a function of the artificial imperfection parameters, \(\phi^{\rm WF}\) and \(\chi^{\rm sol}\), the SF tune \(\nu_{\rm SF}\) describes an elliptic cone. The accuracy with which the location of the cone apex at \(\nu_{\rm SF}^{0}\) can be determined defines the best accuracy with which \(\xi^{\rm EDM}\) can be determined using the described technique [41]. Barring accidental cancellations, one can reinterpret this accuracy as a tentative upper bound for \(\xi^{\rm EDM}\).
At finite detuning, the observed SF tune will be modified according to Eq. (32)
\[\nu_{\rm SF}=\frac{1}{4\pi}\left\{\chi_{\rm WF}^{2}\left[\left(\xi^{\rm EDM}+a_{x}^ {\rm MDM}-\phi^{\rm WF}\right)^{2}+\left(a_{z}^{\rm MDM}+\frac{1}{2\sin\pi\nu_{ \rm s}}\chi^{\rm sol}\right)^{2}\right]+\frac{1}{4}\delta^{2}\right\}^{1/2}. \tag{160}\]
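A small numerical sketch of the tune map of Eq. (160) may help to visualize the elliptic cone and its apex. All machine parameters below are made-up illustrative numbers (they are not taken from the experiment), and numpy is assumed; the apex location reproduces \(\phi^{\rm WF}=\xi^{\rm EDM}+a_{x}^{\rm MDM}\) and \(\chi^{\rm sol}=-2a_{z}^{\rm MDM}\sin\pi\nu_{\rm s}\).

```python
import numpy as np

# Illustrative (made-up) parameters; none of these numbers are taken from the experiment.
chi_WF, nu_s, delta = 1.0e-3, -0.161, 0.0
xi_EDM, a_x, a_z = 1.0e-6, 3.0e-5, 2.0e-5        # hypothetical tilts of the spin stable axis

def nu_SF(phi_WF, chi_sol):
    """SF tune map of Eq. (160) as a function of the artificial imperfections."""
    r = (xi_EDM + a_x - phi_WF)**2 + (a_z + chi_sol / (2.0 * np.sin(np.pi * nu_s)))**2
    return np.sqrt(chi_WF**2 * r + 0.25 * delta**2) / (4.0 * np.pi)

phi = np.linspace(-2.0e-4, 2.0e-4, 401)
sol = np.linspace(-2.0e-4, 2.0e-4, 401)
grid = np.array([[nu_SF(p, s) for s in sol] for p in phi])
i, j = np.unravel_index(np.argmin(grid), grid.shape)
print(phi[i], sol[j])    # apex at phi_WF ~ xi_EDM + a_x and chi_sol ~ -2 a_z sin(pi nu_s)
```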
As long as the detuning is relatively weak, it should not affect the location of the cone apex. To this end, we emphasize that the detuning parameter \(\delta\) is not a free parameter, as the detuning angle \(\rho\) can be determined _independently_ from the combined analysis of the evolution of the vertical and horizontal polarizations. However, one should be wary of the feedback effects described in Sec. V.1; here one needs more experimental input from the spin-precession phase walk studies.
In the exclusive regime of exact spin resonance and vanishing spin decoherence, the SF tune \(\nu_{\rm SF}\) defines the slope of the time dependence of the SF phase,
\[\frac{{\rm d}p_{\rm c}(x)}{{\rm d}t}\Big{|}_{\rm t=0}=-\sin\Phi_{\rm in}\frac{ {\rm d}x}{{\rm d}t}=-2\pi f_{\rm c}\sin\Phi_{\rm in}\nu_{\rm SF}\,. \tag{161}\]
For instance, this is the case in the exponential decoherence model. In the case of spin decoherence dominated by synchrotron oscillations, the phase response \(\varphi_{\rm sy}(x)\) must be taken into account [see Eq. (142)]. To the extent that the experimental data were taken in the regime of \(Q_{\rm sy}x<1\), as suggested by the analysis given in Appendix A, the net effect is a minor renormalization of the visible spin-flip tune
\[\nu_{\rm SF}^{\rm(exp)}\approx\nu_{\rm SF}^{\rm(sy)}(1-Q_{\rm sy})\,. \tag{162}\]
Here \(\nu_{\rm SF}^{\rm(exp)}\) is the spin tune which one would obtain if the spin-flip data were treated within the exponential model, where it is given by Eqs. (30,32). In the regime of \(Q_{\rm sy}x<1\), Eq. (162) entails a simple overall rescaling of the spin-flip tune without affecting the location of the apex of the map in Eq. (159). However, were \(Q_{\rm sy}x\sim 1\), it would be necessary to directly use the nonlinear \(\varphi_{\rm sy}(x)\) in the extraction of \(\nu_{\rm SF}^{\rm(sy)}\) from the experimental spin-flip data. The same point applies to the spin decoherence controlled by betatron oscillations. Here we reiterate that neither \(\varphi_{\beta}(x)\) nor \(\varphi_{\rm sy}(x)\) can be eliminated by the feedback set to maintain the phase locking between the Wien filter and the spin precession as accurately as possible.
## VIII Summary and Conclusions
Inspired by the JEDI studies of high-precision spin dynamics in storage rings, we have developed a theoretical description of RF-driven spin rotations that accounts for detuning with respect to the exact spin resonance. Such a description serves in part as the theoretical basis for the first search for the EDM of deuterons and for tests of the pilot-bunch approach to co-magnetometry recently performed at COSY. The fully analytical description of the multiple spin flips, complemented by in-plane polarization precession and various spin depolarization mechanisms, is essential for data analysis down to the smallest detail, since fitting the experimental data requires multiple calls to the spin rotation and depolarization codes.
As part of our generic approach to RF-driven spin rotations, we have presented results for three different mechanisms of spin decoherence. We found great similarities between synchrotron oscillations and betatron oscillations as drivers of spin decoherence, with detuned spin precession being a common denominator. Interestingly, in the presence of ring instabilities, detuning is an integral part of the feedback mechanism to maintain the most accurate phase locking between the RF Wien filter and the spin precession.
Parameters common to the two spin-decoherence mechanisms considered include the magnitude and orientation of the stored initial polarization, the detuning, and the spin-decoherence parameter. It has been shown that different spin-decoherence models result in different patterns of depolarization of different components of the continuously flipping polarization. We emphasized the importance of a concurrent analysis of vertical and in-plane precessing polarization components, in particular the previously unexplored phase of the in-plane polarization envelope, as an insight into RF-driven spin dynamics in storage rings.
The synchrotron oscillation mechanism of decoherence is shown to be governed by the bunch length and we suggest a spin-flip based tomography of the synchrotron oscillation-driven spin dynamics. Within the statistical accuracy currently achieved, the main results of the JEDI pilot bunch experiment are consistent with the quantitative expectations of the synchrotron oscillation model, and we commented on the possibility of improving the sensitivity of spin-flip tomography.
###### Acknowledgements.
The work presented here has been performed in the framework of the JEDI collaboration and was supported by an ERC Advanced Grant of the European Union (proposal No. 694340: Search for electric dipole moments using storage rings) and by the Shota Rustaveli National Science Foundation of the Republic of Georgia (SRNSFG Grant No. DI-18-298: High precision polarimetry for charged particle EDM searches in storage rings). This research is part of a project that has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement STRONG-2020, No. 824093. The work of A. Aksentev, A. Melnikov and
N. Nikolaev on the topic was supported by the Russian Science Foundation (Grant No. 22-42-04419). Thanks are due to A. Zelenski for useful discussions.
## Appendix A Phenomenology of spin decoherence driven by synchrotron oscillations
Here we present a brief phenomenology of the experimental results of the pilot bunch experiment [17] in the framework of the model of spin decoherence mediated by synchrotron oscillations. The main parameters of the COSY ring are listed in Table 3.
Within the model, the main source of spin decoherence is the longitudinal momentum spread, which is related to the angular length of the bunch by Eq. (120). With the momentum spread \(\Delta p/p\) and the slip factor \(\eta\) from Table 3, we obtain \(\sigma_{\rm sy}=0.177\pm 0.018\), which agrees with the RMS value \(\sigma_{\rm s}\) of the Gaussian approximation to the longitudinal density of the signal bunch, varying from \(\sigma_{\rm s}=0.11\) at the beginning of the measurement cycle after the cooling was turned off, through \(\sigma_{\rm s}=0.18\) in the middle of the cycle to \(\sigma_{\rm s}=0.20\) at the end of the cycle [17]. The Wien filter was operated in the \(K=-1\) sideband, and from Eq. (134), we expect to find
\[Q_{\rm sy}\approx 0.0211\pm 0.0043\,. \tag{121}\]
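For concreteness, these two numbers can be reproduced with a short numerical check from the entries of Table 3. Since Eqs. (120) and (134) are not restated in this appendix, the explicit forms used below, \(\sigma_{\rm sy}=\eta f_{\rm c}(\Delta p/p)/f_{\rm sy}\) and \(Q_{\rm sy}=\tfrac{1}{2}(K+\nu_{\rm s})^{2}\sigma_{\rm sy}^{2}\) with \(\nu_{\rm s}=f_{\rm s}/f_{\rm c}\), are assumptions chosen to reproduce the quoted central values; the sketch is illustrative only.

```python
# Minimal numerical cross-check of sigma_sy and Q_sy from the Table 3 parameters.
eta  = 0.6545      # slip factor
f_c  = 750602.6    # revolution frequency [Hz]
f_s  = -120860.5   # spin precession frequency [Hz]
f_sy = 205.0       # synchrotron oscillation frequency [Hz]
dpp  = 7.397e-5    # momentum spread
K    = -1          # Wien filter sideband

sigma_sy = eta * f_c * dpp / f_sy             # assumed form of Eq. (120)
nu_s = f_s / f_c
Q_sy = 0.5 * (K + nu_s) ** 2 * sigma_sy ** 2  # assumed form of Eq. (134)

print(f"sigma_sy = {sigma_sy:.3f}")  # ~0.177 (quoted: 0.177 +/- 0.018)
print(f"Q_sy     = {Q_sy:.4f}")      # ~0.0212 (quoted: 0.0211 +/- 0.0043)
```

The quoted uncertainties follow by propagating the \(\pm 21\,\)Hz uncertainty on \(f_{\rm sy}\).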
The obvious feature of the synchrotron oscillation mechanism is that the head and tail particles have larger synchrotron oscillation amplitudes, entailing stronger spin decoherence, and here we focus on the determination of \(Q_{\rm sy}\) from the pilot-bunch experimental data. In Fig. 9, we show the polarization left-right asymmetry with a \(\pm 2\sigma_{\rm s}\) cut on the signal bunch distribution, so that the experimental data are exactly the same as shown in Fig. 2 of Ref. [17]. For the purposes of our discussion, it is not necessary to convert the polarization asymmetry to the actual polarization, as this only adds an overall normalization uncertainty from the dC analyzing power to all data points.
A fit to the asymmetry with the formula describing the synchrotron oscillations,
\[A_{\rm sy}(t)=a(t-t_{0})+b+\frac{c}{\sqrt{1+\left[2\pi Q_{\rm sy}f_{\rm SF}(t-t _{0})\right]^{2}}}\times\cos\left[2\pi f_{\rm SF}(t-t_{0})-\arctan(2\pi Q_{\rm sy }f_{\rm SF}(t-t_{0}))\right]\,, \tag{122}\]
resulted in \(Q_{\rm sy}(\pm 2\sigma_{\rm s})=0.0077\pm 0.0036\), which is in the ballpark of the model expectation of Eq. (121). In this fit, we kept \(t_{0}=85.5\,\)s fixed, as determined in Ref. [17], where the same data were fitted to the exponential decoherence
Figure 10: The same graph as shown in Fig. 9, but here for particles with synchrotron oscillation amplitudes outside of the \(\pm 2\sigma_{\rm s}\) cut on the longitudinal bunch distribution.
Figure 9: Measured WF-induced vertical oscillation of the signal bunch polarization in terms of the left-right asymmetry in the polarimeter (not normalized for the dC analyzing power) for a cycle with two bunches stored in the machine. The RF Wien filter is switched ON at \(t_{0}=85.55\,\)s. The blue points indicate the vertical polarization asymmetry for events within the \(\pm 2\sigma_{\rm s}\) boundary of the signal bunch. The results of the corresponding fits within the synchrotron oscillation model are presented in Table 4. The red points reflect the case for the pilot bunch, _i.e._, when the RF of the Wien filter is gated out. The black points indicate the situation when, during a different cycle, the WF is completely switched OFF. The blue line indicates a fit with Eq. (122), using events from within the \(\pm 2\sigma_{\rm s}\) boundary of the signal bunch distribution; it is practically indistinguishable from the exponential decoherence fit shown in Ref. [17] (see also the discussion in the text).
formula, given by
\[A_{\rm exp}(t)=a(t-t_{0})+b+c\exp\left[-\Gamma(t-t_{0})\right]\times\cos\left[2\pi f_{\rm SF}(t-t_{0})\right]\,. \tag{123}\]
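For readers who wish to reproduce the comparison of the two parametrizations, a minimal sketch of both fit functions, Eqs. (122) and (123), is given below. The measured asymmetry points are not tabulated here, so the data arrays and starting values are placeholders; \(t_{0}\) is held fixed as in Ref. [17].

```python
import numpy as np
from scipy.optimize import curve_fit

T0 = 85.5  # fixed switch-on time of the RF Wien filter [s]

def a_sy(t, a, b, c, f_sf, q_sy):
    """Synchrotron-oscillation decoherence model, Eq. (122)."""
    tau = t - T0
    x = 2.0 * np.pi * q_sy * f_sf * tau
    return (a * tau + b
            + c / np.sqrt(1.0 + x**2)
              * np.cos(2.0 * np.pi * f_sf * tau - np.arctan(x)))

def a_exp(t, a, b, c, f_sf, gamma):
    """Exponential decoherence model, Eq. (123)."""
    tau = t - T0
    return a * tau + b + c * np.exp(-gamma * tau) * np.cos(2.0 * np.pi * f_sf * tau)

# Illustrative usage with placeholder arrays t_data, asym_data, asym_err:
# popt, pcov = curve_fit(a_sy, t_data, asym_data, sigma=asym_err,
#                        p0=[0.0, 0.0, 0.3, 0.0794, 0.01])
```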
The quality of the synchrotron oscillation model fit, \(\chi^{2}/{\rm ndf}=136.936/158=0.867\), is basically identical to \(\chi^{2}/{\rm ndf}=136.071/157=0.867\) for the exponential attenuation model applied in Ref. [17], and for all practical purposes the synchrotron oscillation model in Fig. 9 is indistinguishable from the exponential-decoherence model. Indeed, in view of the weak signal of attenuation, the two parametrizations cannot be discriminated with the present accuracy of the experimental data. In order not to confuse the two formula-wise different fits, we changed the color code of the fit curve and of the related data points, so that the blue curve in Fig. 9 must be compared to the red curve in Fig. 2 of Ref. [17].
According to the discussion in Sec. VI, for the head and tail particles, we expect an enhancement of the parameter \(Q_{\rm sy}\) by a factor up to \(\approx 9/4\). As a subsample of events with the largest attainable synchrotron oscillations, we considered separately the head and tail particles outside of the \(\pm 2\sigma_{\rm s}\) cut. The experimental results for the corresponding polarization asymmetry are shown in Fig. 10. With low statistics in the head-and-tail sample, a fit to the data using Eq. (122) yields \(Q_{\rm sy}(|\phi_{\rm s}|>2\sigma_{\rm s})=0.0098\pm 0.0108\), which is consistent with the estimate given in Eq. (121).
As a further check of the synchrotron oscillation model, following Ref. [17], we also considered grouping the signal bunch events within the \(\pm 2\sigma_{\rm s}\) cut into set I and set II, shown in Table 5. The boundary of \(0.6\sigma_{\rm s}\) between the two sets was chosen so as to have approximately equal numbers of events in each of the sets. It should be noted that the two sets are not entirely statistically independent, as particles from set II spend part of their time in set I. Again, within the present experimental accuracy, the corresponding results for \(Q_{\rm sy}\) from fits to the parametrization of synchrotron oscillations, given in Eq. (122), are in the ballpark of our estimate, given in Eq. (121).
Some comments on the interpretation of the results for the spin-flip frequency are in order. In the ad hoc phenomenological model of exponential attenuation, the spin-flip phase motion is decoupled from the strength of the attenuation. Within this model, fits to the spin-flip pattern of events within the \(\pm 2\sigma_{\rm s}\) boundary observed in the pilot-bunch experiment yielded the spin-flip frequency \(f_{\rm SF}^{\rm(exp)}\) to about one per mille accuracy, \(f_{\rm SF}^{\rm(exp)}(\pm 2\sigma_{\rm s})=0.079442\pm 0.000096\,{\rm Hz}\)[17]. In contrast, the synchrotron oscillation dominance is a dynamical model with a well-defined correlation between spin decoherence and spin-flip phase motion. Here we emphasize that, in spite of the 7 full spin-flip periods observed, the pilot-bunch experimental data still correspond to the regime of small \(Q_{\rm sy}x<1\). We can therefore invoke the approximation of Eq. (162) to relate \(f_{\rm SF}^{\rm(sy)}\) to \(f_{\rm SF}^{\rm(exp)}\). Specifically, with the entry for \(Q_{\rm sy}\) in Table 4, we find
\[f_{\rm SF}^{\rm(sy)}\approx\frac{f_{\rm SF}^{\rm(exp)}}{1-Q_{\rm sy}}=0.080067\pm 0.000304\,{\rm Hz}, \tag{124}\]
which agrees with the fit result for \(f_{\rm SF}^{\rm(sy)}\) in Table 4. Evidently, it is the present uncertainty of \(\Delta Q_{\rm sy}\approx 3.6\cdot 10^{-3}\) which entails the uncertainty of about 4 per mille in the determination of \(f_{\rm SF}^{\rm(sy)}(\pm 2\sigma_{\rm s})\) in Table 4.
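A quick numerical cross-check of Eq. (124) and of the quoted uncertainty is sketched below, using the \(\pm 2\sigma_{\rm s}\) fit values quoted above; since Table 4 is not reproduced here, the last digits may differ slightly from the tabulated ones, and the simple quadrature error propagation is our own assumption.

```python
# Cross-check of f_SF^(sy) = f_SF^(exp) / (1 - Q_sy), Eq. (124).
f_exp, df_exp = 0.079442, 0.000096   # Hz, exponential-model fit of Ref. [17]
q_sy,  dq_sy  = 0.0077,   0.0036     # +/-2 sigma_s synchrotron-model fit

f_sy = f_exp / (1.0 - q_sy)
df_sy = ((df_exp / (1.0 - q_sy)) ** 2
         + (f_exp * dq_sy / (1.0 - q_sy) ** 2) ** 2) ** 0.5

print(f"f_SF^(sy) = {f_sy:.6f} +/- {df_sy:.6f} Hz")                  # ~0.08006 +/- 0.00031 Hz
print(f"relative uncertainty ~ {1e3 * df_sy / f_sy:.1f} per mille")  # ~3.8 per mille
```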
The precision achieved in the JEDI pilot-bunch experiment comes close to, but does not yet allow, a decisive test of the discussed spin tomography of the longitudinal structure of the bunch. We point out again that for more systematic studies it is advisable to increase the synchrotron oscillation parameter \(Q_{\rm sy}\), either at the expense of a larger \(\Delta p/p\) and correspondingly longer bunches, or by running the Wien filter at the sidebands \(K=\pm 2\) or at still larger \(K\).
\begin{table}
\begin{tabular}{l l l}
Parameter & Symbol [Unit] & Value \\ \hline
Deuteron momentum (lab) & \(P\) [MeV/c] & 970.000 \\
Lorentz factor & \(\gamma\) [1] & 1.126 \\
Beam velocity & \(\beta\) [c] & 0.460 \\
Nominal COSY orbit circumference & \(\ell_{\rm COSY}\) [m] & 183.572 \\
Revolution frequency & \(f_{\rm c}\) [Hz] & 750 602.6 \\
Spin precession frequency & \(f_{\rm s}\) [Hz] & \(-120\,860.5\) \\
Slip factor & \(\eta\) [1] & 0.6545 \\
Momentum spread in middle of cycle & \(\Delta p/p\) [1] & \(7.397\cdot 10^{-5}\) \\
Synchrotron oscillation frequency & \(f_{\rm sy}\) [Hz] & \(205\pm 21\) \\
\end{tabular}
\end{table}
Table 3: Parameters of the deuteron kinematics, the COSY ring and the synchrotron motion in the pilot bunch experiment. |
2309.15293 | Maximum diffusion reinforcement learning | Robots and animals both experience the world through their bodies and senses.
Their embodiment constrains their experiences, ensuring they unfold
continuously in space and time. As a result, the experiences of embodied agents
are intrinsically correlated. Correlations create fundamental challenges for
machine learning, as most techniques rely on the assumption that data are
independent and identically distributed. In reinforcement learning, where data
are directly collected from an agent's sequential experiences, violations of
this assumption are often unavoidable. Here, we derive a method that overcomes
this issue by exploiting the statistical mechanics of ergodic processes, which
we term maximum diffusion reinforcement learning. By decorrelating agent
experiences, our approach provably enables single-shot learning in continuous
deployments over the course of individual task attempts. Moreover, we prove our
approach generalizes well-known maximum entropy techniques, and robustly
exceeds state-of-the-art performance across popular benchmarks. Our results at
the nexus of physics, learning, and control form a foundation for transparent
and reliable decision-making in embodied reinforcement learning agents. | Thomas A. Berrueta, Allison Pinosky, Todd D. Murphey | 2023-09-26T22:14:56Z | http://arxiv.org/abs/2309.15293v5 | # Maximum Diffusion Reinforcement Learning
###### Abstract
The assumption that data are independent and identically distributed underpins all machine learning. When data are collected sequentially from agent experiences this assumption does not generally hold, as in reinforcement learning. Here, we derive a method that overcomes these limitations by exploiting the statistical mechanics of ergodic processes, which we term maximum diffusion reinforcement learning. By decorrelating agent experiences, our approach provably enables agents to learn continually in single-shot deployments regardless of how they are initialized. Moreover, we prove our approach generalizes well-known maximum entropy techniques, and show that it robustly exceeds state-of-the-art performance across popular benchmarks. Our results at the nexus of physics, learning, and control pave the way towards more transparent and reliable decision-making in reinforcement learning agents, such as locomoting robots and self-driving cars.
## 1 Introduction
Deep reinforcement learning (RL) is a powerful and flexible decision-making framework based on the experiences of artificial agents. From controlling nuclear fusion reactors [1] to besting Olympic curling champions [2] and StarCraft grandmasters [3], deep RL agents have achieved remarkable feats when they are able to exhaustively explore how their actions impact the state of their environment. Despite its impressive achievements, deep RL suffers from limitations preventing its widespread deployment in the real world: its performance varies across initial conditions, its sample inefficiency demands the use of simulators, and its agents struggle to learn outside of episodic problem structures [4; 5; 6]. At the heart of these shortcomings lies a violation of the assumption that data are independent and identically distributed (_i.i.d._), which underlies all of deep learning. While deep learning requires _i.i.d._ data, the experiences of RL agents are unavoidably sequential and correlated. It is no wonder, then, that many of deep RL's most impactful advances have sought to overcome precisely this roadblock [7; 8; 9; 10].
Over the past decade, researchers have started to converge onto an understanding that destroying temporal correlations is essential to agent performance. In offline RL, where learning occurs by sampling from a fixed database of agent experiences, the development of experience replay was a major breakthrough [11]. Experience replay and its many variants [12; 13; 14] found that sampling agent experiences in random batches can reduce temporal correlations, resulting in large performance gains across tasks and algorithms [15; 16; 17]. This simple insight--merely sampling agent experiences out of order--led to one of deep RL's landmark triumphs, achieving superhuman performance in Atari video game benchmarks [7]. Nevertheless, overcoming the effect of strong temporal correlations cannot be accomplished with sampling alone. Correlations must be destroyed during data acquisition as well,
as online RL techniques have attempted to do. In this regard, maximum entropy (MaxEnt) RL has emerged as a key advance [18; 19; 20; 21; 22; 23; 24; 25; 26]. These methods seek to destroy correlations by maximizing the entropy of an agent's policy. In doing so, MaxEnt RL techniques have been able to achieve better exploration and more robust performance [27]. However, does maximizing the entropy of an agent's policy actually decorrelate their experiences?
Here, we prove that this is generally not the case. To address this gap we introduce maximum diffusion (MaxDiff) RL, a framework that provably decorrelates agent experiences and realizes statistics indistinguishable from _i.i.d._ sampling by exploiting the statistical mechanics of ergodic processes. Our approach efficiently exceeds state-of-the-art performance by diversifying agent experiences and improving state exploration. By articulating the relationship between an agent's properties, diffusion, and learning, we prove that MaxDiff RL agents learn in single-shot deployments regardless of how they are initialized. We additionally prove that MaxDiff RL agents exhibit seed-invariance, which enables robust and reliable performance with low-variance across agent deployments and learning tasks. Our work sheds a light on foundational issues holding back the field, highlighting the impact that agent properties and data acquisition can play on downstream learning tasks, and paving the way towards more transparent and reliable decision-making in deep RL agents.
## 2 Results
### Temporal correlations hinder performance
Whether temporal correlations can be avoided depends on the properties of the underlying agent being controlled. Completely destroying correlations between an agent's state transitions requires the ability to discontinuously jump from state to state without continuity of experience. For some RL agents, this poses no issue. Particularly in settings where agents are disembodied, there may be nothing preventing effective exploration through jumps between uncorrelated states. This is one of the reasons why deep RL recommender systems have been successful in a broad range of applications, such as YouTube video suggestions [28; 29; 30]. However, continuity of experience is an essential element of many RL problem domains. For instance, the smoothness of Newton's laws makes correlations unavoidable in the motions of most physical systems, even in simulation. This suggests that for systems like robots or self-driving cars overcoming the impact of temporal correlations presents a major challenge [6].
To illustrate the impact this can have on learning performance, we devised a toy task to evaluate deep RL algorithms as a function of correlations intrinsic to the agent's state transitions. Our toy task and agent dynamics are shown in Fig. 1(a), corresponding to a double integrator system with parametrized momentum anisotropy. The task requires learning reward, dynamics, and policy models from scratch in order to move a planar point mass from a fixed initial position to a goal location. The true linear dynamics are simple enough to explicitly write down, which allows us to rigorously study temporal correlations in the agent's state transitions through the lens of controllability. Controllability is a formal property of control systems that describes their ability to reach arbitrary states in an environment [31; 32]. In linearizable systems, state transitions become pathologically correlated when they are uncontrollable. However, when the agent is controllable these correlations can be overcome, at least in principle. While the relationship between controllability and temporal correlations has been studied for decades [33], it is only recently that researchers have begun to study its impact on learning processes [34; 35; 36].
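As a concrete illustration of how \(\beta\) can degrade controllability, the sketch below builds a discrete-time planar double integrator whose \(x\)-thrust is scaled by \(\beta\) and inspects the rank and conditioning of its controllability matrix. The exact parametrization used in the toy task is not spelled out here, so this particular \((A,B)\) pair is an assumption for illustration only.

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ...] up to n blocks."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def toy_point_mass(beta, dt=0.05):
    """Discrete-time planar point mass with state [x, y, xdot, ydot];
    beta scales the control authority along the x-axis (assumed form)."""
    A = np.eye(4)
    A[0, 2] = A[1, 3] = dt
    B = np.zeros((4, 2))
    B[2, 0] = beta * dt   # anisotropic momentum gain in x
    B[3, 1] = dt
    return A, B

for beta in (1.0, 0.1, 0.001, 0.0):
    A, B = toy_point_mass(beta)
    C = controllability_matrix(A, B)
    s = np.linalg.svd(C, compute_uv=False)
    cond = s[0] / s[-1] if s[-1] > 0 else np.inf
    print(f"beta={beta:6.3f}  rank={np.linalg.matrix_rank(C)}  cond={cond:.2e}")
```

At \(\beta=0\) the controllability matrix loses rank (the \(x\)-direction becomes unreachable), and as \(\beta\to 0\) its condition number blows up, mirroring the parametric performance degradation seen in Fig. 1.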
Figure 1 parametrically explores the relationship between our toy system's controllability properties and the learning performance of state-of-the-art deep RL algorithms. The point mass dynamics are parametrized by \(\beta\in[0,1]\), which determines the relative difficulty of translating horizontally on the \(x\)-axis (Fig. 1(a)). When \(\beta=0\) the system is uncontrollable and can only translate vertically along the \(y\)-axis, which illustrates the sense in which our agent's state transitions become pathologically correlated. While the system is formally controllable for all non-zero \(\beta\), its reachable states can only satisfy the exploration statistics specified by its action distribution when it is equal to 1 (see Supplementary Figure 1). We evaluated the performance of state-of-the-art model-based and model-free deep RL algorithms on our task--model-predictive path integral control (NN-MPPI) [37] and soft actor-critic (SAC) [9], respectively--at varying values of \(\beta\), from 1 to 0.001. As expected, at \(\beta=1\) both NN-MPPI and SAC are able to accomplish the toy task (Fig. 1(b)). However, as \(\beta\to 0\) the performance of NN-MPPI and SAC degrades parametrically (Fig. 1(c)), up until the point that neither
algorithm can solve the task, as shown in Fig. 1(d). Hence, temporal correlations can completely hinder the learning performance of the state-of-the-art in deep RL even in toy problem settings such as this one, where a globally optimal policy can be analytically computed in closed form.
Failure to overcome correlations between state transitions can prevent effective exploration, severely impacting the performance of deep RL agents. As Fig. 1(d) illustrates, neither NN-MPPI nor SAC agents are able to sufficiently explore in the \(x\)-dimension of their state space as a result of their decreasing degree of controllability (see Supplementary Note 1.1). This is the case despite the fact that NN-MPPI and SAC are both MaxEnt RL algorithms [9; 38], designed specifically to achieve improved exploration outcomes by decorrelating their agent's action sequences. In contrast, our
Figure 1: **Temporal correlations break the state-of-the-art in RL.** Controllability is a property of control systems that can determine how correlated state transitions are in linearizable systems (Supplementary Note 1.1). **a,** planar point mass system whose dynamics are simple enough to write down explicitly and whose policy admits a globally optimal analytical solution. The system's 4-dimensional state space is comprised of its planar positions and velocities. We parametrize its controllability through \(\beta\in[0,1]\), where \(\beta=0\) produces a formally uncontrollable system. The task is to translate the point mass from \(p_{0}\) to \(p_{g}\) within a fixed number of steps at different values of \(\beta\), and the reward is specified by the negative squared Euclidean distance between the agent's state and the goal. We compare state-of-the-art model-based and model-free algorithms, NN-MPPI and SAC respectively, to our proposed maximum diffusion (MaxDiff) RL framework (see Supplementary Note 3 for implementation details). **b, d**, Representative snapshots of MaxDiff RL, NN-MPPI, and SAC agents (top to bottom) in well-conditioned (\(\beta=1\)) and poorly-conditioned (\(\beta=0.001\)) controllability settings. **c,** Even in this simple system, poor controllability can break the performance of RL agents. As \(\beta\to 0\) the system's ability to move in the \(x\)-direction diminishes, hindering the performance of NN-MPPI and SAC, while MaxDiff RL remains task-capable (10 seeds each).
proposed approach--MaxDiff RL--is able to consistently succeed at the task and is guaranteed to realize effective exploration by focusing instead on decorrelating agent experiences, i.e., their state sequences (see purple curves in Fig. 1(b-d)), as we discuss in the following section.
### Maximum diffusion exploration and learning
Due to its history in the study of multi-armed bandits, most methods in the field of RL presuppose that taking random actions produces effective exploration [39; 40]. Even sophisticated techniques like MaxEnt RL implicitly rely on this assumption. Rather than sampling actions from a fixed uniform or Gaussian distribution, MaxEnt RL algorithms seek to maximize the entropy of a learned action distribution (i.e., a policy) in hopes of decorrelating agent experiences and improving exploration outcomes. However, as we have illustrated in the previous section, whether this is actually possible depends on the agent's controllability properties and the temporal correlations these spontaneously induce in their experiences (see Fig. 2(c) and Supplementary Note 1.1). To overcome these limitations, in this work we decorrelate agent experiences as opposed to their action sequences, which forms the starting point to our derivation of the MaxDiff RL framework.
Prior to synthesizing policies that try to decorrelate agent experiences, we start by asking what is the most decorrelated that agent experiences can get to begin with? To answer this question we draw from the statistical physics literature on maximum caliber [41; 42; 43], which generalizes the variational principle of maximum entropy [44] to distributions over trajectories or paths of agent states or experiences, \(x(t)\), which we take to be continuous in time for the purposes of our derivation. Using this framework, we may derive a probability distribution over agent paths, \(P[x(t)]\), by optimizing an entropy functional, \(S[P[x(t)]]\). The optimal distribution, \(P_{max}[x(t)]\), would describe the statistics of the least correlated agent paths, but its specific form and properties depend on how the variational optimization is constrained. In the absence of trajectory constraints, agents can sample states discontinuously and uniformly in a way that is equivalent to _i.i.d._ sampling, but is not consistent with the continuous experiences of embodied agents in the real world or in simulation (Fig. 2(a,b)). Hence, to ensure our optimization produces a distribution over continuous paths, we constrain the volume of states reachable within any finite time interval by accounting for the system's controllability properties (see Methods).
Surprisingly, this constrained variational optimization admits an analytical solution for the maximum entropy path distribution. The derived optimal path distribution is
\[P_{max}[x(t)]=\frac{1}{Z}\exp\Big{[}-\frac{1}{2}\int_{-\infty}^{\infty}\dot{x }(t)^{T}\mathbf{C}^{-1}[x(t)]\dot{x}(t)dt\Big{]}, \tag{1}\]
where \(\mathbf{C}[x^{*}]=Cov[x(t)]_{x(t_{0})=x^{*}}\) captures the local magnitude of temporal correlations induced by the agent's controllability properties, and \(Z\) is a normalization constant (see Methods). This distribution describes the statistics of an agent with minimally correlated continuous paths, subject to the constraints imposed by their controllability. Moreover, Eq. 1 is equivalent to the path distribution of an anisotropic, spatially-inhomogeneous diffusion process. Thus, minimizing correlations among agent trajectories leads to diffusion-like exploration, whose properties can actually be analyzed through the lens of statistical mechanics (see Supplementary Figure 3). This also means that the sample paths of the optimal agent are Markovian and ergodic (see Supplementary Note 1.4 for associated theorems, corollaries, and their proofs). Unlike alternative RL frameworks, our approach does not assume the Markov property, but rather enforces it as a property intrinsic to the optimal agent's path statistics.
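To build intuition for Eq. 1, the sketch below draws sample paths from an anisotropic, spatially-inhomogeneous diffusion by Euler-Maruyama integration, using a local covariance \(\mathbf{C}[x]\) as the diffusion tensor. The particular \(\mathbf{C}[x]\) chosen here is an illustrative assumption rather than one estimated from an actual system.

```python
import numpy as np

def local_covariance(x):
    """Illustrative controllability covariance C[x]: anisotropic and state-dependent."""
    c_x = 0.1 + 0.9 / (1.0 + x[0] ** 2)   # motion in x is easier near the origin
    return np.diag([c_x, 1.0])

def sample_max_diff_path(x0, steps=1000, dt=1e-2, seed=None):
    """Euler-Maruyama sample path of a diffusion with local covariance C[x] (cf. Eq. 1)."""
    rng = np.random.default_rng(seed)
    path = np.empty((steps + 1, len(x0)))
    path[0] = x0
    for t in range(steps):
        L = np.linalg.cholesky(local_covariance(path[t]))
        path[t + 1] = path[t] + np.sqrt(dt) * L @ rng.standard_normal(len(x0))
    return path

paths = [sample_max_diff_path(np.zeros(2), seed=s) for s in range(10)]
```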
Satisfying ergodicity has profound implications for the behavior of resulting agents. Ergodicity is a formal property of dynamical systems that guarantees that the statistics of individual trajectories are asymptotically equivalent to those of a large ensemble of trajectories [45; 46]. Put in terms of our problem setting, while the sequential nature of RL agent experience can make _i.i.d._ sampling technically impossible, the global statistics of an ergodic RL agent are indistinguishable from those of an _i.i.d._ sampling process. In this sense, ergodic Markov sampling is the best possible alternative to _i.i.d._ sampling in sequential decision-making processes. Beyond resolving the issue of generating _i.i.d._ samples in RL, ergodicity forms the basis of many of MaxDiff RL's theoretical guarantees, as we show in the following sections.
When an agent satisfies the statistics of Eq. 1, we describe the agent as maximally diffusive. However, agents do not satisfy maximally diffusive statistics spontaneously. Matching these statistics requires
finding a policy capable of realizing them, which forms the core of what we term MaxDiff RL. While any given policy induces a path distribution, finding policies that realize maximally diffusive statistics requires optimization and learning (Fig. 2(d)). To satisfy the requirements of RL as a problem setting, we define:
\[\begin{split} P_{\pi}[x_{0:T},u_{0:T}]&=\prod_{t=0}^{T -1}p(x_{t+1}|x_{t},u_{t})\pi(u_{t}|x_{t})\\ P^{r}_{max}[x_{0:T},u_{0:T}]&=\prod_{t=0}^{T-1}p_{ max}(x_{t+1}|x_{t})e^{r(x_{t},u_{t})},\end{split} \tag{2}\]
where we discretized the distribution in Eq. 1 as \(p_{max}(x_{t+1}|x_{t})\), and analytically rederived the optimal path distribution under the influence of a reward landscape, \(r(x_{t},u_{t})\) (see Methods). Given the distributions in Eq. 2, the goal of MaxDiff RL can be framed as minimizing the Kullback-Leibler (KL) divergence between them--that is, between the agent's current path distribution and the maximally diffusive one--as in the KL-control literature.
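A rough Monte Carlo sketch of that KL objective, estimated from finite-horizon rollouts under Eq. 2, is shown below; the log-density functions are stand-ins for the learned dynamics, policy, and maximally diffusive transition models, and the estimate is only defined up to the additive constant coming from the normalization of \(P^{r}_{max}\).

```python
def path_kl_estimate(rollouts, dynamics_logp, policy_logp, maxdiff_logp, reward):
    """Monte Carlo estimate of KL(P_pi || P^r_max) over rollouts, up to a constant.

    Each rollout is a (states, actions) pair with len(states) == len(actions) + 1.
    """
    total = 0.0
    for states, actions in rollouts:
        for t in range(len(actions)):
            x, u, x_next = states[t], actions[t], states[t + 1]
            total += (dynamics_logp(x_next, x, u) + policy_logp(u, x)
                      - maxdiff_logp(x_next, x) - reward(x, u))
    return total / len(rollouts)
```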
To draw connections between our framework and the broader MaxEnt RL literature, we recast the KL-control formulation of MaxDiff RL as an equivalent stochastic optimal control (SOC) problem. In SOC, the goal is to find a policy that maximizes the expected cumulative rewards of an agent in an
Figure 2: **Maximum diffusion RL exploits controllability to achieve effective exploration.****a,b,** Systems with different planar controllability properties leading to different possible trajectories despite both systems being formally controllable. **c,** Whether action randomization leads to effective state exploration depends on an agent's controllability (see Supplementary Note 1.1), as in our illustration of a complex bipedal robot falling over and failing to explore. **d,** While any given policy induces a path distribution (left), MaxDiff RL produces policies that maximize the path distribution's entropy (right). The projected support of the robot's path distribution is illustrated by the shaded gray region. We prove that maximizing the entropy of an agent's state transitions results in effective exploration (see Supplementary Notes 1.4 and 2.5). **e,** Our approach generalizes the MaxEnt RL paradigm by considering agent dynamics in addition to their policy. We prove that maximizing a policy's entropy does not generally maximize the entropy of an agent's state transitions (see Supplementary Note 2.2). **f,** This approach leads to distinct learning outcomes because agents can reason about the impact of their actions on state transitions, rather than their actions in isolation.
environment. In this way, we can express the MaxDiff RL objective as
\[\pi^{*}_{\text{MaxDiff}}=\underset{\pi}{\text{argmax}}\;E_{(x_{0:T},u_{0:T}) \sim P_{\pi}}\Bigg{[}\sum_{t=0}^{T-1}\hat{r}(x_{t},u_{t})\Bigg{]}, \tag{3}\]
with modified rewards given by
\[\hat{r}(x_{t},u_{t})=r(x_{t},u_{t})-\alpha\log\frac{p(x_{t+1}|x_{t},u_{t})\pi( u_{t}|x_{t})}{p_{max}(x_{t+1}|x_{t})}, \tag{4}\]
where \(\alpha\) is a positive temperature-like parameter we introduce to balance diffusive exploration and reward exploitation, as we discuss in the following section. With these results in hand, we may now state one of our main theorems.
**Theorem 1**.: _MaxEnt RL is a special case of MaxDiff RL with the added assumption that state transitions are decorrelated._
Proving this result is simple and only relies on the sense in which state transitions are decorrelated, which we discuss in detail in Supplementary Note 2.2.
Completely destroying correlations generally requires discontinuous jumps between states, which can only be achieved by fully controllable agents [23]. When an agent is fully controllable, there always exists a policy that enables it to reach every state and specify the statistics of how each state is reached. If this condition is met, then the optimum of Eq. 3 is attained when \(p(x_{t+1}|x_{t},u_{t}^{*})=p_{\pi^{*}}(x_{t+1}|x_{t})=p_{max}(x_{t+1}|x_{t})\), where \(u_{t}^{*}\) are actions drawn from an optimized policy \(\pi^{*}\). In turn, this simplifies Eq. 4 and recovers the MaxEnt RL objective [9], as shown in Supplementary Note 2.2. This proves not only that MaxDiff RL is a generalization of the MaxEnt RL framework to agents with correlations in their state transitions, but also makes clear that maximizing policy entropy cannot decorrelate agent experiences in general. In contrast, MaxDiff RL actively enforces the decorrelation of state transitions at all points in time. We can think of this intuitively by noting that MaxDiff RL simultaneously accounts for the effect of the policy and of the temporal correlations induced by agent dynamics in its optimization (Fig. 2(e)). As such, MaxDiff RL typically produces distinct learning outcomes from MaxEnt RL (Fig. 2(f)). Our result also implies that all theoretical robustness guarantees of MaxEnt RL (e.g., [27]) should be interpreted as guarantees of MaxDiff RL when state transitions are decorrelated. Moreover, we suggest that many of the gaps between MaxEnt RL's theoretical results and their practical performance may be explained by the controllability properties of the underlying agent, as we saw in Fig. 1.
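In practice, the objective in Eqs. 3 and 4 can be evaluated from learned densities. A minimal sketch of how the modified reward in Eq. 4 could be computed is given below; `dynamics_logp`, `policy_logp`, and `maxdiff_logp` are stand-in names for the learned transition model, the policy, and the maximally diffusive transition distribution, and are not part of any released implementation.

```python
def modified_reward(r, dynamics_logp, policy_logp, maxdiff_logp, alpha=1.0):
    """Build the per-step modified reward of Eq. 4."""
    def r_hat(x_t, u_t, x_next):
        # log-ratio between the induced and maximally diffusive transition statistics
        penalty = (dynamics_logp(x_next, x_t, u_t)
                   + policy_logp(u_t, x_t)
                   - maxdiff_logp(x_next, x_t))
        return r(x_t, u_t) - alpha * penalty
    return r_hat
```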
Finally, while our results seem to suggest that model-free implementations of MaxDiff RL are not feasible, we note that it is possible to indirectly account for the agent's controllability properties by learning local estimates of the agent's entropy generation from observations. Similar entropy estimates have been used in model-free RL [47] and more broadly in the autoencoder literature [48]. For the results presented in this manuscript, we derived a model-agnostic objective that uses an analytical expression for the state transition entropy,
\[\underset{\pi}{\text{argmax}}\;E_{(x_{0:T},u_{0:T})\sim P_{\pi}}\Bigg{[}\sum _{t=0}^{T-1}r(x_{t},u_{t})+\frac{\alpha}{2}\log\det\mathbf{C}[x_{t}]\Bigg{]}, \tag{5}\]
whose optimum realizes the same maximally diffusive statistics as Eq. 3. We note that there are many ways to formulate the MaxDiff RL objective, each of which may have implementation-specific advantages (see Fig. 3(a) and Supplementary Note 2.3). In this sense, MaxDiff RL is not a specific algorithm implementation but rather a general problem statement and solution framework, similar to MaxEnt RL. In this work, our MaxDiff RL implementation is exactly identical to NN-MPPI except for the additional entropy term shown above. However, as we will demonstrate, this simple modification can have a drastic effect on agent behavior.
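Because the implementation used here is NN-MPPI plus the entropy term in Eq. 5, the change amounts to augmenting each step's reward with \(\frac{\alpha}{2}\log\det\mathbf{C}[x_{t}]\). A hedged sketch of that augmentation follows; the interface of the learned covariance model `cov_fn` is an assumption.

```python
import numpy as np

def maxdiff_step_reward(r_fn, cov_fn, alpha=100.0):
    """Augment a step reward with the state-transition entropy bonus of Eq. 5.

    r_fn(x, u) -> task reward; cov_fn(x) -> local covariance C[x] of shape (n, n).
    """
    def step_reward(x, u):
        sign, logdet = np.linalg.slogdet(cov_fn(x))
        bonus = 0.5 * alpha * logdet if sign > 0 else -np.inf
        return r_fn(x, u) + bonus
    return step_reward
```

Any planner that scores candidate action sequences by summing step rewards, such as MPPI, can use `step_reward` in place of the original reward without further changes.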
### Seed-invariance in ergodic agents
The introduction of an entropy term in Eq. 5 means that MaxDiff RL agents must balance between two aims: achieving the task and embodying diffusion (Fig. 3(a)). While asymptotically there is no trade-off between maximally diffusive exploration and task exploitation, managing the relative balance between these two aims is important over finite time horizons, which we achieve with a
temperature-like parameter, \(\alpha\). Unlike similar parameters in other RL frameworks, the role of \(\alpha\) in MaxDiff RL can often be understood without the need for analogy through the lens of statistical mechanics. For simple MaxDiff RL agents with fixed controllability properties in a reward landscape, \(\alpha\) sets the temperature of a heat bath induced by the policy (see Supplementary Figure 6). Hence, in line with the statistical mechanics of diffusion, the value of \(\alpha\) can play a role in establishing the ergodicity of MaxDiff RL agents. If \(\alpha\) is set too high, then the system's fluctuations can overpower the influence of the reward and break ergodicity, which has been shown in the context of diffusion processes in potential fields [49].
Since ergodicity provides many of MaxDiff RL's desirable properties and guarantees, tuning the value of \(\alpha\) is essential. In Fig. 3 and Supplementary Movie 1, we explore the effect of tuning \(\alpha\) on the learning performance of MaxDiff RL agents in MuJoCo's swimmer environment. The swimmer system is comprised of three rigid links of nominally equal mass, \(m=1\), with two degrees of actuation at the joints. The agent's objective is to swim in the goal direction as fast as possible within a fixed time interval, while in the presence of viscous drag forces (Fig. 3(a)). In Fig. 3(b), we vary \(\alpha\) across multiple orders of magnitude and examine its impact on the terminal rewards of MaxDiff RL swimmer agents. As we modulate the value of \(\alpha\) from 1 to 100, we observe that diffusive exploration leads to greater task rewards. However, after \(\alpha=100\) we cross a critical threshold, beyond which the strength of the system's diffusive exploration overpowers the reward (see inset dotted line in Fig. 3(b)), thereby breaking the ergodicity of our agents with respect to the underlying potential and performing poorly at the task--just as predicted by our theoretical framework.
Figure 3: **Diffusive exploration produces robust seed-invariant performance.****a**, Illustration of MuJoCo swimmer environment (left panel). The swimmer has 2 degrees of actuation, \(u_{1}\) and \(u_{2}\), that rotate its limbs at the joints, with tail mass \(m_{s}\) and \(m=1\) for other limbs. MaxDiff RL synthesizes robust agent behavior by learning policies that balance task-capability and diffusive exploration (right panel). In practice this balance is tuned by a temperature-like parameter, \(\alpha\). **b**, To explore the role that \(\alpha\) plays in the performance of MaxDiff RL, we examine the terminal rewards of swimmer agents (10 seeds each) across values of \(\alpha\) with \(m_{s}=1\). Diffusive exploration leads to greater rewards until a critical point (inset dotted line), after which the agent starts valuing diffusing more than accomplishing the task (see also Supplementary Movie 1). **c**, Using \(\alpha=100\), we compared MaxDiff RL against SAC and NN-MPPI with \(m_{s}=0.1\). We observe that SAC consistently achieves suboptimal performance, whereas NN-MPPI can achieve competitive performance but not reliably as there is substantial variation across seeds (see shaded area, 10 seeds each). Our approach performs robustly across seeds, since seed-invariance is a formal property of MaxDiff RL agents (see also Supplementary Movie 2).
Given a constant temperature of \(\alpha=100\) that preserves the swimmer's ergodicity, we compared the performance of MaxDiff RL to NN-MPPI and SAC across 10 seeds each. To ensure that the task was solvable by all agents, we lowered the mass of the swimmer's third link (i.e., its tail) to \(m_{s}=0.1\). We find that while SAC struggles to succeed at this task within a million environment interactions, NN-MPPI achieves good performance but with high variance across seeds. This is in stark contrast to MaxDiff RL, whose performance is near-identical and competitive across all random seeds (see Fig. 3(c) and Supplementary Movie 2). Hence, merely by decorrelating state transitions, our agent was able to exhibit robustness to model and environment randomization beyond what is typically possible in deep RL. Moreover, since our implementation of MaxDiff RL is identical to that of NN-MPPI, we can completely attribute any performance gains and variance reduction to the properties of MaxDiff RL's theoretical framework.
This robustness to model and environmental randomizations is referred to as seed-invariance, and is a highly desirable feature of deep RL agents. However, guaranteeing seed-invariance is generally challenging because it requires modeling the impact of neural representation variability on learning outcomes. Nonetheless, we can still provide model-independent guarantees through the probably approximately correct (PAC) learning framework--one of the most successful and widely applied mathematical formalizations of learning [50]. The PAC framework assesses an agent's ability to learn function classes from data with a given likelihood and margin of error. Under this framework, we are able to provide formal seed-invariance guarantees.
**Theorem 2**.: _MaxDiff PAC learners are seed-invariant._
We refer the reader to Supplementary Note 1.5 for details, but the proof follows from treating PAC generalization risk as an observable in Birkhoff's ergodic theorem [45]. Since maximally diffusive agents are ergodic, any system initialization will eventually realize identical learning outcomes, leading to seed-invariance. Despite excluding neural representations from our analysis, Fig. 3(c) suggests that our guarantees hold empirically. Beyond PAC learning, we note that maximally diffusive agents are still provably robust to environmental randomization (see Supplementary Note 1.6).
### Generalization across agent embodiments
As we saw in a previous section, when agents are capable of finding optimal policies, the MaxDiff RL objective in Eq. 4 becomes independent of the underlying agent's state transition statistics. This suggests that successful MaxDiff RL models and policies may exhibit favorable generalization properties across agent embodiments. To explore this question, as well as the robustness of MaxDiff RL agents to variations in their neural network models, we devised a transfer experiment in the MuJoCo swimmer environment. We designed two variants of the swimmer agent: one with a heavy, less controllable tail of \(m_{s}=1\), and another with a light, more controllable tail of \(m_{s}=0.1\) (Fig. 4(a)). We trained two sets of models for each algorithm included in the comparison. One set was trained with the light-tailed swimmer, and another set was trained with the heavy-tailed swimmer. Then, we deployed and evaluated each set of models on both the swimmer variant that they observed during training, as well as its counterpart. Our experiment's outcomes are shown in Fig. 4(b,c), where the results are categorized as "baseline" if the trained and deployed swimmer variants match, or "transfer" if they were swapped. The baseline experiments validate other results shown throughout the manuscript: all algorithms benefit from working with a more controllable system (see Fig. 4(b) and Supplementary Movie 2). However, as MaxDiff RL is the only approach that takes into account system controllability, it is the only method that remains task-capable with a heavy-tailed swimmer.
For the transfer experiments, all of the learned neural representations of the reward function, control policy, and agent dynamics were deployed on the swimmer variant that was not seen during training (Fig. 4(a)). First, we note that for both NN-MPPI and SAC model transfer leads to degrading performance across the board. This is the case even when the swimmer variant they were deployed onto was more controllable, which is counterintuitive and undesirable behavior. In contrast, our MaxDiff RL agents can actually benefit and improve their performance when deployed on the more controllable swimmer variant, as desired (see "Heavy-to-Light" transfer in Fig. 4(c) and Supplementary Movie 3). In other words, as the task becomes easier in this way, we can expect the performance of MaxDiff RL agents to improve. Another crucial MaxDiff RL transfer result is the performance increase between the baseline heavy-tailed swimmer and the "Light-to-Heavy" transfer swimmer (Fig. 4(c) and Supplementary Movie 3). We found that training with a more controllable swimmer increased the performance of agents when deployed on the heavy-tailed swimmer, showing
that system controllability during training matters more to overall performance than the particular embodiment of the deployed system. In part, this occurs because greater controllability leads to improved exploration, which increases the diversity of data observed during training. While formalizing this result is challenging, we note that MaxDiff RL encourages generalizable policies by focusing on agent outcomes instead of input actions. By forcing agent path statistics to match those of an ergodic diffusion process, MaxDiff RL implicitly minimizes the effect of agent dynamics on performance.
### Single shot learning in ergodic agents
A longstanding challenge in the field of RL is the development of methods capable of supporting learning in single-shot agent deployments. Most methods in deep RL are designed to work in multi-shot settings (Fig. 5(b)), where randomized instantiations of tasks and environments, reset
Figure 4: **Trained system controllability dominates deployed system performance.****a**, Two variants of the MuJoCo swimmer environment: one with \(m_{s}=1\) and another with \(m_{s}=0.1\). As a baseline, we deploy learned models on the same swimmer variant trained on. Then, we carry out a transfer learning experiment where the trained and deployed swimmer variants are swapped. **b**, Baseline experiments confirm our previous results--all algorithms benefit from a more controllable swimmer. Since MaxDiff RL optimizes system controllability, it is the only method capable of achieving the task with a heavy-tailed swimmer (see also Supplementary Movie 2). **c**, Both NN-MPPI and SAC performance degrades when deployed on a more controllable system than was trained on, which is not desirable behavior. In contrast, MaxDiff RL benefits from the "Heavy-to-Light" transfer because it learns policies that take advantage of a more capable system during deployment. We also observe that MaxDiff RL performance further increases in the "Light-to-Heavy" transfer experiment, showing that system controllability during training is more important to overall performance than the particular embodiment of the system it is ultimately deployed on (see also Supplementary Movie 3).
across distinct agent deployments, provide a kind of passive variability that is essential to learning processes. However, episodic problem structures of this kind are very rare in real-world applications. As a result, most multi-shot learning methods need simulations to work in practice, requiring the aid of sim-to-real techniques to bridge performance gaps in real-world deployment [6]. Thus, techniques capable of enabling continual learning from scratch, without resetting the agent or environment, are crucial to future applications of deep RL.
Despite the challenges associated with studying the behavior of agents based on neural network models, the ergodic properties of MaxDiff RL enable one to provide model-independent guarantees on the feasibility of single-shot learning through the PAC learning framework.
**Theorem 3**.: _MaxDiff multi-shot PAC learners are also single-shot PAC learners._
This theorem follows directly from the seed-invariance of maximally diffusive agents. Since ergodicity guarantees that the learning performance of any given PAC learner is indistinguishable from that of an ensemble of learner initializations, it also necessarily implies that the learning outcomes of single-shot
Figure 5: **Maximally diffusive RL agents are capable of single-shot learning.****a**, Illustration of MuJoCo ant environment. **b**, Typical algorithms learn across many different initializations and deployments of an agent, which is known as multi-shot learning. In contrast, single-shot learning insists on a single agent deployment, which requires learning through contiguous experiences. Here, we prove that MaxDiff RL agents are equivalently capable of single-shot and multi-shot learning in a broad variety of settings. **c**, Single-shot learning depends on the ability to generate data samples ergodically, which MaxDiff RL guarantees when there are no irreversible state transitions in the environment. **d**, Single-shot learning in the swimmer MuJoCo environment. We find that MaxDiff RL achieves robustly seed-invariant performance comparable to its multi-shot counterpart (see also Supplementary Movie 4). **e**, In contrast to the swimmer, the MuJoCo ant environment contains irreversible state transitions (e.g., flipping upside down) preventing ergodic trajectories. Nonetheless, MaxDiff RL remains state-of-the-art in single-shot learning.
and multi-shot PAC learners are identical, asymptotically (see Supplementary Note 1.5). Because the ergodicity of maximally diffusive agents is central to this proof, we expect this equivalence to fail when ergodicity is broken by either the agent or the environment.
Figure 5 demonstrates the single-shot learning capabilities of MaxDiff RL agents, and explores what happens when ergodicity is broken by the topological properties of the environment. Here, we examine both the MuJoCo swimmer and ant environments (Fig. 5(a)). The primary difference between these two environments is the existence of irreversible state transitions that can violate the ergodicity requirement of our single-shot learning guarantees topologically (Fig. 5(c)). Unlike the swimmer, the ant is capable of transitioning into such states by flipping upside down, thereby breaking ergodicity. Irreversible state transitions are common in real-world applications because they can arise as a result of unsafe behavior, such as a robot breaking or malfunctioning during learning. While such transitions can be prevented in principle through the use of safety-preserving methods [51, 52, 53], we omit their implementation to illustrate our point. As expected, the MaxDiff RL single-shot swimmer is capable of learning in continuous deployments (see Fig. 5(d) and Supplementary Movie 4), retaining the same seed-invariance of its multi-shot counterpart in Fig. 3(c), and achieving similar task performance. Despite ergodicity-breaking in the single-shot ant environment, MaxDiff RL still leads to improved outcomes over NN-MPPI and SAC, as in Fig. 5(e), where we plot the final distance traveled to ensure that no reward hacking took place. However, the loss of ergodicity leads to an increase in the variance of MaxDiff RL agent performance, which we expect as a result of seed-invariance no longer holding.
## 3 Discussion
Throughout this work, we have highlighted the ways in which deep RL is fragile to correlations intrinsic to many sequential decision-making processes. We introduced a framework based on the statistical mechanics of ergodic processes to overcome these limitations, which we term MaxDiff RL. Our framework offers a generalization of the current state-of-the-art in deep RL, and addresses many foundational issues holding back the field: the ergodicity of MaxDiff RL agents enables data acquisition that is indistinguishable from _i.i.d._ sampling, performance that is robust to seeds, and single-shot learning. Through its roots in statistical physics, our work forms a starting point for a more scientific study of deep RL--one in which falsifiable predictions can be made about the properties of deep RL agents and their performance. However, much more interdisciplinary work at the nexus of physics, learning and control remains to be done in pursuit of this goal. For one, approaches grounded in statistical physics for tuning or annealing temperature-like parameters during learning will be necessary to achieve effective exploration without sacrificing agent performance. Additionally, work in computational learning theory will be crucial to certifying the performance of agents subject to real-world conditions, such as distribution shift. And control techniques capable of enforcing ergodicity in the face of environmental irreversibility are needed to guarantee desirable agent properties like seed-invariance in complex problem settings. Taken together, our work paves the way towards more transparent and reliable decision-making with deep learning agents, which will be crucial to the long-term viability of deep RL as a field.
## Methods
### Reinforcement learning preliminaries
In this work, we make use of various notational conventions across multiple fields. Here, we summarize some basic notational norms of RL as a decision-making framework. RL problems are modeled as Markov decision processes (MDPs). MDPs are typically defined according to a 4-tuple, \((\mathcal{X},\mathcal{U},p,r)\), where we take both the state space, \(\mathcal{X}\), and the action space, \(\mathcal{U}\), to be continuous. Then, \(p:\mathcal{X}\times\mathcal{X}\times\mathcal{U}\rightarrow[0,\infty)\) represents the probability density of transitioning from state \(x_{t}\in\mathcal{X}\) to state \(x_{t+1}\in\mathcal{X}\) after taking action \(u_{t}\in\mathcal{U}\). At every state and for each action taken, the environment emits a bounded reward \(r:\mathcal{X}\times\mathcal{U}\rightarrow[r_{min},r_{max}]\). In general, the goal is to learn a policy \(\pi:\mathcal{U}\times\mathcal{X}\rightarrow[0,\infty)\) capable of producing actions that maximize an agent's expected cumulative rewards over the course of \(T\) discrete time stages, where \(t\in\{0,\cdots,T-1\}\).
### Maximum caliber variational optimization
Here, we restate the objective function of our maximum caliber functional optimization and some key analytical results. For a complete motivation and derivation of the results that follow, we refer the reader to Supplementary Note 1. We begin by defining a stochastic control process with continuous state trajectories \(x(t)\) over a compact measurable probability space \((\mathcal{X},\mathcal{F},P)\), where \(\mathcal{X}\) is the agent's state space, \(\mathcal{F}\) is the space of all possible paths through said state space, and \(P\) is a probability measure (see Supplementary Note 1.2 for more details). The objective consists of three components: first, a path integral entropy functional; then, a normalization term to ensure valid probability distributions; and finally a constraint on the local magnitude of the agent's velocity fluctuations. We define the local magnitude of these fluctuations up to a proportionality constant in the following way,
\[\langle\dot{x}(t)\dot{x}(t)^{T}\rangle_{x^{*}}\propto\int_{\mathcal{F}}P[x(t)] \int_{-\infty}^{\infty}\dot{x}(t)\dot{x}(t)^{T}\delta(x(t)-x^{*})dt\mathcal{D} x(t),\]
where \(\mathcal{D}x(t)\) denotes path integration, \(\delta(\cdot)\) is a Dirac delta function, and \(x^{*}\) is a particular point in \(\mathcal{X}\). We note that \(\dot{x}(t)\) should not be interpreted as a statement on the differentiability of agent paths, but rather as a shorthand for the integral representation of the agent's evolution, as is standard in the Langevin process literature. This allows us to write the objective of our variational optimization as,
\[\underset{P[x(t)]}{\text{argmax}} -\int_{\mathcal{F}}P[x(t)]\log P[x(t)]\mathcal{D}x(t)-\lambda_{0} \Big{(}\int_{\mathcal{F}}P[x(t)]\mathcal{D}x(t)-1\Big{)}\] \[-\int_{\mathcal{X}}Tr\Big{(}\Lambda(x^{*})^{T}\big{(}\langle\dot{ x}(t)\dot{x}(t)^{T}\rangle_{x^{*}}-\mathbf{C}[x^{*}]\big{)}\Big{)}dx^{*},\]
where \(\lambda_{0}\) is a Lagrange multiplier, \(\Lambda(x^{*})\) is a matrix-valued Lagrange multiplier at all points \(x^{*}\in\mathcal{X}\), and \(\mathbf{C}[x^{*}]=Cov[x(t)]_{x(t_{0})=x^{*}}\) empirically captures the local controllability properties of agents (see Supplementary Note 1.1). Our constraint on the system's velocity fluctuations locally bounds their magnitude to the strength of temporal correlations induced by the system's controllability properties, as measured by \(\mathbf{C}[x^{*}]\). This effectively limits the volume of states reachable by the system within a finite time interval. Crucially, we note that it would not be sufficient to merely constrain the velocities' magnitudes because such a constraint would admit degenerate path distributions, with support on lower-dimensional manifolds. We also note that we omit boundary conditions in our problem statement for simplicity. The solution to this variational optimization is the following distribution describing the statistics of agent paths,
\[P_{max}[x(t)]=\frac{1}{Z}\exp\Big{[}-\frac{1}{2}\int_{-\infty}^{\infty}\dot{x }(t)^{T}\mathbf{C}^{-1}[x(t)]\dot{x}(t)dt\Big{]}.\]
As discussed in the main text, this expression describes the path statistics of an anisotropic, spatially-inhomogeneous Markov diffusion process (see [54], Ch. 9), the ergodicity of which we prove in Supplementary Note 1.4.
When agents are influenced by a potential field or a reward function, the maximum caliber objective above can be easily adapted to account for this influence under mild assumptions, as discussed in Supplementary Note 1.5. The new objective is a variational free energy minimization,
\[\underset{P[x(t)]}{\text{argmin}}\ \langle V[x(t)]\rangle_{P}-S[P[x(t)]],\]
where \(S[P[x(t)]]\) is the original maximum caliber objective shown previously in this section, and the potential is defined in the following way:
\[\langle V[x(t)]\rangle_{P}=\int_{\mathcal{F}}P[x(t)]\int_{-\infty}^{\infty}V[ x(t)]dt\mathcal{D}x(t).\]
The derivation of its solution is similar to the one for the original objective, attaining the following optimal path distribution up to normalization:
\[P_{max}^{r}[x(t)]=P_{max}[x(t)]\cdot e^{\int r[x(t)]dt},\]
where \(r[x(t)]=-V[x(t)]\) is a bounded and Lipschitz instantaneous reward function defined over continuous agent paths. Notably, the resulting distribution continues to satisfy the Markov property
and ergodicity (see Supplementary Note 1.5 for proofs), and the influence of the reward is factorable from the optimal path statistics. Discretizing this distribution and accounting for control actions (see Supplementary Note 2.2), we have the following analytically-derived distribution as well,
\[P^{r}_{max}[x_{0:T},u_{0:T}]=\prod_{t=0}^{T-1}p_{max}(x_{t+1}|x_{t})e^{r(x_{t},u _{t})},\]
which we make use of to adapt our approach to maximally diffusive trajectory synthesis to RL and MDP problem settings, as in Supplementary Note 2.2.
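As a sketch of how the discretized distribution above can be used in practice (with an assumed toy covariance and reward, not the ones from our experiments), the unnormalized log-probability of a discrete trajectory decomposes into Gaussian transition terms plus accumulated rewards:

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_path_weight(xs, us, local_cov, reward):
    """Unnormalized log-probability of a trajectory under P^r_max:
    sum_t [ log p_max(x_{t+1} | x_t) + r(x_t, u_t) ]."""
    total = 0.0
    for t in range(len(xs) - 1):
        total += multivariate_normal.logpdf(xs[t + 1], mean=xs[t], cov=local_cov(xs[t]))
        total += reward(xs[t], us[t])
    return total

# Assumed toy covariance and reward for illustration:
local_cov = lambda x: np.eye(2)
reward = lambda x, u: -np.sum(x**2) - 0.1 * np.sum(u**2)
xs, us = np.zeros((5, 2)), np.zeros((4, 2))
print(log_path_weight(xs, us, local_cov, reward))
```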
#### Changes of coordinates and partial observability
Throughout this work, we have intentionally equated agent states and their experiences. In most problems in RL, the state space of the underlying agent and the space where learning and exploration take place are different. While we have omitted this complication from the main text to retain notational simplicity, this does not necessarily pose issues for our framework. For example, when we are interested in exploring or learning in some space, \(\mathcal{Y}\), defined by a coordinate transformation of our state variables, \(y_{t}=\psi(x_{t})\), the guarantees of MaxDiff RL can still hold depending on the coordinate transformation's properties. If our transformation is linearizable, then as long as \(\mathbf{C}[y_{t}]=\mathbf{J}_{\psi}\mathbf{C}[x_{t}]\mathbf{J}_{\psi}^{T}\) is full rank everywhere in \(\mathcal{X}\), where \(\mathbf{J}_{\psi}\) is the Jacobian of the transformation, our results will hold. For all results presented in the manuscript, we made use of linear projections of our state variables to select the domain of maximally diffusive exploration (e.g., \([x,y,\dot{x},\dot{y}]\) for the results in Fig. 3). All exploration variables used in our results are listed in Supplementary Table 1. More broadly, the formal observability properties of the underlying system should be characterized and accounted for to rigorously study the role of partial observability under our framework, which can be done with a Gramian approach similar to the way we have characterized controllability, but such an investigation is outside the scope of this work [32].
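A minimal sketch of the covariance transformation described above, assuming a hypothetical coordinate map \(\psi\) and an illustrative covariance (neither taken from our experiments), is given below; the rank check is the condition under which the guarantees can carry over to \(\mathcal{Y}\).

```python
import numpy as np

def numerical_jacobian(psi, x, eps=1e-6):
    # Finite-difference Jacobian of the coordinate transformation y = psi(x).
    x = np.asarray(x, dtype=float)
    y0 = np.asarray(psi(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.asarray(psi(x + dx)) - y0) / eps
    return J

def transformed_cov(psi, x, C_x):
    # C[y] = J_psi C[x] J_psi^T, valid near the linearization point.
    J = numerical_jacobian(psi, x)
    return J @ C_x @ J.T

# Illustrative example: project a state [x, y, xdot, ydot] onto planar position.
psi = lambda s: s[:2]
C_x = np.diag([1.0, 1.0, 0.2, 0.2])
C_y = transformed_cov(psi, np.zeros(4), C_x)
print(np.linalg.matrix_rank(C_y) == C_y.shape[0])  # True -> guarantees can hold
```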
## References
* Degrave et al. [2022] Jonas Degrave, Federico Felici, Jonas Buchli, Michael Neunert, Brendan Tracey, Francesco Carpanese, Timo Ewalds, Roland Hafner, Abbas Abdolmaleki, Diego de las Casas, Craig Donner, Leslie Fritz, Cristian Galperti, Andrea Huber, James Keeling, Maria Tsimpoukelli, Jackie Kay, Antoine Merle, Jean-Marc Moret, Seb Noury, Federico Pesamosca, David Pfau, Olivier Sauter, Cristian Sommariva, Stefano Coda, Basil Duval, Ambrogio Fasoli, Pushmeet Kohli, Koray Kavukcuoglu, Demis Hassabis, and Martin Riedmiller. Magnetic control of tokamak plasmas through deep reinforcement learning. _Nature_, 602(7897):414-419, 2022. doi: 10.1038/s41586-021-04301-9.
* Won et al. [2020] Dong-Ok Won, Klaus-Robert Muller, and Seong-Whan Lee. An adaptive deep reinforcement learning framework enables curling robots with human-like performance in real-world conditions. _Science Robotics_, 5(46), 2020. doi: 10.1126/scirobotics.abb9764.
* Vinyals et al. [2019] Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michael Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Remi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wunsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, and David Silver. Grandmaster level in StarCraft II using multi-agent reinforcement learning. _Nature_, 575(7782):350-354, 2019. doi: 10.1038/s41586-019-1724-z.
* Irpan [2018] Alex Irpan. Deep reinforcement learning doesn't work yet. [https://www.alexirpan.com/2018/02/14/rl-hard.html](https://www.alexirpan.com/2018/02/14/rl-hard.html), 2018.
* Henderson et al. [2018] Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. _Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)_, 32(392), 2018.
* Ibarz et al. [2021] Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, and Sergey Levine. How to train your robot with deep reinforcement learning: lessons we have learned. _The International Journal of Robotics Research_, 40(4):698-721, 2021. doi: 10.1177/0278364920987859.
* Mnih et al. [2015] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. _Nature_, 518(7540):529-533, 2015. doi: 10.1038/nature14236.
* Lillicrap et al. [2016] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. _Proceedings of the International Conference on Learning Representations (ICLR)_, 2016.
* Haarnoja et al. [2018] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. _Proceedings of the International Conference on Machine Learning (ICML)_, 80:1861-1870, 2018.
* Plappert et al. [2018] Matthias Plappert, Rein Houthooft, Prafulla Dhariwal, Szymon Sidor, Richard Y. Chen, Xi Chen, Tamim Asfour, Pieter Abbeel, and Marcin Andrychowicz. Parameter space noise for exploration. _Proceedings of the International Conference on Learning Representations (ICLR)_, 2018.
* Lin [1992] Long-Ji Lin. _Reinforcement learning for robots using neural networks_. Carnegie Mellon University, 1992.
* Schaul et al. [2016] Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. _Proceedings of the International Conference on Learning Representations (ICLR)_, 2016.
* Andrychowicz et al. [2017] Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. _Advances in Neural Information Processing Systems (NeurIPS)_, 30, 2017.
* Zhang and Sutton [2017] Shangtong Zhang and Richard S. Sutton. A deeper look at experience replay. _NeurIPS Deep Reinforcement Learning Symposium_, 2017.
* Wang et al. [2017] Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample efficient actor-critic with experience replay. _Proceedings of the International Conference on Learning Representations (ICLR)_, 2017.
* Hessel et al. [2018] Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. _Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)_, 32(1), 2018.
* Fedus et al. [2020] William Fedus, Prajit Ramachandran, Rishabh Agarwal, Yoshua Bengio, Hugo Larochelle, Mark Rowland, and Will Dabney. Revisiting fundamentals of experience replay. _Proceedings of the International Conference on Machine Learning (ICML)_, pages 3061-3071, 2020.
* Ziebart et al. [2008] Brian D. Ziebart, Andrew L. Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. _Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)_, 8:1433-1438, 2008.
* Ziebart et al. [2010] Brian D. Ziebart, J. Andrew Bagnell, and Anind K. Dey. Modeling interaction via the principle of maximum causal entropy. _Proceedings of the 27th International Conference on Machine Learning (ICML)_, pages 1255-1262, 2010.
* Ziebart [2010] Brian D. Ziebart. _Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy_. Carnegie Mellon University, 2010.
* Todorov [2009] Emanuel Todorov. Efficient computation of optimal actions. _Proceedings of the National Academy of Sciences_, 106(28):11478-11483, 2009. doi: 10.1073/pnas.0710743106.
* Toussaint [2009] Marc Toussaint. Robot trajectory optimization using approximate inference. _Proceedings of the 26th International Conference on Machine Learning (ICML)_, pages 1049-1056, 2009. doi: 10.1145/1553374.1553508.
* Rawlik et al. [2012] Konrad Rawlik, Marc Toussaint, and Sethu Vijayakumar. On stochastic optimal control and reinforcement learning by approximate inference. _Proceedings of Robotics: Science and Systems (RSS)_, pages 353-361, 2012.
* Levine and Koltun [2013] Sergey Levine and Vladlen Koltun. Guided policy search. _Proceedings of the 30th International Conference on Machine Learning (ICML)_, 28(3):1-9, 2013.
* Haarnoja et al. [2017] Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. _Proceedings of the International Conference on Machine Learning (ICML)_, 70:1352-1361, 2017.
* Haarnoja et al. [2018] Tuomas Haarnoja, Sehoon Ha, Aurick Zhou, Jie Tan, George Tucker, and Sergey Levine. Learning to walk via deep reinforcement learning. _Proceedings of Robotics: Science and Systems (RSS)_, 2018.
* Eysenbach and Levine [2022] Benjamin Eysenbach and Sergey Levine. Maximum entropy RL (provably) solves some robust RL problems. _Proceedings of the International Conference on Learning Representations (ICLR)_, 2022.
* Chen et al. [2019] Minmin Chen, Alex Beutel, Paul Covington, Sagar Jain, Francois Belletti, and Ed H. Chi. Top-k off-policy correction for a REINFORCE recommender system. _Proceedings of the 12th ACM International Conference on Web Search and Data Mining (WSDM)_, pages 456-464, 2019. doi: 10.1145/3289600.3290999.
* Afsar et al. [2022] M. Mehdi Afsar, Trafford Crump, and Behrouz Far. Reinforcement learning based recommender systems: A survey. _ACM Computing Surveys_, 55(7), 2022. doi: 10.1145/3543846.
* Chen et al. [2023] Xiaocong Chen, Lina Yao, Julian McAuley, Guanglin Zhou, and Xianzhi Wang. Deep reinforcement learning in recommender systems: A survey and new perspectives. _Knowledge-Based Systems_, 264:110335, 2023. doi: 10.1016/j.knosys.2023.110335.
* Sontag [2013] Eduardo D. Sontag. _Mathematical Control Theory: Deterministic Finite Dimensional Systems_, volume 6. Springer, 2013. ISBN 9781461205777.
* Hespanha [2018] J. P. Hespanha. _Linear Systems Theory: Second Edition_. Princeton University Press, 2018. ISBN 9780691179575.
* Mitra [1969] D. Mitra. \(W\) matrix and the geometry of model equivalence and reduction. _Proceedings of the Institution of Electrical Engineers_, 116:1101-1106, 1969.
* Dean et al. [2020] Sarah Dean, Horia Mania, Nikolai Matni, Benjamin Recht, and Stephen Tu. On the sample complexity of the linear quadratic regulator. _Foundations of Computational Mathematics_, 20(4):633-679, 2020. doi: 10.1007/s10208-019-09426-y.
* Tsiamis and Pappas [2021] Anastasios Tsiamis and George J. Pappas. Linear systems can be hard to learn. _2021 60th IEEE Conference on Decision and Control (CDC)_, pages 2903-2910, 2021. doi: 10.1109/CDC45484.2021.9682778.
* Tsiamis et al. [2022] Anastasios Tsiamis, Ingvar M Ziemann, Manfred Morari, Nikolai Matni, and George J. Pappas. Learning to control linear systems can be hard. In _Proceedings of 35th Conference on Learning Theory (COLT)_, volume 178, pages 3820-3857, 2022.
* Williams et al. [2017] Grady Williams, Nolan Wagener, Brian Goldfain, Paul Drews, James M Rehg, Byron Boots, and Evangelos A. Theodorou. Information theoretic MPC for model-based reinforcement learning. _2017 IEEE International Conference on Robotics and Automation (ICRA)_, pages 1714-1721, 2017.
* So et al. [2022] Oswin So, Ziyi Wang, and Evangelos A. Theodorou. Maximum entropy differential dynamic programming. In _2022 IEEE International Conference on Robotics and Automation (ICRA)_, pages 3422-3428, 2022. doi: 10.1109/ICRA46639.2022.9812228.
* Amin et al. [2021] Susan Amin, Maziar Gomrokchi, Harsh Satija, Herke van Hoof, and Doina Precup. A survey of exploration methods in reinforcement learning. _arXiv preprint arXiv:2109.00157_, 2021.
* Sutton and Barto [2018] Richard S. Sutton and Andrew G. Barto. _Reinforcement learning: An introduction_. MIT press, 2018.
* Jaynes [1957] E. T. Jaynes. Information theory and statistical mechanics. _Phys. Rev._, 106:620-630, 1957.
* Dixit et al. [2018] Purushottam D. Dixit, Jason Wagoner, Corey Weistuch, Steve Presse, Kingshuk Ghosh, and Ken A. Dill. Perspective: Maximum caliber is a general variational principle for dynamical systems. _The Journal of Chemical Physics_, 148(1):010901, 2018.
* Chvykov et al. [2021] Pavel Chvykov, Thomas A. Berrueta, Akash Vardhan, William Savoie, Alexander Samland, Todd D. Murphey, Kurt Wiesenfeld, Daniel I. Goldman, and Jeremy L. England. Low rattling: A predictive principle for self-organization in active collectives. _Science_, 371(6524):90-95, 2021.
* Kapur [1989] J.N. Kapur. _Maximum Entropy Models in Science and Engineering_. Wiley, 1989. ISBN 9788122402162.
* Moore [2015] Calvin C. Moore. Ergodic theorem, ergodic theory, and statistical mechanics. _Proceedings of the National Academy of Sciences_, 112(7):1907-1911, 2015. doi: 10.1073/pnas.1421798112.
* Taylor et al. [2021] Annalisa T. Taylor, Thomas A. Berrueta, and Todd D. Murphey. Active learning in robotics: A review of control principles. _Mechatronics_, 77:102576, 2021. doi: 10.1016/j.mechatronics.2021.102576.
* Seo et al. [2021] Younggyo Seo, Lili Chen, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. State entropy maximization with random encoders for efficient exploration. In _Proceedings of the 38th International Conference on Machine Learning (ICML)_, pages 9443-9454, 2021.
* Prabhakar and Murphey [2022] Ahalya Prabhakar and Todd Murphey. Mechanical intelligence for learning embodied sensor-object relationships. _Nature Communications_, 13(1):4108, 2022. doi: 10.1038/s41467-022-31795-2.
* Wang et al. [2019] Xudong Wang, Weihua Deng, and Yao Chen. Ergodic properties of heterogeneous diffusion processes in a potential well. _The Journal of Chemical Physics_, 150(16):164121, 2019. doi: 10.1063/1.5090594.
* Mohri et al. [2018] M. Mohri, A. Rostamizadeh, and A. Talwalkar. _Foundations of Machine Learning, (2nd Edition)_. Adaptive Computation and Machine Learning series. MIT Press, 2018. ISBN 9780262351362.
* Ames et al. [2014] A. Ames, J. Grizzle, and P. Tabuada. Control barrier function based quadratic programs with application to adaptive cruise control. In _2014 IEEE Conference on Decision and Control (CDC)_, 2014.
* Taylor et al. [2020] Andrew Taylor, Andrew Singletary, Yisong Yue, and Aaron Ames. Learning for safety-critical control with control barrier functions. In _Proceedings of the 2nd Conference on Learning for Dynamics and Control (LADC)_, volume 120, pages 708-717, 2020.
* Xiao et al. [2023] Wei Xiao, Tsun-Hsuan Wang, Ramin Hasani, Makram Chahine, Alexander Amini, Xiao Li, and Daniela Rus. Barriernet: Differentiable control barrier functions for learning of safe robot control. _IEEE Transactions on Robotics_, pages 1-19, 2023. doi: 10.1109/TRO.2023.3249564.
* Kardar [2007] Mehran Kardar. _Statistical Physics of Fields_. Cambridge University Press, 2007. ISBN 9780511815881.
## Data availability
Data supporting the findings of this study are available in the following repository: github.com/MurpheyLab/MaxDiffRL.
## Code availability
Code supporting the findings of this study is available in the following repository: github.com/MurpheyLab/MaxDiffRL.
## Acknowledgements
We thank Annalisa T. Taylor, Jamison Weber, and Pavel Chvykov for their comments on early drafts of this work. We acknowledge funding from the US Army Research Office MURI grant #W911NF-19-1-0233, and the US Office of Naval Research grant #N00014-21-1-2706. We also acknowledge hardware loans and technical support from the Intel Corporation, and T.A.B. is partially supported by the Northwestern University Presidential Fellowship.
## Author contributions
T.A.B. derived all theoretical results, performed supplementary data analyses and control experiments, supported reinforcement learning experiments, and wrote the manuscript. A.P. developed and tested the reinforcement learning algorithm, carried out all reinforcement learning experiments, and supported manuscript writing. T.D.M. secured funding and guided the research program.
**Maximum Diffusion Reinforcement Learning**
Supplementary Information
**Thomas A. Berrueta*, Allison Pinosky, and Todd D. Murphey**

Center for Robotics and Biosystems, Northwestern University, Evanston, IL, USA.

*Corresponding author: [email protected]
###### Contents
* 1 Theoretical framework for maximum diffusion
* 1.1 The role of controllability in exploration and learning
* 1.2 Exploration as trajectory sampling
* 1.3 Undirected exploration as variational optimization
* 1.4 Maximizing path entropy produces diffusion
* 1.5 Directed exploration as variational optimization
* 1.6 Minimizing path free energy produces diffusive gradient descent
* 2 Synthesizing maximally diffusive trajectories
* 2.1 Maximally diffusive trajectories via KL control
* 2.2 Maximally diffusive trajectories via stochastic optimal control
* 2.3 Alternative synthesis approach via entropy maximization
* 2.4 Simplified synthesis via local entropy maximization
* 2.5 Example applications of MaxDiff trajectory synthesis
* 3 Reinforcement learning implementation details
* 3.1 General
* 3.2 Point mass
* 3.3 Swimmer
* 3.4 Ant
* 3.5 Half-cheetah
* **Supplementary tables**
* **Supplementary movies**
* **Supplementary figures**
* **Supplementary references**
## Supplementary notes
### Theoretical framework for maximum diffusion
Throughout this section we analytically derive and establish the theoretical properties of maximally diffusive agents and their trajectories, as well as their relationship to _i.i.d._ data, temporal correlations, controllability, and exploration. We do not directly discuss reinforcement learning within this section beyond framing our results, but rather establish mathematical foundations that elucidate the relationship between an agent's properties and its ability to explore and learn. For our implementation of these principles within a reinforcement learning framework, refer to Supplementary Note 2.
### The role of controllability in exploration and learning
Exploration is a process by which agents become exposed to new experiences, which is of broad importance to their learning performance. While many learning systems can function as abstract processes insulated from the challenges and uncertainties associated with embodied operation [1], physical agents--simulated or otherwise--have no such luxury [2; 3; 4; 5]. The laws of physics, an agent's material properties, and dynamics all impose fundamental constraints on what can be achieved by a learning system. In particular, as discussed throughout the main text, the state transition dynamics of learning agents often introduce temporal correlations that can hinder agent performance. To illustrate this point broadly, here we consider the effect of an agent's controllability properties on the efficacy of a widely used exploration strategy in reinforcement learning: taking actions at random and observing what happens.
Drawing inspiration from the study of multi-armed bandits [6], the most common exploration strategy in reinforcement learning is randomized action exploration. The simplest of these methods merely requires that agents randomly sample actions from either uniform or Gaussian distributions to produce exploration. More sophisticated methods, such as maximum entropy reinforcement learning [7; 8; 9], elaborate on this basic idea by learning the distribution from which to sample random actions. For the purposes of our analysis, these more advanced methods are functionally equivalent to each other--they assume that taking random actions produces effective exploration of outcomes. However, from the perspective of control theory we know that this is not always the case. For an agent to be able to arbitrarily reach desired states, it must be _controllable_[10].
To illustrate the role that controllability plays in shaping the temporal correlations induced by an agent's dynamics, we will briefly consider randomized action exploration in a linear time-varying (LTV) control system:
\[\dot{x}(t)=A(t)x(t)+B(t)u(t), \tag{1}\]
where \(A(t)\) and \(B(t)\) are appropriately dimensioned matrices with state and control vectors \(x(t)\in\mathcal{X}\) and \(u(t)\in\mathcal{U}\), and \(x(t_{0})=x^{*}\) for \([t_{0},t]\subset T\). For now, we omit technical specification of \(T\), \(\mathcal{X}\), and \(\mathcal{U}\). The general form of solutions to this system of linear differential equations is expressed in terms of a convolution with the system's state-transition matrix, \(\Psi(t,t_{0})\), in the following way:
\[x(t)=\Psi(t,t_{0})x(t_{0})+\int_{t_{0}}^{t}\Psi(t,\tau)B(\tau)u(\tau)d\tau. \tag{2}\]
The state-transition matrix itself is the solution to the linear matrix differential equation \(\dot{\Psi}(t,t_{0})=A(t)\Psi(t,t_{0})\) with initial condition \(\Psi(t_{0},t_{0})=\text{Id}\). We consider these dynamics because by working with LTV dynamics we implicitly consider a very broad class of systems--all while retaining the simplicity of linear controllability analysis [11]. This is because the dynamics of any nonlinear system that is locally linearizable along its trajectories can be effectively captured by LTV dynamics. Hence, any results applicable to the dynamics in Eq. 1 will apply to linearizable nonlinear systems. However, we note that our derivations in subsequent sections do _not_ assume dynamics of this form. We only consider them to motivate our approach in this section.
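To illustrate how LTV matrices can be obtained from a linearizable nonlinear system, the sketch below linearizes an assumed toy dynamics function by finite differences about a nominal point; it is only a schematic, not part of our method.

```python
import numpy as np

def linearize(f, x_nom, u_nom, eps=1e-6):
    """Finite-difference linearization of assumed dynamics xdot = f(x, u)
    about a nominal point, yielding the local LTV matrices A(t), B(t)."""
    f0 = np.asarray(f(x_nom, u_nom))
    A = np.zeros((f0.size, x_nom.size))
    B = np.zeros((f0.size, u_nom.size))
    for i in range(x_nom.size):
        dx = np.zeros_like(x_nom); dx[i] = eps
        A[:, i] = (np.asarray(f(x_nom + dx, u_nom)) - f0) / eps
    for j in range(u_nom.size):
        du = np.zeros_like(u_nom); du[j] = eps
        B[:, j] = (np.asarray(f(x_nom, u_nom + du)) - f0) / eps
    return A, B

# Illustrative pendulum-like dynamics (an assumption for this sketch):
f = lambda x, u: np.array([x[1], -np.sin(x[0]) + u[0]])
A, B = linearize(f, np.array([0.0, 0.0]), np.array([0.0]))
print(A, B)
```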
To develop an understanding of the exploration capabilities of a given LTV system, we may ask what states are reachable by this system. After all, states that are not reachable cannot be explored or learned from. This is precisely what controllability characterizes:
**Definition 1.1**.: _A system is said to be controllable over a time interval \([t_{0},t]\subset T\) if given any states \(x^{*},x_{1}\in\mathcal{X}\), there exists a controller \(u(t):[t_{0},t]\to\mathcal{U}\) that drives the system from state \(x^{*}\) at time \(t_{0}\) to \(x_{1}\) at time \(t\)._
While this definition intuitively captures what is meant by controllability, it does not immediately seem like an easily verifiable property. To this end, different computable metrics have been developed that equivalently characterize the controllability properties of certain classes of systems (e.g., the Kalman controllability rank condition [12]). In particular, here we will analyze the controllability Gramian of our system, as well as its rank and determinant as metrics on the controllability of our system.
For our class of LTV systems, characterizing controllability with this method is simple:
\[W(t,t_{0})=\int_{t_{0}}^{t}\Psi(t,\tau)B(\tau)B(\tau)^{T}\Psi(t,\tau)^{T}d\tau, \tag{3}\]
where the Gramian is a symmetric positive semidefinite matrix that depends on the control matrix \(B(t)\) and the state-transition matrix \(\Psi(t,\tau)\). The Gramian is a controllability metric that quantifies the amount of energy required to actuate the different degrees of freedom of the system [13, 14]. For any given finite time interval, the controllability Gramian also characterizes the set of states reachable by the system. Importantly, when the controllability Gramian is full-rank, the system is provably controllable in the sense of Definition 1.1 [10], and capable of fully exploring its environment. However, when the controllability Gramian is poorly conditioned, substantial temporal correlations are introduced into the agent's state transitions, which can prevent effective exploration and--as a direct consequence--learning, as we have shown in the main text and will show in the following sections.
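As a numerical illustration (restricted to the time-invariant special case, where \(\Psi(t,\tau)=e^{A(t-\tau)}\)), the sketch below computes the Gramian of Eq. 3 by quadrature and inspects its rank and conditioning for an assumed double-integrator example.

```python
import numpy as np
from scipy.linalg import expm

def lti_gramian(A, B, t, t0=0.0, n=200):
    """Controllability Gramian W(t, t0) for constant A, B, where the
    state-transition matrix reduces to Psi(t, tau) = expm(A (t - tau))."""
    taus = np.linspace(t0, t, n)
    W = np.zeros((A.shape[0], A.shape[0]))
    for tau in taus:
        Psi = expm(A * (t - tau))
        W += Psi @ B @ B.T @ Psi.T * (t - t0) / n
    return W

# Double integrator: position is only reachable through velocity.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
W = lti_gramian(A, B, t=1.0)
print(np.linalg.matrix_rank(W))  # 2: controllable over this interval
print(np.linalg.cond(W))         # large condition number: anisotropic reachability
```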
To draw the connection between naive random exploration, controllability, and temporal correlations explicitly, we will now revisit the dynamics in Eq. 1 under a slight modification. We will replace the controller \(u(t)\) with a noise vector \(\xi\sim\mathcal{N}(\mathbf{0},\text{Id})\) taken in the Ito sense, where Id is an identity matrix of the same dimension as the control inputs, and \(\mathbf{0}\) is the zero vector of the same dimension:
\[\dot{x}(t)=A(t)x(t)+B(t)\cdot\xi. \tag{4}\]
Here, we abuse notation slightly to minimize the difference between this equation and Eq. 1. The substitution of noise in place of control inputs is precisely what some simple exploration strategies in reinforcement learning do: using random actions to drive the system into previously unobserved states crucial to a given learning task. With these modifications in mind, we are now interested in examining the system's mean trajectory and covariance statistics in hopes of characterizing the structure of temporal correlations induced by the agent dynamics. We begin by taking the expectation over system trajectories described by Eq. 2:
\[E[x(t)] =E\Big{[}\Psi(t,t_{0})x(t_{0})+\int_{t_{0}}^{t}\Psi(t,\tau)B(\tau )\cdot\xi d\tau\Big{]} \tag{5}\] \[=\Psi(t,t_{0})x(t_{0})+E\Big{[}\int_{t_{0}}^{t}\Psi(t,\tau)B(\tau )\cdot\xi d\tau\Big{]}\] \[=\Psi(t,t_{0})x(t_{0}).\]
Hence, the expected sample paths of the dynamics will be centered around the autonomous paths of the system--that is, the paths the system takes in the absence of control inputs. We may also characterize the system's temporal correlations through its covariance statistics, \(\mathbf{C}[x^{*}]=Cov[x(t)]_{x(t_{0})=x^{*}}\),
\[\mathbf{C}[x^{*}] =E\big{[}(x(t)-E[x(t)])(x(t)-E[x(t)])^{T}\big{]}\] \[=E\Big{[}\Big{(}\Psi(t,t_{0})x(t_{0})+\int_{t_{0}}^{t}\Psi(t, \tau)B(\tau)\cdot\xi d\tau-E[x(t)]\Big{)}\] \[\qquad\times\Big{(}\Psi(t,t_{0})x(t_{0})+\int_{t_{0}}^{t}\Psi(t, \tau)B(\tau)\cdot\xi d\tau-E[x(t)]\Big{)}^{T}\Big{]}\] \[=E\Big{[}\Big{(}\int_{t_{0}}^{t}\Psi(t,\tau)B(\tau)\cdot\xi d \tau\Big{)}\Big{(}\int_{t_{0}}^{t}\Psi(t,\tau)B(\tau)\cdot\xi d\tau\Big{)}^{T} \Big{]}\] \[=E\Big{[}\int_{t_{0}}^{t}\Psi(t,\tau)B(\tau)\cdot(\xi\xi^{T}) \cdot B(\tau)^{T}\Psi(t,\tau)^{T}d\tau\Big{]}\] \[=\int_{t_{0}}^{t}\Psi(t,\tau)B(\tau)B(\tau)^{T}\Psi(t,\tau)^{T}d\tau \tag{6}\]
where the \(\times\) operator simply indicates a product with the expression in the line above. By inspection of the above expression and Eq. 3, we arrive at the following important connection:
\[\mathbf{C}[x^{*}]=W(t,t_{0}) \tag{7}\]
which tells us that for LTV dynamics (and by extension for linearizable nonlinear dynamics), the state covariance matrix is exactly equivalent to the controllability Gramian of the system. Thus, for a broad class of systems, an agent's controllability properties completely determine the structure of temporal correlations induced by their dynamics.
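Eq. 7 can be checked by simulation: the sketch below rolls out the noise-driven dynamics of Eq. 4 with an Euler-Maruyama scheme for the same double-integrator example assumed above and compares the empirical state covariance with the Gramian.

```python
import numpy as np

def empirical_cov(A, B, t=1.0, dt=1e-3, n_traj=5000, seed=0):
    """Euler-Maruyama rollout of dx = A x dt + B dW from x(0) = 0;
    returns the empirical covariance of x(t) across trajectories."""
    rng = np.random.default_rng(seed)
    d, m = B.shape
    X = np.zeros((n_traj, d))
    for _ in range(int(t / dt)):
        dW = rng.normal(scale=np.sqrt(dt), size=(n_traj, m))
        X = X + X @ A.T * dt + dW @ B.T
    return np.cov(X.T)

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(empirical_cov(A, B))  # approaches W(1, 0) = [[1/3, 1/2], [1/2, 1]]
```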
Moreover, we can see that controllability is not a state-dependent property of LTV systems, nor of linearizable nonlinear systems (at least within a neighborhood of their linearization),
\[\nabla_{x}\mathbf{C}[x^{*}]=\nabla_{x}W(t,t_{0})=\mathbf{0}, \tag{8}\]
where \(\mathbf{0}\) is an appropriately dimensioned zero matrix. While our controllability analysis has been restricted to the class of dynamics describable by linear differential equations with time-varying parameters, we note that the connections we observe between state covariance and controllability Gramians have been shown to hold for even more general classes of nonlinear systems through more involved analyses [15]. We note that the results of our manuscript hold regardless of whether there is a formal and easily characterizable relationship between controllability and temporal correlations.
From Eq. 4 we can describe the system's reachable states by analyzing its state probability density function, which can be found analytically by solving its associated Fokker-Planck equation [16]. To do this, we only require the mean and covariance statistics of the process, in Eqs. 5 and 6. Hence, the system's state distribution is
\[p(x,t)=\frac{1}{\sqrt{(2\pi)^{d}\det[W(t,t_{0})]}}\exp\Big{[}-\frac{1}{2}(x-E[ x(t)])^{T}W^{-1}(t,t_{0})(x-E[x(t)])\Big{]} \tag{9}\]
for some choice of initial conditions at \(t_{0}\), where \(d\) is the dimension of \(x\), and we have substituted Eq. 7 to highlight the role of controllability in the density of states reachable by the system through random exploration. Thus, how easy or hard it is to explore in a given direction--as characterized by the distribution of reachable states in Eq. 9--is entirely determined by the controllability properties
of the system as encoded by \(W(t,t_{0})\), and the temporal correlations these induce in the agent's trajectories. Supplementary Fig. 1 illustrates this concept for the toy dynamical system introduced in the main text. We observe that changes in \(\beta\) have an effect on the distribution of reachable states for the system that are consistent with Eq. 9.
On the basis of these results, which have been known for decades [17], we can clearly see that controllability and temporal correlations play a key role in exploration and data acquisition. We cannot assume that random inputs are capable of producing effective exploration of system states without an understanding of its controllability. For example, if \(W(t,t_{0})\) is not full-rank, then exploration would be restricted to a linear subspace of an agent's exploration domain. This amounts to a complete collapse of the _i.i.d._ assumption on the experiences of an agent, because its state transitions become pathologically correlated as a result of the degeneracy of Eq. 9. As we show in future sections, preventing this degeneracy will be crucial for achieving effective exploration. In more complex settings, where the input distribution is not Gaussian and the dynamics are strongly nonlinear, analyzing controllability may be more challenging. However, insofar as learning requires an embodied agent to either collect data or visit desirable states to optimize some objective, it will depend on the controllability properties of said agent.
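As a concrete instance of the degenerate case described above (a toy construction of our own), a rank-deficient Gramian confines random exploration to a linear subspace:

```python
import numpy as np

# With A = 0 and B = [1, 0]^T, the Gramian over [0, t] is W(t, 0) = t * B B^T:
B = np.array([[1.0], [0.0]])
W = 1.0 * B @ B.T
print(np.linalg.matrix_rank(W))  # 1 -> random inputs only ever explore a line
```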
**Remark 1.1**.: _Controllability can determine whether or not it is possible, and how challenging it is, to learn._
While one can construct proofs that illustrate this in a variety of simplified settings--as others have recently shown [18; 19]--we leave the more general claim as a remark to frame the motivation behind our upcoming derivations. Hence, we should strive to develop exploration and learning strategies that reflect--and try to overcome--the effect of controllability and its induced temporal correlations, as we do in the following sections.
### Exploration as trajectory sampling
In this section we develop the mathematical formalism necessary for framing exploration in a controllability-aware manner that may allow us to overcome temporal correlations. While exploration with disembodied agents can be quite simple (e.g., sampling from a distribution, or performing a random walk), embodied agents must achieve exploration by changing their physical state or configuration through action. Our goal is to achieve exploration by means of a control system, such as a robotic agent or otherwise, where their properties constrain the ways they can explore. While this motivation is most natural for physically-embodied systems, our framing is relevant to any setting in which the underlying agent's dynamics cannot be arbitrarily chosen and obey some notion of continuity of experience. To this end, we need to first define a formal notion of control system from which we can begin to model the behavior of agents.
We think of control systems as stochastic processes that are parametrized by their controllers, or equivalently as a collection of distinct stochastic processes for each choice of controller. Typically, a stochastic process is a collection of random variables \(\{x(t):t\in T\}\) indexed according to some set, \(T\). This collection of random variables will come to define the states and experiences of an underlying agent. We assume that the indexing set \(T\) is continuous and totally ordered so that it is time-like, and throughout this manuscript we will often use "time" to refer to this indexing set. Hence, we can think of the trajectories of an agent with autonomous dynamics as a realization of a continuum of time-indexed random variables taking place on a common measurable probability space, \((\mathcal{X},\mathcal{F},P)\)[20]. For the purpose of framing our problem, we choose the sample space \(\mathcal{X}\) to be a compact, simply connected, metric space. We make this choice to deliberately circumvent the effects of environment topology and boundary conditions on the framework derived herein--as it was alluded to in the main text, these factors do play a role in agent outcomes but we leave a detailed investigation of their impact as future work. Importantly, we choose the sample space to be the same as the space in which the random variables take value. As with the main text, we also define the agent's experiences to take place directly on its state space for convenience. Then, \(\mathcal{F}\) is a Borel \(\sigma\)-algebra of the sample space consisting of all sample paths of the process. Finally, \(P\) is a probability measure describing the likelihood of any sample path of the stochastic process.
Clearly, the likelihood of any sample path of the system \(P[x(t)]\) is strongly dependent on the dynamics that govern the time-evolution of the stochastic process. However, when the dynamics are nonautonomous, as is the case in control systems, the probability measure will also depend on the choice of controller and the effect it has on the dynamics of the process. We define a controller as a
function, \(u(t):T\to\mathcal{U}\), that produces an input to the system dynamics at every point in the index set. At this point, we are not considering the system dynamics themselves, how controllers are synthesized, or how much influence either of these can have in shaping the sample paths of the underlying control system. All we care about is acknowledging the fact that a choice of controller induces a different probability measure over sample paths. With these definitions we can now establish our notion of control system, or stochastic control process. For convenience, we assume that \(x(\cdot)\) and \(u(\cdot)\) are vector-valued and of dimensions \(d\) and \(m\) respectively.
**Definition 1.2**.: _A stochastic control process is a collection of random variables \(\{x(t):t\in T\}\) with index set \(T\), defined on a probability space \((\mathcal{X},\mathcal{F},P_{u(t)})\), where the probability measure \(P_{u(t)}\) is parametrized by a controller \(u(t):T\to\mathcal{U}\)._
In a stochastic control process the controller plays an important role in the resulting behavior observed in the sample paths of the system--clearly, the sample path distribution of a robot with a controller that resists all movements is very different than one with a controller that encourages the robot to explore (see Supplementary Fig. 2 for an illustration). Hence, controllers can affect which regions of the exploration domain the control system is capable of sampling from. With this in mind, we can express the problem of exploration in control systems: to design a controller that maximizes the regions of the exploration domain from which we can sample trajectories. In part, this requires the use of control actions in order to maximize the support of the agent's sample path distribution. The support of a probability measure is the subset of all elements in the Borel \(\sigma\)-algebra \(\mathcal{F}\) with non-zero measure. However, merely maximizing the path distribution's support is not enough. Ideally, we would also like to control how probability mass is spread--if a given task demands that the agent's sample paths are biased towards a given goal, then our agent's path distribution should reflect this. At this point it is helpful to note the basic way in which this approach differs from naive random exploration. Rather than letting \(u(t)\) be substituted by some noise distribution and hoping to see exploration in \(x(t)\), we are interested in deliberately designing \(u(t)\) to maximize our exploration of \(x(t)\). In the following sections, we formalize our exploration problem statement and illustrate the role that an agent's controllability properties and temporal correlations play in enabling--or hindering--effective exploration and learning.
### Undirected exploration as variational optimization
One way of simultaneously controlling the spread of probability mass and the support of a probability distribution is to optimize its entropy [21]. For now, we consider the undirected exploration case, in which no task or objective biases the underlying agent's path distribution. As we will see in
Supplementary Note 1.5, this approach is also able to control the spread of probability mass in a more fine-grained manner that will allow us to achieve directed exploration with respect to an objective or task, and eventually to do reinforcement learning.
Optimizing the entropy of an agent's path distribution through control synthesis can have a profound effect on the resulting behavior of the agent. This can be understood intuitively when there are no constraints on how we can increase the entropy of a sample path distribution. In this case, the maximum entropy distribution would be uniform over the entirety of the agent's compact sample space, leading to complete asymptotic exploration of the domain in a way that is equivalent to _i.i.d._ uniform sampling. However, a process realizing the statistics described by such a path distribution would require teleportation--that is, that points in space be visited uniformly at random at every moment in time. While this may pose no problems for disembodied agents with unconstrained dynamics, this creates issues for any agent whose dynamics are constrained by their embodiment or otherwise. For example, in physical control systems subject to the laws of physics, this is infeasible behavior. Hence, throughout the rest of this section we will take on the work of deriving the maximum entropy distribution for describing the trajectories of agents with continuous paths--a broad class of systems that includes all physical agents and many non-physical agents--as well as analyzing the formal properties of systems that satisfy such statistics. We will see that by maximizing trajectory entropy this distribution also captures the statistics of an agent with minimally-correlated paths. The analytical form of this distribution is crucial to the control and policy synthesis approach we derive in Supplementary Note 2. However, we note that our results can also apply for disembodied agents with discontinuous paths when we consider the uniform distribution as the optimal distribution instead of the one we derive in this section.
We proceed by identifying the analytical form of the maximum entropy sample path distribution with no consideration given to the problem of generating actions that achieve such statistics. Hence, we begin by framing our exploration problem in the maximum caliber formalism of statistical mechanics [22, 23, 24]. Maximum caliber is a generalization of the principle of maximum entropy to probability measures over function spaces, such as distributions over trajectories or sample paths. In this arena, we are interested in finding the distribution which maximizes the entropy of the sample path distribution of our system \(S[P[x(t)]]\). Because in this section we are looking for the unique analytical form of this distribution, we omit the controller-specific notation that was introduced in the previous section. The general form of the maximum entropy (or caliber) functional variational optimization problem is the following:
\[\underset{P[x(t)]}{\text{argmax}}\ -\int_{\mathcal{F}}P[x(t)]\log P[x(t)] \mathcal{D}x(t), \tag{10}\]
where \(\mathcal{D}x(t)\) denotes path integration over all sample paths \(\mathcal{F}\) of our stochastic control process. However, as written the optimization is ill-posed and leads to a trivial solution. We can see this by taking the variation with respect to the sample path distribution, where we would find that the optimal sample path distribution is uniform, yet not a valid probability measure as it is unnormalized. Thus, we need to constrain the optimization problem so that we only consider behavior realizable by the class of agents we are interested in modeling.
Since we are interested in framing our exploration problem for application domains like optimal control and reinforcement learning, we tailor our modeling assumptions to these settings. What sorts of principled constraints could be applied? No constraints based on conservation of energy are applicable because autonomous agents are inherently nonequilibrium systems. Nonetheless, the behavior of many autonomous agents (especially physically embodied ones) is constrained by other aspects of their morphology, such as actuation limits and continuity of movement. In particular, the rates at which agent velocities can vary--and _co-vary_--are typically bounded, which prevents them from discontinuously jumping between states by limiting their local rate of exploration. In fact, this is precisely what we showed in Eq. 3 of Supplementary Note 1.1, where we found that an agent's controllability properties are closely tied to its trajectory fluctuations, as well as its ability to locally explore space. Thus, we choose to constrain the velocity fluctuations of our stochastic control process so that they are finite and consistent with the local covariance statistics of the process, which may be determined empirically, and are related to a system's controllability properties in a broad class of systems. The use of an empirical (or learned) covariance estimate to quantify local exploration rates is important because different agents have different limitations, which may additionally be spatially inhomogeneous and difficult to know a priori. Through this constraint, we both ensure that agents have a bounded local rate of exploration and that their sample paths are continuous in time.
To formulate this constraint on the system's local rate of exploration, we must first express the system's velocity fluctuations at each point in the sample space, \(x^{*}\in\mathcal{X}\). We define the system's local exploration rate in terms of its velocity fluctuations in the following way:
\[\langle\dot{x}(t)\dot{x}(t)^{T}\rangle_{x^{*}}\propto\int_{\mathcal{F}}P[x(t)] \int_{-\infty}^{\infty}\dot{x}(t)\dot{x}(t)^{T}\delta(x(t)-x^{*})dt\mathcal{D} x(t), \tag{11}\]
where we only care about proportionality since normalization takes care of constants, and \(\delta(\cdot)\) denotes the Dirac delta function. We assume that the local exploration rate is not degenerate in the chosen coordinates of the stochastic control process, and hence that the tensor described by Eq. 11 is full-rank. This assumption is crucial because it guarantees that our resulting path distribution is non-degenerate and capable of realizing effective exploration. However, our results would continue to hold for a linear subspace of the original domain if this condition is not met. If we had instead chosen to constrain the system's local exploration rates by directly bounding the magnitude of its velocities, as opposed to its velocity fluctuations, we would not be able to guarantee the non-degeneracy of the resulting path distribution. Nonetheless, assumptions on the degeneracy of the system's fluctuations in state space are irrelevant to the more general problem where the sample space of the stochastic control process is not the same as the state space its random variables take value in. Another important note is that the velocities of the trajectories of the stochastic control process in this expression should be interpreted in the Langevin sense [25]. That is to say, not as expressions of the differentiability of the sample paths of our stochastic control process, but as a shorthand for an integral representation of the stochastic differential equations describing the evolution of the sample paths of the system.
We can now express our constraint on the local rate of exploration as,
\[\langle\dot{x}(t)\dot{x}(t)^{T}\rangle_{x^{*}}=\mathbf{C}[x^{*}],\quad\forall x ^{*}\in\mathcal{X}, \tag{12}\]
where \(\mathbf{C}[x^{*}]=Cov[x(t)]_{x(t_{0})=x^{*}}\) denotes the empirical covariance statistics associated with the local exploration of space over sample paths initialized at \(x^{*}\). Crucially, these statistics are bounded everywhere in the exploration domain, and we assume them to satisfy Lipschitz continuity so that their spatial variations are bounded. We note that linearizability of the underlying agent dynamics is a sufficient condition to satisfy this property. Hence, we now have equality constraints on local exploration rates that can vary at each point in the exploration domain--as one would expect for a complex embodied system, such as a robot. As an additional constraint, we require that \(P[x(t)]\) integrates to 1 so that it is a valid probability measure over paths.
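In practice, these local covariance statistics can be estimated from short rollouts initialized at \(x^{*}\). The sketch below shows one way this might be done, using a hypothetical one-step simulator `step(x, dt, rng)` standing in for the agent's true dynamics.

```python
import numpy as np

def estimate_local_cov(step, x_star, n_samples=1000, dt=1e-2, seed=0):
    """Empirical estimate of C[x*] from short rollouts initialized at x*."""
    rng = np.random.default_rng(seed)
    increments = np.array([(step(x_star, dt, rng) - x_star) / np.sqrt(dt)
                           for _ in range(n_samples)])
    return np.cov(increments.T)

# Toy simulator standing in for the real agent dynamics (an assumption for illustration):
step = lambda x, dt, rng: x + rng.multivariate_normal(np.zeros(2), np.diag([1.0, 0.2]) * dt)
C_hat = estimate_local_cov(step, np.zeros(2))
print(np.round(C_hat, 2))  # should approach diag(1.0, 0.2)
```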
With expressions for each of our constraints, we may now express the complete variational optimization problem using Lagrange multipliers:
\[\underset{P[x(t)]}{\text{argmax}} -\int_{\mathcal{F}}P[x(t)]\log P[x(t)]\mathcal{D}x(t)-\lambda_{0 }\Big{(}\int_{\mathcal{F}}P[x(t)]\mathcal{D}x(t)-1\Big{)}\] \[-\int_{\mathcal{X}}Tr\Big{(}\Lambda(x^{*})^{T}\big{(}\langle \dot{x}(t)\dot{x}(t)^{T}\rangle_{x^{*}}-\mathbf{C}[x^{*}]\big{)}\Big{)}dx^{*}. \tag{13}\]
Here, we express the constraints at all points \(x^{*}\) by taking an integral over all points in the domain. The \(\lambda_{0}\) is a Lagrange multiplier enforcing our constraint that ensures valid probability measures, and \(\Lambda(\cdot)\) is a matrix-valued Lagrange multiplier working to ensure that the rate of exploration constraints hold at every point in the domain. By solving this optimization we can obtain an expression for the maximum entropy distribution over sample paths. The solution to this problem will determine the distribution over sample paths with the greatest support, with the most uniformly spread probability mass, and with the least-correlated sample paths--thereby specifying the statistical properties of our optimal undirected exploration strategy, subject to a path continuity constraint.
### Maximizing path entropy produces diffusion
In this section, we lay out the derivation of our solution to the variational optimization problem in Eq. 13. We begin by stating our main result in the following theorem.
**Theorem 1.1**.: _The maximum entropy sample paths of a stochastic control process (Definition 1.2) engaging in maximum entropy exploration (in the sense of Eq. 13) are given by pure diffusion with spatially-varying coefficients._
Proof.: We begin by substituting Eq. 11 into Eq. 13, taking its variation with respect to the probability measure \(\delta S[P[x(t)]]/\delta P[x(t)]\), and setting it equal to 0:
\[\frac{\delta S}{\delta P[x(t)]}=-1-\log P_{max}[x(t)]-\lambda_{0}-\int_{ \mathcal{X}}\int_{-\infty}^{\infty}Tr\Big{(}\Lambda(x^{*})^{T}(\dot{x}(t) \dot{x}(t)^{T})\Big{)}\delta(x(t)-x^{*})dtdx^{*}=0.\]
Then, taking advantage of the following linear algebra identity, \(a^{T}Ba=Tr(B^{T}(aa^{T}))\), for any \(a\in\mathbb{R}^{m}\) and \(B\in\mathbb{R}^{m\times m}\); as well as the properties of the Dirac delta, we can simplify our expression to the following:
\[\frac{\delta S}{\delta P[x(t)]}=-1-\log P_{max}[x(t)]-\lambda_{0}-\int_{- \infty}^{\infty}\dot{x}(t)^{T}\Lambda(x(t))\dot{x}(t)dt=0,\]
which allows us to solve for the maximum entropy probability distribution over the sample paths of our stochastic control process. The solution will then be of the form:
\[P_{max}[x(t)]=\frac{1}{Z}\exp\Big{[}-\int_{-\infty}^{\infty}\dot{x}(t)^{T} \Lambda(x(t))\dot{x}(t)dt\Big{]}, \tag{14}\]
where we have subsumed the constant and Lagrange multiplier, \(\lambda_{0}\), into a normalization factor, \(Z\). We note that even without determining the form of our Lagrange multipliers, the maximum entropy probability measure in Eq. 14 is already equivalent to the path probability of a diffusing particle with a (possibly anisotropic) spatially-inhomogeneous diffusion tensor (see [25], Ch. 9). In other words, Eq. 14 describes the probability of a continuous random walk with increments determined by a Gaussian distribution with space-dependent variance [20]. We also note that the measure in Eq. 14 has infinite support. While there is more work needed to characterize the diffusion tensor of this process, \(\Lambda^{-1}(\cdot)\), this completes our proof.
So what does this result tell us? The least-correlated sample paths, which optimally sample from the exploration domain, are statistically equivalent to diffusion. This is to say that the distribution of paths with the greatest support over the sample space describes the paths of a diffusion process. Hence, if the goal of our stochastic control process is to optimally sample from its sample space, the best strategy is to move randomly--that is, to decorrelate its sample paths. As an exercise, let's consider how such exploration may relate to a learning problem. If the goal of our agent is to learn something about its dynamics or its environment, then this result suggests that the best strategy is to try to move randomly according to a diffusion process. At first glance, this seems to validate the strategy of taking random actions that many reinforcement learning algorithms use for exploration; however, this is not the case. Our result requires that we choose a controller \(u(\cdot)\) such that the sample paths of our agent are random, which is not the same as choosing a controller that is random, as discussed in Supplementary Note 1.1. An additional benefit of our diffusive exploration strategy is that we did not have to presuppose that our agent dynamics were Markov or ergodic, or any of the typical assumptions that reinforcement learning algorithms make. Instead, we find that these properties emerge through our derivation as intrinsic properties of the optimal exploration strategy itself.
The following corollaries of Theorem 1.1 follow from the connection to diffusion processes and Markov chains, and as such more general forms of these proofs may be found in textbooks on stochastic processes and ergodic theory. Here, we assume that the diffusion tensor in Eq. 14, \(\Lambda^{-1}(\cdot)\), is full-rank and invertible everywhere in the exploration domain. Otherwise, these results would still hold but only for a linear subspace of the exploration domain. Additionally, for now we will assume that \(\Lambda^{-1}(\cdot)\) is Lipschitz and bounded everywhere on \(\mathcal{X}\). We will later find that these are not in fact different assumptions from those made about the local exploration rate in Eqs. 11 and 12.
**Corollary 1.1.1**.: _The sample paths of a stochastic control process (Definition 1.2) with a maximum entropy exploration strategy (in the sense of Eq. 13) satisfy the Markov property._
Proof.: This follows trivially from the temporal discretization of our path distribution in Eq. 14, or alternatively from the properties of Langevin diffusion processes. We can see that,
\[p_{max}(x_{t+\delta t}|x_{t}) =\frac{1}{Z}\exp\Big{[}-\int_{t}^{t+\delta t}\dot{x}(\tau)^{T} \Lambda(x(\tau))\dot{x}(\tau)d\tau\Big{]}\] \[\approx\frac{1}{Z_{d}}\exp\Big{[}-\frac{1}{2}||x_{t+\delta t}-x_{ t}||_{\Lambda(x_{t})}^{2}\Big{]}, \tag{15}\]
where we subsumed \(\delta t\) into a new normalization constant \(Z_{d}\) for convenience, and note that the support of \(p_{max}(x_{t+\delta t}|x_{t})\) remains infinite. Importantly, our local Lagrange multiplier \(\Lambda(x_{t})\) enforces our velocity fluctuation constraint within a neighborhood of states reachable from \(x_{t}\) for a sufficiently small time interval \(\delta t\), which is guaranteed by our Lipschitz continuity assumption. In what remains of this manuscript we will assume that \(\delta t=1\) for notational convenience, but without loss of generality. In summary, our distribution over future states in Eq. 15 depends only on the current state, which concludes our proof.
**Corollary 1.1.2**.: _A stochastic control process (Definition 1.2) in a bounded, simply connected, exploration domain with a maximum entropy exploration strategy (in the sense of Eq. 13) is ergodic._
Proof.: To prove the ergodicity of the process described by the path distribution in Eq. 14, we take advantage of Corollary 1.1.1 and the properties of our exploration domain \(\mathcal{X}\). We begin by discretizing our optimal stochastic control process such that \(P_{max}[x_{1:N}]=\prod_{t=1}^{N-1}p_{max}(x_{t+1}|x_{t})\), which we can do without loss of generality as a result of Corollary 1.1.1, using the conditional measure defined therein. Importantly, since \(p_{max}(x_{t+1}|x_{t})>0,\ \forall x_{t},x_{t+1}\in\mathcal{X},\ \forall t\in T\), and \(\mathcal{X}\) is compact and simply connected, the transition operator induced by this Markov chain, \(M_{max}\), is irreducible and aperiodic. Finally, making use of the well-known Perron-Frobenius theorem (see, e.g., [20], Ch. 4), we see that our stochastic control process admits an invariant measure with respect to which our process' sample paths are ergodic, which concludes our proof.
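As an informal illustration of Corollary 1.1.2 (not part of the proof), one can discretize the process onto a bounded grid with an assumed constant exploration scale and observe that the induced transition operator has a unique invariant distribution, reached from arbitrary initial conditions:

```python
import numpy as np

# Discretized stand-in for the maximally diffusive process on a bounded 1D grid.
grid = np.linspace(-1.0, 1.0, 101)
sigma = 0.2  # assumed local exploration scale, playing the role of C[x]^{1/2}

# Row-stochastic transition operator induced by the Gaussian kernel of Eq. 15,
# renormalized on the bounded domain.
M = np.exp(-0.5 * ((grid[None, :] - grid[:, None]) / sigma) ** 2)
M /= M.sum(axis=1, keepdims=True)

# Every entry is positive, so the chain is irreducible and aperiodic and
# repeated application of M converges to a unique invariant distribution.
p = np.zeros_like(grid); p[0] = 1.0    # start concentrated at the boundary
q = np.ones_like(grid); q /= q.sum()   # a different initial condition
for _ in range(2000):
    p = p @ M
    q = q @ M
print(np.max(np.abs(p - q)))           # ~0: same invariant measure
```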
In the context of optimal control and reinforcement learning, Corollary 1.1.2 is particularly important. For one, ergodicity guarantees that as time goes on the stochastic control process will sample from every non-zero measure set of the exploration domain. Depending on the details of the learning task, this can serve as an asymptotic guarantee on the learning process. Lastly, because Corollary 1.1.2 implies that our stochastic control process also satisfies Birkhoff's pointwise ergodic theorem as well as other ergodic theorems [26], the time-averaged behavior of the process' sample paths and their ensemble-average behavior are asymptotically the same. This is to say that the outcome of a single long rollout and the outcome of many rollouts should be equivalent in the limit to a reinforcement learning agent engaging in our exploration strategy, as we will prove in the following sections. Moreover, satisfying Birkhoff's theorem also guarantees that the statistics of data generated by such an agent will be asymptotically equivalent to those generated by _i.i.d._ sampling from the underlying data distribution.
To finish our derivation and fully characterize the nature of our maximum entropy exploration strategy, we must return to Eq. 14 and determine the form of the matrix-valued Lagrange multiplier \(\Lambda(\cdot)\). Hence, we will return to our expression for \(\langle\dot{x}(t)\dot{x}(t)^{T}\rangle_{x^{*}}\) in Eq. 11 and discretize our continuous sample paths, which we can do without loss of generality due to Corollary 1.1.1. Since Eq. 11 represents a proportionality, we take out many constant factors throughout the derivation. Additionally, any constant factor of \(\Lambda(\cdot)\) would be taken care of by the normalization constant \(Z\) in the final expression for Eq. 14. We proceed by discretizing Eq. 11, using \(i\) and \(j\) as time indices and \(p_{max}(\cdot|\cdot)\) as the conditional probability measure defined in Eq. 15. Our resulting expression is the following
\[\langle\dot{x}(t)\dot{x}(t)^{T}\rangle_{x^{*}}\propto\prod_{i=-\infty}^{ \infty}\Big{[}\int_{\mathcal{X}}dx_{i+1}\,p_{max}(x_{i+1}|x_{i})\Big{]}\sum_{ j=-\infty}^{\infty}(x_{j+1}-x_{j})(x_{j+1}-x_{j})^{T}\delta(x_{j}-x^{*}), \tag{16}\]
where the path integrals are discretized according to the Feynman formalism [27], using the same discretization as in our proof of Corollary 1.1.1 for convenience.
From this expression in Eq. 16, we take the following two steps. First, we switch out the order of summation and product, which we can do since there are no mutual dependencies between their arguments. Then, we factor out two integrals from the product expression--one capturing the probability flow _into_\(x_{j}\) and one capturing the flow _out of_ it:
\[=\sum_{j=-\infty}^{\infty}\prod_{i\neq j,j-1}\Big{[}\int_{ \mathcal{X}}dx_{i+1}\,p_{max}(x_{i+1}|x_{i})\Big{]}\\ \times\int_{\mathcal{X}}p_{max}(x_{j}|x_{j-1})\int_{\mathcal{X}} p_{max}(x_{j+1}|x_{j})(x_{j+1}-x_{j})(x_{j+1}-x_{j})^{T}\delta(x_{j}-x^{*}) dx_{j+1}dx_{j}, \tag{17}\]
where the \(\times\) operator indicates a product with the expression in the line above. Then we can apply the Dirac delta function to simplify our expression and get:
\[=\sum_{j=-\infty}^{\infty}\prod_{i\neq j,j-1}\Bigl{[}\int_{\mathcal{ X}}dx_{i+1}\;p_{max}(x_{i+1}|x_{i})\Bigr{]}\\ \times p_{max}(x^{*}|x_{j-1})\int_{\mathcal{X}}p_{max}(x_{j+1}|x^{ *})(x_{j+1}-x^{*})(x_{j+1}-x^{*})^{T}dx_{j+1}. \tag{18}\]
To simplify further we will tackle the following integral as a separate quantity:
\[I=\int_{\mathcal{X}}p_{max}(x_{j+1}|x^{*})(x_{j+1}-x^{*})(x_{j+1}-x^{*})^{T}dx_ {j+1}. \tag{19}\]
We can then substitute Eq. 15 into Eq. 19, up to proportionality, to get:
\[I=\int_{\mathcal{X}}e^{-(x_{j+1}-x^{*})^{T}\Lambda(x^{*})(x_{j+1}-x^{*})}(x_{j +1}-x^{*})(x_{j+1}-x^{*})^{T}dx_{j+1}.\]
This integral can then be evaluated using integration by parts and closed-form Gaussian integration. Thus far, we have not had any need to specify the domain in which exploration takes place. However, in order to carry out this multi-dimensional integration by parts we require integration limits. To this end, we assume that the domain of exploration is large enough that the distance between \(x^{*}\) and \(x_{j+1}\) makes the exponential term decay approximately to 0 at the limits, which we shorthand by placing the limits at infinity:
\[I=\frac{1}{2}\Lambda(x^{*})^{-1}\Bigl{[} \sqrt{\det(2\pi\Lambda^{-1}(x^{*}))}\] \[-(x_{j+1}-x^{*})^{T}\mathbf{1}e^{-(x_{j+1}-x^{*})^{T}\Lambda(x^{* })(x_{j+1}-x^{*})}\Bigr{|}_{x_{j+1}=-\infty}^{x_{j+1}=\infty}\Bigr{]}, \tag{20}\]
where \(\mathbf{1}\) is the vector of all ones, and the exponential term vanishes when evaluated at the limits. Note that our assumption on the domain of integration implies that we do not consider boundary effects, and that the quantity within the brackets is a scalar that can commute with our Lagrange multiplier matrix.
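As a numerical sanity check of the structural claim used below--that \(I\) is proportional to \(\Lambda(x^{*})^{-1}\)--the following sketch (ours, not part of the original derivation) evaluates the unnormalized second-moment integral of Eq. 19 by quadrature on a grid wide enough for the exponential to decay, mirroring the infinite-limit assumption above. The particular value of \(\Lambda\) is an arbitrary illustrative choice, and all scalar prefactors are absorbed into the normalization constant, as noted in the text.

```python
import numpy as np

# A positive-definite Lagrange multiplier Lambda (illustrative values).
Lam = np.array([[2.0, 0.5],
                [0.5, 1.0]])

# Quadrature grid in displacement coordinates d = x_{j+1} - x*, wide enough
# that exp(-d^T Lam d) has decayed, mirroring the infinite-limit assumption.
s = np.linspace(-6.0, 6.0, 601)
h = s[1] - s[0]
D = np.stack(np.meshgrid(s, s, indexing="ij"), axis=-1)       # displacements
w = np.exp(-np.einsum("...i,ij,...j->...", D, Lam, D))        # unnormalized weight

# Unnormalized second-moment integral I (Eq. 19 with Eq. 15 substituted)
# and the corresponding normalization integral.
I = np.einsum("xy,xyi,xyj->ij", w, D, D) * h * h
Z = w.sum() * h * h

# Structural claim used in the text: I is proportional to Lam^{-1}. The ratio
# I / Z is the Gaussian covariance, 0.5 * Lam^{-1}; all remaining scalars are
# absorbed into the normalization constant anyway.
print("I / Z          =\n", np.round(I / Z, 4))
print("0.5 * Lam^{-1} =\n", np.round(0.5 * np.linalg.inv(Lam), 4))
```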
We are now ready to put together our final results. Plugging Eq. 20 into Eq. 18, we have
\[\langle\dot{x}(t)\dot{x}(t)^{T}\rangle_{x^{*}}\propto\frac{1}{2} \sum_{j=-\infty}^{\infty}\prod_{i\neq j,j-1}\Bigl{[}\int_{\mathcal{X}}dx_{i+1} \;p_{max}(x_{i+1}|x_{i})\Bigr{]}\\ \times p_{max}(x^{*}|x_{j-1})\sqrt{\det(2\pi\Lambda^{-1}(x^{*}))} \Lambda(x^{*})^{-1}. \tag{21}\]
Since \(\langle\dot{x}(t)\dot{x}(t)^{T}\rangle_{x^{*}}\) is everywhere full-rank and \(p_{max}(\cdot|\cdot)\) has infinite support, neither \(\det(2\pi\Lambda(x^{*})^{-1})\) nor \(p_{max}(x^{*}|x_{j-1})\) can evaluate to 0. Thus, the full-rankness of our exploration implies the full-rankness of \(\Lambda(x^{*})^{-1}\). As the rest of this expression consists of constants independent of \(x^{*}\), this means that we may consolidate all scalars in Eq. 21 and subsume them into the normalization constant \(Z\) in our final expression. Now, we can determine the form of our Lagrange multiplier by making use of the constraint in Eq. 12, leading us to
\[\Lambda(x^{*})=\mathbf{C}^{-1}[x^{*}]. \tag{22}\]
This result is significant because for a broad class of systems it allows us to make a direct connection between an agent's ability to explore, its controllability properties, and the temporal correlations these induce, as discussed in Supplementary Note 1.1. We now have the final form of the maximum entropy exploration sample path distribution in terms of the covariance matrix:
\[P_{max}[x(t)]=\frac{1}{Z}\exp\Big{[}-\frac{1}{2}\int_{-\infty}^{\infty}\dot{x }(t)^{T}\mathbf{C}^{-1}[x(t)]\dot{x}(t)dt\Big{]}, \tag{23}\]
where we have added a factor of one half to precisely match the path probability of purely diffusive spatially-inhomogeneous dynamics. This final connection can be made rigorous by thinking of the covariance matrix as an estimator of the diffusion tensor through the following relation:
\(\mathbf{C}[\cdot]=\frac{1}{2}\mathbf{D}[\cdot]\mathbf{D}[\cdot]^{T}\) for some diffusion tensor \(\mathbf{D}[\cdot]\)[28, 29]. Hence, when faced with path continuity constraints the optimal exploration strategy is given by diffusion, which concludes our derivation. In line with this, we describe systems that satisfy these statistics as _maximally diffusive_.
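For readers who prefer a computational view, the sketch below evaluates the (unnormalized) log path probability of Eq. 23 for a discretized trajectory; the state-dependent covariance \(\mathbf{C}[x]\) used here is a hypothetical choice for illustration only. A path containing a near-discontinuous jump is penalized far more heavily than a smooth one, reflecting the continuity constraint underlying the derivation.

```python
import numpy as np

def C(x):
    """Hypothetical state-dependent covariance (diffusion) tensor C[x].

    Any symmetric positive-definite choice works; here the exploration rate
    grows smoothly with distance from the origin, purely for illustration.
    """
    return (1.0 + 0.5 * np.tanh(np.linalg.norm(x))) * np.eye(len(x))

def log_p_max(traj, dt):
    """Unnormalized log P_max[x(t)] of Eq. 23 for a discretized trajectory.

    Discretizing the action integral: -0.5 * sum_t (dx/dt)^T C^{-1}[x_t] (dx/dt) * dt.
    """
    logp = 0.0
    for x_t, x_next in zip(traj[:-1], traj[1:]):
        v = (x_next - x_t) / dt                    # finite-difference velocity
        logp += -0.5 * v @ np.linalg.solve(C(x_t), v) * dt
    return logp

rng = np.random.default_rng(1)
dt = 0.01
# A smooth (small-increment) path versus one with a large jump: the jump is
# heavily penalized, reflecting the continuity / no-teleportation constraint.
smooth = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(200, 2)), axis=0)
jumpy = smooth.copy()
jumpy[100:] += 5.0                                 # an abrupt, near-discontinuous jump
print("log P_max (smooth):", round(log_p_max(smooth, dt), 2))
print("log P_max (jumpy): ", round(log_p_max(jumpy, dt), 2))
```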
Throughout this derivation, we have assumed for convenience that the local exploration rate of the stochastic control process is everywhere full-rank. This is equivalent to saying that the control system is capable of generating variability along all dimensions of its degrees of freedom--or equivalently, as shown in Supplementary Note 1.1 for linearizable nonlinear systems, that our system is controllable. However, this assumption is somewhat artificial because typically we are not interested in exploring directly on the full state space of our control system. For example, if we have a differential drive vehicle whose state space is its position and orientation, exploration in some planar environment usually only requires that we can fully vary its position. The orientation, while key to describing the microscopic dynamics of the process, may not matter to the broader exploration task. Instead, we may consider some differentiable coordinate transformation \(y(t)=\psi(x(t))\) that maps our states in \(\mathcal{X}\) onto the desired exploration domain \(\mathcal{Y}\). In this case, all results described thus far will still hold and we will have a valid expression for \(P_{max}[y(t)]\) with diffusion tensor \(\mathbf{C}[y(t)]\), so long as \(\mathbf{C}[y(t)]=\mathbf{J}_{\psi}[x(t)]\mathbf{C}[x(t)]\mathbf{J}_{\psi}[x( t)]^{T}\) is everywhere full-rank, where \(\mathbf{J}_{\psi}[\cdot]\) is the Jacobian matrix corresponding to the coordinate transformation \(\psi\). Hence, we only require that the new system coordinates are controllable. This is particularly useful when we are dealing with high-dimensional systems with which we are interested in exploring highly coarse-grained domains.
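The coordinate-transformation condition above can be checked directly in code. The sketch below uses an illustrative, deliberately rank-deficient full-state covariance \(\mathbf{C}[x]\) for a planar-vehicle-like state \((p_{x},p_{y},\theta)\) and the projection \(\psi(x)=(p_{x},p_{y})\); the point is that exploration of \(\mathcal{Y}\) only requires the transformed covariance \(\mathbf{J}_{\psi}\mathbf{C}[x]\mathbf{J}_{\psi}^{T}\) to be full rank, not \(\mathbf{C}[x]\) itself. All specific numerical choices are assumptions made for illustration.

```python
import numpy as np

# Full state x = (px, py, theta) of a planar vehicle; the coarse-grained
# exploration domain only tracks position, so psi(x) = (px, py).
def psi_jacobian(x):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])

def C_x(x):
    """Illustrative full-state covariance C[x]: anisotropic position
    variability aligned with the heading and no heading variability,
    so C[x] is deliberately rank-deficient on the full state space."""
    theta = x[2]
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    return R @ np.diag([0.5, 0.1, 0.0]) @ R.T

x = np.array([1.0, -2.0, 0.7])
J = psi_jacobian(x)
C_y = J @ C_x(x) @ J.T          # covariance in the coarse-grained coordinates

# The condition in the text only asks that the transformed covariance be full
# rank on the exploration domain Y, even if C[x] is not full rank on X.
print("rank of C[x] on X:", np.linalg.matrix_rank(C_x(x)), "of", 3)
print("rank of C[y] on Y:", np.linalg.matrix_rank(C_y), "of", 2)
```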
### Directed exploration as variational optimization
In the previous section we derived the analytical form of our maximum entropy exploration strategy, which describes agents with maximally-decorrelated trajectories and whose path statistics are equivalent to those of controllability-dependent ergodic diffusion. Thus far, we have only discussed exploration as an undirected (or passive) process. This is to say, as a process that is blind to any notion of importance or preference ascribed to regions of the exploration domain [30]. However, under a simple reformulation of our exploration problem we will see that we can also achieve efficient directed exploration with theoretical guarantees on its asymptotic performance.
In many exploration problems, we have an a priori understanding of what regions of the exploration domain are important or informative. For example, in reinforcement learning this is encoded by the reward function [7], and in optimal control this is often encoded by a cost function or an expected information density [31, 32]. In such settings, one may want an agent to explore states while taking into account the measure of information or importance of that state, which is known as directed (or active) exploration. In order to realize directed exploration, we require a notion of the "importance" of states that is amenable to the thermodynamic construction of our approach. To this end, we reformulate our maximum entropy objective into a "free energy" minimization objective by introducing a bounded potential function \(V[\cdot]\). Across fields, potential functions are used to ascribe (either a physical or virtual) cost to system states. A potential function is then able to encode tasks in control theory, learning objectives in artificial intelligence, desirable regions in spatial coverage problems, etc. Hence, we will extend the formalism presented in the previous sections to parsimoniously achieve goal-directed exploration by considering the effect of potential functions.
Since our maximum entropy functional is an expression over all possible trajectories, we need to adapt our definition of a potential to correctly express our notion of "free energy" over possible system realizations. To this end, we define our potential in the following way,
\[\langle V[x(t)]\rangle_{P}=\int_{\mathcal{F}}P[x(t)]\int_{-\infty}^{\infty}V[x (t)]dt\mathcal{D}x(t), \tag{24}\]
which captures the average cost over all possible system paths (integrated over each possible state and time for each possible path). Formally, we must assume that \(\langle V[x(t)]\rangle_{P}\) is bounded, which in practice will be the case for policies and controllers derived from these principles. Our new free energy functional objective is
\[\underset{P[x(t)]}{\text{argmin}}\ \langle V[x(t)]\rangle_{P}-S[P[x(t)]], \tag{25}\]
where we use \(S[P[x(t)]]\) as a short-hand for the argument to Eq. 13. Thankfully, to find the optimal path distribution all of the work carried out in Supplementary Notes 1.3 and 1.4 remains unchanged. All that's needed is to take the variation of Eq. 24 with respect to \(P[x(t)]\) and integrate it into the
optimal path distribution. As this arithmetic is very similar to the derivation provided in the proof of Theorem 1.1, we omit it here. The resulting minimum free energy path distribution is then
\[P_{max}^{V}[x(t)]\propto\exp\Big{[}-\int_{-\infty}^{\infty}\Big{(}V[x(t)]+\frac{ 1}{2}\dot{x}(t)^{T}\mathbf{C}^{-1}[x(t)]\dot{x}(t)\Big{)}dt\Big{]}, \tag{26}\]
which corresponds to the path distribution of a diffusion process in a potential field. Hence, the optimal directed exploration strategy is to scale the strength of diffusion with respect to the desirability of the state. In this sense, the net effect of the potential is merely to bias the diffusion process. We refer to systems satisfying such statistics as _maximally diffusive with respect to the underlying potential_. As an aside, we note that,
\[P_{max}^{V}[x(t)]=P_{max}[x(t)]\cdot e^{-\int V[x(t)]dt} \tag{27}\]
up to normalization, from which we can recover \(P_{max}[x(t)]\) in the absence of a potential (i.e., \(V[\cdot]=0\)). We note that we can manipulate the above expression into a form amenable to Markov decision processes (MDPs) by letting \(l(\cdot)=V[\cdot]\) be a standard cost function, which leads us to the following equivalent expression:
\[P_{max}^{l}[x_{1:N}]=\prod_{t=1}^{N}p_{max}(x_{t+1}|x_{t})e^{-l(x_{t})}, \tag{28}\]
where we have discretized agent trajectories without loss of generality to match the formal requirements of MDPs. Remarkably, this path distribution resembles the form of those used in the control-as-inference literature [33]. We will find that the form of this distribution we derived is crucial to the approach we take in trajectory synthesis and reinforcement learning, particularly once we introduce a dependence on agent actions into the cost function.
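As a small illustration of Eq. 28, using a quadratic running cost of our own choosing, the sketch below computes the unnormalized log path weight for two trajectories with identical increments, one wandering near the cost minimum and one far from it. Since the diffusive transition terms cancel, the difference is purely the accumulated cost, showing how the potential (or cost) biases the path distribution toward low-cost regions.

```python
import numpy as np

def log_p_max_step(x_t, x_next, C_inv):
    """Discretized maximally diffusive transition log-density (up to a constant)."""
    d = x_next - x_t
    return -0.5 * d @ C_inv @ d

def log_path_weight(traj, C_inv, cost):
    """Unnormalized log of Eq. 28: sum of diffusive transition terms minus costs."""
    return sum(log_p_max_step(a, b, C_inv) - cost(a)
               for a, b in zip(traj[:-1], traj[1:]))

# Illustrative quadratic cost centered at the origin and an isotropic C.
cost = lambda x: 0.5 * float(x @ x)
C_inv = np.linalg.inv(0.05 * np.eye(2))

rng = np.random.default_rng(2)
steps = rng.normal(scale=0.2, size=(100, 2))
near = np.cumsum(steps, axis=0)              # a path wandering near the origin
far = near + np.array([3.0, 3.0])            # the same increments, far from it

# Identical diffusive terms, so the difference is purely the accumulated cost:
# the potential biases the path distribution toward low-cost regions.
print("log weight (near origin):", round(log_path_weight(near, C_inv, cost), 1))
print("log weight (far away):   ", round(log_path_weight(far, C_inv, cost), 1))
```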
What are the properties of such an exploration strategy? Since we already know that the sample paths of agents applying our exploration strategy are Markovian, the sample paths generated by Eq. 25 will remain Markovian as long as the potential function and its interactions with our agent are memoryless. However, ergodicity is a more challenging property to ascertain, as it depends on the properties of both the underlying potential function and our diffusion process. Nonetheless, in the following theorem we show that the trajectories of an agent successfully diffusing according to our exploration strategy in a non-singular potential remain ergodic under some mild assumptions.
**Theorem 1.2**.: _A stochastic control process (Definition 1.2) achieving maximum diffusion exploration in a potential (in the sense of Eq. 26) is ergodic with respect to the measure induced by the potential._
Proof.: The proof of this theorem can be easily arrived at by extending the proof of Corollary 1.1. So long as \(V[\cdot]\) is non-singular and has bounded spatial derivatives, we may discretize Eq. 27 in the same way we discretized Eq. 14. This ensures that the potential is integrable and that it does not overpower the system's diffusion by diverging. To this end, we assume \(V[\cdot]\) is Lipschitz and, as before, that \(V[x_{t}]\) applies to a neighborhood of \(x_{t}\) containing all states reachable between \(t\) and \(t+\delta t\) for a sufficiently small \(\delta t\). From these assumptions we can see that \(p_{max}^{V}(x_{t+\delta t}|x_{t})=p_{max}(x_{t+\delta t}|x_{t})e^{-\int V[x_{t}]dt}>0,\ \forall x_{t},x_{t+\delta t}\in\mathcal{X},\ \forall t\in T\) with sufficiently small \(\delta t\). This is because we have already shown that \(p_{max}(\cdot|\cdot)>0\) in Corollary 1.1.2, and because of the properties of our potential. As before, the transition operator induced by the Markov chain and the potential, \(M_{max}^{V}\), is aperiodic and irreducible, which allows us to establish the process' ergodicity through the Perron-Frobenius theorem and concludes our proof. Thus, the net effect of the potential is to reshuffle probability mass in the stationary distribution of our agent's Markov chain. We note that these proofs can be carried out without discretizations by instead invoking the physics of diffusion processes, as in [34] where the authors proved that heterogeneous diffusion processes in a broad class of non-singular potentials are ergodic when the strength of the potential exceeds the strength of diffusion-driven fluctuations. However, here we limit ourselves to methods from the analysis of stochastic processes.
Thus, minimum free energy exploration leads to ergodic coverage of the exploration domain in proportion to the measure induced by the potential function. This is an important result when it comes to the applicability of our results in robotics and reinforcement learning, as it is effectively an asymptotic guarantee on learning when the learning task is encoded by the choice of potential
function--as we will illustrate in the following section. Another important note is that ergodicity guarantees that the outcomes of single-shot and multi-shot learning processes can be formally the same in a broad class of learning processes. A longstanding challenge in the field of reinforcement learning has been the development of frameworks and algorithms that can--both formally and in practice--learn in single-shot settings without the need for resetting the environment. However, if ergodicity guarantees that the statistical properties of an agent's sample paths are asymptotically equivalent in single-rollout and multi-rollout learning, then single-shot learning must be possible in some settings.
To briefly demonstrate the equivalence of single-shot vs. multi-shot learning in ergodic processes, we illustrate its effect in the Probably Approximately Correct (PAC) learning framework (see [35], Ch. 2).
**Theorem 1.3**.: _A stochastic control process (Definition 1.2) achieving maximum diffusion exploration (in the sense of Eq. 23) and capable of PAC learning in a multi-shot setting is equivalently capable of single-shot learning asymptotically._
Proof.: We begin by reviewing some essential aspects of PAC learning [35]. The PAC learning framework is concerned with providing sample complexity bounds on the learning of function classes from data. These function classes \(\mathcal{C}\) are comprised of "target concepts" \(c(\cdot)\), which map from an input space \(\mathcal{X}\) to some output space \(\mathcal{Y}\). We assume the input space to be the same as the exploration domain of our maximally diffusive stochastic control process. In this abstract framework, a learning algorithm is successful if it is able to find a hypothesis \(h(\cdot):\mathcal{X}\rightarrow\mathcal{Y}\) within some class of functions \(\mathcal{H}\) that matches the target concept. To determine whether this is the case, we define the generalization error or risk associated with a hypothesis.
**Definition 1.3**.: _Given a hypothesis \(h\in\mathcal{H}\), a target concept \(c\in\mathcal{C}\), and an underlying data distribution \(\mathcal{D}\), the generalization error or risk of \(h\) is defined as_
\[R(h)=P_{x\sim\mathcal{D}}[h(x)\neq c(x)]=E_{x\sim\mathcal{D}}[\mathbf{1}_{h(x )\neq c(x)}].\]
We note that in the above definition \(\mathbf{1}_{h(x)\neq c(x)}\) is an indicator function that evaluates to 1 when the condition in the subscript is met and is 0 otherwise. We now state the formal definition of PAC-learnability.
**Definition 1.4**.: _A concept class \(\mathcal{C}\) is said to be PAC-learnable if there exists an algorithm \(\mathcal{A}\) and a polynomial function \(poly(\cdot,\cdot)\) such that for any \(\epsilon>0\) and \(\delta>0\), for all data distributions \(\mathcal{D}\) on \(\mathcal{X}\) and for any target concept \(c\in\mathcal{C}\), the following holds for any sample size \(N\geq poly(1/\epsilon,1/\delta)\):_
\[P[R(h)\leq\epsilon]\geq 1-\delta.\]
In other words, a class of functions is PAC-learnable if an algorithm can produce a function that recreates the input-output mapping of an arbitrary target function with high probability (at least \(1-\delta\)) and low error (at most \(\epsilon\)). With these definitions in hand, we may now prove how ergodicity enables single-shot PAC learning.
A crucial assumption underlying the concept of PAC-learnability is the independence and identical distribution (_i.i.d._) of data samples. Ideally, an agent (or algorithm) would exhaustively sample from all regions of the data distribution \(\mathcal{D}\) simultaneously and then produce a hypothesis from this spatial ensemble of data samples. This is equivalent to multi-shot learning--an ensemble of several agents are initialized in parallel to gather experience and feed a learning process. When we instead consider an embodied single-shot learning process, an agent such as a robot must navigate the exploration domain in order to gather samples in what is now a time-ordered sequential sampling process. In general, such a sampling process does not produce _i.i.d._ data [36]. However, ergodicity can give us a way around this through Birkhoff's well-known pointwise ergodic theorem [26], which we restate below:
**Theorem 1.4**.: _(Birkhoff's Ergodic Theorem) Let \(f\) be a measurable observable with \(E[|f|]<\infty\), and \(M\) be an ergodic measure-preserving map on a measure space \((\mathcal{X},\mathcal{F},\mathcal{D})\). Then with probability 1:_
\[\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{t=1}^{N}f(M^{t}x)=E_{x\sim\mathcal{ D}}[f].\]
Informally, this theorem states that the ensemble averages of observable functions of an ergodic process are equal to their time averages asymptotically. In general, we can think of the ergodic map \(M\) as a discrete-time representation of the dynamics of some dynamical system, where its superscript implies applying the map \(M\) iteratively \(t\) times. However, here we can substitute this generic definition with the transition operators \(M_{max}\) or \(M_{max}^{V}\) induced by the Markov chains of our discretized stochastic control processes, as defined in Corollary 1.1.2 and Theorem 1.2, respectively. Crucially, we have proven that both \(M_{max}\) and \(M_{max}^{V}\) are ergodic maps, which allows us use them along with Birkhoff's theorem. Then, from the expression of generalization error in Definition 1.3, we proceed by applying Birkhoff's theorem to the learning dynamics of a maximally diffusive stochastic control process:
\[R(h) =E_{x\sim\mathcal{D}}[\mathbf{1}_{h(x)\neq c(x)}] \tag{29}\] \[=\lim_{N\to\infty}\frac{1}{N}\sum_{t=1}^{N}\mathbf{1}_{h(M_{max}^ {t}x)\neq c(M_{max}^{t}x)}. \tag{30}\]
This result shows that due to the ergodicity of maximally diffusive stochastic control processes, the generalization error of single-shot PAC learning (i.e., Eq. 30) is asymptotically equal to that of multi-shot PAC learning (i.e., Eq. 29). Thus, given an ergodic agent capable of multi-shot learning (in the sense of Definition 1.4), we have shown that it can asymptotically achieve the same generalization error in a single-shot setting, which concludes our proof.
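A minimal numerical illustration of this equivalence, under assumptions of our own choosing (a small ergodic Markov chain standing in for the discretized maximally diffusive process, and an arbitrary hypothesis/concept pair), is sketched below: the risk estimated from a single long rollout approaches the risk computed under the stationary distribution, i.e., the multi-shot, _i.i.d._ estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# A small ergodic Markov chain over a discretized input space X (as in the
# discretized maximally diffusive process) and its stationary distribution D.
n = 8
M = rng.uniform(0.05, 1.0, size=(n, n))
M /= M.sum(axis=1, keepdims=True)
evals, evecs = np.linalg.eig(M.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                                   # stationary distribution

# An arbitrary target concept c and hypothesis h over X (binary labels).
c = rng.integers(0, 2, size=n)
h = c.copy()
h[rng.choice(n, size=2, replace=False)] ^= 1     # h disagrees with c on 2 states
mismatch = (h != c).astype(float)                # the indicator observable

# Multi-shot / i.i.d. risk: expectation of the indicator under D.
risk_iid = float(pi @ mismatch)

# Single-shot risk: time average of the same indicator along one long rollout.
x, T, hits = 0, 300_000, 0.0
for _ in range(T):
    x = rng.choice(n, p=M[x])
    hits += mismatch[x]
risk_single = hits / T

print("i.i.d. (ensemble) risk:", round(risk_iid, 4))
print("single-rollout risk:   ", round(risk_single, 4))
```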
Instead of providing guarantees on the ergodicity of reinforcement learning generally, our proof examines the relationship between ergodicity and PAC learning, as others have done in similar contexts [37]. Our theorem states that the generalization error of any PAC learning algorithm is equivalent in single-shot and multi-shot settings. We note that this equivalently applies to empirical risk minimization problems since empirical risk is also a valid observable under Birkhoff's ergodic theorem. Hence, rather than applying to a particular class of reinforcement learning problems, our result applies to all algorithms formally capable of PAC learning, which includes many reinforcement learning algorithms (e.g., [38; 39; 40; 41; 42]).
As a final note, the following theorem also follows directly from our proof of Theorem 1.3.
**Theorem 1.5**.: _A stochastic control process (Definition 1.2) achieving maximum diffusion exploration (in the sense of Eq. 23) and capable of PAC learning is asymptotically seed-invariant._
Proof.: The proof follows directly from the ergodicity of maximally diffusive PAC learners and the application of Birkhoff's ergodic theorem in our proof of Theorem 1.3. In providing guarantees that any single-shot PAC learner asymptotically attains the same generalization risk as a statistical ensemble of multi-shot learners, we also proved that ergodic PAC learners are able to learn from any random initialization (see Eq. 29), which establishes seed-invariance and concludes our proof.
We note that we have focused on providing PAC guarantees because the use of deep learning architectures makes providing formal model-specific guarantees very challenging. Hence, by instead opting for model-independent guarantees, such as those that PAC provides, we can work around this limitation. Nonetheless, our results in the main text suggest that our guarantees hold empirically when deep learning architectures are applied. Outside of PAC learning, maximally diffusive sampling processes will more broadly lead to the same single-shot and multi-shot learning outcomes in learning algorithms that preserve ergodicity, such as ergodic mirror descent [43] or incremental subgradient methods [44].
### Minimizing path free energy produces diffusive gradient descent
To develop further intuition about the sense in which systems satisfying the statistics of Eq. 26 are achieving goal-directed exploratory behavior, we can examine the maximum likelihood trajectory of our minimum free energy path distribution. To do this, we begin by calculating the negative
log-likelihood of \(P_{max}^{V}[x(t)]\)
\[-\log[P_{max}^{V}[x(t)]] =\int_{-\infty}^{\infty}V[x(t)]+\frac{1}{2}\dot{x}(t)^{T}\mathbf{C} ^{-1}[x(t)]\dot{x}(t)dt \tag{31}\] \[=\int_{-\infty}^{\infty}\mathcal{H}(t,x(t),\dot{x}(t))dt\] \[=\int_{-\infty}^{\infty}\dot{x}(t)^{T}\mathbf{C}^{-1}[x(t)]\dot{ x}(t)-\mathcal{L}(t,x(t),\dot{x}(t))dt,\]
where we noted that the integral's argument is a Hamiltonian whose Legendre transform we can take, and arrive at an equivalent Lagrangian description of the system. Then, to derive an expression for the maximum likelihood trajectories of our path distribution we can extremize the Lagrangian's associated action functional:
\[\mathcal{A}=\int_{-\infty}^{\infty}\mathcal{L}(t,x(t),\dot{x}(t))dt=\int_{- \infty}^{\infty}V[x(t)]-\frac{1}{2}\dot{x}(t)^{T}\mathbf{C}^{-1}[x(t)]\dot{x }(t)dt. \tag{32}\]
Assuming that our potential is differentiable, which the rest of our analysis does not require, we can find the dynamics of the maximum likelihood trajectory by using the Euler-Lagrange equations:
\[0 =\nabla_{x}\mathcal{L}-\frac{d}{dt}\big{[}\nabla_{\dot{x}} \mathcal{L}\big{]} \tag{33}\] \[=\nabla_{x}V[x(t)]-\frac{1}{2}\dot{x}(t)^{T}\nabla_{x}\mathbf{C} ^{-1}[x(t)]\dot{x}(t)-\Big{[}-\ddot{x}(t)^{T}\mathbf{C}^{-1}[x(t)]-\dot{x}(t)^ {T}\nabla_{x}\mathbf{C}^{-1}[x(t)]\dot{x}(t)\Big{]}\] \[=\nabla_{x}V[x(t)]+\ddot{x}(t)^{T}\mathbf{C}^{-1}[x(t)]+\frac{1}{2 }\dot{x}(t)^{T}\nabla_{x}\mathbf{C}^{-1}[x(t)]\dot{x}(t)\]
which we can rearrange into our final expression,
\[\ddot{x}(t)=-\mathbf{C}[x(t)]\Big{[}\nabla_{x}V[x(t)]+\frac{1}{2}\dot{x}(t)^{ T}\nabla_{x}\mathbf{C}^{-1}[x(t)]\dot{x}(t)\Big{]} \tag{34}\]
This last expression represents the maximum likelihood dynamics of a system whose trajectories satisfy our minimum free energy path statistics. We note that \(\nabla_{x}\mathbf{C}^{-1}[x(t)]=-\mathbf{C}^{-1}[x(t)]\nabla_{x}\mathbf{C}[x (t)]\mathbf{C}^{-1}[x(t)]\), which we omitted from Eq. 34 for notational simplicity. Our expression is comprised of two gradient-like terms. The first of these terms points in directions of descent for the potential, and the second in directions that increase the system's local exploration rate (or controllability, when applicable).
To simplify these dynamics further, we can make one of two assumptions: either that our local exploration rate varies slowly over space (at least relative to \(\nabla_{x}V\)), or that our system dynamics are linearizable. Either assumption implies that \(\nabla_{x}\mathbf{C}\approx\mathbf{0}\), which leads to a simplification of the final expression in Eq. 34. For the sake of making a connection to controllability, consider simplifying the maximum likelihood dynamics by assuming their linearizability. For this class of dynamics, Eq. 8 tells us that our system's controllability properties do not vary abruptly over state-space, as discussed in Supplementary Note 1.1. For systems with fixed or quasi-static morphologies this assumption holds well. Then, we have the following simplified dynamics:
\[\ddot{x}(t) =-\mathbf{C}[x(t)]\nabla_{x}V[x(t)] \tag{35}\] \[=-W(t,t_{0})\nabla_{x}V[x(t)].\]
By inspection we see that these second order dynamics resemble those of inertial gradient descent [45, 46, 47, 48], with two key differences. First, the absence of a damping term in the expression, which can be artificially introduced and tuned to guarantee and optimize convergence. Alternatively, we can note that any physical system approximately satisfying maximally diffusive trajectory statistics will experience dissipation, which means there may be no need to introduce it artificially. Second, and more importantly, that our system's ability to produce descent directions that optimize the potential is affected by its controllability properties. Thus, our results show that controllable agents can minimize arbitrary potentials merely through noisy exploration, suggesting that under our theoretical framework for maximum diffusion there is no formal trade-off between exploration and exploitation asymptotically, as we discuss in the following section. However, over finite time horizons we do not
expect this to be the case, as discussed in the main text. Nonetheless, this motivation will form the basis of our approach to optimization and learning in the following sections.
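A minimal simulation of the maximum likelihood dynamics in Eq. 35 is sketched below, with an illustrative quadratic potential, a fixed covariance \(\mathbf{C}\), and a small damping term added artificially, as discussed above, to ensure convergence; these specific choices are ours. The trajectory descends to the minimizer of the potential, preconditioned by \(\mathbf{C}\).

```python
import numpy as np

# Maximum likelihood dynamics of Eq. 35 on an illustrative quadratic potential,
# with a small damping term added (as suggested above) to ensure convergence.
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])                       # V(x) = 0.5 x^T A x
grad_V = lambda x: A @ x

C = np.array([[1.0, 0.2],                        # constant exploration-rate /
              [0.2, 0.6]])                       # controllability-like tensor
damping = 1.5

x = np.array([4.0, -3.0])                        # initial condition
v = np.zeros(2)
dt = 1e-3
for _ in range(200_000):
    a = -C @ grad_V(x) - damping * v             # inertial, C-preconditioned descent
    v += a * dt
    x += v * dt

print("final state:", np.round(x, 5))            # approaches the minimizer x = 0
print("V(final):   ", round(0.5 * x @ A @ x, 8))
```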
For a moment, we consider the implications of a learning agent satisfying maximally diffusive trajectory statistics in the presence of a goal-encoding potential field (e.g., a reward function). Because such an agent will asymptotically realize the same path statistics as an ensemble of agents initialized from any initial condition, ergodicity formally requires robustness to randomized conditions of the agent and environment even outside of the PAC formalism, which we term "environmental seed-invariance," and state in the following proposition.
**Proposition 1.1**.: _A stochastic control process (Definition 1.2) achieving maximum diffusion exploration in a potential (in the sense Eq. 26) is environmentally seed-invariant asymptotically._
Proof.: The proof of this proposition follows directly from the ergodicity of maximally diffusive agents in Theorem 1.2, but can be also motivated by our derivation in this section. The ergodicity of maximally diffusive agents formally requires that agents realize the same behavior regardless of their initialization, which alone guarantees environmental seed-invariance.
Beyond ergodicity, however, our results in this section show that realizing maximally diffusive exploration with respect to a potential field spontaneously leads to goal-directed behavior regardless of how the agent and its environment are initialized. Their behavior will lead to the same outcomes in maximum likelihood--gradient descent on the potential. Moreover, because the \(n\)th-order cumulants of the maximally diffusive path distribution are zero for all \(n>2\)--that is, our distribution is not heavy-tailed--we also know that maximum likelihood trajectories are representative of typical agent behavior. In other words, as long as our agent is controllable, we can reliably expect the same outcomes across different random realizations of the agent and its environment, which also establishes environmental seed-invariance. As discussed in the main text, formally guaranteeing seed-invariance in the sense usually intended by the reinforcement learning community would require including deep learning models of agent dynamics, policies, and rewards in our analysis. However, analyzing such systems is exceedingly challenging and out of the scope of our work, which limits us to providing either model-independent seed-invariance guarantees, as we do in Theorem 1.5, or environmental seed-invariance guarantees, as we do above. Nonetheless, as our results in the main text show, our approach may be able to overcome these issues in practice.
As a final note on the derivations carried out throughout all of Supplementary Note 1, we point out that most of the work we have done largely amounts to formulating an exploration problem and deriving the optimal trajectory statistics for a sufficiently broad class of agents, \(P_{max}[x(t)]\), as well as exploring its formal properties. We chose to tailor our modelling assumptions to capture the behavior of embodied agents--a class of agents historically underexplored in reinforcement learning theorycrafting--only considering agents whose trajectories are continuous. In doing so, we paid particular attention to the ways in which an agent's properties (i.e., its controllability-induced temporal correlations) affect its ability to generate optimal path statistics. However, it is entirely possible for disembodied agents to have constraints on their state transitions as well. In other words, just because an agent may be capable of teleporting from one state to another (e.g., in a digital environment) it does not mean that it is equally easy to teleport to and from every state in the environment. Thus, as a final note we point out that every formal result we have proven throughout Supplementary Note 1 still holds when we remove the continuity constraint (except for the analysis in Supplementary Note 1.6). However, in this case the optimal distribution will be uniform over the state space, i.e., \(p^{U}_{max}(x_{t+1}|x_{t})=1/|\mathcal{X}|\). In the presence of a potential our agent would also provably realize ergodic Markov exploration with respect to a cost or potential function. In this case, the optimal path distribution would take a similar form as Eq. 28, i.e., \(p^{U,l}_{max}(x_{t+1}|x_{t})=p^{U}_{max}(x_{t+1}|x_{t})e^{-l(x_{t})}\). However, realizing these path statistics is only possible when the underlying agent is fully controllable in the sense of Definition 2.1, as we discuss in the following section. We note that it is only under these conditions that agents can completely overcome correlations between state transitions. Moreover, all of the control and policy synthesis results we derive in the following sections will still hold and work well in a broad range of disembodied reinforcement learning applications, except for those in Supplementary Notes 2.4 and 2.5. While agents satisfying these statistics will still achieve sequential sampling that is asymptotically _i.i.d._, the connection to the statistical mechanics of diffusion processes will no longer hold.
## Synthesizing maximally diffusive trajectories
Throughout the previous section, we have been studying the properties of a theoretical agent whose trajectories spontaneously satisfy the statistics of a maximally diffusive stochastic control process. However, the autonomous dynamics of control systems will typically not satisfy these statistics on their own. Hence, we require an approach from which to synthesize controllers (and policies) that generate maximally diffusive trajectories. In this section, we provide a general formulation of such an approach as well as simplifications amenable to use in real-time optimal control synthesis and reinforcement learning. All results derived herein form part of what we refer to as _maximum diffusion (MaxDiff) trajectory synthesis_.
### Maximally diffusive trajectories via KL control
In previous sections, we derived the maximally diffusive path distribution, \(P^{V}_{max}[x(t)]\), and characterized the properties of sample paths drawn from it in the presence of a potential that ascribes a cost to system states, \(V[\cdot]\). Now, we turn to the question of synthesizing policies and controllers that can actually achieve these statistics. To this end, we recall that in Supplementary Note 1.2 we defined a path probability measure for an arbitrary stochastic control process, \(P_{u(t)}[x(t)]\). Equipped with this measure, we are able to express the most general form of the MaxDiff trajectory synthesis objective. To synthesize maximally diffusive trajectories, it suffices to generate policies and controllers that minimize the Kullback-Leibler (KL) divergence between the analytical optimum we derived in Supplementary Note 1 and the system's current path distribution. Equivalently, we can express this as,
\[\underset{u(t)}{\text{argmin}}\ D_{KL}(P_{u(t)}[x(t)]||P^{V}_{max}[x(t)]), \tag{36}\]
which we can reformulate into many alternative forms through simple manipulations, as we illustrate throughout the following sections. Here, we first manipulate the objective into a form that highlights the different roles of the terms comprising it. Importantly, we note that taking the KL divergence is a well-defined operation in this context because the support of \(P^{V}_{max}[x(t)]\) is infinite, and we have assumed that \(\mathcal{X}\) is a compact domain. Using the definition of the KL divergence over path distributions, we can factor our objective in the following way:
\[\begin{split} D_{KL}(P_{u(t)}||P^{V}_{max})&=\int_{ \mathcal{F}}P_{u(t)}[x(t)]\log\frac{P_{u(t)}[x(t)]}{P^{V}_{max}[x(t)]}\mathcal{ D}x(t)\\ &=\int_{\mathcal{F}}P_{u(t)}[x(t)]\Big{[}\log P_{u(t)}[x(t)]-\log P ^{V}_{max}[x(t)]\Big{]}\mathcal{D}x(t)\\ &=\int_{\mathcal{F}}P_{u(t)}[x(t)]\Big{[}\log P_{u(t)}[x(t)]-\log P _{max}[x(t)]+\int_{-\infty}^{\infty}V[x(t)]dt\Big{]}\mathcal{D}x(t)\\ &=\langle V[x(t)]\rangle_{P_{u(t)}}+D_{KL}(P_{u(t)}[x(t)]||P_{max }[x(t)]),\end{split} \tag{37}\]
where we used Eq. 27 to arrive at our final expression. Now, we can rewrite our control synthesis problem as the following
\[\underset{u(t)}{\text{argmin}}\ \langle V[x(t)]\rangle_{P_{u(t)}}+D_{KL}(P_{u(t)}[x( t)]||P_{max}[x(t)]), \tag{38}\]
or equivalently
\[\underset{u(t)}{\text{argmin}}\ E_{P_{u(t)}}\big{[}L[x(t),u(t)]\big{]}+D_{KL}( P_{u(t)}[x(t)]||P_{max}[x(t)]), \tag{39}\]
where we replace our potential with a cost function \(L[x(t),u(t)]=\int l(x(t),u(t))dt\) in terms of the running cost \(l(\cdot,\cdot)\). While potential functions are a natural way to ascribe thermodynamic costs to the states of physical systems, such as diffusion processes, there is no reason to restrict ourselves to that formalism now that we are focused on control synthesis. We also replaced our physics-based expected value notation, but note that they are formally equivalent (i.e., \(\langle\cdot\rangle_{p}=E_{p}[\cdot]\)). Finally, we note that we can introduce a temperature-like parameter \(\alpha>0\) to balance between the two terms in our objective: the first, which optimizes task performance; and the second, which optimizes the statistics of the agent's diffusion. Thus, when the system is able to achieve maximally diffusive
trajectory statistics, our approach reduces to solving the task with thorough exploration of the cost landscape.
An interesting property of this result is that in our theoretical approach there is no formal trade-off between exploration and exploitation--at least asymptotically. This is because when an agent is capable of achieving maximally diffusive statistics, the KL divergence term goes to zero. That being said, in practice this is not the case and the introduction of \(\alpha\) will be of practical use in balancing between exploration and exploitation. Moreover, when maximally diffusive statistics are satisfied the expected value of the objective is taken with respect to the optimal maximum entropy trajectory distribution (i.e., \(E_{P_{max}}\big{[}L[x(t),u(t)]\big{]}\)), which is a bias-minimizing estimator of the cost function equivalent to _i.i.d._ sampling of state-action costs (or rewards) as a result of the ergodic properties of \(P_{max}[x(t)]\). This is particularly useful in applications like reinforcement learning where the cost (or reward) function is unknown.
### Maximally diffusive trajectories via stochastic optimal control
We can formulate our KL control problem as an equivalent stochastic optimal control (SOC) problem by making use of their well-known connections [33]. In SOC, the objective is to find a policy \(\pi(\cdot|\cdot)\) over control actions conditioned on the current state that optimizes the expected cost of a given cost-per-stage function over a time-horizon of fixed duration (although extending to an infinite horizon setting can be trivially done through the inclusion of a discount factor). The standard discrete time formulation of the SOC problem is
\[\pi^{*}=\underset{\pi}{\text{argmin }}E_{(x_{1:N},u_{1:N})\sim P_{\pi}}\Big{[} \sum_{t=1}^{N}l(x_{t},u_{t})\Big{]}, \tag{40}\]
where \(l(\cdot,\cdot)\) is a discretized running cost and the expectation is taken with respect to the trajectory measure induced by the policy \(P_{\pi}\), which we will now motivate and define.
To translate our KL control results from the previous section into an equivalent SOC problem, we will have to make some modifications to our approach. In particular, the introduction of a policy \(\pi(\cdot|\cdot)\) that replaces our notion of a controller (as defined in Supplementary Note 1.2) requires careful treatment. Whereas our definition of a path distribution allowed us to express a distribution directly over the trajectories of our agent, the introduction of a policy induces a distribution over actions as well. In other words, instead of \(P_{u_{1:N}}[x_{1:N}]\), we will now have \(P_{\pi}[x_{1:N},u_{1:N}]\). This creates a complication because it makes the KL divergence in Eq. 36 ill-posed--the agent's distribution and our maximally diffusive distribution are now defined over different domains. To solve this issue, we introduce the following distributions:
\[\begin{split} P_{\pi}[x_{1:N},u_{1:N}]&=\prod_{t=1 }^{N}p(x_{t+1}|x_{t},u_{t})\pi(u_{t}|x_{t})\\ P_{max}^{l}[x_{1:N},u_{1:N}]&=\prod_{t=1}^{N}p_{ max}(x_{t+1}|x_{t})e^{-l(x_{t},u_{t})},\end{split} \tag{41}\]
where \(p_{max}(x_{t+1}|x_{t})\propto\exp\big{[}-\frac{1}{2}(x_{t+1}-x_{t})^{T}\mathbf{ C}^{-1}[x_{t}](x_{t+1}-x_{t})\big{]}\) is the discretized maximally diffusive conditional measure. The second of these distributions was analytically derived in Eq. 28, and we can formally introduce an action dependence because the maximally diffusive path distribution is action-independent. Note that for the first time in our derivation we are making use of the Markov property to express our system's dynamics. However, since the analytically-derived optimal transition dynamics are Markovian, the synthesized controller will attempt to make the agent's true dynamics satisfy the Markov property as a result of the underlying optimization, which makes this a benign assumption under our framework. We note that the more general problem description in Eq. 36 does not require us to assume that our dynamics are Markovian because we are minimizing the KL divergence between the trajectory distributions directly.
Taken together, these modifications allow us to rewrite Eq. 36 as,
\[\underset{\pi}{\text{argmin }}D_{KL}(P_{\pi}[x_{1:N},u_{1:N}]||P_{max}^{l}[x_{1:N},u_{1:N}]). \tag{42}\]
Then, working from the definition of the KL divergence we have
\[D_{KL}(P_{\pi}[x_{1:N},u_{1:N}]||P_{max}^{l}[x_{1:N},u_{1:N}]) =E_{P_{\pi}}\Bigg{[}\log\frac{P_{\pi}[x_{1:N},u_{1:N}]}{P_{max}^{l }[x_{1:N},u_{1:N}]}\Bigg{]}\] \[=E_{P_{\pi}}\Bigg{[}\log\prod_{t=1}^{N}\frac{p(x_{t+1}|x_{t},u_{t} )\pi(u_{t}|x_{t})}{p_{max}(x_{t+1}|x_{t})e^{-l(x_{t},u_{t})}}\Bigg{]}\] \[=E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}\log\frac{p(x_{t+1}|x_{t},u_{t} )\pi(u_{t}|x_{t})}{p_{max}(x_{t+1}|x_{t})e^{-l(x_{t},u_{t})}}\Bigg{]}\] \[=E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}l(x_{t},u_{t})+\log\frac{p(x_{t +1}|x_{t},u_{t})\pi(u_{t}|x_{t})}{p_{max}(x_{t+1}|x_{t})}\Bigg{]}.\]
At this point, we explicitly introduce a temperature-like parameter, \(\alpha>0\), to balance between the terms of our objective, as mentioned in the previous section and in the main text. We note that this is a benign modification because it is equivalent to scaling our costs or rewards by \(1/\alpha\), and it leads to the following result:
\[D_{KL}(P_{\pi}||P_{max}^{l})=E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}l(x_{t},u_{t})+ \alpha\log\frac{p(x_{t+1}|x_{t},u_{t})\pi(u_{t}|x_{t})}{p_{max}(x_{t+1}|x_{t}) }\Bigg{]}. \tag{43}\]
With this result we are now able to write our final expression for an equivalent SOC representation of the KL control problem in Eq. 36:
\[\pi_{\text{MaxDiff}}^{*}=\underset{\pi}{\text{argmin}}\;E_{(x_{1:N},u_{1:N}) \sim P_{\pi}}\Big{[}\sum_{t=1}^{N}\hat{l}(x_{t},u_{t})\Big{]}, \tag{44}\]
with
\[\hat{l}(x_{t},u_{t})=l(x_{t},u_{t})+\alpha\log\frac{p(x_{t+1}|x_{t},u_{t})\pi (u_{t}|x_{t})}{p_{max}(x_{t+1}|x_{t})}, \tag{45}\]
as our modified running cost function, which concludes our derivation of the formal equivalence between the KL control and SOC MaxDiff trajectory synthesis problems. When we modify the objective above by instead maximizing a reward function \(\hat{r}(x_{t},u_{t})\) with \(r(x_{t},u_{t})=-l(x_{t},u_{t})\), we refer to this objective as the _MaxDiff RL_ objective, as we have done in the main text.
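To show how the modified running cost of Eq. 45 might be evaluated in practice, the sketch below assumes Gaussian forms for the dynamics model \(p(x_{t+1}|x_{t},u_{t})\), the policy \(\pi(u_{t}|x_{t})\), and the maximally diffusive transition density. All of these specific parameterizations (means, covariances, and the cost) are our own illustrative assumptions rather than part of the derivation.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

alpha = 0.5

def l(x, u):
    """Illustrative running cost: quadratic in state and control."""
    return float(x @ x + 0.1 * u @ u)

def l_hat(x, u, x_next, dyn_cov, pi_mean, pi_cov, C_max):
    """Modified running cost of Eq. 45 under Gaussian modelling assumptions:
    a Gaussian dynamics model p(x'|x,u), a Gaussian policy pi(u|x), and the
    Gaussian maximally diffusive transition density p_max(x'|x)."""
    log_p = mvn.logpdf(x_next, mean=x + u, cov=dyn_cov)      # assumed dynamics model
    log_pi = mvn.logpdf(u, mean=pi_mean, cov=pi_cov)         # assumed policy density
    log_p_max = mvn.logpdf(x_next, mean=x, cov=C_max)        # maximally diffusive term
    return l(x, u) + alpha * (log_p + log_pi - log_p_max)

x = np.array([1.0, -0.5])
u = np.array([0.2, 0.1])
x_next = x + u + np.array([0.01, -0.02])
print(round(l_hat(x, u, x_next,
                  dyn_cov=0.01 * np.eye(2),
                  pi_mean=np.zeros(2), pi_cov=0.2 * np.eye(2),
                  C_max=0.1 * np.eye(2)), 3))
```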
Before we conclude this section, we return to the role that temporal correlations and controllability play in the generation of maximally diffusive trajectories. To this end, we first formalize and define a particular notion of controllability in the context of MDPs that was partially introduced in [49], implicit in the results of [50], and explicitly called out in [33].
**Definition 2.1**.: _A state transition model, \(p(x_{t+1}|x_{t},u_{t})\), in an MDP, \((\mathcal{X},\mathcal{U},p,r)\), is fully controllable when there exists a policy, \(\pi:\mathcal{U}\times\mathcal{X}\rightarrow[0,\infty)\), such that:_
\[p_{\pi}(x_{t+1}|x_{t})=E_{u_{t}\sim\pi(\cdot|x_{t})}[p(x_{t+1}|x_{t},u_{t})] \tag{46}\]
_and_
\[D_{KL}\Big{(}p_{\pi}(x_{t+1}|x_{t})\Big{|}\Big{|}\nu(x_{t+1}|x_{t})\Big{)}=0, \quad\forall t\in\mathbb{Z}^{+} \tag{47}\]
_for any arbitrary choice of state transition probabilities, \(\nu:\mathcal{X}\times\mathcal{X}\rightarrow[0,\infty)\)._
Thus, a system is _fully controllable_ when it is simultaneously capable of reaching every state and controlling _how_ each state is reached. In other words, a fully controllable agent can arbitrarily manipulate its state transition probabilities, \(p_{\pi}(x_{t+1}|x_{t})\), by using an optimized policy to match any desired transition probabilities, \(\nu(x_{t+1}|x_{t})\). Whether the underlying policy is deterministic or stochastic is irrelevant to Definition 2.1. However, our interpretation of \(p_{\pi}(x_{t+1}|x_{t})\) is different in either setting. When the policy is stochastic we interpret the agent's controlled state transition model as
\[p_{\pi}(x_{t+1}|x_{t})=\int_{\mathcal{U}}p(x_{t+1}|x_{t},u_{t})\pi(u_{t}|x_{t} )du_{t}, \tag{48}\]
where the integral over control actions arises from the expectation in Eq. 46. Alternatively, in the deterministic case the agent's state transition model is given by
\[p_{\pi}(x_{t+1}|x_{t})=\int_{\mathcal{U}}p(x_{t+1}|x_{t},u_{t})\delta(u_{t}-\tau _{\pi}(x_{t}))du_{t}=p(x_{t+1}|x_{t},\tau_{\pi}(x_{t})), \tag{49}\]
where action sequences are drawn from \(\pi(u_{t}|x_{t})=\delta(u_{t}-\tau_{\pi}(x_{t}))\), which is a Dirac delta where \(u_{t}=\tau_{\pi}(x_{t})\) is some given deterministic function [33].
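The two cases above can be made concrete with a small tabular example: for a randomly generated MDP of our own construction, the sketch below computes the controlled state transition model \(p_{\pi}(x_{t+1}|x_{t})\) by marginalizing over a stochastic policy (Eq. 48), and by selecting a single action per state for a deterministic policy (Eq. 49).

```python
import numpy as np

rng = np.random.default_rng(4)

# Small discrete MDP: n states, m actions, tabular dynamics p(x'|x,u).
n, m = 5, 3
p = rng.uniform(0.1, 1.0, size=(n, m, n))
p /= p.sum(axis=2, keepdims=True)

# Stochastic policy pi(u|x): the controlled transition model is the
# expectation of p(x'|x,u) over actions, as in Eq. 46 / Eq. 48.
pi = rng.uniform(0.1, 1.0, size=(n, m))
pi /= pi.sum(axis=1, keepdims=True)
p_pi_stoch = np.einsum("xuy,xu->xy", p, pi)

# Deterministic policy u = tau(x): the expectation collapses to a single
# action per state, as in Eq. 49.
tau = rng.integers(0, m, size=n)
p_pi_det = p[np.arange(n), tau]

print("rows sum to 1 (stochastic):   ", np.allclose(p_pi_stoch.sum(axis=1), 1.0))
print("rows sum to 1 (deterministic):", np.allclose(p_pi_det.sum(axis=1), 1.0))
```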
Equipped with our definition of full controllability, we may now shed light on the relationship between our MaxDiff RL framework and the broader literature on maximum entropy reinforcement learning (MaxEnt RL) [7, 8, 51], and present one of our main theorems.
**Theorem 2.1**.: _MaxEnt RL is a special case of MaxDiff RL with the added assumption that state transitions are decorrelated._
Proof.: Our goal in this proof will be to take the MaxDiff RL objective function in Eq. 44 and explore its relationship to the MaxEnt RL objective. We begin our proof by algebraically manipulating the MaxDiff RL objective function in Eq. 44:
\[E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}\hat{l}(x_{t},u_{t})\Bigg{]} =E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}l(x_{t},u_{t})+\alpha\log\frac {p(x_{t+1}|x_{t},u_{t})\pi(u_{t}|x_{t})}{p_{max}(x_{t+1}|x_{t})}\Bigg{]}\] \[=E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}l(x_{t},u_{t})\Bigg{]}+\sum_{t= 1}^{N}E_{(x_{t},u_{t})\sim p,\pi}\Bigg{[}\alpha\log\frac{p(x_{t+1}|x_{t},u_{t} )\pi(u_{t}|x_{t})}{p_{max}(x_{t+1}|x_{t})}\Bigg{]}\] \[=E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}l(x_{t},u_{t})\Bigg{]}+\sum_{t= 1}^{N}E_{(x_{t},u_{t})\sim p,\pi}\Big{[}\alpha\log\pi(u_{t}|x_{t})\Big{]}\] \[\qquad\qquad\qquad\qquad\qquad+\sum_{t=1}^{N}E_{(x_{t},u_{t})\sim p,\pi}\Bigg{[}\alpha\log\frac{p(x_{t+1}|x_{t},u_{t})}{p_{max}(x_{t+1}|x_{t})} \Bigg{]}.\]
So far, we have merely rearranged the terms in the MaxDiff RL objective by taking advantage of the linearity of expectations and the definition of \(P_{\pi}\) in Eq. 41. Now, we proceed by applying Jensen's inequality to the last term of our expression above--bringing the expectation over control actions inside the logarithm, noting that \(E_{u_{t}\sim\pi}[p_{max}(x_{t+1}|x_{t})]=p_{max}(x_{t+1}|x_{t})\), and performing further algebraic manipulations:
\[\leq E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}l(x_{t},u_{t})\Bigg{]}+\sum _{t=1}^{N}E_{(x_{t},u_{t})\sim p,\pi}\Big{[}\alpha\log\pi(u_{t}|x_{t})\Big{]} +\sum_{t=1}^{N}E_{x_{t}\sim p}\Bigg{[}\alpha\log\frac{E_{u_{t}\sim\pi}[p(x_{t+ 1}|x_{t},u_{t})]}{p_{max}(x_{t+1}|x_{t})}\Bigg{]}\] \[\leq E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}l(x_{t},u_{t})\Bigg{]}+\sum _{t=1}^{N}E_{(x_{t},u_{t})\sim p,\pi}\Big{[}\alpha\log\pi(u_{t}|x_{t})\Big{]} +\sum_{t=1}^{N}E_{x_{t}\sim p}\Bigg{[}\alpha\log\frac{p_{\pi}(x_{t+1}|x_{t})}{ p_{max}(x_{t+1}|x_{t})}\Bigg{]}\] \[\leq E_{P_{\pi}}\Bigg{[}\sum_{t=1}^{N}l(x_{t},u_{t})+\alpha\log \pi(u_{t}|x_{t})+\alpha D_{KL}\big{(}p_{\pi}(x_{t+1}|x_{t})\big{|}\big{|}p_{max} (x_{t+1}|x_{t})\big{)}\Bigg{]}, \tag{50}\]
where we also used the definition of \(p_{\pi}(x_{t+1}|x_{t})\) from Eq. 46.
To conclude our proof, we must show that the MaxEnt RL objective emerges from the MaxDiff RL objective under the assumption that an agent's state transitions are decorrelated. We can formalize what decorrelation requires of an agent in one of two contexts--that of agents with continuous paths, or in general. Our derivation throughout Supplementary Note 1 achieves this in the context of agents with continuous paths. Therein, we proved that the least-correlated continuous agent paths uniquely satisfy maximally diffusive statistics, which requires that \(D_{KL}(p_{\pi}|p_{max})=0\) when there exists an optimizing policy \(\pi\). Alternatively, completely decorrelating the state transitions of an agent in general requires being able to generate arbitrary jumps between states--as discussed in the main text--which requires full controllability (see Definition 2.1). Given full controllability, the optimum of Eq. 50 is also reached when \(D_{KL}(p_{\pi}|p_{max})=0\).
Applying the assumption of decorrelated state transitions in either of the two senses expressed above not only removes the KL divergence term from Eq. 50, but also saturates Jensen's inequality, which recovers the equality between the left and right hand sides of our equations:
\[E_{P_{\pi}}\bigg{[}\sum_{t=1}^{N}\hat{l}_{c}(x_{t},u_{t})\bigg{]}=E_{P_{\pi}} \bigg{[}\sum_{t=1}^{N}l(x_{t},u_{t})+\alpha\log\pi(u_{t}|x_{t})\bigg{]},\]
where we added the subscript \(c\) to indicate that this applies under the assumption of decorrelated state transitions--either in the context of agents with continuous paths (with maximum diffusivity as a necessary condition) or in general (with full controllability as a sufficient condition). Putting together our final results, we may now write down the simplified MaxDiff RL optimization objective with the added assumption of decorrelated state transitions:
\[\pi^{*}=\underset{\pi}{\text{argmin}}\ E_{(x_{1:N},u_{1:N})\sim P_{\pi}}\Big{[} \sum_{t=1}^{N}\hat{l}_{c}(x_{t},u_{t})\Big{]}, \tag{51}\]
with
\[\hat{l}_{c}(x_{t},u_{t})=l(x_{t},u_{t})+\alpha\log\pi(u_{t}|x_{t}), \tag{52}\]
or equivalently, we can write Eq. 51 as a maximization by replacing the cost with a reward function:
\[\hat{r}_{c}(x_{t},u_{t})=r(x_{t},u_{t})+\alpha\mathcal{H}(\pi(u_{t}|x_{t})), \tag{53}\]
where we briefly changed our entropy notation, using \(\mathcal{H}(\pi(u_{t}|x_{t}))=S[\pi(u_{t}|x_{t})]\), to highlight similarities with other results in the literature. Crucially, we recognize this objective as the MaxEnt RL objective, which proves that MaxDiff RL is a strict generalization of MaxEnt RL to agents with correlations in their state transitions, which includes all physically-embodied agents, as well as many disembodied agents. Moreover, this also proves that maximizing policy entropy does not decorrelate state transitions in general.
In contrast to MaxEnt RL, when the system induces temporal correlations the MaxDiff RL objective continues to prioritize effective exploration by decorrelating state transitions and encouraging the system to satisfy maximally diffusive trajectory statistics. As we have shown above, MaxEnt RL's strategy of decorrelating action sequences is only as effective as MaxDiff RL's strategy of decorrelating state sequences when the underlying agent's properties do not induce temporal correlations on their own. Moreover, when the agent is capable of satisfying \(D_{KL}(p_{\pi}|p_{max})=0\), the agent's state transition dynamics cancel out of the MaxDiff RL objective in Eq. 44. This suggests that successful MaxDiff RL policies will achieve a kind of generalizability across agent embodiments, as we illustrated in Figure 4 of the main text and in Supplementary Movie 3. This should not come as a complete surprise based on our analysis in Supplementary Note 1.6, which shows that the maximum likelihood paths of maximally diffusive dynamics evolve along gradients that optimize their controllability.
An interesting aside is that the MaxDiff RL objective formally requires model-based techniques to optimize because of its dependence on the system's transition model. In this sense, MaxEnt RL is the best one can do in a model-free setting--yet, with model-based techniques better performance is attainable when the system dynamics introduce temporal correlations. However, if one has direct access to state transition entropy estimates, then by reformulating the objective function in Eq. 44, it is technically possible to extend our results to model-free algorithms, as we show in the following sections.
### Alternative synthesis approach via entropy maximization
For convenience, but without loss of generality, here we begin by limiting ourselves to deriving controllers in the absence of a potential or cost function. In Supplementary Note 2.1, we derived a synthesis approach based on KL control that optimizes exploration and task performance by making agents satisfy maximally diffusive trajectory statistics. Alternatively, we can use the fact that in Supplementary Note 1 we derived the unique trajectory distribution \(P_{max}[x(t)]\) with maximum entropy \(S[P_{max}[x(t)]]\) that satisfies our constraints--which merely amount to prohibiting teleportation via infinite velocities. As a result of this, we know that \(S[P_{max}[x(t)]]\geq S[P_{u(t)}[x(t)]]\) with equality
if and only if \(P_{max}[x(t)]=P_{u(t)}[x(t)]\). Thus, instead of minimizing the KL divergence, we can maximize \(S[P_{u(t)}[x(t)]]\), leading to the following equivalent optimization problem,
\[\underset{u(t)}{\text{argmax}}\;S[P_{u(t)}[x(t)]], \tag{54}\]
whose optimum satisfies \(S[P_{u^{\star}(t)}[x(t)]]=S[P_{max}[x(t)]]\). Based on this specification, we can define several other equivalent MaxDiff trajectory synthesis problem specifications that may be more or less convenient depending on the details of the application domain:
\[\underset{u(t)}{\text{max}}\;S[P_{u(t)}[x(t)]],\;\;\;\underset {u_{1:N}}{\text{max}}\;S\Big{[}\prod_{t=1}^{N}p(x_{t+1}|x_{t},u_{t})\Big{]},\] \[\underset{\pi}{\text{max}}\;S[P_{\pi}[x(t),u(t)]],\;\;\;\underset {\pi}{\text{max}}\;S\Big{[}\prod_{t=1}^{N}p(x_{t+1}|x_{t},u_{t})\pi(u_{t}|x_{t })\Big{]}, \tag{55}\]
where \(P_{\pi}[x(t),u(t)]\) is a continuous-time distribution over states and control actions analogous to \(P_{\pi}[x_{1:N},u_{1:N}]\), and we can think of a controller as a policy given by a Dirac delta distribution centered at \(u_{t}\). The equivalence between the KL control and SOC formulations of the problem, and the maximum entropy formulation we have produced in this section, leads to
\[\underset{u(t)}{\text{argmin}}\;E_{P_{u(t)}}\big{[}L[x(t),u(t)] \big{]}-\alpha S[P_{u(t)}[x(t)]],\] \[\underset{\pi}{\text{argmin}}\;E_{P_{\pi}}\big{[}L[x(t),u(t)] \big{]}-\alpha S[P_{\pi}[x(t),u(t)]] \tag{56}\]
and
\[\underset{u_{1:N}}{\text{argmin}}\;E_{P_{u_{1:N}}}\Big{[}\sum_{t =1}^{N}l(x_{t},u_{t})-\alpha S[p(x_{t+1}|x_{t},u_{t})]\Big{]},\] \[\underset{\pi}{\text{argmin}}\;E_{P_{\pi}}\Big{[}\sum_{t=1}^{N}l (x_{t},u_{t})-\alpha S[p(x_{t+1}|x_{t},u_{t})\pi(u_{t}|x_{t})]\Big{]} \tag{57}\]
also being formally equivalent to Eq. 36. While the different objectives listed in Eqs. 55-57 may seem redundant, some of these may prove to be more readily applicable in particular domains, or to a given practitioner's preferred policy synthesis approach. In the following section, we derive an additional objective that attains the same optimum as Eqs. 55-57, but is better suited to model-free optimizations.
### Simplified synthesis via local entropy maximization
As currently written, optimizing any of the objectives specified thus far requires access to a model capable of assessing the likelihood of our stochastic control process' trajectories. To avoid this, we can simplify the problem by assuming that our agent's path statistics are already within a _local_ variational neighborhood of the optimally diffusive transition model. We formalize this optimistic assumption by asserting that our agent's path statistics are of the following form,
\[P_{u(t)}^{L}[x(t)]=\frac{1}{Z}\exp\Big{[}-\frac{1}{2}\int_{-\infty}^{\infty} \dot{x}(t)^{T}\mathbf{C}_{u(t)}^{-1}[x(t)]\dot{x}(t)dt\Big{]}, \tag{58}\]
where it is still the case that \(S[P_{max}[x(t)]]\geq S[P_{u(t)}^{L}[x(t)]]\), and that the optimum is reached if and only if \(P_{max}[x(t)]=P_{u(t)}^{L}[x(t)]\). Hence, by optimizing \(S[P_{u(t)}^{L}[x(t)]]\) instead of the more general \(S[P_{u(t)}[x(t)]]\), we merely change the direction from which our system approaches the true variational optimum. Furthermore, we note that it is still the case that achieving the true optimum is only possible when the system is controllable (see Supplementary Note 1.1).
We proceed by analytically deriving the functional form of \(S[P_{max}[x(t)]]\), and then using it to formulate our optimization of \(S[P_{u(t)}^{L}[x(t)]]\). We begin by considering a finite path as a collection of \(N\) random variables, where \(x_{1:N}\) is the collection of variables comprising the path, and the conditional measures are as described in previous sections. We only care to describe \(P_{max}[x_{1:N}]\) up
to proportionality because constant offsets will not affect the behavior of the optimal controller. To proceed, we make use of the chain rule for conditional entropies of random variables. For the reader's convenience, we state the chain rule as it is commonly formulated below:
\[S[P[x_{1:N}]]=\sum_{t=1}^{N}S[p(x_{t+1}|x_{1:t})]. \tag{59}\]
Then, applying this property directly onto \(P_{max}[x_{1:N}]\) we have,
\[S[P_{max}[x_{1:N}]]=\sum_{t=1}^{N}S[p_{max}(x_{t+1}|x_{t})]\propto\frac{1}{2} \sum_{t=1}^{N}\log\det\mathbf{C}[x_{t}], \tag{60}\]
where we made use of the Markov property to simplify our sum over conditional entropies, and then the analytical form of the entropy of a Gaussian distribution (up to a constant offset) to reach our final expression.
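The expression in Eq. 60 is straightforward to evaluate numerically. The sketch below computes the per-path entropy proxy \(\frac{1}{2}\sum_{t}\log\det\mathbf{C}[x_{t}]\) for two trajectories under a hypothetical state-dependent covariance of our choosing, showing that paths visiting states with larger achievable covariance score higher.

```python
import numpy as np

def C(x):
    """Hypothetical state-dependent covariance C[x] used for illustration."""
    return np.diag([1.0 + x[0] ** 2, 0.5 + 0.1 * x[1] ** 2])

def maxdiff_path_entropy(traj):
    """Path entropy of Eq. 60, up to an additive constant per step:
    0.5 * sum_t log det C[x_t]."""
    return 0.5 * sum(np.linalg.slogdet(C(x))[1] for x in traj)

rng = np.random.default_rng(5)
traj_a = rng.normal(scale=0.5, size=(50, 2))      # stays where C is small
traj_b = traj_a + np.array([3.0, 0.0])            # visits states with larger C
print("entropy proxy (small-C region):", round(maxdiff_path_entropy(traj_a), 2))
print("entropy proxy (large-C region):", round(maxdiff_path_entropy(traj_b), 2))
```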
Our expression for the entropy of the maximally diffusive trajectory distribution is of a simple form that only depends on the optimal covariance statistics of the process locally. The matrix \(\mathbf{C}[\cdot]\) expresses the optimal covariance statistics achievable within the constraints imposed on the local rate of exploration of a given system, which in many cases are formally related to their controllability properties. Thus, given trajectory statistics \(P_{u(t)}^{L}[x(t)]\), matching the entropy of the maximally diffusive trajectory distribution merely requires synthesizing a controller \(u(t)\) or policy \(\pi(\cdot|\cdot)\) that satisfies \(\mathbf{C}_{u(t)}[x^{*}]=\mathbf{C}_{\pi}[x^{*}]=\mathbf{C}[x^{*}]\) for all \(x^{*}\in\mathcal{X}\), which is only possible when the system is controllable. To achieve this we can optimize either,
\[\underset{u(t)}{\text{argmax}}\ \frac{1}{2}\int_{-\infty}^{\infty}\log\det \mathbf{C}_{u(t)}[x(t)]dt,\quad\text{or}\ \ \underset{\pi}{\text{argmax}}\ E_{(x_{1:N},u_{1:N})\sim P_{\pi}}\Big{[}\frac{1}{2} \sum_{t=1}^{N}\log\det\mathbf{C}_{\pi}[x_{t}]\Big{]}. \tag{61}\]
When a potential or cost encoding a task is introduced we instead have,
\[\underset{u(t)}{\text{argmin}}\ \langle V[x(t)]\rangle_{P_{u(t)}^{L}}- \frac{\alpha}{2}\int_{-\infty}^{\infty}\log\det\mathbf{C}_{u(t)}[x(t)]dt,\quad \text{or}\] \[\underset{u(t)}{\text{argmin}}\ E_{P_{u(t)}^{L}}\big{[}L[x(t),u(t )]\big{]}-\frac{\alpha}{2}\int_{-\infty}^{\infty}\log\det\mathbf{C}_{u(t)}[x(t) ]dt, \tag{62}\]
or their discretized variants expressed with respect to policies instead of controllers.
Finally, we can arrive at the MaxDiff RL objective presented in the main text, which is expressed in terms of an instantaneous reward function, \(r(x_{t},u_{t})\). The implemented MaxDiff RL objective is the following,
\[\underset{\pi}{\text{argmax}}\ E_{(x_{1:N},u_{1:N})\sim P_{\pi}}\Big{[}\sum_{ t=1}^{N}r(x_{t},u_{t})+\frac{\alpha}{2}\log\det\mathbf{C}_{\pi}[x_{t}]\Big{]}, \tag{63}\]
which, again, satisfies the same optimum as our previous objectives and is formally equivalent to them within a variational neighborhood of the optimum. This objective is the one that we used to derive all results in the main text. While it may seem that evaluating \(\mathbf{C}_{\pi}[x_{t}]\) still requires access to predictive system rollouts in a model-based fashion, we first note that \(\mathbf{C}_{\pi}[x_{t}]\) can be empirically estimated from data, and second that local autoregressive estimates of the agent's state covariance statistics can also be used instead. Thus, one could alternatively define \(\hat{\mathbf{C}}[x_{t}]=Cov[x_{t-w:t}]\) in terms of short trajectory windows from time \(t-w\) to the current time \(t\) for all \(t-w>0\)--in other words, looking backward in time instead of forward. So long as samples from this window represent a sufficiently small \(\delta t\) of the system's state transition history, this poses no issues to our optimization and the theoretical guarantees our framework provides. In this sense, the MaxDiff RL objective in Eq. 63 can be implemented in model-free settings.
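As a rough illustration of the backward-looking, model-free estimate described above, the sketch below computes a windowed covariance \(\hat{\mathbf{C}}[x_{t}]=Cov[x_{t-w:t}]\) and the corresponding \(\frac{\alpha}{2}\log\det\) bonus from Eq. 63. The window size, regularization constant, and function name are assumptions made for illustration, not the settings used in our experiments.

```python
import numpy as np

def maxdiff_bonus(states, w=10, alpha=1.0, eps=1e-6):
    """Local-in-time estimate of the MaxDiff entropy bonus in Eq. 63.

    states: array of shape (T, n) holding the visited states x_1..x_T.
    Returns a length-T array with (alpha/2) * log det Cov[x_{t-w:t}],
    computed from backward-looking windows (zero before a full window exists).
    """
    T, n = states.shape
    bonus = np.zeros(T)
    for t in range(w, T):
        window = states[t - w:t]                             # x_{t-w:t}
        C = np.cov(window, rowvar=False) + eps * np.eye(n)   # regularize against rank deficiency
        bonus[t] = 0.5 * alpha * np.linalg.slogdet(C)[1]
    return bonus
```

In an RL loop, this bonus would simply be added to the environment reward before policy or model updates, e.g. `augmented = rewards + maxdiff_bonus(states)`.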
While we will explore the properties of MaxDiff trajectory synthesis in a variety of problem domains outside of deep RL in the following section, for now we discuss the properties of the optimization problem in Eq. 61 (and by extension the one in Eq. 63). We begin by noting that this objective function is concave and submodular due to the properties of the log-determinant, which means that the optimization can be efficiently solved in polynomial time [14, 52]. Nonetheless, computational
challenges arise due to the need to calculate the state covariance matrix and its determinant at every point along the agent's paths. Thus, here we list some details relevant to the numerics of implementing our objective function in an RL pipeline. First, the fact that maximum entropy sample paths are Markovian and that the objective is submodular means that, rather than optimizing the log-determinant over the entire integral, a practitioner may break down the integral and instead optimize the objective locally in an iterative fashion. Second, in practice we may not always have guarantees on the full-rankness of the covariance matrix, which would make its determinant evaluate to zero, thereby creating numerical stability issues for the resulting algorithm. To remedy this (without coordinate changes), we may take advantage of another property of the log-determinant and instead optimize \(\sum_{i=1}^{M}\log\lambda_{i}\), where the sum is taken over the leading \(M\) eigenvalues of \(\mathbf{C}[x(t)]\). However, it is important to note that this effectively restricts the exploration to an \(M\)-dimensional subspace of the full domain. Finally, we note that one can optimize the logarithm of the trace of \(\mathbf{C}[x(t)]\) as an approximation that drastically reduces the complexity of computing the determinant in high dimensional optimizations. However, this approximation can only formally produce equivalent results to the log-determinant when system states vary independently from one another (i.e., when \(\mathbf{C}[x(t)]\) is diagonal), which is generally not the case. Nonetheless, this assumption is routinely made out of convenience in much of the conditional variational autoencoder literature (e.g., [53]), so it may be of help to a practitioner at the cost of some added distance to the assumptions underlying our formal guarantees.
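The numerical remedies above are easy to sketch in code. The snippet below is a minimal illustration rather than our implementation: it computes the sum of the logs of the leading \(M\) eigenvalues as a rank-robust stand-in for the log-determinant, alongside the log-trace surrogate discussed at the end of the paragraph.

```python
import numpy as np

def robust_logdet(C, M=None):
    """Sum of the logs of the leading M eigenvalues of C, a rank-robust
    stand-in for log det C (uses all eigenvalues when M is None)."""
    eigvals = np.linalg.eigvalsh(C)[::-1]     # descending order
    if M is not None:
        eigvals = eigvals[:M]                 # restrict to an M-dimensional subspace
    eigvals = np.clip(eigvals, 1e-12, None)   # guard against zeros from rank deficiency
    return float(np.sum(np.log(eigvals)))

def logtrace(C):
    """log tr(C): a cheap surrogate for the log-determinant objective, only
    appropriate under the independence (diagonal C) assumption discussed above."""
    return float(np.log(np.trace(C)))
```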
### Example applications of MaxDiff trajectory synthesis
In this section, we implement MaxDiff trajectory synthesis across a handful of applications outside of reinforcement learning that require both directed and undirected exploration. These should illustrate the sense in which our theoretical framework can extend beyond a particular algorithmic implementation, or even reinforcement learning as a problem setting. Moreover, here we will analyze the behavior of various dynamical systems made to obey maximally diffusive statistics via MaxDiff trajectory synthesis through the lens of statistical mechanics.
We begin by studying MaxDiff trajectory synthesis in the undirected exploration of a nontrivial control system--a spring-loaded inverted pendulum (SLIP) model. The SLIP model is a popular dynamic model of locomotion and encodes many important properties of human locomotion [55]. In particular,
we will implement the SLIP model as in [56], where it is described as a 9-dimensional nonlinear nonsmooth control system. The SLIP model is shown in Supplementary Fig. 3(a) and consists of a "head" which carries its mass, and a "toe" which makes contact with the ground. Its state-space is defined by the 3D velocities and positions of its head and toe, or \(x=[x_{h},\dot{x}_{h},y_{h},\dot{y}_{h},z_{h},\dot{z}_{h},x_{t},y_{t},q]^{T}\), where \(q=\{c,a\}\) is a variable that tracks whether the system is in contact with the ground or in the air. The SLIP dynamics are the following:
\[\dot{x}=f(x,u)=\begin{cases}f_{c}(x,u),&\text{if }l_{c}<l_{0}\\ f_{a}(x,u),&\text{otherwise}\end{cases},\]
\[f_{c}(x,u)=\begin{bmatrix}\dot{x}_{h}\\ \frac{(k(l_{0}-l_{c})+u_{c})(x_{h}-x_{t})}{ml_{c}}\\ \dot{y}_{h}\\ \frac{(k(l_{0}-l_{c})+u_{c})(y_{h}-y_{t})}{ml_{c}}\\ \dot{z}_{h}\\ \frac{(k(l_{0}-l_{c})+u_{c})(z_{h}-z_{G})}{ml_{c}}-g\\ 0\\ 0\end{bmatrix},\;f_{a}(x,u)=\begin{bmatrix}\dot{x}_{h}\\ 0\\ \dot{y}_{h}\\ 0\\ \dot{z}_{h}\\ -g\\ \dot{x}_{h}+u_{t_{x}}\\ \dot{y}_{h}+u_{t_{y}}\end{bmatrix}, \tag{64}\]
where \(f_{c}(x,u)\) captures the SLIP dynamics during contact with the ground, and \(f_{a}(x,u)\) captures them while in the air. During contact the SLIP can only exert a force, \(u_{c}\), by pushing along the axis of the spring, whose resting length is \(l_{0}\) and whose stiffness is \(k\). During flight the SLIP is subject to gravity, \(g\), and is capable of moving the \(x,y\)-position of its toe by applying \(u_{t_{x}}\) and \(u_{t_{y}}\), respectively. To finish specifying the SLIP dynamics, and determine whether or not the spring is in contact with the ground, we define,
\[l_{c}=\sqrt{(x_{h}-x_{t})^{2}+(y_{h}-y_{t})^{2}+(z_{h}-z_{G})^{2}},\]
which describes the distance along the length of the spring to the ground, and \(z_{G}\) is the ground height. Rather than explore diffusively in the entirety of the SLIP model's 9-dimensional state-space, we will first demand that it only explores a 1-dimensional space described by its \(x\)-coordinate, starting from an initial condition of \(x(0)=0\). We can think of this as a projection to a 1-dimensional subspace of the system, or equivalently as a coordinate transformation with a constant Jacobian matrix. We note that the system's nonsmoothness should break the path continuity constraint that our approach presumes to hold. However, since we use a coordinate transformation to formulate the exploration problem in terms of the system's \(x\)-coordinate we do not violate the assumptions of MaxDiff trajectory synthesis. This is because, while the system's velocities experience discontinuities, its position coordinates do not. In general, the use of coordinate transformations can extend the applicability of MaxDiff trajectory synthesis to even broader classes of systems than those claimed by our theoretical framework throughout Supplementary Note 1. However, this will require a formal analysis of the observability properties of maximally diffusive agents, which lies outside the scope of this work.
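For concreteness, a minimal Python sketch of the hybrid SLIP vector field in Eq. 64 is given below. The state ordering follows the definition of \(x\) above, the contact variable \(q\) is handled implicitly through the switching condition, and the parameter values (mass, stiffness, resting length) are placeholders rather than the values used in our experiments.

```python
import numpy as np

def slip_dynamics(x, u, m=1.0, k=100.0, l0=1.0, g=9.81, z_G=0.0):
    """Hybrid SLIP vector field sketched from Eq. 64.

    x: continuous states [x_h, dx_h, y_h, dy_h, z_h, dz_h, x_t, y_t]
    u: controls [u_c, u_tx, u_ty]
    The contact variable q is implied by the switching condition l_c < l_0.
    Parameter values are placeholders, not those used in the experiments.
    """
    xh, dxh, yh, dyh, zh, dzh, xt, yt = x
    uc, utx, uty = u
    lc = np.sqrt((xh - xt) ** 2 + (yh - yt) ** 2 + (zh - z_G) ** 2)
    if lc < l0:  # contact phase: spring force along the leg axis, toe pinned
        f = (k * (l0 - lc) + uc) / (m * lc)
        return np.array([dxh, f * (xh - xt),
                         dyh, f * (yh - yt),
                         dzh, f * (zh - z_G) - g,
                         0.0, 0.0])
    # flight phase: ballistic head, toe repositioned by u_tx and u_ty
    return np.array([dxh, 0.0, dyh, 0.0, dzh, -g, dxh + utx, dyh + uty])
```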
In order to realize maximally diffusive exploration, we make use of MPPI in conjunction with the MaxDiff trajectory synthesis objective in Eq. 61. In Supplementary Fig. 3 we illustrate the results of this process. Supplementary Fig. 3(a) depicts the sample paths generated by the maximally diffusive exploration of the SLIP model's \(x\)-coordinate. The sample paths of the SLIP agent resemble the empirical statistics of Brownian particle paths despite the fact that the SLIP model is far from a non-inertial point mass. In Supplementary Fig. 3(b), we study the fluctuations of maximally diffusive exploration through the lens of statistical mechanics. Here, we analyze the mean squared displacement (MSD) statistics of undirected maximally diffusive exploration and compare them to the statistics of standard and anomalous diffusion processes. MSD plots capture the deviations of a diffusing agent from some reference position over time. In standard diffusion processes, the relationship between MSD and time elapsed is linear on average. That is, we expect the squared deviation of a diffusing agent from its initial condition to grow linearly in proportion to the time elapsed (see blue line in Supplementary Fig. 3(b)). However, in general there exist other diffusion regimes characterized by the growth of MSD over time. These regimes are typically determined by fitting the exponent \(\gamma\) in MSD\((t)\propto t^{\gamma}\), where normal diffusion has \(\gamma=1\), superdiffusion has \(1<\gamma<2\), and ballistic motion has \(\gamma\geq 2\). The purple line in Supplementary Fig. 3(b) depicts the MSD statistics of the SLIP model. The diffusion generated by the SLIP model's maximally diffusive exploration has superdiffusive displacements over short time-scales owing to the inertial properties of the system. However,
as we consider longer time-scales, the behavior of the SLIP model becomes indistinguishable from standard diffusion processes with \(\gamma=1\). This difference in scaling exponents has been shown to be a general property of diffusion with inertial particles and should be expected in macroscopic systems [54].
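The MSD scaling analysis above can be reproduced with a few lines of numpy; the sketch below, with an assumed function name and data layout, fits the exponent \(\gamma\) from an ensemble of 1D exploration trajectories.

```python
import numpy as np

def msd_exponent(xs, dt=1.0):
    """Fit the anomalous-diffusion exponent gamma in MSD(t) ~ t^gamma
    from an ensemble of 1D trajectories xs with shape (n_trials, n_steps)."""
    disp2 = (xs - xs[:, :1]) ** 2                       # squared displacement from x(0)
    msd = disp2.mean(axis=0)[1:]                        # ensemble-averaged MSD, skipping t = 0
    t = dt * np.arange(1, xs.shape[1])
    gamma, _ = np.polyfit(np.log(t), np.log(msd), 1)    # slope of the log-log fit
    return gamma
```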
Keeping with the SLIP dynamical system, in Supplementary Fig. 4 we study the behavior of MaxDiff trajectory synthesis across various standard robotics applications. In Supplementary Fig. 4(a), a single SLIP agent is performing undirected MaxDiff exploration within the bounds of an \(\mathbf{N}\)-shaped environment. In this task, the agent must be able to explore its \(x\)-\(y\) plane by hopping along, without falling or exiting the bounds of the exploration domain. To ensure the SLIP model's safety, as well as establish the bounds of the environment, we made use of control barrier functions (CBFs) [57]--a standard technique in the field for guaranteeing safety. Then, to illustrate another application of the ergodicity guarantees of our method, in Supplementary Fig. 4(b) we apply MaxDiff trajectory synthesis to multiagent exploration in a complex environment--a house floor plan--in conjunction with CBFs. Since maximally diffusive exploration is ergodic, the outcomes of a multiagent execution and a single agent execution are asymptotically identical. In this way, maximally diffusive exploration only incurs a linear scaling in computational complexity as a function of the number of agents. Finally, in Supplementary Fig. 4(c) we return to the single agent case to illustrate directed maximally diffusive exploration in the same complex environment as before. Here, a potential function encoding a goal destination is flat beyond a certain distance, which leads to undirected exploration initially. However, as the agent nears the goal, it can detect variations in the potential and follows its gradients diffusively towards the goal.
Now, we will highlight how the underlying properties of an agent's dynamics can affect the trajectories generated during maximally diffusive exploration. To this end, we consider a simple planar exploration task subject to a bimodal Gaussian potential ascribing a cost to system states far away from the distribution means. In Supplementary Fig. 5, we explore the planar domain with three different systems. First, exploration over the bimodal potential is shown with a single integrator system, which is a controllable first-order linear system. Since this system is effectively identical to a non-inertial point mass, its sample paths are formally the same as those of Brownian particles in a confining potential. In the middle panel of Supplementary Fig. 5, we consider a double integrator system, which is a controllable, linear, second-order system. However, its diffusion tensor is degenerate because noise only enters the system through the accelerations. Nonetheless,
the system realizes ergodic coverage with respect to the underlying potential (in agreement with the theory of degenerate diffusion [58, 59]). Finally, we consider the differential drive vehicle, which is a simple first-order nonlinear dynamical system with nontrivial controllability properties. Yet, the differential drive vehicle realizes ergodic coverage in the plane, as predicted by the properties of maximally diffusive systems.
As a final look into the properties of directed maximally diffusive exploration, we examine the role that the temperature parameter \(\alpha\) plays in the behavior of the agent in a simpler setting. To this end, we revisit the differential drive vehicle dynamics and make use of MPPI once again to optimize our objective. However, instead of a bimodal Gaussian potential, we consider a quadratic potential centered at the origin with the system initialized at \((x,y)=(-4,-2)\). Quadratic potentials such as these are routinely implemented as cost functions throughout robotics and control theory. In Supplementary Fig. 6, we depict the behavior of the system as a function of the temperature parameter. Initially, with the temperature set to zero, the agent's paths are solely determined by the solution to the optimal control problem, smoothly driving towards the potential's minimum at the origin. Then, as we tune up \(\alpha\), we increase the diffusivity of our agent's sample paths. While at \(\alpha=1\) the position of the system fluctuates very slightly at the bottom of the quadratic potential, at \(\alpha=100\) the agent diffuses around violently by overcoming its energetic tendency to stay at the bottom of the well. If we were to continue increasing \(\alpha\) to larger and larger values, we would observe that directed maximally diffusive exploration would cease to be ergodic, as predicted by [34]. This occurs as a result of the strength of diffusive fluctuations (here set by our \(\alpha\) parameter) dominating the magnitude of the drift induced by the potential's gradient. This is to say that for a given problem, system, and operator preferences, there should be a range of \(\alpha\) values that best achieve the task.
Throughout this section we have illustrated how maximally diffusive exploration, as formulated in Eqs. 61 and 62, satisfies the behaviors predicted by our theoretical framework. Moreover, we have motivated how MaxDiff trajectory synthesis can be applied in a variety of common robotic applications while simultaneously guaranteeing safety, ergodicity, and task distributability. Broadly speaking, incorporating maximally diffusive exploration into most optimal control or reinforcement learning frameworks should be simple--particularly in light of the effort we have put towards deriving optimization objectives realizable in a broad class of application domains.
Supplementary Figure 6: **Varying the \(\alpha\) parameter of directed MaxDiff exploration**. Here, we are making a differential drive vehicle explore a quadratic potential centered at the origin under varying choices of \(\alpha\) modulating the strength of the diffusive exploration within the potential. As we increase \(\alpha\), the strength of the diffusion increases as well, leading to greater exploration of the basin of attraction of the quadratic potential well.
## 3 Reinforcement learning implementation details
### General
All simulated examples use the reward functions specified by the MuJoCo environments unless otherwise noted [60; 61]. Supplementary Table 1 provides a list of all hyperparameters used in all implementations of MaxDiff RL, NN-MPPI, and SAC, for each environment. All experiments were run for a total of 1 million environment steps with each epoch comprising 1000 steps. For multi-shot tests, the episode was reset upon satisfying a "done" condition or completing the number of steps in an epoch. For single-shot tests, the environment was never reset and each epoch only constituted a checkpoint for saving cumulative rewards during the duration of that epoch. All models used ReLU activation functions, and 10 seeds were run for each configuration.
For all model-based examples (i.e., MaxDiff RL and NN-MPPI), the system dynamics are represented in the following form, \(x_{t+1}=x_{t}+f(x_{t},u_{t})\), where the transition function \(f(x_{t},u_{t})\) and reward function \(r(x_{t},u_{t})\) are both modeled by fully-connected neural networks. Both the reward function and transition model are optimized using Adam [62]. The model is regularized using the negative log-loss of a normal distribution where the variance, \(\Sigma_{\text{model}}\in\mathbb{R}^{n\times n}\), is a hyperparameter that is simultaneously learned based on agent experience. The reward model is improved by minimizing the error between the predicted reward and its bootstrapped target, \(\mathcal{L}=\|r_{t}+0.95\ r(x_{t+1},u_{t+1})-r(x_{t},u_{t})\|^{2}\). The structure of this loss function is similar to those used in temporal-difference learning [63; 64]. The inclusion of the reward term from the next state and next action helps the algorithm learn in environments with rewards that do not strictly depend on the current state, as is the case with some MuJoCo examples.
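A minimal PyTorch sketch of this model parametrization and reward loss is shown below. The layer sizes and names are placeholders (the per-environment sizes are listed in Supplementary Table 1), and only the pieces described in this paragraph are included.

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Residual transition model x_{t+1} = x_t + f(x_t, u_t) with a learned
    reward head, roughly following the description above (sizes are placeholders)."""
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, state_dim))
        self.r = nn.Sequential(nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, 1))

    def forward(self, x, u):
        xu = torch.cat([x, u], dim=-1)
        return x + self.f(xu), self.r(xu)

def reward_td_loss(model, x, u, r, x_next, u_next, gamma=0.95):
    """TD-style loss || r_t + 0.95 r(x_{t+1}, u_{t+1}) - r(x_t, u_t) ||^2;
    r is the observed reward with shape (batch, 1)."""
    _, r_pred = model(x, u)
    with torch.no_grad():
        _, r_next = model(x_next, u_next)   # bootstrapped target term
    return ((r + gamma * r_next - r_pred) ** 2).mean()
```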
For all model-free examples, we implement SAC to provide updates to our model-free policy. We use the SAC hyperparameters shared in [8], including the structure of the soft Q functions, but excluding the batch size parameter and the implemented policy's representation. Instead, we choose to match the batch size used during our model-based learning examples (i.e., with MaxDiff RL and NN-MPPI), and also introduce a simpler policy representation. As an alternative to the representation in [8], our policy is represented by a normal distribution parametrized by a mean function defined as a fully-connected neural network.
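A sketch of this simpler policy representation is given below; the state-independent learnable log standard deviation is an assumption made for illustration, since the text above only specifies the mean network.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianPolicy(nn.Module):
    """Policy as a normal distribution whose mean is a fully-connected network;
    the state-independent log-std is an illustrative assumption."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.mean = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, action_dim))
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, x):
        return Normal(self.mean(x), self.log_std.exp())
```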
Reinforcement learning experiments were run on an Intel® Xeon® Platinum 8380 CPU @ 2.30 GHz × 160 server running Ubuntu 18.04 and Python 3.6 (pytorch 1.7.0 and mujoco_py 2.0). This hardware was loaned by the Intel Corporation, whose technical support we acknowledge.
### Point mass
The goal of the point mass environment is to learn to move to the origin of a 2D environment. This is a custom environment in which the point mass dynamics are simulated as a 2D double integrator with states \([x,y,\dot{x},\dot{y}]\) and actions \([\ddot{x},\ddot{y}]\). Each episode is initialized at state \([-1,-1,0,0]+\epsilon\) where \(\epsilon\sim\mathcal{N}(0,0.01)\). The reward function is specified in terms of location in the environment \(r=-(x^{2}+y^{2})\). For multi-shot tests, the episode was terminated if the point mass exceeded a boundary defined as a square at \(x,y=\pm 5\). The simulation uses RK-4 integration with a time step of 0.1.
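A self-contained sketch of this environment is given below. It follows the description above (RK-4 with a time step of 0.1, reward \(r=-(x^{2}+y^{2})\), reset noise \(\epsilon\sim\mathcal{N}(0,0.01)\), and the \(\pm 5\) multi-shot boundary); the noise scale is treated as a standard deviation, since the text does not distinguish variance from standard deviation, and the class name is a placeholder.

```python
import numpy as np

class PointMassEnv:
    """Minimal sketch of the 2D double-integrator point-mass environment."""
    def __init__(self, dt=0.1):
        self.dt = dt
        self.state = None

    def reset(self):
        # initial state [-1, -1, 0, 0] + eps, with eps ~ N(0, 0.01) treated as std here
        self.state = np.array([-1.0, -1.0, 0.0, 0.0]) + np.random.normal(0.0, 0.01, 4)
        return self.state.copy()

    def _f(self, s, a):
        x, y, xd, yd = s                      # states [x, y, xdot, ydot]
        return np.array([xd, yd, a[0], a[1]])  # actions are accelerations [xddot, yddot]

    def step(self, action):
        s, dt = self.state, self.dt
        # RK-4 integration of the double-integrator dynamics
        k1 = self._f(s, action)
        k2 = self._f(s + 0.5 * dt * k1, action)
        k3 = self._f(s + 0.5 * dt * k2, action)
        k4 = self._f(s + dt * k3, action)
        self.state = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        reward = -(self.state[0] ** 2 + self.state[1] ** 2)
        done = bool(np.any(np.abs(self.state[:2]) > 5.0))  # square boundary at +/- 5 (multi-shot)
        return self.state.copy(), reward, done
```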
### Swimmer
The goal of the swimmer environment is to learn a gait to move forward in a 2D environment as quickly as possible. These tests use the "v3" variant of the OpenAI Gym MuJoCo Swimmer Environment, which includes all configuration states in the observation generated at each step. For the "heavy-tailed" tests, the default xml model file is used, which includes a 3-link body with identical links. For the "light-tailed" tests, we modify the density of the "tail" link to be \(10\) times lower than that of the other two links. The default link density in the model is \(1000\) and the modified tail density is \(100\).
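One way to produce such a "light-tailed" model file is to override the tail geom's density attribute in the swimmer xml; the sketch below is a hypothetical helper, and the geom name it looks for depends on the particular xml file being modified.

```python
import xml.etree.ElementTree as ET

def make_light_tailed(xml_in, xml_out, tail_geom_name="tail", density="100"):
    """Hypothetical helper: write a 'light-tailed' variant of the swimmer model
    by overriding the density of the tail geom (geom name depends on the xml used)."""
    tree = ET.parse(xml_in)
    for geom in tree.getroot().iter("geom"):
        if geom.get("name") == tail_geom_name:
            geom.set("density", density)  # default MuJoCo body density is 1000
    tree.write(xml_out)
```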
### Ant
The goal of the ant environment is to learn a gait to move forward in a 3D environment as quickly as possible. These tests use the "v3" variant of the OpenAI Gym MuJoCo Ant Environment, which includes all configuration states in the observation generated at each step and includes no contact states. The control cost, contact cost, and healthy reward weights are all set to zero, so the modified
reward function only depends on the change in the \(x\)-position during the duration of the step (with positive reward for progress in the positive \(x\)-direction). We also modified the "done" condition to make it possible for the ant to recover from falling. The "done" condition is triggered if the ant has been upside down for 1 second, and the ant is considered "upside down" if the torso angle that is nominally perpendicular to the ground exceeds 2.7 radians.
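A small sketch of this modified termination logic is shown below; the class and variable names are illustrative, and the torso tilt is assumed to be supplied in radians by the environment at each step.

```python
class UpsideDownTracker:
    """Sketch of the modified 'done' condition: the episode terminates only after
    the torso has been upside down (tilt > 2.7 rad) for a full second."""
    def __init__(self, dt, tilt_limit=2.7, hold_time=1.0):
        self.dt, self.tilt_limit, self.hold_time = dt, tilt_limit, hold_time
        self.time_upside_down = 0.0

    def update(self, torso_tilt):
        # torso_tilt: angle (rad) of the torso axis nominally perpendicular to the ground
        if abs(torso_tilt) > self.tilt_limit:
            self.time_upside_down += self.dt
        else:
            self.time_upside_down = 0.0  # reset when the ant recovers
        return self.time_upside_down >= self.hold_time
```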
### Half-cheetah
The goal of the half-cheetah environment is to learn a gait to move forward by applying torques on the joints in a 2D vertical plane. These tests use the "v3" variant of the OpenAI Gym MuJoCo Half-Cheetah Environment, which includes all configuration states in the observation generated at each step.
## Supplementary tables
| Algorithm | Mode | Hyperparameter | 2D Point mass (Toy Problem) | Swimmer (MuJoCo Gym v3) | Ant (MuJoCo Gym v3) | Half-cheetah (MuJoCo Gym v3) |
|---|---|---|---|---|---|---|
| All | | State Dim | 4 | 10 | 29 | 18 |
| All | | Action Dim | 2 | 2 | 8 | 6 |
| All | | Learning Rate | 0.0005 | 0.0003 | 0.0003 | 0.0003 |
| All | | Batch Size | 128 | 128 | 256 | 256 |
| SAC | | Policy Layers | \(128\times 128\) | \(256\times 256\) | \(512\times 512\times 512\) | \(256\times 256\) |
| SAC | | Reward Scale | 0.25 | 100 | 5 | 5 |
| NN-MPPI / MaxDiff RL | | Model Layers | \(128\times 128\) | \(200\times 200\) | \(512\times 512\times 512\) | \(200\times 200\) |
| NN-MPPI / MaxDiff RL | | Horizon | 30 | 40 | 20 | 10 |
| NN-MPPI / MaxDiff RL | Multi | Samples | 500 | 500 | 1000 | 500 |
| NN-MPPI / MaxDiff RL | Multi | Lambda | 0.5 | 0.5 | 0.5 | 0.5 |
| NN-MPPI / MaxDiff RL | SS | Samples | NA | 1000 | 1000 | 1000 |
| NN-MPPI / MaxDiff RL | SS | Lambda | NA | 0.1 | 0.5 | 0.5 |
| MaxDiff RL (Exploration) | Multi | Alpha | 5 | 1, 5, 10, 50, … | 15 | 5 |
| MaxDiff RL (Exploration) | Multi | Dimensions | \([x,y,\dot{x},\dot{y}]\) | \([x,y,\dot{x},\dot{y}]\) | \([x,y,z]\) | \([x,y,\dot{x},\dot{y}]\) |
| MaxDiff RL (Exploration) | Multi | Weights | \([1,1,0.01,0.01]\) | \([1,1,0.05,0.05]\) | \([1,1,0.005,0.05]\) | |
| MaxDiff RL (Exploration) | SS | Alpha | NA | 50 | 15 | 5 |
| MaxDiff RL (Exploration) | SS | Dimensions | NA | \([x,y,\dot{x},\dot{y}]\) | \([x,y,\dot{x},\dot{y}]\) | |
| MaxDiff RL (Exploration) | SS | Weights | NA | \([1,1,0.05,0.05]\) | \([1,1,0.05,0.05]\) | |
Supplementary Table 1: **Simulation hyperparameters for paper results.** _Multi_ parameters only apply to multi-shot runs, and _SS_ parameters only apply to single-shot runs. All weights are diagonal matrices with the values specified.
## Supplementary movies
Movie 1: **Effect of temperature parameter on MaxDiff RL.** Here, we depict an application of MaxDiff RL to MuJoCo's swimmer environment. To explore the role of the parameter \(\alpha\) on the performance of agents, we vary it across three orders of magnitude and observe its effect on system behavior (10 seeds each). Tuning \(\alpha\) is crucial because it can determine whether or not the underlying agent is ergodic.
Movie 2: **Robustness of MaxDiff RL across random seeds.** Here, we depict an application of MaxDiff RL to MuJoCo's swimmer environment, comparing with alternative state-of-the-art MaxEnt RL algorithms, NN-MPPI and SAC. We observe that the performance of MaxDiff RL beats the state-of-the-art and does not vary across seeds, which is a formal property of our framework. We test across two different system conditions: one with a light-tailed and more controllable swimmer, and one with a heavy-tailed and less controllable swimmer (10 seeds each).
Movie 3: **Generalization of MaxDiff RL across embodiments.** Here, we depict an application of MaxDiff RL to MuJoCo's swimmer environment. We implement a transfer learning experiment in which RL algorithms are trained on a system with a given set of physical properties, and are then deployed on a system whose properties differ from the ones they were trained on. We find that unlike alternative approaches, MaxDiff RL remains task capable across agent embodiments.
Movie 4: **Single-shot learning in MaxDiff RL agents.** Here, we depict an application of MaxDiff RL to MuJoCo's swimmer environment under a significant modification. Agents are unable to reset their environment, which requires all algorithms to learn to solve the task in a single deployment. First, we show representative snapshots of agents using models learned in single-shot deployments, and observe that MaxDiff RL still achieves state-of-the-art seed-invariant performance. Then, for MaxDiff RL we also show a complete playback of a single representative single-shot learning trial. We stagger the playback so that the first swimmer covers environment steps 1-2000, the next one 2001-4000, and so on for a total of 20,000 environment steps across 10 swimmers. In doing so, we can visualize the entirety of the single-shot learning process in real time.
## Supplementary figures
Supplementary Figure 7: **Results of the half-cheetah benchmark.** This figure compares the performance of MaxDiff RL to NN-MPPI and SAC on MuJoCo's HalfCheetahEnv v3 in multi-shot. Since the half-cheetah can fall into an irreversible state (i.e., flipping upside down), this environment breaks the assumptions of MaxDiff RL. Nonetheless, we still achieve state-of-the-art performance with substantially less variance than alternative algorithms (10 seeds each).
Supplementary Figure 9: **Results of the ant benchmark.** This figure compares the performance of MaxDiff RL to NN-MPPI and SAC on MuJoCo's AntEnv v3 in multi-shot. Just as with our main text single-shot example, the ant environment breaks ergodicity, which pushes MaxDiff RL outside of the domain of its assumptions. Nonetheless, MaxDiff RL remains state-of-the-art with comparable performance to NN-MPPI (10 seeds each). This is to be expected because in the worst case scenario where MaxDiff's additional entropy term in the objective has no effect on agent outcomes, our implementation of MaxDiff RL is identical to NN-MPPI.